You might think your personality is unique, but all it takes is a two-hour interview for an AI model to create a virtual replica of your attitudes and behaviors. That’s according to a new paper published by researchers from Stanford and Google DeepMind.
What are simulation agents?
The paper describes simulation agents as generative AI models that can accurately simulate a person’s behavior ‘across a range of social, political, or informational contexts’.
In the study, 1,052 participants were asked to complete a two-hour interview covering a wide range of topics, from their personal life story to their views on contemporary social issues. Their responses were recorded, and the transcripts were used to build a generative AI model – or “simulation agent” – for each individual.
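If you’re wondering what a simulation agent looks like under the hood, the basic recipe is easy to sketch: condition a large language model on the full interview transcript, then ask it to answer in that person’s voice. Here’s a minimal, hypothetical illustration in Python assuming an OpenAI-style chat API – the prompt wording, model choice, and file name are illustrative assumptions, not the researchers’ actual code.

```python
# Hypothetical sketch of a "simulation agent": a large language model
# conditioned on a participant's full interview transcript, then asked
# to answer test items in that person's voice. Prompt wording, model
# name, and file paths are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def make_agent(transcript: str):
    """Return a function that answers questions as the interviewee would."""
    system_prompt = (
        "You are simulating a specific person. Below is the transcript of "
        "a two-hour interview with them. Answer every question exactly as "
        "they would, matching their views, tone, and reasoning.\n\n"
        "INTERVIEW TRANSCRIPT:\n" + transcript
    )

    def answer(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative choice
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    return answer


# One agent per participant, each queried with the same test items.
agent = make_agent(open("participant_0001_transcript.txt").read())
print(agent("On a scale of 1 to 5, how much do you agree: "
            "'I see myself as someone who is talkative'?"))
```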
To test how well these agents could mimic their human counterparts, both were asked to complete the same set of tasks, including personality tests and games. A fortnight later, the participants repeated the tasks to establish how consistently people replicate their own answers. Remarkably, measured against that baseline, the AI agents reproduced the participants’ original responses with 85% accuracy.
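It’s worth unpacking what that 85% means: the participants’ own two-week consistency serves as the ceiling, and the agent is scored against it. A toy calculation – with made-up answers and a simple exact-match score, far cruder than the paper’s actual method – shows the idea:

```python
# Toy illustration of normalized accuracy: how well the agent matches a
# participant's original answers, measured against how well the
# participant replicates their own answers two weeks later. The data
# and the exact-match scoring are made up for illustration.
def agreement(a: list, b: list) -> float:
    """Fraction of items on which two answer sheets match exactly."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

human_week0   = [3, 5, 2, 4, 1, 5, 3, 2]  # participant's original answers
human_week2   = [3, 5, 2, 4, 1, 5, 3, 1]  # same person, a fortnight later
agent_answers = [3, 5, 2, 4, 2, 5, 3, 1]  # the simulation agent

raw = agreement(agent_answers, human_week0)             # 0.75
self_consistency = agreement(human_week2, human_week0)  # 0.875
normalized = raw / self_consistency                     # ~0.86, i.e. ~85%

print(f"raw: {raw:.2f}, self-consistency: {self_consistency:.2f}, "
      f"normalized: {normalized:.2f}")
```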
What’s more, the simulation agents were similarly effective at predicting their human counterparts’ behavior across five social science experiments.
While your personality might seem intangible or unquantifiable, this research shows that it’s possible to distill your value structure from a relatively small amount of information by capturing qualitative responses to a fixed set of questions. Fed this data, AI models can convincingly imitate your personality – at least in a controlled, test-based setting. And that could make deepfakes even more dangerous.
Double agent
The research was led by Joon Sung Park, a Stanford PhD student. The idea behind simulation agents is to give social science researchers more freedom when conducting studies. With digital replicas that behave like the real people they’re based on, scientists can run studies without the expense of recruiting thousands of human participants every time.
They may also be able to run experiments which would be unethical to conduct with real human participants. Speaking to MIT Technology Review, John Horton, an associate professor of information technologies at the MIT Sloan School of Management, said that the paper demonstrates a way you can “use real humans to generate personas which can then be used programmatically/in-simulation in ways you could not with real humans.”
Whether study participants are morally comfortable with this is one thing. More concerning for many people will be the potential for simulation agents to become something more nefarious in the future. In that same MIT Technology Review story, Park predicted that one day “you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made.”
For many, this will set dystopian alarm bells ringing. The idea of digital replicas opens up a realm of security, privacy, and identity theft concerns. It doesn’t take a stretch of the imagination to foresee a world where scammers – who are already using AI to imitate the voices of loved ones – could build personality deepfakes to imitate people online.
This is particularly concerning when you consider that the simulation agents in the study were created from just two hours of interview data – far less than the amount of information currently required by companies such as Tavus, which create digital twins from a trove of user data.