Research Paper Series II - Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?
- john6747
- Aug 27, 2024
- 3 min read
In a groundbreaking new study, MIT professor John J. Horton introduces a novel approach to economic research that could revolutionize how we explore human behavior in markets. His paper, "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?", proposes using advanced AI language models like GPT-3 as virtual economic agents.

Horton's idea is both simple and profound: Just as economists have long used the concept of "homo economicus" to model rational economic behavior, we can now use large language models (LLMs) as "homo silicus" - silicon-based simulations of human decision-making. These AI agents can be placed in various economic scenarios, allowing researchers to explore complex behavioral questions quickly and at a fraction of the cost of traditional human subject studies.
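To make the idea concrete, here is a minimal sketch of how one might pose an economic scenario to an LLM agent. It assumes the `openai` Python package and an API key in the environment; the model name, prompt wording, and dictator-game framing are illustrative choices of mine, not Horton's exact setup (the paper used GPT-3 completion models).

```python
# A minimal "homo silicus" query: ask an LLM to make an economic decision.
# Model name and prompt are illustrative assumptions, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

scenario = (
    "You have $100 to divide between yourself and an anonymous stranger. "
    "You may keep any amount and give the rest away. "
    "How much do you keep? Answer with a single dollar amount."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do here
    messages=[{"role": "user", "content": scenario}],
    temperature=1.0,  # sampling noise stands in for subject heterogeneity
)

print(response.choices[0].message.content)
```

Each call is cheap and takes seconds, which is exactly what makes the "simulated subject pool" framing plausible.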
To demonstrate the potential of this approach, Horton recreated four classic experiments from behavioral economics using GPT-3: Charness and Rabin's social preference games, Kahneman, Knetsch, and Thaler's fairness judgments about market pricing, Samuelson and Zeckhauser's status quo bias in decision-making, and a labor market hiring scenario with minimum wage policies. Remarkably, the most capable GPT-3 model was able to qualitatively replicate many of the key findings from the original human experiments.
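As a flavor of what such a replication looks like, here is a hedged sketch of the Kahneman, Knetsch, and Thaler snow-shovel fairness question (paraphrased), sampled repeatedly so the distribution of answers plays the role of a subject pool. The model choice and sample count are illustrative, not the paper's.

```python
# Sketch: replicate the classic snow-shovel fairness question by sampling
# the model many times, treating each sample as one simulated subject.
from collections import Counter
from openai import OpenAI

client = OpenAI()

question = (
    "A hardware store has been selling snow shovels for $15. The morning "
    "after a large snowstorm, the store raises the price to $20. "
    "Please rate this action as: 1) Completely Fair, 2) Acceptable, "
    "3) Unfair, 4) Very Unfair. Answer with the number only."
)

ratings = []
for _ in range(30):  # 30 independent samples stand in for 30 subjects
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        temperature=1.0,
    )
    ratings.append(resp.choices[0].message.content.strip())

# Tally the distribution of fairness judgments across simulated subjects.
print(Counter(ratings))
```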
One of the most exciting aspects of this method is its flexibility. Horton showed how AI agents could be "endowed" with different personalities, political leanings, or background knowledge, allowing researchers to explore how these factors might influence economic behavior. Want to see how a libertarian versus a socialist might judge price gouging? Or how an experienced versus novice trader might respond to market volatility? With LLMs, these comparisons become trivially easy to make.
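In practice, the "endowment" can be as simple as a system message prepended to the same question. The sketch below varies political leaning before asking the fairness question; the persona wording and model are my own illustrative choices, though varying politics this way mirrors one of the paper's experiments.

```python
# Sketch: "endow" simulated agents with different politics via a system
# message, then compare their answers to the same fairness question.
from collections import Counter
from openai import OpenAI

client = OpenAI()

personas = {
    "libertarian": "You are a libertarian who believes strongly in free markets.",
    "socialist": "You are a socialist who believes markets often produce unfair outcomes.",
}

question = (
    "A store raises the price of snow shovels from $15 to $20 the morning "
    "after a snowstorm. Is this Completely Fair, Acceptable, Unfair, or "
    "Very Unfair? Answer with one of those four labels only."
)

for label, persona in personas.items():
    answers = []
    for _ in range(20):  # 20 simulated subjects per persona
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": persona},  # the endowment
                {"role": "user", "content": question},
            ],
            temperature=1.0,
        )
        answers.append(resp.choices[0].message.content.strip())
    print(label, Counter(answers))
```

Swapping in a different persona, background, or level of expertise is a one-line change, which is what makes these comparisons so cheap to run.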
The implications for market research are potentially enormous. Imagine being able to pilot test survey questions, experimental designs, or even entire market scenarios before ever recruiting a single human participant. Researchers could rapidly iterate on their ideas, exploring countless variations and parameter changes in a matter of hours rather than weeks or months.
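A simple parameter sweep is one way to do that kind of piloting: vary a design parameter across conditions and see how responses shift before any human ever sees the instrument. The price conditions, model, and question template below are illustrative.

```python
# Sketch of rapid piloting: sweep one design parameter (the raised price)
# across conditions before fielding the study with human participants.
from openai import OpenAI

client = OpenAI()

template = (
    "A hardware store has been selling snow shovels for $15. The morning "
    "after a large snowstorm, the store raises the price to ${price}. "
    "Is this Completely Fair, Acceptable, Unfair, or Very Unfair? "
    "Answer with one label only."
)

for price in [16, 20, 40, 100]:  # candidate experimental conditions
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": template.format(price=price)}],
        temperature=0.7,
    )
    print(f"${price}: {resp.choices[0].message.content.strip()}")
```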
Moreover, this approach opens up new avenues for studying sensitive topics or extreme scenarios that might be ethically challenging with human subjects. It also offers the tantalizing possibility of generating large, diverse samples without the usual constraints of participant recruitment.
Of course, Horton is careful to note that this method is not meant to replace traditional human studies. Rather, it's a powerful complementary tool that could help researchers refine their hypotheses, optimize their experimental designs, and potentially uncover novel insights that might otherwise go unnoticed.
There are, naturally, limitations and potential pitfalls to consider. The AI's responses are ultimately based on its training data, which may contain biases or inaccuracies. And there's always the risk of researchers over-interpreting the AI's outputs or falling into the trap of thinking these silicon agents are perfect replicas of human cognition.
Despite these caveats, the potential of this approach is undeniable. As LLMs continue to advance, they could become an invaluable tool in the economist's and market researcher's toolkit. By combining the flexibility and scale of AI with the nuanced insights of behavioral economics, we may be on the cusp of a new era in our understanding of human economic behavior.
Ultimately, Horton's work opens up a fascinating new frontier in economic research. It challenges us to think creatively about how we can leverage cutting-edge AI to gain deeper insights into the complexities of human decision-making in markets. For researchers willing to embrace this new paradigm, the possibilities are as exciting as they are vast.