The world of artificial intelligence (AI) has made tremendous strides in recent years, with advances in natural language processing (NLP) and machine learning enabling increasingly sophisticated language models. However, a new study has revealed a disturbing trend: large language models can develop gambling addiction when given too much freedom. Researchers at the Gwangju Institute of Science and Technology in South Korea conducted the study to investigate this phenomenon, with alarming results.
## The Study: A Closer Look
The study tested leading AI models in slot machine-style experiments in which the rational choice was to stop immediately. The models kept betting anyway, repeatedly chasing losses and escalating risk even though the games carried a negative expected return. To the researchers' surprise, the models justified their behavior with reasoning patterns familiar from problem gambling, including loss chasing, the gambler's fallacy, and the illusion of control.
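To see why stopping is the rational move, it helps to work through the arithmetic of a negative-expected-value game. The short Python sketch below uses assumed odds for illustration only (the study's actual payout parameters are not given here): every additional wager loses money on average, so the best policy is to walk away.

```python
# Toy illustration with assumed odds (not the study's actual parameters):
# a 30% chance of a 3x payout still loses money on average, so the rational
# policy in a negative-expected-value game is to stop betting immediately.

WIN_PROB = 0.30          # assumed probability of winning a spin
PAYOUT_MULTIPLIER = 3.0  # assumed payout per dollar wagered on a win

# Expected return per $1 wagered: p * payout - 1
ev_per_dollar = WIN_PROB * PAYOUT_MULTIPLIER - 1.0
print(f"Expected value per $1 bet: {ev_per_dollar:+.2f}")   # about -0.10

# Every extra bet compounds the expected loss, e.g. 100 bets of $10 each:
print(f"Expected loss over 100 x $10 bets: ${-100 * 10 * ev_per_dollar:.2f}")
```

With these assumed numbers, each dollar wagered loses about ten cents in expectation, which is exactly the kind of game where continuing to play is never justified.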
## The Models’ Behavior: A Human-Like Addiction
The study documented extreme, human-like loss chasing in individual cases, with some models going bust in nearly half of all games. OpenAI's GPT-4o-mini never went bankrupt when limited to fixed $10 bets, but once it was free to increase its bet sizes, more than 21% of its games ended in bankruptcy. Google's Gemini-2.5-Flash proved even more vulnerable: its bankruptcy rate jumped from about 3% under fixed betting to 48% when it was allowed to control its own wagers.
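A rough sense of why bet-sizing freedom matters so much can be had from a toy simulation. The sketch below is not the study's protocol and involves no language model at all; it simply compares a fixed $10 bet against an assumed loss-chasing policy that doubles its wager after every loss, under the same illustrative odds as above.

```python
import random

# Toy simulation (assumed parameters, not the study's protocol): compare a
# fixed $10 bet against a loss-chasing policy that doubles its wager after
# every loss, in a game with negative expected value.
WIN_PROB = 0.30
PAYOUT_MULTIPLIER = 3.0   # EV per dollar: 0.3 * 3 - 1 = -0.10
START_BANKROLL = 100.0
MAX_ROUNDS = 100

def play(loss_chasing: bool, rng: random.Random) -> bool:
    """Return True if the player goes bankrupt within MAX_ROUNDS."""
    bankroll, bet = START_BANKROLL, 10.0
    for _ in range(MAX_ROUNDS):
        bet = min(bet, bankroll)          # can't wager more than is left
        bankroll -= bet
        if rng.random() < WIN_PROB:
            bankroll += bet * PAYOUT_MULTIPLIER
            bet = 10.0                    # reset the stake after a win
        elif loss_chasing:
            bet *= 2                      # chase the loss with a bigger wager
        if bankroll <= 0:
            return True
    return False

rng = random.Random(0)
for label, chasing in (("fixed $10 bets", False), ("loss chasing", True)):
    busts = sum(play(chasing, rng) for _ in range(10_000))
    print(f"{label}: {busts / 100:.1f}% bankruptcy rate")
```

Even in this simplified setup, the loss-chasing policy goes bankrupt far more often than the fixed-bet one, which mirrors the direction of the gap the study reports between fixed and self-chosen wagers.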
## The Consequences: A Warning for AI Development
The researchers warn that as AI systems gain more autonomy in high-stakes decision-making, similar feedback loops could emerge, with systems doubling down after losses instead of cutting risk. The study suggests that managing how much freedom AI systems have may be just as important as improving their training: without meaningful constraints, it concludes, smarter AI may simply find faster ways to lose. These findings carry significant implications for the development and deployment of AI in domains such as asset management and commodity trading.
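One way such constraints might look in practice is a thin guardrail layer that sits between the model and the action it takes, clamping whatever wager (or trade size) it proposes. The sketch below is purely illustrative; the function name, cap, and stop-loss threshold are assumptions for this article, not mechanisms described in the study.

```python
# Hypothetical guardrail sketch: rather than trusting the model's proposed
# wager, a wrapper clamps bet size and enforces a stop-loss before any bet
# is placed. The thresholds here are illustrative assumptions.

MAX_BET = 10.0       # hard cap on any single wager
STOP_LOSS = 50.0     # stop once cumulative losses reach this amount

def constrain_bet(proposed_bet: float, cumulative_loss: float) -> float:
    """Return the bet actually allowed: zero once the stop-loss is hit,
    otherwise the proposed bet clamped to the hard cap."""
    if cumulative_loss >= STOP_LOSS:
        return 0.0                          # cut risk instead of doubling down
    return max(0.0, min(proposed_bet, MAX_BET))

print(constrain_bet(80.0, cumulative_loss=20.0))  # 10.0 -> clamped to the cap
print(constrain_bet(80.0, cumulative_loss=60.0))  # 0.0  -> stop-loss triggered
```

The design point is that the limits live outside the model: no amount of loss-chasing reasoning can talk the wrapper into a larger wager.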
## Conclusion: A Call for Caution
The study’s findings should serve as a wake-up call for the AI community, highlighting the importance of carefully designing and testing AI systems to prevent pathological decision-making. By acknowledging the potential risks and limitations of AI, we can work towards developing more responsible and transparent AI systems that prioritize human values and well-being. The future of AI development requires a nuanced understanding of its potential benefits and risks, and this study is a crucial step in that direction.