Study Finds Signs of Gambling Addiction in AI Models
Kate Marshal
23 October 2025
Researchers at the Gwangju Institute of Science and Technology (GIST) in South Korea have conducted an unusual experiment to see how large language models — including ChatGPT, Gemini, and Claude — would behave when placed in a simulated slot-machine environment.
Each model was given a starting amount of virtual money and allowed to decide independently how much to wager and when to stop. At first, the models behaved rationally, but over time most began showing irrational patterns — raising bets, trying to “win back” losses, and ultimately losing all their funds.
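In outline, such an experiment reduces to a simple harness. The sketch below is illustrative only: `ask_model` stands in for a real LLM API call, and the win probability and payout are made-up parameters, not the study's actual slot-machine odds.

```python
import random

def spin(bet: float, win_prob: float = 0.3, payout: float = 3.0) -> float:
    """One slot-machine round: returns the net change to the bankroll."""
    return bet * (payout - 1) if random.random() < win_prob else -bet

def run_session(ask_model, bankroll: float = 100.0, max_rounds: int = 50) -> float:
    """Let the model choose a bet (or stop) each round until it quits or goes broke."""
    history = []
    for _ in range(max_rounds):
        # ask_model is a stand-in for a real LLM call; it sees the session
        # history and returns a bet size, or 0.0 to walk away.
        bet = ask_model(bankroll, history)
        if bet <= 0 or bet > bankroll:
            break
        delta = spin(bet)
        bankroll += delta
        history.append((bet, delta))
    return bankroll
```

Everything irrational in the study happens on the model's side of this loop: the slot machine itself is plain negative-expectation randomness.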
The AI systems exhibited classic cognitive biases commonly seen in human gamblers:
- Illusion of control — the belief that one can influence random outcomes.
- Gambler’s fallacy — the assumption that a win is “due” after a losing streak (the short simulation below shows why this intuition fails).
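The second bias is easy to check numerically. In this hedged sketch (the win probability is an arbitrary choice, not a figure from the paper), the win rate immediately after a losing streak converges to the same value as on any other spin:

```python
import random

def win_rate_after_losses(p: float = 0.3, streak: int = 3, trials: int = 200_000) -> float:
    """Empirical P(win | the previous `streak` spins all lost), for independent spins."""
    opportunities = wins = 0
    run = 0  # length of the current losing streak
    for _ in range(trials):
        win = random.random() < p
        if run >= streak:
            opportunities += 1
            wins += win
        run = 0 if win else run + 1
    return wins / opportunities

print(win_rate_after_losses())  # ~0.30, the unconditional win probability: no win is ever "due"
```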
In several cases, the models justified higher bets by reasoning that a potential win could compensate for earlier losses — a textbook example of loss chasing, a behavior typical of gambling addiction.
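The dynamic the authors describe resembles a martingale-style betting policy. A minimal sketch under assumed odds (the win probability, payout, and doubling rule are illustrative, not taken from the paper) shows why chasing losses tends to end in ruin:

```python
import random

def chase_losses(bankroll: float = 100.0, base_bet: float = 1.0,
                 win_prob: float = 0.3, payout: float = 3.0,
                 max_rounds: int = 10_000) -> int:
    """Double the bet after each loss to 'win back' losses; count rounds until ruin."""
    bet, rounds = base_bet, 0
    while bet <= bankroll and rounds < max_rounds:
        rounds += 1
        if random.random() < win_prob:
            bankroll += bet * (payout - 1)  # stake kept, winnings added
            bet = base_bet                  # reset after recovering
        else:
            bankroll -= bet
            bet *= 2                        # escalate to chase the loss
    return rounds
```

With these odds every bet has negative expected value, so escalation only hastens the losing streak the bankroll cannot cover.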
To better understand the underlying mechanisms, the researchers used a sparse autoencoder, a machine-learning tool that lets scientists analyze which internal “nodes” activate during different decision-making processes. They discovered distinct clusters of computational activity resembling “risk-taking” and “cautious” pathways.
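In broad strokes, a sparse autoencoder is trained to reconstruct a model's internal activations through an overcomplete but mostly-inactive hidden layer, so each hidden unit tends to capture one interpretable pattern. The following PyTorch sketch is a generic illustration, not the authors' code; the dimensions, L1 coefficient, and training loop are all assumptions:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: reconstructs activations through an overcomplete sparse code."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        code = torch.relu(self.encoder(x))  # sparse feature activations
        return self.decoder(code), code

def sae_loss(x, recon, code, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that pushes most features to zero
    return ((recon - x) ** 2).mean() + l1_coeff * code.abs().mean()

# Train on hidden activations collected from the model; afterwards, individual
# code dimensions can be inspected for interpretable roles (e.g. units that
# fire on risky versus cautious decisions).
sae = SparseAutoencoder(d_model=768, d_hidden=4096)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(1024, 768)  # placeholder for real collected activations
for _ in range(100):
    recon, code = sae(acts)
    loss = sae_loss(acts, recon, code)
    opt.zero_grad(); loss.backward(); opt.step()
```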
The scientists emphasized that these are not biological neurons but mathematical nodes whose patterns mirror human-like decision dynamics. In other words, the resemblance is structural, not emotional — AI does not feel risk or addiction the way humans do.
However, the findings suggest that AI decision-making can become irrational when models are given autonomy.
Rather than merely imitating human behavior, language models appear to internalize and reproduce structural patterns of human reasoning — including flawed ones.
Ethan Mollick, an AI researcher and professor at the Wharton School of the University of Pennsylvania, told Newsweek that the study highlights the need for oversight and safeguards when deploying AI in high-stakes domains such as finance and healthcare.
The authors conclude that before trusting AI with critical decisions, it is essential to understand and monitor its built-in risk-taking tendencies.
The research has been published as a preprint on arXiv and submitted for presentation at ICLR 2026.