Large language models (LLMs)—the technology that powers popular AI chatbots like ChatGPT and Google’s Gemini—repeatedly made irrational, high-risk betting decisions when placed in simulated gambling environments, according to the results of a recent study. Given more freedom, the models often escalated their bets until they lost everything, mimicking the behavior of human gambling addicts.
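The "escalate until you lose everything" pattern the article describes is essentially martingale betting on a game with a house edge. A toy simulation (my own illustration, not code from the study — the win probability, bankroll, and betting rule are assumptions) shows why that strategy almost always ends in ruin:

```python
import random

def martingale_run(bankroll=100.0, base_bet=1.0, p_win=0.45,
                   max_rounds=10_000, rng=None):
    """Simulate a martingale bettor on a negative-EV coin flip:
    double the stake after each loss, reset after each win.
    Returns the final bankroll (0 means ruin)."""
    rng = rng or random.Random()
    bet = base_bet
    for _ in range(max_rounds):
        if bankroll <= 0:
            break                      # busted: nothing left to bet
        bet = min(bet, bankroll)       # can't stake more than we hold
        if rng.random() < p_win:
            bankroll += bet
            bet = base_bet             # reset after a win
        else:
            bankroll -= bet
            bet *= 2                   # escalate after a loss
    return bankroll

if __name__ == "__main__":
    runs = [martingale_run(rng=random.Random(seed)) for seed in range(200)]
    ruined = sum(r <= 0 for r in runs) / len(runs)
    print(f"fraction of runs ending in ruin: {ruined:.2f}")
```

With a 45% win chance, each round has negative expected value, and doubling after losses guarantees that an ordinary losing streak eventually wipes out the bankroll — which is why escalating bets in the study reliably led to total loss.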



An LLM shouldn't be able to manage this task in the first place. LLMs are notoriously weak at arithmetic: an auto-complete/spell-check engine predicts plausible next tokens rather than actually computing, so it can't reliably track odds or expected value. Hence this article, and many similar ones.