Large language models (LLMs), the technology that powers popular AI chatbots such as ChatGPT and Google’s Gemini, repeatedly made irrational, high-risk betting decisions when placed in simulated gambling environments, according to a recent study. Given more freedom, the models often escalated their bets until they lost everything, mimicking the behavior of human gambling addicts.

  • 🔍🦘🛎@lemmy.world · 5 days ago

    Humans can’t stop anthropomorphizing AI. An easier explanation is that LLMs are reward-oriented, optimizing for successful results, and have no risk aversion or fatigue, so they can just keep doing something until they ‘win big’.
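
    The dynamic the comment describes is easy to simulate: a risk-neutral agent that escalates its stake after losses, with no loss aversion to stop it, very often goes bust before it ‘wins big’. Below is a minimal sketch in Python; the function name, starting bankroll, 0.49 win probability, and the martingale-style doubling strategy are all illustrative assumptions, not parameters from the study:

    ```python
    import random

    def escalating_bettor(bankroll=100, win_prob=0.49, max_rounds=10_000):
        """Hypothetical risk-neutral bettor: doubles its stake after every loss
        and stops only when broke or after doubling the starting bankroll."""
        stake, target, rounds = 1, bankroll * 2, 0
        while 0 < bankroll < target and rounds < max_rounds:
            stake = min(stake, bankroll)      # can't wager more than we hold
            if random.random() < win_prob:    # win: collect stake, reset escalation
                bankroll += stake
                stake = 1
            else:                             # loss: double down on the next round
                bankroll -= stake
                stake *= 2
            rounds += 1
        return bankroll

    # Across many trials, escalation frequently ends in ruin despite near-even odds.
    trials = 1_000
    busted = sum(escalating_bettor() <= 0 for _ in range(trials))
    print(f"{busted}/{trials} runs ended in ruin")
    ```

    Nothing in the sketch is specific to LLMs; it just shows that pure reward-seeking plus bet escalation, with no risk-aversion term, is enough to produce the lose-everything pattern the study reports.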