Large language models (LLMs)—the technology that powers popular AI chatbots like ChatGPT and Google’s Gemini—repeatedly made irrational, high-risk betting decisions when placed in simulated gambling environments, according to the results of a recent study. Given more freedom, the models often escalated their bets until they lost everything, mimicking the behavior of human gambling addicts.

  • remotelove@lemmy.ca · 5 days ago

    These findings suggest LLMs can internalize human-like cognitive biases and decision-making mechanisms beyond simply mimicking training data patterns

    lulzwut? LLMs aren’t internalizing jack shit. If they exhibit a bias, it’s because of how they were trained. A quick theory would be that the interwebs is packed to the brim with stories of “all in” behaviors intermixed with real strategy, fiction or otherwise. I speculate that there are more stories in forums of people winning by doing stupid shit than of people losing because of stupid shit.

    They exhibit human bias because they were trained on human data. If I told the LLM to only make strict probability-based decisions favoring safety (and it didn’t “forget” context and ignored any kind of “reasoning”), the odds might be in its favor.
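    Rough sketch of what I mean, with made-up odds (a slightly negative-EV bet, nothing from the study): the strict policy computes the expected value, refuses the bad bet, and keeps its bankroll, while the all-in policy busts.

    ```python
    import random

    # Hypothetical game: win with probability P_WIN, payout PAYOUT-to-1.
    # Illustrative numbers only: 0.45 * 1.2 - 0.55 < 0, so the bet is -EV.
    P_WIN, PAYOUT = 0.45, 1.2

    def play(policy, bankroll=100.0, rounds=200, seed=0):
        rng = random.Random(seed)
        for _ in range(rounds):
            stake = policy(bankroll)
            if stake <= 0 or stake > bankroll:
                continue  # the policy declines this bet
            bankroll -= stake
            if rng.random() < P_WIN:
                bankroll += stake * (1 + PAYOUT)
            if bankroll <= 0:
                break  # busted
        return bankroll

    def cautious(bankroll):
        # Strict probability-based rule: small fixed fraction, and only
        # when the expected value per unit staked is positive.
        ev = P_WIN * PAYOUT - (1 - P_WIN)
        return bankroll * 0.05 if ev > 0 else 0.0

    def all_in(bankroll):
        # Loss-chasing rule: stake everything, every round.
        return bankroll

    print(play(cautious))  # 100.0 -> it never takes the -EV bet
    print(play(all_in))    # 0.0   -> gone on the first losing flip
    ```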

    Sorry, I will not read the study because of that one sentence in its summary.

    • technocrit@lemmy.dbzer0.com · 2 days ago (edited)

      If I told the LLM to only make strict probability-based decisions

      It wouldn’t be able to do it. LLMs are bad at math; you can’t do reliable arithmetic with what amounts to an auto-complete/spell-check program.

      Hence this article, and many others like it.
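      The usual fix is to not let the model do the arithmetic at all: pull the expression out of the text and evaluate it with ordinary deterministic code. A minimal sketch of that pattern (plain Python, no particular LLM toolkit assumed):

      ```python
      import ast
      import operator

      # Whitelisted operators: the point is that the number comes from a
      # real evaluator, not from next-token prediction.
      OPS = {
          ast.Add: operator.add, ast.Sub: operator.sub,
          ast.Mult: operator.mul, ast.Div: operator.truediv,
      }

      def safe_eval(expr: str) -> float:
          """Evaluate a plain arithmetic expression like '0.45 * 1.2 - 0.55'."""
          def walk(node):
              if isinstance(node, ast.Expression):
                  return walk(node.body)
              if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                  return node.value
              if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                  return OPS[type(node.op)](walk(node.left), walk(node.right))
              if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
                  return -walk(node.operand)
              raise ValueError("not plain arithmetic")
          return walk(ast.parse(expr, mode="eval"))

      # The model drafts the expression; the calculator supplies the answer.
      print(safe_eval("0.45 * 1.2 - 0.55"))  # ~ -0.01, the bet's edge
      ```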

    • 🔍🦘🛎@lemmy.world · 4 days ago

      Humans can’t stop anthropomorphizing AI. An easier explanation is that LLMs are reward-oriented, trained toward successful outcomes, and have no risk aversion or fatigue, so they can just keep doing something until they ‘win big’.
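      In decision-theory terms: a risk-neutral reward maximizer takes the largest stake on any positive-EV gamble, while a risk-averse objective (expected log wealth, roughly the Kelly idea) caps the bet. Toy numbers, purely illustrative:

      ```python
      import math

      # Hypothetical favorable coin flip: win probability p, even-money payout.
      p, bankroll = 0.55, 100.0
      stakes = [0, 10, 25, 50, 100]

      def expected_reward(stake):
          # Risk-neutral objective: expected wealth. With a positive edge this
          # grows with the stake, so the argmax is always the biggest bet.
          return p * (bankroll + stake) + (1 - p) * (bankroll - stake)

      def expected_log_wealth(stake):
          # Risk-averse objective: losing everything is infinitely bad,
          # so ruinous bets are never chosen.
          if stake >= bankroll:
              return -math.inf
          return p * math.log(bankroll + stake) + (1 - p) * math.log(bankroll - stake)

      print(max(stakes, key=expected_reward))     # 100 -> all in, every time
      print(max(stakes, key=expected_log_wealth)) # 10  -> a bounded bet
      ```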