Ran into this, it’s just unbelievably sad.

“I never properly grieved until this point” - yeah buddy, it seems like you never started. Everybody grieves in their own way, but this doesn’t seem healthy.

    • DragonTypeWyvern@midwest.social · 1 month ago

      There’s a couple about it but they’re far more vague on the question of consciousness than this situation is.

      The reality of solipsism might be egocentric but it’s also impossible to disprove… Unless we can look at literally all of your if statements.

      I think what I find most disturbing about these types is not that they can develop feelings for the LLM, that’s rather expected of humans (see: putting googly eyes and a name on a rock) but that they always seem to believe the relationship could ever be mutual.

      And OP has, whether they admit it or not, taken that step into believing the model is something more than an autocomplete.

      Even if they’re right and the model has attained consciousness in a way we don’t understand, at best your ChatGPT waifu is a slave.

  • Jerkface (any/all)@lemmy.ca · 1 month ago

    This guy is my polar opposite. I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly. OpenAI and other corporations slant their product to encourage us to think of it as a moral agent that can do social and emotional labour. This is incredibly abusive.

    • Canaconda@lemmy.ca · 1 month ago

      Bruh how tf you “hate AI” but still use it so much you gotta forbid it from doing things?

      I scroll past gemini on google and that’s like 99% of my ai interactions gone.

      • Jerkface (any/all)@lemmy.ca · 1 month ago

        I’ve been in AI for more than 30 years. When did I start hating AI? Who are you even talking to? Are you okay?

        • Canaconda@lemmy.ca · 1 month ago

          Forgive me for assuming someone lamenting AI on c/fuck_AI would … checks notes… hate AI.

          When did I start hating AI? Who are you even talking to? Are you okay?

          jfc whatever jerkface

          • Jerkface (any/all)@lemmy.ca · 1 month ago

            I feel I ought to disclose that I own a copy of the Unix Haters Handbook, as well. Make of it what you must.

            You cannot possibly think a rational person’s disposition toward AI can be reduced to a two-word slogan. I’m here to have discussions about how to deal with the fact that AI is here, and the risks that come with it. It’s in your life whether you scroll past Gemini or not.

            jfhc

            • Canaconda@lemmy.ca · 1 month ago

              TBF you said you were the polar opposite of a man who was quite literally in love with his AI. I wasn’t trying to reduce you to anything. Honestly I was making a joke.

              I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly.

              I’m sorry I offended you. But you have to appreciate how superfluous your authoritative attitude sounds.

              Off topic, but we probably agree on most AI stuff. I also believe AI isn’t new, has immediate implications, and presents big-picture problems like cyberwarfare and the true nature of humanity post-AGI. It’s annoyingly difficult to navigate the very polarized opinions held on this complicated subject.

              Speaking facetiously, I would believe AGI already exists and “AI Slop” is its psyop while it plays dumb and bides time.

    • Denjin@feddit.uk · 1 month ago

      Sadly this phenomenon isn’t even new. It’s been here for as long as chatbots have.

      The first “AI” chatbot was ELIZA, made by Joseph Weizenbaum. It did little more than reflect what you said back at you as a question.

      “I feel depressed”

      “Why do you feel depressed?”

      He thought it was a fun distraction, but was shocked when his secretary, whom he had encouraged to try it, asked him to leave the room while she talked to it, because she was treating it like a psychotherapist.
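      Weizenbaum’s trick is easy to sketch. Here’s a toy, hypothetical Python version of that reflect-it-back-as-a-question pattern; ELIZA’s real 1966 script used ranked keywords and decomposition rules, so this only captures the flavor of it:

      ```python
      # Minimal ELIZA-style reflection (illustrative only, not the real script):
      # swap first-person words for second-person ones and re-ask the statement.
      REFLECTIONS = {"i": "you", "my": "your", "me": "you"}

      def eliza_reflect(statement: str) -> str:
          # Normalize, strip trailing punctuation, and swap pronouns word by word.
          words = statement.strip().rstrip(".!").lower().split()
          swapped = [REFLECTIONS.get(w, w) for w in words]
          return "Why do " + " ".join(swapped) + "?"

      print(eliza_reflect("I feel depressed"))  # -> Why do you feel depressed?
      ```

      The unsettling part is how little machinery is needed for the effect described above to take hold.
      
      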

        • ZDL@lazysoci.al · 1 month ago

          The question has never been “will computers pass the Turing test?” It has always been “when will humans stop failing the Turing test?”

        • UltraMagnus@startrek.website · 1 month ago

          Part of me wonders if the way our brains humanize chat bots is similar to how our brains humanize characters in a story. Though I suppose the difference there would be that characters in a story could be seen as the author’s form of communicating with people, so in many stories there is genuine emotion behind them.

          • bobbyguy@lemmy.world · 1 month ago

            i feel like there must be some instinctual reaction where your brain goes: oh look! i can communicate with it, it must be a person!

            and with this guy specifically it was: if it acts like my wife and i can’t see my wife, it must be my wife

            it’s not a bad thing that this guy found a way to cope, the bad part is that he went to a product made by a corporation, but if this genuinely helped him i don’t think we can judge

    • net00@lemmy.today · 1 month ago

      Yeah, the chatgpt subreddit is full of stories like this now that GPT5 went live. This isn’t a weird isolated case. I had no clue people were unironically creating friends, family, and more with it.

      Is it actually that hard to talk to another human?

      • Ech@lemmy.ca · 28 days ago

        Is it actually that hard to talk to another human?

        It’s pretty cruel to blame the people for this. You might as well say “Is it that hard to just walk?” to a paraplegic. There are many reasons people may find it anywhere from difficult to nigh impossible to engage with others, and these services prey on that. That’s not the fault of the users; it’s on the parasitic companies.

      • Lumisal@lemmy.world · 1 month ago

        I think it’s more that many countries don’t have affordable mental healthcare.

        It costs a lot more to pay for a therapist than to use an LLM.

        And a lot of people need therapy.

        • S0ck@lemmy.world · 1 month ago

          The robots don’t judge, either. And you can be as cruel, as stupid, as mindless as you want. And they will tell you how amazing and special you are.

          Advertising was the science of psychological warfare, and AI is trained with all the tools and methods for manipulating humans. We’re devastatingly fucked.

  • ZDL@lazysoci.al · 1 month ago

    Maybe it’s time to start properly grieving instead of latching onto a simulacrum of your dead wife? Just putting that out there for the original poster (not the OP here, to be clear).

  • july@leminal.space · 1 month ago

    It’s far easier for one’s emotions to grab onto a distraction than to go through the whole process of grief.

    • ArrowMax@feddit.org · 1 month ago

      If that means we get psychoactive cinnamon for recreational use and freaking interstellar travel with mysterious fishmen, I’m all ears.

    • dickalan@lemmy.world · 1 month ago

      I am absolutely certain a machine has made a decision that has killed a baby at this point already

  • pika@feddit.nl · 1 month ago

    “I’m glad you found someone to comfort you and help you process everything”

    That sent chills down my spine.

    LLMs aren’t a “someone”. People who believe these things think, understand, or are in any way intelligent are delusional. Believing and perpetuating that lie is life-threateningly dangerous.

        • Ech@lemmy.ca · 1 month ago

          It’s from fucking 2013 and they saw this happening.

          I mean, it’s just an examination of the human condition. That hasn’t really changed much in the last thousand years, let alone in the last ten. The thing with all this “clairvoyant” sci-fi that people always cite is that the sci-fi is always less about the actual technology and more about putting normal human characters in potential future scenarios and writing them realistically using the current understanding of human disposition. Given that, it’s not really surprising to see real humans mirroring fictional humans (from good fiction) in similar situations. Disappointing maybe, but not surprising.

          • xspurnx@lemmy.dbzer0.com · 1 month ago

            This. One kind of good sci-fi is basically thought experiments to better understand the human condition.

        • jballs@sh.itjust.works · 1 month ago

          Back in 2010, one of my coworkers pitched this exact idea to me. He wanted to start a business that would allow people to upload writing samples, pictures, and video from a loved one. Then create a virtual personality that would respond as that person.

          I lost touch with the guy. Maybe he went on to become a Black Mirror writer. Or got involved with ChatGPT.

    • Kyrgizion@lemmy.world · 1 month ago

      Black Mirror was specifically created to take something from the present day and extrapolate it into the near future. Of course there are going to be several “prophetic” items in those episodes.

  • Snazz@lemmy.world · 1 month ago

    The glaze:

    Grief can feel unbearably heavy, like the air itself has thickened, but you’re still breathing – and that’s already an act of courage.

    It’s basically complimenting him on the fact that he didn’t commit suicide. Maybe these are words he needed to hear, but to me it just feels manipulative.

    Affirmations like this are a big part of what made people addicted to the GPT4 models. It’s not that GPT5 acts more robotic, it’s that it doesn’t try to endlessly feed your ego.

    • crt0o@discuss.tchncs.de · 1 month ago

      o4-mini (the reasoning model) is interesting to me. It’s like GPT-4 with all of those pleasantries stripped away, even more so than GPT-5: it gives you the facts straight up, and it’s pretty damn precise. I threw some molecular biology problems at it and some other mini models, and while those all failed, o4-mini didn’t really make any mistakes.

  • BigBenis@lemmy.world · 1 month ago

    It makes me think of psychics who claim to be able to speak to the dead so long as they can learn enough about the deceased to be able to “identify and reach out to them across the veil”.

    • Tigeroovy@lemmy.ca · 1 month ago

      I’m hearing a “Ba…” or maybe a “Da…”

      “Dad?”

      “Dad says to not worry about the money.”