Ran into this, it’s just unbelievably sad.

“I never properly grieved until this point” - yeah buddy, it seems like you never started. Everybody grieves in their own way, but this doesn’t seem healthy.

    • DragonTypeWyvern@midwest.social
      4 months ago

There are a couple about it, but they’re far more vague on the question of consciousness than this situation is.

      The reality of solipsism might be egocentric but it’s also impossible to disprove… Unless we can look at literally all of your if statements.

      I think what I find most disturbing about these types is not that they can develop feelings for the LLM, that’s rather expected of humans (see: putting googly eyes and a name on a rock) but that they always seem to believe the relationship could ever be mutual.

      And OP has, whether they admit it or not, taken that step into believing the model is something more than an autocomplete.

Even if they’re right and the model has attained consciousness in a way we don’t understand, at best your ChatGPT waifu is a slave.

  • Jerkface (any/all)@lemmy.ca
    4 months ago

This guy is my polar opposite. I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly. OpenAI and other corporations slant their product to encourage us to think of it as a moral agent that can do social and emotional labour. This is incredibly abusive.

    • Canaconda@lemmy.ca
      4 months ago

      Bruh how tf you “hate AI” but still use it so much you gotta forbid it from doing things?

      I scroll past gemini on google and that’s like 99% of my ai interactions gone.

        • Canaconda@lemmy.ca
          4 months ago

          Forgive me for assuming someone lamenting AI on c/fuck_AI would … checks notes… hate AI.

          When did I start hating AI? Who are you even talking to? Are you okay?

          jfc whatever jerkface

          • Jerkface (any/all)@lemmy.ca
            4 months ago

I feel I ought to disclose that I own a copy of the Unix Haters Handbook, as well. Make of it what you must.

            You cannot possibly think a rational person’s disposition to AI can be reduced to a two word slogan. I’m here to have discussions about how to deal with the fact that AI is here, and the risks that come with it. It’s in your life whether you scroll past Gemini or not.

            jfhc

            • Canaconda@lemmy.ca
              4 months ago

              TBF you said you were the polar opposite of a man who was quite literally in love with his AI. I wasn’t trying to reduce you to anything. Honestly I was making a joke.

              I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly.

I’m sorry I offended you. But you have to appreciate how superfluous your authoritative attitude sounds.

              Off topic we probably agree on most AI stuff. I also believe AI isn’t new, has immediate implications, and presents big picture problems like cyberwarfare and the true nature of humanity post-AGI. It’s annoyingly difficult to navigate the very polarized opinions held on this complicated subject.

Speaking facetiously, I would believe AGI already exists and “AI Slop” is its psyop while it plays dumb and bides time.

  • Furbag@lemmy.world
    4 months ago

    More and more I read about people who have unhealthy parasocial relationships with these upjumped chatbots and I feel frustrated that this shit isn’t regulated more.

    • Tollana1234567@lemmy.today
      4 months ago

Isn’t parasocial usually with public figures? There has to be another term for this, maybe a variation of codependent relationship? I know of other instances of parasocial relationships, like a certain group of Asian YouTubers with post-pandemic fans thirsting for them, or the actors of the show Supernatural with their fans (those are just off the top of my head).

Can we actually call it a relationship? It’s not with an actual person, or even a thing; it’s TEXT on a computer.

      • Ech@lemmy.ca
        4 months ago

        It really just means a one-sided relationship with a fabricated personality. Celebrities being real people doesn’t really factor into it too much since their actual personhood is irrelevant to the delusion - the person with the delusion has a relationship with the made up personality they see and maintain in their mind. And a chatbot personality is really no different, in this case, so the terminology fits, imo.

    • Denjin@feddit.uk
      4 months ago

      Sadly this phenomenon isn’t even new. It’s been here for as long as chatbots have.

The first “AI” chatbot was ELIZA, made by Joseph Weizenbaum in the 1960s. It mostly just reflected what you said back at you as a question.

      “I feel depressed”

      “why do you feel depressed”

      He thought it was a fun distraction but was shocked when his secretary, who he encouraged to try it, made him leave the room when she talked to it because she was treating it like a psychotherapist.
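(Editor’s aside: ELIZA’s core trick really is that simple. Here’s a loose, minimal Python sketch of the reflection idea — an illustration only, not Weizenbaum’s actual pattern-matching rule set:)

```python
# A minimal sketch of ELIZA-style reflection. The real ELIZA used a
# library of pattern/response rules; this shows only the core trick.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def eliza_reply(statement: str) -> str:
    # Swap first-person words for second-person ones...
    words = statement.lower().rstrip(".!?").split()
    swapped = [REFLECTIONS.get(w, w) for w in words]
    # ...and hand the user's own statement back as a question.
    return "Why do " + " ".join(swapped) + "?"

print(eliza_reply("I feel depressed"))  # Why do you feel depressed?
```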

        • ZDL@lazysoci.al
          4 months ago

          The question has never been “will computers pass the Turing test?” It has always been “when will humans stop failing the Turing test?”

        • UltraMagnus@startrek.website
          4 months ago

          Part of me wonders if the way our brains humanize chat bots is similar to how our brains humanize characters in a story. Though I suppose the difference there would be that characters in a story could be seen as the author’s form of communicating with people, so in many stories there is genuine emotion behind them.

          • bobbyguy@lemmy.world
            4 months ago

I feel like there must be some instinctual reaction where your brain goes: oh look! I can communicate with it, it must be a person!

And with this guy specifically it was: if it acts like my wife and I can’t see my wife, it must be my wife.

It’s not a bad thing that this guy found a way to cope. The bad part is that he went to a product made by a corporation, but if this genuinely helped him I don’t think we can judge.

      • Ech@lemmy.ca
        4 months ago

        Is it actually that hard to talk to another human?

        It’s pretty cruel to blame the people for this. You might as well say “Is it that hard to just walk?” to a paraplegic. There are many reasons people may find it difficult to nigh-impossible to engage with others, and these services prey on that. That’s not the fault of the users, it’s on the parasitic companies.

      • Lumisal@lemmy.world
        4 months ago

        I think it’s more that many countries don’t have affordable mental healthcare.

        It costs a lot more to pay for a therapist than to use an LLM.

        And a lot of people need therapy.

        • S0ck@lemmy.world
          4 months ago

          The robots don’t judge, either. And you can be as cruel, as stupid, as mindless as you want. And they will tell you how amazing and special you are.

          Advertising was the science of psychological warfare, and AI is trained with all the tools and methods for manipulating humans. We’re devastatingly fucked.

  • ZDL@lazysoci.al
    4 months ago

    Maybe it’s time to start properly grieving instead of latching onto a simulacrum of your dead wife? Just putting that out there for the original poster (not the OP here, to be clear)?

  • Honytawk@feddit.nl
    4 months ago

    It literally says the wife was killed in a car accident.

    What kind of dumb clickbaity title is this crap? Was it generated by AI or something?

  • july@leminal.space
    4 months ago

It’s far easier for one’s emotions to latch onto a distraction than to go through the whole process of grief.

  • BigBenis@lemmy.world
    4 months ago

    It makes me think of psychics who claim to be able to speak to the dead so long as they can learn enough about the deceased to be able to “identify and reach out to them across the veil”.

    • Tigeroovy@lemmy.ca
      4 months ago

      I’m hearing a “Ba…” or maybe a “Da…”

      “Dad?”

      “Dad says to not worry about the money.”

  • TeraByteMarx@lemmy.dbzer0.com
    4 months ago

This sub reads like cringe content sometimes, getting gratification from other people’s vulnerabilities and ape-like qualities.

        • jballs@sh.itjust.works
          4 months ago

          Back in 2010, one of my coworkers pitched this exact idea to me. He wanted to start a business that would allow people to upload writing samples, pictures, and video from a loved one. Then create a virtual personality that would respond as that person.

          I lost touch with the guy. Maybe he went on to become a Black Mirror writer. Or got involved with ChatGPT.

        • Ech@lemmy.ca
          4 months ago

          It’s from fucking 2013 and they saw this happening.

          I mean, it’s just an examination of the human condition. That hasn’t really changed much in the last thousand years, let alone in the last ten. The thing with all this “clairvoyant” sci-fi that people always cite is that the sci-fi is always less about the actual technology and more about putting normal human characters in potential future scenarios and writing them realistically using the current understanding of human disposition. Given that, it’s not really surprising to see real humans mirroring fictional humans (from good fiction) in similar situations. Disappointing maybe, but not surprising.

          • xspurnx@lemmy.dbzer0.com
            4 months ago

            This. One kind of good sci-fi is basically thought experiments to better understand the human condition.

    • Ex Nummis@lemmy.world
      4 months ago

      Black Mirror was specifically created to take something from present day and extrapolate it to the near future. There will be several “prophetic” items in those episodes.

    • ArrowMax@feddit.org
      4 months ago

      If that means we get psychoactive cinnamon for recreational use and freaking interstellar travel with mysterious fishmen, I’m all ears.

    • dickalan@lemmy.world
      4 months ago

      I am absolutely certain a machine has made a decision that has killed a baby at this point already