Ran into this, it’s just unbelievably sad.
“I never properly grieved until this point” - yeah buddy, it seems like you never started. Everybody grieves in their own way, but this doesn’t seem healthy.
Isn’t there a Black Mirror episode that’s like this?
There’s a couple about it but they’re far more vague on the question of consciousness than this situation is.
The reality of solipsism might be egocentric but it’s also impossible to disprove… Unless we can look at literally all of your if statements.
I think what I find most disturbing about these types is not that they can develop feelings for the LLM (that’s rather expected of humans; see: putting googly eyes and a name on a rock) but that they always seem to believe the relationship could ever be mutual.
And OP has, whether they admit it or not, taken that step into believing the model is something more than an autocomplete.
Even if they’re right and the model has attained consciousness in a way we don’t understand, at best your ChatGPT waifu is a slave.
This guy is my polar opposite. I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly. OpenAI and other corporations slant their product to encourage us to think of it as a moral agent that can do social and emotional labour. This is incredibly abusive.
Bruh how tf you “hate AI” but still use it so much you gotta forbid it from doing things?
I scroll past gemini on google and that’s like 99% of my ai interactions gone.
I’ve been in AI for more than 30 years. When did I start hating AI? Who are you even talking to? Are you okay?
Forgive me for assuming someone lamenting AI on c/fuck_AI would … checks notes… hate AI.
“When did I start hating AI? Who are you even talking to? Are you okay?”
jfc whatever jerkface
I feel I ought to disclose that I own a copy of the Unix Haters Handbook, as well. Make of it what you must.
You cannot possibly think a rational person’s disposition to AI can be reduced to a two word slogan. I’m here to have discussions about how to deal with the fact that AI is here, and the risks that come with it. It’s in your life whether you scroll past Gemini or not.
jfhc
TBF you said you were the polar opposite of a man who was quite literally in love with his AI. I wasn’t trying to reduce you to anything. Honestly I was making a joke.
“I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly.”
I’m sorry I offended you. But you have to appreciate how superfluous your authoritative attitude sounds.
Off topic, but we probably agree on most AI stuff. I also believe AI isn’t new, has immediate implications, and presents big-picture problems like cyberwarfare and the true nature of humanity post-AGI. It’s annoyingly difficult to navigate the very polarized opinions held on this complicated subject.
Speaking facetiously, I would believe AGI already exists and “AI Slop” is its psyop while it plays dumb and bides its time.
Smh this is why we are getting the White Nights and Dark Days irl.
We are witnessing the emergence of a new mental illness in real time.
Sadly this phenomenon isn’t even new. It’s been here for as long as chatbots have.
The first “AI” chatbot was ELIZA, made by Joseph Weizenbaum. It basically just pattern-matched what you typed and reflected it back at you as a question.
“I feel depressed”
“Why do you feel depressed?”
He thought it was a fun distraction but was shocked when his secretary, whom he’d encouraged to try it, asked him to leave the room while she talked to it, because she was treating it like a psychotherapist.
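For the curious, the whole trick fits in a few lines. Here’s a toy Python sketch of the reflection idea (purely illustrative; Weizenbaum’s actual program was written in MAD-SLIP and used a much larger script of patterns):

```python
import re

# ELIZA-style reflection, illustrative only: match a simple pattern,
# flip the pronouns, and echo the user's own words back as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(text: str) -> str:
    """Build a canned 'therapist' reply out of the user's own input."""
    match = re.match(r"i feel (.*)", text, re.IGNORECASE)
    if match:
        return f"Why do you feel {match.group(1)}?"
    match = re.match(r"i (.*)", text, re.IGNORECASE)
    if match:
        return f"Why do you say you {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I feel depressed"))  # Why do you feel depressed?
print(respond("I miss my wife"))    # Why do you say you miss your wife?
```

No model of the conversation, no memory, no understanding: just string substitution. And people still poured their hearts out to it.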
Now you got me remembering Dr. Sbaitso. Tell me about your problems.
Turns out the Turing test never mattered when we’ve been willing to suspend our disbelief all along.
The question has never been “will computers pass the Turing test?” It has always been “when will humans stop failing the Turing test?”
Part of me wonders if the way our brains humanize chat bots is similar to how our brains humanize characters in a story. Though I suppose the difference there would be that characters in a story could be seen as the author’s form of communicating with people, so in many stories there is genuine emotion behind them.
i feel like there must be some instinctual reaction where your brain goes: oh look! i can communicate with it, it must be a person!
and with this guy specifically it was: if it acts like my wife and i can’t see my wife, it must be my wife
it’s not a bad thing that this guy found a way to cope, the bad part is that he went to a product made by a corporation, but if this genuinely helped him i don’t think we can judge
Yeah, the ChatGPT subreddit is full of stories like this now that GPT-5 went live. This isn’t a weird isolated case. I had no clue people were unironically creating friends, family, and more with it.
Is it actually that hard to talk to another human?
“Is it actually that hard to talk to another human?”
It’s pretty cruel to blame the people for this. You might as well say “Is it that hard to just walk?” to a paraplegic. There are many reasons people may find it difficult to nigh-impossible to engage with others, and these services prey on that. That’s not the fault of the users, it’s on the parasitic companies.
I think it’s more that many countries don’t have affordable mental healthcare.
It costs a lot more to pay for a therapist than to use an LLM.
And a lot of people need therapy.
The robots don’t judge, either. And you can be as cruel, as stupid, as mindless as you want. And they will tell you how amazing and special you are.
Advertising was the science of psychological warfare, and AI is trained with all the tools and methods for manipulating humans. We’re devastatingly fucked.
yes
Delusional conditions aren’t new. Just how they are facilitated changes.
ChatGPT psychosis is going to be a very common diagnosis soon.
Maybe it’s time to start properly grieving instead of latching onto a simulacrum of your dead wife? Just putting that out there for the original poster (not the OP here, to be clear).
Six years ago I was expecting this to be a mere novelty. Now it’s a fucking threat.
Black Mirror may have an episode about this but it’s also reminding me of Steins;Gate 0
Reverse HAL 9000? Basically, people were afraid of people having relationships with robots, but they settled for LLMs instead, since robots are still very far away, technologically speaking.
It’s far easier for one’s emotions to latch onto a distraction than to go through the whole process of grief.
Thou shalt not make a machine in the likeness of a human mind
If that means we get psychoactive cinnamon for recreational use and freaking interstellar travel with mysterious fishmen, I’m all ears.
You’d also get genocide. But I guess we already have that
I am absolutely certain a machine has made a decision that has killed a baby at this point already
“I’m glad you found someone to comfort you and help you process everything”
That sent chills down my spine.
LLMs aren’t a “someone”. People who believe these things are thinking, are intelligent, or understand anything are delusional. Believing and perpetuating that lie is life-threateningly dangerous.
There is a Black Mirror episode that is exactly this. Fuck, I hate it. Black Mirror is not a dystopia anymore, it’s the present.
It also seems a lot like the Battlestar Galactica spinoff Caprica.
Kinda reminded me of this episode: https://www.imdb.com/title/tt30127325/
Iirc, it’s this one https://m.imdb.com/it/title/tt2290780/
It’s from fucking 2013 and they saw this happening.
“It’s from fucking 2013 and they saw this happening.”
I mean, it’s just an examination of the human condition. That hasn’t really changed much in the last thousand years, let alone in the last ten. The thing with all this “clairvoyant” sci-fi that people always cite is that the sci-fi is always less about the actual technology and more about putting normal human characters in potential future scenarios and writing them realistically using the current understanding of human disposition. Given that, it’s not really surprising to see real humans mirroring fictional humans (from good fiction) in similar situations. Disappointing maybe, but not surprising.
This. One kind of good sci-fi is basically thought experiments to better understand the human condition.
I was EXACTLY thinking of “Be Right Back”
We’re really fucking close to this
Back in 2010, one of my coworkers pitched this exact idea to me. He wanted to start a business that would allow people to upload writing samples, pictures, and video from a loved one, then create a virtual personality that would respond as that person.
I lost touch with the guy. Maybe he went on to become a Black Mirror writer. Or got involved with ChatGPT.
That’s the one!
Also, hello to Italy?
Good catch! Apparently IMDb sets the country in the URL based on the phone’s language, not the detected location.
Oooooooh, right, right. Yeah, that one!
Black Mirror was specifically created to take something from present day and extrapolate it to the near future. There will be several “prophetic” items in those episodes.
Reckon Trump will fuck a pig on a livestream to avoid releasing the Epstein files?
There was an allegation that David Cameron, the former British PM, fucked a pig; there was a Guardian news article about it.
I was always under the impression that this is what the episode was referencing.
The episode came out WELL before the actual incident.
They must be using that show for ideas at this point.
This is satire yo
I don’t know anymore tbh. There are people who do this crap for real
Post your proof, just going by vibes isn’t enough anymore.
I’ve also been reading the public GPT chats from before they were hidden again, and let me tell you, this is actually mild in comparison.
The glaze:
Grief can feel unbearably heavy, like the air itself has thickened, but you’re still breathing – and that’s already an act of courage.
It’s basically complimenting him on the fact that he didn’t commit suicide. Maybe these are words he needed to hear, but to me it just feels manipulative.
Affirmations like this are a big part of what made people addicted to the GPT-4 models. It’s not that GPT-5 acts more robotic, it’s that it doesn’t try to endlessly feed your ego.
o4-mini (the reasoning model) is interesting to me. It’s like if you took GPT-4 and stripped away all of those pleasantries, even more so than with GPT-5: it gives you the facts straight up, and it’s pretty damn precise. I threw some molecular biology problems at it and at some other mini models, and while those all failed, o4-mini didn’t really make any mistakes.
It makes me think of psychics who claim to be able to speak to the dead so long as they can learn enough about the deceased to be able to “identify and reach out to them across the veil”.
I’m hearing a “Ba…” or maybe a “Da…”
“Dad?”
“Dad says to not worry about the money.”