

Ran into this, it’s just unbelievably sad.
“I never properly grieved until this point” - yeah buddy, it seems like you never started. Everybody grieves in their own way, but this doesn’t seem healthy.
Hmmmm. This gives me an idea for an actual, possible use of LLMs. It’s sort of crazy, maybe, and would definitely need to be backed up by research.
The responses would need to be vetted by a therapist, but what if you could have the LLM act as you, and have it challenge your thoughts in your own internal monologue?
Shit, that sounds so terrible and SO effective. My therapist already does a version of this and it’s like being slapped; I can only imagine how brutal I would be to me!
That would require an AI to be able to correctly distinguish maladaptive thinking from healthy thinking.
No, LLMs can’t judge anything; that’s half the reason this mess exists. The key here is to give the LLM enough information about how you talk to yourself in your mind for it to generate responses that sound the way you do in your own head.
That’s also why you have a therapist vet the responses. I can’t stress that enough. It’s not something you’d just hand to anyone and let them run with. A rough sketch of what I’m picturing is below.
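To make it concrete, here’s the sketch. It’s not a real tool: it assumes an OpenAI-style chat API, a placeholder model name, and a handful of example self-talk lines you’d pick out with your therapist.

    from openai import OpenAI

    # Sketch only: the model name, the example entries, and the whole workflow
    # are placeholders for the idea above, not a vetted product.
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A few lines of how you actually talk to yourself, chosen with a therapist.
    SELF_TALK_EXAMPLES = [
        "Of course it went badly, it always does when I'm involved.",
        "Stop making excuses, you had a whole week to deal with this.",
    ]

    SYSTEM_PROMPT = (
        "You are role-playing the user's own internal monologue. Match the tone, "
        "vocabulary, and bluntness of these examples of the user's self-talk:\n- "
        + "\n- ".join(SELF_TALK_EXAMPLES)
        + "\nChallenge the user's stated thought the way their inner voice would. "
        "Your output is reviewed by a licensed therapist before the user sees it."
    )

    def draft_inner_voice_reply(thought: str) -> str:
        """Draft a response in the user's own inner voice, for therapist review."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": thought},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        draft = draft_inner_voice_reply("I should have handled all of this better.")
        print("DRAFT FOR THERAPIST REVIEW:\n" + draft)

The plumbing isn’t the point; the point is that the draft never goes to the person directly, it goes to the therapist first.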
Seems like you could just ditch the LLM and keep the therapist at that point.
This would be great, but how do you train an LLM to act as you? You’d need to be recording your thoughts and actions, not just every bit of speech you utter and every character you type on a device.
And as far as I’m aware, we don’t know how to rapidly or efficiently train transformer-based architectures anywhere near the size needed to act like ChatGPT 3.5, let alone 4o etc., so you’d also need to train this thing for a while before you could start using it to introspect - by which point you may no longer behave the same way.