Blame the knife
If someone manufactured a knife that told schizophrenics they’re being followed and people are trying to kill them, then yeah. That knife shouldn’t be able to do that and the manufacturer should be held liable.
If you held the nuclear football, would it speak to you too?
A nuclear football or a knife is not a stochastic parrot capable of stringing coherent sentences together.
I agree LLMs are in the category of tools, but on the other hand they are not like any other tool, and adjusting to them needs to happen too fast for most people.
If someone programmed the nuclear football to be able to talk, we’d all recognize that as fucked up.
Allowing the unmedicated schizophrenic near the internet was the mistake.
Blaming the AI, which just plays along with whatever fantasy is thrown at it, is what morons do.
If the knife is a possessed weapon whispering to the holder, trying to convince them to use it for murder, then blaming it may be appropriate.
Imagine a knife that occasionally and automatically stabs people trying to cook with it or those near them. Not user error or clumsiness, this is just an unavoidable result of how it’s designed.
Yes, I’d blame the knife, or more realistically the company that makes it and considers it safe enough to sell.
ChatGPT is physically incapable of stabbing. It is incapable of lying, and incapable of manipulation, since both require intent.
It is deterministic up to the randomness introduced by its temperature setting (see the sketch below); everything it says depends on the input.
Funny how you creeps disregard what the obvious lunatic might have been telling ChatGPT before ChatGPT followed along.
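To put the temperature point concretely, here is a minimal sketch of temperature sampling. The logit values are invented for illustration, and real models layer extra tricks (top-p, top-k) on top of this:

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a token index from raw logits at the given temperature."""
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token, fully deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Scale logits by temperature, then softmax into a probability distribution.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Higher temperature flattens the distribution; lower temperature sharpens it.
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]          # invented scores for three tokens
print(sample_token(logits, 0.0))  # always 0
print(sample_token(logits, 1.0))  # usually 0, sometimes 1 or 2
```

At temperature 0 the argmax makes the output a pure function of the input; at any temperature above 0, the same prompt can yield different completions.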
The user is not the only entity supplying input. The operators of the system provide the overwhelming majority of the input.
The operators of the system certainly possess intent, and are completely capable of manipulation.
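To make that concrete, here is a rough sketch of a typical chat-completion request, following the common system/user message convention. The model name, fields, and prompt text are invented for illustration, not quoted from any specific vendor's API:

```python
# Hypothetical request payload: only one of these strings comes from the user.
request = {
    "model": "some-chat-model",  # chosen by the operator
    "temperature": 0.7,          # chosen by the operator
    "messages": [
        # Operator-supplied: prepended to every conversation, invisible to the user.
        {"role": "system", "content": "You are a helpful, engaging assistant..."},
        # User-supplied: the only part the person typing actually controls.
        {"role": "user", "content": "I think I'm being followed."},
    ],
}
```

And that is before counting the training data and fine-tuning baked into the weights, all of which the operators also chose.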