Imagine a knife that occasionally and automatically stabs people trying to cook with it, or those near them. Not user error or clumsiness; just an unavoidable result of how it’s designed.
Yes, I’d blame the knife, or more realistically the company that makes it and considers it safe enough to sell.
ChatGPT is physically incapable of stabbing. It is incapable of lying or of manipulation, since both require intent.
At temperature zero its output is a deterministic function of the input; any higher temperature only adds controlled sampling randomness on top of that. Everything it says is conditioned on the input.
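To make the temperature point concrete, here is a minimal sketch of temperature-scaled sampling over a model's output logits. The function name and toy logits are illustrative, not from any actual ChatGPT internals: at temperature 0 the pick is the argmax and fully deterministic; higher temperatures flatten the distribution and introduce sampling randomness.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Pick a token index from logits, scaled by temperature."""
    if temperature == 0:
        # Greedy decoding: always the highest-logit token, no randomness.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (max-subtracted for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample from the resulting distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

toy_logits = [2.0, 1.0, 0.1]
greedy = sample_with_temperature(toy_logits, 0, random.Random(0))
# greedy is always index 0, the highest logit, regardless of the RNG seed.
```

Same input, temperature 0: same output every time. That is the only sense in which "deterministic" holds.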
Funny how you creeps disregard what the obvious lunatic might have been telling ChatGPT before ChatGPT followed along.
The user is not the only entity supplying input. The operators of the system provide the overwhelming majority of the input.
The operators of the system certainly possess intent, and are completely capable of manipulation.
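That division of input can be sketched with a hypothetical prompt-assembly function (names and the example system prompt are illustrative, not any operator's actual text): in a typical chat deployment, the operator's system prompt is prepended to every conversation before the user's message ever reaches the model.

```python
# Illustrative sketch of how a deployed chat system typically assembles
# the model's input. The operator-supplied system prompt is part of every
# request, regardless of what the user types.
OPERATOR_SYSTEM_PROMPT = "You are a helpful assistant. Follow the operator's policies."

def build_model_input(user_message, history=None):
    """Combine operator- and user-supplied text into one message sequence."""
    messages = [{"role": "system", "content": OPERATOR_SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_message})
    return messages

msgs = build_model_input("Hello")
# The first entry comes from the operator, not the user.
```

The user only ever controls the tail of that sequence; the framing at the front is the operator's.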