• yeahiknow3@lemmings.world · 3 months ago

    AI is digital: a set of instructions that could be written out on paper. Biological brains, by contrast, are analog, and there is currently no reason to think that digital computation can produce subjectivity.

    That’s good and bad.

    It’s good, because a machine built on our current approach to artificial intelligence (putting a lot of processing power into a Rube Goldberg machine) cannot “come alive.” Such a machine is like a calculator: unmotivated and inert.

    It’s bad, because such a machine will ALWAYS do what people tell it to do, the way a printer prints whatever you send it. It will never develop normative understanding (or morality) and will never be aware of its own existence, because it is an inanimate process.


    Economically, we are already fucked. I’m not afraid of zombie AIs that will displace workers, because we are already in the thrall of that exact economic apocalypse.

    Zombie processes single-mindedly pursuing a self-destructive goal (e.g., the profit motive) already exist: they’re called corporations. Multinational corporate entities used to serve diverse stakeholders with competing interests, but that is no longer the case. Tech giants want to make money, and if doing so means ending democracy or killing children, then that is what they will do. There is no inherent check on their power, only external regulation. A corporation is a self-reinforcing process: anyone on Apple’s board of directors who undermines the profit motive gets replaced, and so on.

    Anyways, what we need is systemic change. The sort of political and philosophical realignment that protects humanity against corporate overreach will also protect us against general-AI zombies, because whether or not AGI comes to fruition, human interests are already secondary to corporate venality and corruption across the board.