… Oh dear.

  • absGeekNZ@lemmy.nz · 20 days ago

    I don’t disagree, but my point is this:

    Calling them evil is a category error: LLMs are text prediction engines. There is nothing behind the curtain; they can’t be evil, because evil implies understanding and intent.
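
    To be concrete about what “text prediction engine” means: at every step the model just scores every token in its vocabulary and the top-scoring one gets appended, with no goal beyond that. Here is a minimal sketch, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (both my choices for illustration, nothing specific to any particular chatbot):

    ```python
    # Minimal sketch of greedy next-token prediction.
    # Assumes: pip install transformers torch; "gpt2" is just a small public model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The hammer is not to blame;", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(10):
            logits = model(ids).logits[0, -1]        # a score for every vocab token
            next_id = torch.argmax(logits)           # pick the single most likely one
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append and repeat

    print(tokenizer.decode(ids[0]))
    ```

    The whole loop is “score, pick, append” — there is no step where intent could live.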

    LLMs are “evil” in the way that earthquakes are “evil”: the label is pure anthropomorphism, and it takes the focus away from where the real issues are.

    Don’t get sucked into blaming the hammer when the one swinging it is right there.