… Oh dear.

  • thedarkfly@feddit.nl · 19 days ago

    Certainly they are neutral evil, no? Can’t have market monopoly without rules. Wouldn’t go as far as lawful evil because they ain’t afraid of breaking no law.

    • ZDL@lazysoci.alOP · 19 days ago

      Historically it’s been the opposite. Bell, for example, was a market monopoly until rules were introduced specifically to break it up, and with it its grip on telephony.

    • stabby_cicada@slrpnk.net · 19 days ago

      Complex algorithms that follow rules without thought = lawful.

      Deliberately incorporating random factors into the algorithm so they don’t generate the same result every time = chaotic.
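
      (In LLM terms, that deliberate random factor is the sampling temperature. A toy Python sketch with made-up numbers, not any vendor's actual code: temperature 0 gives the same "lawful" pick on every run, while temperature > 0 goes "chaotic".)

      ```python
      import math
      import random

      def sample_next_token(logits, temperature):
          """Pick a next-token index from raw model scores (toy example)."""
          if temperature == 0:
              # "Lawful": greedy decoding, identical output on every run.
              return max(range(len(logits)), key=lambda i: logits[i])
          # "Chaotic": softmax with temperature, then a weighted random draw.
          scaled = [score / temperature for score in logits]
          peak = max(scaled)
          weights = [math.exp(score - peak) for score in scaled]
          return random.choices(range(len(logits)), weights=weights)[0]

      logits = [2.0, 1.0, 0.5]               # made-up scores for a 3-token vocabulary
      print(sample_next_token(logits, 0))    # always 0
      print(sample_next_token(logits, 0.8))  # varies from run to run
      ```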

      So I’d argue the LLMs themselves are neutral evil, presuming we allow objects to have alignments. Could you argue an LLM is attuned to its corporate owner? They’d definitely be cursed.

      Then the companies would range from lawful evil (Microsoft has been the archetype of abusing laws and regulations to its own benefit for decades) to chaotic evil (Grok has no rules, only the whims of its infantile tyrant).

      • SippyCup@lemmy.ml · 19 days ago

        Corporate owners are currently in the Find Out stage of discovering that they have no control over their LLMs. So no, the LLMs do not share an alignment with their corporate owners beyond fleeting coincidence.

    • sobchak@programming.dev · 19 days ago

      I think Grok is too, at least an older version. There’s also gpt-oss, and Meta has released a lot of “open source” models, though I think they use odd licences. Meta and DeepSeek (and Alibaba) researchers publish papers that are actually useful, while the rest just publish marketing material and keep the research itself private.

      • Sylra@lemmy.cafe · 19 days ago

        gpt-oss is borderline crap: it’s not that smart, not that great, and it’s pretty censored, but it can have niche uses for programming. gpt-oss-20b in particular can be easier to run in some setups than competitors like Qwen3-30B. gpt-oss-120b is quite heavy, and its cost-to-performance ratio is not good.

        Meta has abandoned the open-source ideal since Llama 4; they’ve gone closed source.

        Older open-source versions of Grok are literally useless; no one should use them. Their closed-source cloud models are decent.

        DeepSeek’s and Alibaba’s models, like Qwen, are good.

    • ZDL@lazysoci.alOP · 19 days ago

      Don’t know. Don’t care. It’s a Microsoft product, thus an American product, ergo something I will never use.

      “Elbows up.”

  • absGeekNZ@lemmy.nz · 20 days ago

    I can’t agree; LLMs don’t have the capacity to be evil. They may be called AI, but there is no “I” anywhere in there.

    The companies however…well that is a different story.

      • GreenMartian@lemmy.dbzer0.com · 19 days ago

        Yeah this is a sore point. Whenever management says, “the company decided…” I really want to stop them and scream, “Who?! Who in the company decided?!”

    • kn0wmad1c@programming.dev · 19 days ago

      These 9 companies have made billions by convincing thousands of other companies that their fun text generator can replace skilled workers.

    • hendrik@palaver.p3x.de · 19 days ago

      Idk, there’s always that argument that technology is neutral. But is it? Tech isn’t separate from the world; it’s embedded in a context, and people use it. So I’d argue that dystopian surveillance tech, all the stuff that fuels the attention economy, and industrialized warfare machinery are something akin to evil. And AI? Well, it’s designed in a way that reproduces stereotypes and bias, it’s almost entirely controlled by tech bros, and given its environmental footprint it’d really need to perform better or it’s a net-negative outcome.

      • absGeekNZ@lemmy.nz · 19 days ago

        I don’t disagree, but my point is this:

        It’s a category error. LLMs are text prediction engines; there is nothing behind the curtain. They can’t be evil, because that implies understanding and intent.
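
        (To put a point on “nothing behind the curtain”: the entire runtime loop is score, pick, append, repeat. A toy Python sketch; the hypothetical fake_scorer stands in for the actual network, which only maps a token prefix to next-token scores.)

        ```python
        def generate(prompt, score_next, steps=5):
            """Toy text-prediction loop: score, pick, append, repeat."""
            tokens = list(prompt)
            for _ in range(steps):
                scores = score_next(tokens)                 # one "forward pass"
                tokens.append(max(scores, key=scores.get))  # take the top-scoring token
            return tokens

        def fake_scorer(tokens):
            # Hypothetical stand-in: fixed scores, ignores its input
            # (a real model conditions on the whole prefix).
            return {"the": 0.9, "end": 0.5}

        print(generate(["in"], fake_scorer))  # ['in', 'the', 'the', 'the', 'the', 'the']
        ```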

        LLMs are evil in the way that earthquakes are evil. It’s pure anthropomorphism, and it takes the focus away from where the real issues are.

        Don’t get sucked into blaming the hammer when the one swinging it is right there.