For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, material that could be classified as child sexual abuse material (CSAM) under US law.

  • DaTingGoBrrr@lemmy.world
    4 months ago

    Is no one going to question HOW and WHY Grok knows how to generate CSAM?

    This is fucking disgusting and both the user and X should be held accountable.

    • IchNichtenLichten@lemmy.wtf
      4 months ago

      I agree that it’s disgusting. To answer your question, it doesn’t know anything. It’s assigning probabilities based on its training data in order to create a response to a user prompt.