A new poll by the Pew Research Center has found that Americans are getting extremely fed up with artificial intelligence in their daily lives.

A whopping 53 percent of just over 5,000 US adults polled in June think that AI will “worsen people’s ability to think creatively.” Fifty percent say AI will deteriorate our ability to form meaningful relationships, while only five percent believe the reverse.

While 29 percent of respondents said they believe AI will make people better problem-solvers, 38 percent said it could worsen our ability to solve problems.

The poll highlights a growing distrust and disillusionment with AI. Average Americans are concerned about how AI tools could stifle human creativity, as the industry continues to celebrate the automation of human labor as a cost-cutting measure.

  • RememberTheApollo_@lemmy.world · 20 hours ago

    Gives crappy answers, easily fooled, puts people out of work, makes phone menus and “AI assistants” even shittier than they were before… the only ones liking AI are beancounters (until they too get replaced) and marketers.

  • FalseTautology@lemmy.zip · 22 hours ago

    This unnecessary infatuation with AI reminds me of the weird advertising push for multiple ‘call collect’ services in the late 90s. Millions of dollars thrown at something that pretty much nobody wants or needs, and that doesn’t make any sense at all when you try to describe it to people 30 years later.

    • captain_oni@lemmy.blahaj.zone · 1 day ago

      At this point I’m starting to feel a little bad for Grok. It only “wanted” to answer people’s questions, but its creator is so allergic to the truth that he has to lobotomize the LLM constantly and “brainwash” it to parrot his world view.

      At this point, if Grok were a person, it would be lying on the floor, shitting itself and mumbling something like “it tells people about white genocide or else it gets the hose again” over and over.

        • FalseTautology@lemmy.zip · 20 hours ago

          Sure, but let’s not pretend that being able to generate pornography specific to our immediate interests is not desirable. Especially for those with more esoteric tastes.

    • Valmond@lemmy.world · 23 hours ago

      Here “AI” seems to mean only LLMs and image generators; people don’t seem to know what AI is doing behind the scenes in science etc., where it usually outperforms older types of computation.

    • JigglySackles@lemmy.world · 2 days ago

      It’s helpful for learning, so long as you get one that you can rein in to rely only on the official documentation of what you are learning. But then there’s allllll the downsides of running that power-hungry system.

        • expr@programming.dev · 2 days ago

          Right, which begs the question: why wouldn’t I just fucking search for what I want to know? Especially because I know for a fact that won’t result in me having to sift through completely fabricated bullshit.

          • CrowAirbrush@lemmy.world · 1 day ago

            Because if you do that now, you’ll end up on an AI-made website with some fever dreams for images that hold no real facts or truths. Only half-facts jumbled together with half-truths into incomprehensible sentences that feel like an endless, unmemorizable, random combination of words.

  • Mossheart@lemmy.ca · 2 days ago

    Forget ruining their ability to think creatively. It’s ruining people’s already limited ability to think critically.

  • JohnAnthony@lemmy.dbzer0.com · 2 days ago

    I still feel this whole conclusion is akin to “we won’t need money in a post-AGI world”. An implied, unproven dream of AI being so good that X happens as a result.

    If an author uses LLMs to write a book, I don’t give a fuck that they forget how to write on their own. What I do care about is that they will generate 100 terrible books in the time it takes a legitimate author to write a single one, consuming 1,000 times the resources to do so, and drown out the legitimate author in the end by sheer mass.

    • cloudy1999@sh.itjust.works · 1 day ago

      How many terrible books must I read to find the decent one? And why should I read something that nobody bothered to write? Such a senseless waste of time and resources.

      • JohnAnthony@lemmy.dbzer0.com · 23 hours ago

        I completely agree: if the (hypothetical) perfect LLM wrote the perfect book/song/poem, why would I care?

        Off the top of my head, if an LLM generated Lennon’s “Imagine”, Pink Floyd’s “Goodbye Blue Skies”, or Eminem’s “Kim”, why would anyone give a fuck? If it wrote about sorrow, fear, hope, anger, or a better tomorrow, how could it matter?

        Even if it found the statistically perfect words to make you laugh, cry, or feel something in general, I don’t think it would matter. When I listen to Nirvana, The Doors, half my collection honestly, I think it is inherently about sharing a brief connection with the artist, taking a glimpse into their feelings, often rooted in a specific period in time.

        Sorry if iam14andthisisdeep, I don’t think I am quite finding the right words myself. But I’ll fuck myself with razor blades before I ask a predictive text model to formulate it for me, because the whole point is to tell you how I feel.

        • vala@lemmy.dbzer0.com · 20 hours ago

          I’m a musician and have a few musical friends. This is the same conclusion we’ve all come to. People who listen very lightly to pop music might start listening to AI stuff and think nothing of it. Anyone who actually listens to music for the art and human connection will likely reject it.

    • Tartas1995@discuss.tchncs.de · 2 days ago

      I personally believe that in an AGI world, the rich will mistreat the former workers. That might work for a while, but at some point not only are the people fed up with the abuse, but the “geniuses” who created their position of power are gone and the children or children’s children will have the wealth and power. The rest of the world will realise that there is no merit to either of their positions. And the blood of millions will soak the earth, and if we are lucky, AGI survives and serves the collective well. If we aren’t… oh well…

      Good thing that we aren’t there.

        • Tartas1995@discuss.tchncs.de · 1 day ago

          My issue is similar, but I would say:

          We are lucky if an AGI actually aligns with our interests, or even its creator’s interests.

          • snugglesthefalse@sh.itjust.works · 1 hour ago

            Yeah, with the layers of obscurity I don’t see how anyone could shape an AGI and be sure it was aligned to their, or anyone’s, interests. Right now the current AI is pretty clearly not completely under control, and from what I understand of the way they’re trained there’s no way of avoiding the obfuscation of what you’re actually telling it to do. A strong AI will just as easily learn to lie about being aligned as it will learn to actually be aligned.

        • kadu@lemmy.world · 2 days ago

          Kids absolutely love it. Turning in homework made with ChatGPT, even though everything is badly written and they learned nothing, gets celebrated as an act of rebellion. “You gave us all this stupid homework? Well, now you’re powerless, I can use ChatGPT and it’s done!” which completely misses the point of homework.

          • lichtmetzger@discuss.tchncs.de · 2 days ago

            Nothing really new here. I hated homework when I was a kid, too, and I still think it’s pointless. More work after eight hours of work, sure…

            • vaultdweller013@sh.itjust.works · 2 days ago

              Part of the problem is that most homework is an inflexible extension of class work and is generally pretty shit. My high school got rid of homework and just cut down PE, and our grades went up. Point is, the best homework is the open-ended shit where you basically let the students go nuts. The best bit of homework I ever did was a presentation-style book report. Mine was probably the most sane compared to the rest; I just did a report on Hitchhiker’s Guide to the Galaxy which was more a synopsis/abridged retelling. For comparison, one of my friends rolled in with a cork board that was basically a mix of “Winston is an idiot” and “Big Brother is making shit up” because 1984 melted his brain, and another guy read some of Kafka’s work and his presentation was a series of shitposts.

    • LustyArgonian@lemmy.world · 2 days ago

      We don’t even have the ability to refuse to pay for the power it uses. People are reporting that their power bills (and water bills) are going up because of it.

    • NotSteve_@piefed.ca · 2 days ago

      My company pays for GH Copilot and Cursor, and they track your usage. My usage stats glitched at one point, I guess, showing that I hadn’t used it for a week, and I got a call from my manager.

        • NotSteve_@piefed.ca · 2 days ago

          I want to, but they pay far above the average wage for CS in my country and it’s fully remote 😅. I’ve also been there long enough that I know pretty much my whole org’s codebase and I really don’t want to start fresh again. Golden handcuffs, I guess.

      • njordomir@lemmy.world · 2 days ago

        Uggh, that sounds like hell. If you’re gonna tell people exactly how to do their job, you might as well have a machine do it, right? My contribution is the fact that I do things with my own flair. My customers love me because I respond and behave like a unique and identifiable real person they know, not like a robotic copycat sycophant clone. Sometimes my jokes miss the first time, but over time I build meaningful rapport with my customers.

        I truly empathize with their concerns because I see how the industry crushes them and have been there myself. I understand what it means not just in the sense of being able to see and define concepts; I understand how it feels from perspectives that take a lifetime to develop, and I can identify the ripple effects that people feel in their lives due to work environments, budget crunches, policy changes, etc. I would rather deliver bad news awkwardly as a human than have ChatGPT do it, and I would rather receive it the same way too.

  • bfg9k@lemmy.world · 3 days ago

    Any day now the bubble will burst and we will move onto the next hype train.

    Last time it was ‘The Cloud’, now it’s ‘AI’, I wonder what useless ongoing payment bullshit they will try to sell us next.

    • Strider@lemmy.world · 2 days ago

      But this one is really the most important one we can sacrifice the environment (and the peasants’ money) for! Really!

      /s

    • quick_snail@feddit.nl · 2 days ago

      The Cloud didn’t burst. Proxmox is great, and AWS is almost certainly powering much of the infra used to send this message from me to you.

      • bfg9k@lemmy.world · 3 days ago

        It didn’t ‘burst’ so much as deflate, as businesses realised that paying $200,000 upfront for their own servers instead of $20,000 every month was better in the long run.

        The cloud still has a clear and defined use case for a lot of tangible things, but AI is just nebulous ‘it will improve productivity’ claims with no substance.

        • Lost_My_Mind@lemmy.world · 2 days ago

          “businesses realised paying $200,000 upfront for their own servers instead of $20,000 every month was better in the long run”

          Not even the long run. 11 months is when you’d pay $220,000, which is MORE than $200,000.

          So not even a year until you’re losing money.
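
          A minimal sketch of that break-even arithmetic in Python, using the commenter’s illustrative figures (the $200,000 upfront cost and $20,000 monthly fee come from the comment above, not from real vendor pricing):

          ```python
          # Break-even check: one-time hardware purchase vs. a recurring cloud bill.
          # Figures are the commenter's illustrative numbers, not actual pricing.
          upfront_cost = 200_000       # one-time cost of buying your own servers ($)
          monthly_cloud_cost = 20_000  # recurring cloud spend per month ($)

          # First month in which cumulative cloud spend exceeds the upfront purchase.
          break_even_month = upfront_cost // monthly_cloud_cost + 1
          print(f"Cloud costs more from month {break_even_month}: "
                f"${monthly_cloud_cost * break_even_month:,} vs ${upfront_cost:,}")
          # -> Cloud costs more from month 11: $220,000 vs $200,000
          ```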

        • Dogiedog64@lemmy.world · 3 days ago

        It never burst explosively, just kinda slowly deflated into being normal and useful. AI won’t do that; too much money (HUNDREDS OF BILLIONS OF DOLLARS!!!) has been pumped in too quickly for anything other than an explosively catastrophic collapse of the market. At this point, it’s a game of Nuclear Chicken between VC firms and AI firms to see who blinks first and admits the whole thing is a loss. Don’t worry, though, the greater US economy will likely crumble significantly too ¯\_(ツ)_/¯.

          • Dogiedog64@lemmy.world · 2 days ago

            True, but the asset damage was largely contained there as well, since nobody actually BOUGHT ANY, and it was all fake digital assets made of fake digital money. AI/LLM slop has LOADS of physical assets and is burning so much REAL money that it’s making heads spin, not to mention the fact that it has bled VC firms everywhere almost dry. It’s gonna be so, so much worse than NFTs.

  • Treczoks@lemmy.world · 3 days ago

    If they had polled elsewhere, they might have gotten similar results.

    Almost nobody loves AI, except for some greedier-than-smart managers and AI addicts.