I am wondering why leftists are, in general, hostile towards AI. I am not saying this is wrong or right; I would just like someone to list/summarize the reasons.
Because those who own AI are against left-leaners’ principles.
It doesn’t solve any problem I care about. In fact, it only worsens the ones I do, like climate change and wealth inequality.
Is your premise even correct? I don’t have any data indicating that leftists are anti-AI. Do you?
Also, I’m not sure what you mean by anti-AI. Pointing out that it is snake oil is a factual claim. In other words, if AI is a label that’s mostly devoid of meaning, then attacking it is actually attacking the sleazy opportunistic salespeople, and not necessarily the underlying tech (when it is clearly defined, which is rare). Of course one could oppose both.
Which is to say, many leftists are anti-billionaire, and many billionaires are riding the AI bubble. But you already know that, so probably that’s not what you’re asking.
Can’t speak for anyone else, but here are a few reasons I avoid AI:
- AI server farms consume a stupid amount of energy. Computers need energy, I get it, but AI’s appetite for it is ridiculous.
- Most AI implementations seem to happen with little to no input from the people who will actually interact with them, and often despite their objections.
- The push to implement AI seems to be driven by the idea that companies might be able to replace some of their workforce, compounded by the fear of being left behind if they don’t do it now.
- The primary goal of any AI system seems to be collecting information about end users and building a detailed profile. That information can then be bought and sold without the consent of the person being profiled.
- Right now, these systems are really bad at what they do. I am happy to wait until most of those bugs are worked out.
To be clear, I absolutely want a robot assistant, but I do not want someone else to be in control of what it can or cannot do. If I am using it and giving it my trust, there cannot be any third parties trying to monetize that trust.
Well, I personally also avoid using AI. I just don’t trust the results, and I think using it makes you mentally lazy (besides the other bad things).
I see two reasons. Most people who are “left leaning” value both critical thinking and social fairness, and AI subverts both. First, by definition it bypasses the “figure it out” stage of learning. Second, it ignores long-established laws like copyright to train its models, and its implementation sees people lose their jobs.
More formally, it’s probably one of the purest forms of capitalism. It’s essentially a slave laborer, with no rights or ability to complain, that further concentrates wealth with the wealthy.
Counterpoint: are right leaners “pro AI”?
I feel like there’s a big distinction between tech bros and, say, MAGA diehards.
…And again, this is a very artificial polarization: talk to any ML researcher or tinkerer, and they will hate the guts of Sam Altman or Elon Musk.
It steals from copyright holders so that corporate AI can make money without giving anything back to the creators.
It uses insane amounts of water and energy to function, with demand not being throttled by these companies.
It gives misleading, misquoted, misinformed, and sometimes just flat out wrong information, but abuses its very confidence-inspiring language skills to pass it off as the correct answer. You HAVE to double check all its work.
And if you think about it, it doesn’t actually want to lick a lollipop, even if it says it does. It’s not sentient. I repeat, it’s not alive. The current design is a tool at best.
Thank you, for the sake of completeness, I’d add something like this: https://time.com/6247678/openai-chatgpt-kenya-workers/
Because they’re obviously a tool for the rich to get more control over our lives.
Yes, I’m left-leaning, and I dislike what’s currently called “ai” for a lot of the left-leaning (rational) reasons already listed. But I’m a programmer by trade, and the real reason I hate it is that it’s bullshit and a huge scam vehicle. It makes NFTs look like a carnival game. This is the most insane bubble I’ve seen in my 48 years on the planet. It’s worse than the subprime mortgage, “dot bomb”, and crypto scams combined.
It is, at best, a quasi-useful tool for writing code (though the time it has saved me is mostly offset by the time it’s been wrong and fucked up what I was doing). And this scam will eventually (probably soon) collapse and destroy our economy, and all the normies will be like “how could anybody have known!?” I can see the train coming, and CEOs, politicians, average people, and the entire press insist on partying on the tracks.
When Copilot came out, it was nothing more than an extremely fancy autocomplete.
That was peak: I’d still write the logic, the algorithms, and the important bits; it just saved time by quickly writing the line when it got it right. It all went downhill from there.
I prefer using LLMs for tech-debt stuff like starting a README and writing comments.
I do the real brain work and the end product looks nicer.
Pseudo code (baseline comments), real code, dev/test, then the LLM adds more words after. Smack it when it touches my code.
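For anyone unfamiliar with that comment-first workflow, here is a minimal, hypothetical sketch of what it might look like: the baseline comments are written first as pseudo code, the real code is filled in beneath them, and the human runs the tests before any LLM is allowed near the file for a documentation pass.

```python
# Comment-first workflow sketch (hypothetical example, not from the thread):
# 1. write the pseudo code as comments
# 2. fill in the real code under each comment
# 3. dev/test it yourself
# 4. only then let an LLM expand the docs -- and review whatever it adds

def median(values):
    """Return the median of a non-empty list of numbers."""
    # sort a copy so the caller's list is left untouched
    ordered = sorted(values)
    n = len(ordered)
    # odd length: take the middle element
    if n % 2 == 1:
        return ordered[n // 2]
    # even length: average the two middle elements
    return (ordered[n // 2 - 1] + ordered[n // 2]) / 2

# the dev/test pass, done by a human before the LLM touches anything
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
```

The point of the ordering is that the human stays responsible for the logic and the tests; the LLM only pads out prose around code that already works.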
LLMs can be a great assistant. It’s like having an intern do the tedious work while you get to just approve and manage it.
But letting it run the show is like letting the intern manage the whole development effort unsupervised.
Like the AI that was given access to prod, deleted a database and lied about it. The company also didn’t have a backup.
Those are the fuckups that become public. There will be a lot more major ones.
It’s generative and LLM AI that is the issue.
It makes garbage facsimiles of human work, and all CEOs see is spending less money so they can hoard more of it. It also puts pressure on resource usage, like water and electricity, whether that’s water for cooling the massive data centers or simply the power draw needed to compute whatever prompt.
The other main issue is that it is theft, plain and simple. Artists, actors, voice actors, musicians, creators, etc. are at risk of having their jobs stolen by a greedy company that only wants to pay for a thing once, or not at all. You can get hired once to read or be photographed/videoed, and then that data can be used to train a digital replacement without your consent. That was one of the driving forces behind the last big actors’ union protests.
For me, it’s also the erosion of critical thinking that using things like ChatGPT fosters: the idea that one doesn’t have to put any effort into writing an email, an essay, or even researching something when you can simply type in a prompt and it spits out mostly incorrect information. Even simple information. I had an AI summary tell me that 440Hz was a higher pitch than 446Hz. I wasn’t even searching for that information. So it wasted energy and my time giving me demonstrably wrong data I had no need for.
Thank you. Well, personally I do not use ChatGPT and this is one of the reasons why I asked humans this question :)
AI removes critical thinking for you.
Personally I think the environmental impact and the sycophantic responses that take away the need for one to exercise their brain are my 2 biggest gripes.
It was a fun novelty at first. I remember my first question to ChatGPT was ‘how to make hamster ice cream’, and I was genuinely surprised that it gave me a frozen fruit recipe along with a plea not to harm hamsters by turning them into ice cream.
Then it got out of hand very quickly, it got added onto absolutely everything, despite the hallucinations and false facts. The intellectual property issue is also of concern.
- Rides roughshod over copyright
- Enables mass layoffs
- Depresses salaries
- Ruins everything
- Degrades critical thinking
It’s being shoved at us.
Most new tech starts with a narrow legit use case or an enthusiast culture and gradually works toward a breakout moment where everyone wants it. Think of cars in 1900 vs 1925, or home computers in 1976 vs 1999. Also note that plenty of new tech fails to go mainstream no matter how much effort went into it. 3D TVs, turbine locomotives, non-photovoltaic solar: they tried but didn’t really make it.
Capital has decided AI will be the next thing and they want it now, so they refuse to let the process run. They can’t wait for a product that solves real faults with the current designs (inefficiency, hallucinations) or does something people actually want (nobody asked for extra fingers) before stuffing it in everything.
I like society, I like people, and I like them getting to be craftspeople of all kinds, relying on their expertise to make my own life more interesting by proxy while offering my own skills to the pot.