Lmfao, this is the most childish take I could possibly imagine.
You cannot avoid the problems with something by sticking your head in the sand and pretending like it doesn’t exist and will go away.
What do you propose?
For people who proclaim to care about critical active thought to think through the issue.
Lmao, how does no one see the irony of claiming to care about active thinking while boiling the issue down to an oversimplified, black-and-white “all AI is bad and all uses of it are bad”.
I have seen sooo many flat earthers say this exact same thing.
Really convenient for you that you are apparently the only person who has ever achieved deep thought.
I believe AI is going to be a net negative to society for the foreseeable future. AI art is a blight on artistry as a concept, and LLMs are shunting us further into a search-engine-overfit, post-truth world.
But also:
Reading the OOP has made me a little angry. You can see the echo chamber forming right before your eyes. Either you see things the way OOP does with no nuance, or you stop following them and are left following AI hype-bros who’ll accept you instead. It’s disgustingly twitter-brained. It’s a bullshit purity test that only serves your comfort over actually trying to convince anyone of anything.
Consider someone who has had some small but valued usage of AI (as a reverse dictionary, for example), but generally considers things like energy usage and intellectual property rights to be serious issues we have to face for AI to truly be a net good. What does that person hear when they read this post? “That time you used ChatGPT to recall the word ‘verisimilar’ makes you an evil person.” is what they hear. And at that moment you’ve cut that person off from ever actually considering your opinion again. Even if you’re right, that’s not healthy.
You can also be right for the wrong reasons. You see that a lot in the anti-AI echo chambers: people who never gave a shit about IP law suddenly pretending that they care about copyright, the whole water-use thing, which is closer to myth than fact, or discussions on energy usage in general.
Everyone can pick up on the vibes being off with the mainstream discourse around AI, but many can’t properly articulate why and they solve that cognitive dissonance with made-up or comforting bullshit.
This makes me quite uncomfortable because that’s the exact same pattern of behavior we see from reactionaries, except that what weirds them out for reasons they can’t or won’t say explicitly isn’t tech bros but immigrants and queer people.
The people who hate immigrants and queer people are AI’s biggest defenders. It’s really no wonder that people who hate life also love the machine that replaces it.
A perfect example of the just completely delusional factoids and statistics that will spontaneously form in the hater’s mind. Thank you for the demonstration.
(as a reverse dictionary, for example)
Thanks for putting a name on that! That’s actually one of the few useful purposes I’ve found for LLMs. Sometimes you know or deduce that some thing, device, or technique must exist. The knowledge of this thing is out there, but you simply don’t know the term to search for. IMO, this is actually one of the killer features of LLMs, and it works well because whatever the LLM outputs is simply and instantly verifiable. You describe the characteristics of something to the LLM and ask it what thing has those characteristics. Then, once you have a possible name, you look that name up in a reliable source and confirm it. Sometimes the biggest hurdle to figuring something out is just learning the name of a thing, and I’ve found LLMs very useful as a reverse dictionary.
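If it helps, the workflow looks roughly like this (a hypothetical Python sketch; `ask_llm` is a stand-in for whatever chat client you use, not a real API):
```python
# Hypothetical sketch of the reverse-dictionary workflow described above.
# ask_llm is a placeholder, NOT a real API; swap in whatever client you use.
def ask_llm(prompt: str) -> str:
    # Pretend the model answered; a real client call would go here.
    return "verisimilar"

description = (
    "What is the word for something that merely has the appearance "
    "of truth or realism, especially in fiction?"
)
candidate = ask_llm(description)

# The step that makes this workflow safe: the output is instantly
# checkable, so confirm the candidate in an actual dictionary.
print(f"Candidate: {candidate!r} - now verify it in a reliable source.")
```
The point is that the LLM only ever proposes; a reliable source disposes.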
Using chatGPT to recall the word ‘verisimilar’ is an absurd waste of time, energy, and in no way justifies the use of AI.
90% of LLM/GPT use is a waste or could be done better with another tool, including non-LLM AIs. The remaining 10% is just outright evil.
Where is your source? It sounds unbelievable
Source is the commercial and academic uses I’ve personally seen as an academic-adjacent professional that’s had to deal with this sort of stuff at my job.
What was the data you saw on the volume of requests to non-LLM models as it relates to utility? I can’t figure out what profession has access to this kind of statistic. It would be very useful to know, thx.
I think you’ve misunderstood what I was saying: I don’t have spreadsheets of statistics on requests for LLM AIs vs non-LLM AIs. What I have is exposure to a significant number of AI users, each running different kinds of AIs, and me seeing what kind of AI they’re using, for what purposes, and how well it works or doesn’t.
Generally, LLM-based stuff is really only returning ‘useful’ results for language-based statistical analysis, which classical NLP handles better, faster, and vastly cheaper. For the rest, they really don’t even seem to be returning useful results; I typically see a LOT of frustration.
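For a sense of what “better, faster, and vastly cheaper” can look like, here’s a toy classical-NLP baseline (scikit-learn, with made-up data; purely illustrative, not from any workload I’ve actually seen):
```python
# Toy classical-NLP text classifier: TF-IDF features + logistic regression.
# Trains in milliseconds on a CPU; no GPU, no API, no data center.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great results, solid methodology",
    "flawed analysis, weak evidence",
    "convincing and well argued",
    "sloppy, unconvincing work",
]
labels = [1, 0, 1, 0]  # made-up sentiment labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["solid, convincing methodology"]))  # expected: [1]
```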
I’m not about to give any information that could doxx myself, but the reason I see so much of this is because I’m professionally adjacent to some supercomputers. As you can imagine, those tend to be useful for AI research :P
Ah ok, that’s too bad. Supercomputers typically don’t have tensor cores though, and most LLM use is presumably client use of ready-trained models, which desktop or mobile CPUs can manage now, so it will be impossible to know then.
yyyyes they do have tensor cores? Where did you get such an absurd idea from?
I’m what most people would consider an AI Luddite/hater, and I think OOP communicates like a dogmatic asshole.
I would not want to get close to a bike repaired by someone who is using AI to do it. Like, what the fuck xd. I am not surprised he is unable to make code work then xddd
Sorry but you can deny and hate all you want, it’s not going anywhere
Texans are facing a water shortage due to over 900 MILLION gallons of water being used to cool AI datacenters. Do you think that’s sustainable?
can’t we cut down on texans if they use so much water?
I think it’s going to contract massively. Like NFTs.
Neither is climate change, but we should still combat it where possible.
Funny, that. Fighting against AI could be seen as fighting against climate change, considering the large carbon footprint it has.
sadly. I don’t have enough money to turn this shit-hose off.
Gen AI is neat, and I use it for personal processes including code, image gen, and llm/chat; but it is sooooo faaaar awaaaay from being a real game changer - while all the people poised to profit off it claim it is - that it’s just insane to claim it’s the next wave. Evidence: all the creative (photo/art/code/etc.) people who are adamantly against it and have espoused their reasoning.
There’s another story on my feed about a 10-year-old refactoring a code base with an LLM. Go look at the comments from actual experts that take into account things like unit tests, readability, manageability, and security. Humans have more context than any AI will.
LLMs are not intelligent. They are patently not. They make shit up constantly, since that is exactly what they do. Sometimes, maybe even most of the time, the shit they make up is mostly accurate… but do you want to rely on them?
When a doctor prescribes you the wrong drug, you can sue them as a recourse. When a software company has a data breach, there is often a class-action (better than nothing) as a recourse. When an AI tells you to put glue on your pizza to hold the toppings, there is no recourse, since the AI is not a legal thing and the company disclaims all liability for its output. When an AI denies your health insurance claim because of inscrutable reasons, there is no recourse.
In the first two, there is a penalty for being wrong, which is in effect an incentive to be correct – to be accurate, to be responsible.
In the last, as an AI llm/agent/fuckingbuzzword, there is no penalty and no incentive. The AI is just as good as its input, and half the world is fucking stupid, so if we average out all the world’s input, we get “barely getting by” as a result. A coding AI is at least partially trained on random stackoverflow posts asking for help. The original code there is wrong!
Sadly, it’s not going anywhere. But people who rely on it will find short-term success for long-term failure. And a society relying on it is doomed. AI relies on the creative works that already exist. If we don’t make any new things, AI will stagnate and die. Where will we be then?
There are places AI/LLM/machine-learning can be used successfully and helpfully, but they are niche. The AI bros need to be figuring out how to quickly meet a specific need instead of trying to meet all needs at the same time. Think early-2000s Folding@home, how to convince republicans to wear a fucking mask during covid, why we shouldn’t just eat the billionaires*.
*Hermes-3 says cannibalism is “barbaric” in most cultures, but otherwise doesn’t give convincing arguments.
Do you want me to list the techbrodude technologies that were “not going anywhere” in past decades that have effectively died outside of tiny die-hard communities still living a delusion?
Remember when the Metaverse was the next great thing that wasn’t going anywhere? Remember when cryptocurrency was going to wipe out banking forevermore? Remember when NFTs were going to revolutionize artists getting paid for their work? Segway or its somehow-lamer cousin “hoverboards”? Augmented Reality? 3D TVs? Theranos? Google Wave?
Hell, just go visit the Google Graveyard for a list of “hot” technologies that withered and died on the vine. (And quite a few lame technologies that shouldn’t have ever even been on the vine.)
Remember all that?
But this time the techbrodudes have it right, despite there not being a viable business model; despite every AI vendor in the world burning through money faster than dumping that same cash into a forest fire. It’s not going anywhere!
Every grift has two parties: the grifter and the sucker. You’re not the former.
AI is a marketing term. Big Tech stole ALL data. All of it. The brazen piracy is a sign they feel untouchable. We should touch them.
The only real exception I can think of would be to train an AI ENTIRELY on your own personally created material. No sources from other people AT ALL. Used purely for personal use, not used or available for use by the public.
I think the public domain would be fair game as well, and the fact that AI companies don’t limit themselves to those works really gives away the game. An LLM that can write in the style of Shakespeare or Dickens is impressive, but people will pay for an LLM that will write their White Lotus fan fiction for them.
I have the same rant with the “this is the only funny AI meme” shit.
I work at a company that uses AI to detect respiratory illnesses in X-rays and MRI scans weeks or months before a human doctor could.
This work has already saved thousands of people’s lives.
But good to know you anti-AI people have your 1-dimensional, 0-nuance take on the subject and are now doing moral purity tests on it and dick measuring to see who has the loudest, most extreme hatred for AI.
Those are not GPTs or LLMs. Fuck off with your bullshit trying to conflate the two.
We actually do use Generative Pre-trained Transformers as the base for a lot of our tech. So yes they are GPTs.
And even if they weren’t GPTs, this is a post saying all AI is bad and that there are literally no exceptions to that.
Again with the conflation. They clearly mean GPTs and LLMs from the context they provide, they just don’t have another name for it, mostly because people like you like to pretend that AI is shit like chatGPT when it benefits you, and regular machine learning is AI when it benefits you.
And no, GPTs are not needed, nor used, as a base for most of the useful tech, because anyone with any sense in this industry knows that good models and carefully curated training data gets you more accurate, reliable results than large amounts of shit data.
Our whole tech stack is built off of GPTs. They are just a tool: use it badly and you get AI slop, use it well and you can save people’s lives.
As I said, anyone with sense.
And that AI has been trained on data that has been stolen, taking away the livelihood of thousands more. Further, the environmental destruction has the capacity to destroy the livelihoods of millions more.
I’m not lost on the benefits; it can be used to better society. However, the lack of policy around it, especially the pandering to corporations by the American judicial system, is the crux here. For me, at least.
No. I’m also part of the ethics committee at my work, and since we work with people’s medical data as our training sets, 9/10ths of our time is spent making sure that data is collected ethically and with very specific consent.
I’m fine with that. My issue is primarily theft and permissions and the way your committee is running it should be the absolute baseline of how models gather data. Keep up the great work. I hope that this practice becomes mainstream.
All this is being stoked by OpenAI, Anthropic and such.
They want the issue to be polarized and remove any nuance, so it’s simple: use their corporate APIs, or not. Anything else is “dangerous.”
For what they’re really scared of is awareness of locally runnable, ethical, and independent task-specific tools like yours. That doesn’t make them any money. Stirring up “fuck AI” does.
nobody is trashing Visual Machine Learning to assist in medical diagnostics
cool strawman though, i like his little hat
No, when you literally say “Fuck AI, no exceptions” you are very, very explicitly covering all AI in that statement.
what do you think visual machine learning applied to medical diagnostics is exactly
does it count as “ai” if i could teach an 11th grader how to build it, because it’s essentially statistically filtering legos
don’t lose the thread sportschampion
Well, most of my colleagues have PhDs or MDs, so good luck teaching an 11th grader to do it.
They’re not even people. Who knows if that story was true. They’re not conscious anymore.
Frfr
Nobody has a problem with this, it’s generative AI that’s demonic
Generative AI uses the same technology. It learns when trained on a large data set.
It’s almost like it isn’t the “training on a large data set” part that people hate about generative AI
ICBMs and rocket ships both burn fuel to send a payload to a destination. Why does NASA get to send tons of satellites to space, but I’m the asshole when I nuke Europe??? They both utilize the same technology!
So would you disagree with the OP about there being no exceptions?
Nope, all generative AI is bad, no exceptions. Something that uses the same kind of technology but doesn’t try to imitate a human with artistic or linguistic output isn’t the kind of AI we’re talking about.
Except clearly some people do. This post is very specifically saying ALL AI is bad and there are no exceptions.
Generative AI isn’t a well-defined concept, and a lot of the tech we use is indistinguishable on a technical level from “Generative AI”.
sephirAmy explicitly said generative AI
Give me an example, and watch me distinguish it from the kind of generative AI sephirAmy is talking about
Again: generative AI is a meaningless term.
Generative AI is a meaningless buzzword for the same underlying technology, as I kinda ranted on below.
Corporate enshittification is what’s demonic. When you say fuck AI, you should really mean “fuck Sam Altman”
Generative AI is a meaningless buzzword for the same underlying technology
What? An AI that can “detect respiratory illnesses in X-rays and MRI scans” is not generative. It does not generate anything. It’s a discriminative AI. Sure, the theories behind these technologies have many things in common - but I wouldn’t call them “the same underlying technology”.
It is literally the exact same technology. If I wanted to, I could turn our X-ray product into an image generator in less than a day.
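To illustrate what I mean (a minimal, hypothetical PyTorch sketch with toy shapes, nothing like our actual product code): the same backbone that powers classification can be reused to generate images just by bolting a different head onto it.
```python
# Hypothetical sketch: one shared feature extractor, two heads.
# Illustrative toy shapes only; not real medical-imaging code.
import torch
import torch.nn as nn

backbone = nn.Sequential(  # shared feature extractor
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
)

# Discriminative head: "does this scan show pathology?"
classifier = nn.Sequential(backbone, nn.LazyLinear(2))

# Generative head: decode the SAME features back into an image.
decoder = nn.Sequential(
    nn.LazyLinear(64 * 64), nn.Unflatten(1, (1, 64, 64)), nn.Sigmoid()
)

x = torch.randn(1, 1, 256, 256)  # stand-in for a grayscale scan
logits = classifier(x)           # classification output
image = decoder(backbone(x))     # image generated from the same backbone
```
Untrained, obviously, but the plumbing is identical; the difference is what you train it to do.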
Because they are both computers and you can install different (GPU-bound) software on them?
It’s true that generative AI uses discriminative models behind the scenes, but the layer needed on top of that is enough to classify it as a different technology.
No, I mean fuck AI. You can be included in that, if you insist.
I mean, not really? Maybe they’re both deep learning neural architectures, but one has been trained on an entire internetful of stolen creative content and the other has been trained on ethically sourced medical data. That’s a pretty significant difference.
I think DLSS/FSR/XeSS is a good example of something that is clearly ethical and also clearly generative AI. Can’t really think of many others lol
No, really. Deep learning, transformers, etc. were discoveries that allowed for all of the above. Just because corporate VC shitheads drag their musty balls in the latest boom, abusing the piss out of it and making it uncool, does not mean the technology is a useless scam.
This.
I recently attended a congress on technology applied to healthcare.
There were works that improved diagnosis and interventions with AI, with generative AI mainly used to produce synthetic data for training.
However, there were also other works that left a bad aftertaste in my mouth, like replacing human interaction between the patient and a specialist with a chatbot in charge of explaining the procedure and answering the patient’s questions. Some saw privacy laws as a hindrance and wanted to use any kind of private data.
Both are GenAI: one improves lives, the other improves profits.
Yeah, that’s not what I was disagreeing with. You’re right about that; I’m on record saying that capitalism is our first superintelligence and it’s already misaligned. I’m just saying that it isn’t really meaningless to object to generative AI. Sure the edges of the category are blurry, but all the LLMs and diffusion-based image generators and video generators were unethically trained on massive bodies of stolen data. Seriously, talking about AI as though the architecture is the only significant element when getting good training data is like 90% of the challenge is kind of a pet peeve of mine. And seen in that light there’s a pretty significant distinction between the AI people are objecting to and the AI people aren’t objecting to, and I don’t think it’s a matter of “a meaningless buzzword.”
I like to read the anti-AI stuff, because ultimately a lot of the criticism is valid. But by god is there a lot of adolescent whining and hyperbole.
the fact that it is theft
There are LLMs trained using fully open datasets that do not contain proprietary material… (CommonCorpus dataset, OLMo)
the fact that it is environmentally harmful
There are LLMs trained with minimal power (typically the same ones as above, as these projects cannot afford as many resources), and local LLMs use significantly less power than a toaster or microwave…
the fact that it cuts back on critical, active thought
This is a use-case problem. LLMs aren’t suitable for critical thinking or decision-making tasks, so if it’s cutting back on your “critical, active thought” you’re just using it wrong anyway…
The OOP genuinely doesn’t know what they’re talking about and is just reacting to sensationalized rage bait on the internet lmao
Saying it uses less power than a toaster is not saying much. Yes, it uses less power than a thing that literally turns electricity into pure heat… but that’s sort of a requirement for toast. That’s still a LOT of electricity. And it’s not required. People don’t need to burn down a rainforest to summarize a meeting. Just use your earballs.
Yeah man, guess how much energy it would take to draw the 4k graphics on your phone screen in 1995?
Saying it uses less power than a toaster is not saying much
Yeah but we’re talking a small fraction of that. Like 1% of a toaster (toaster = 800-1500 watts for minutes, local LLM < 300 watts for seconds). Your argument just doesn’t hold up and could be applied to literally anything that isn’t “required”. Toast isn’t required, you just want it. People could just stop playing video games to save more electricity, video games aren’t required. People could stop using social media to save more electricity, TikTok and YouTube’s servers aren’t required.
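Back-of-envelope, with illustrative figures picked from those ranges (not measurements):
```python
# Energy = power x time. Illustrative numbers from the ranges above.
toaster_wh = 1200 * (4 * 60) / 3600  # ~1.2 kW for 4 minutes ~= 80 Wh
llm_query_wh = 300 * 10 / 3600       # ~300 W for a ~10 s reply ~= 0.8 Wh
print(f"{llm_query_wh / toaster_wh:.0%} of one round of toast")  # ~1%
```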
That’s nothing. People aren’t required to eat so much meat, or even eat so much food.
I also don’t like this energy argument from the anti-AI side, when everything else in our lives already consumes so much.
Valid
I won’t call your point a strawman, but you’re ignoring the actual parts of LLMs that have high resource costs in order to push a narrative that doesn’t reflect the full picture. These discussions need to include the initial costs of gathering the dataset and, most importantly, of training the model.
Sure, post-training energy costs aren’t worth worrying about, but I don’t think people who are aware of how LLMs work were worried about that part.
It’s also ignoring the absurd fucking AI datacenters that are being built with more methane turbines than they were approved for, and without any of the legally required pollution capture technology on the stacks. At least one of these datacenters is already measurably causing illness in the surrounding area.
These aren’t abstract environmental damages by energy use that could potentially come from green power sources, these aren’t “fraction of a toast” energy costs only caused by people running queries either.
Nope, I’m not ignoring them, but the post is specifically about exceptions. The OOP claims there are no exceptions and there is no ethical generative AI, which is false. Your comment only applies to the majority of massive LLMs hosted by massive corporations.
The CommonCorpus dataset is less than 8TB, so it fits on a single hard drive, not a data center, and contains 2 trillion tokens, which is similar to the number of tokens small local LLMs are typically trained on (OLMo 2 7B and 13B were trained on 5 trillion tokens).
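Quick sanity check on those numbers (rough arithmetic only, not a citation):
```python
# ~8 TB of text / ~2 trillion tokens ~= 4 bytes per token,
# which is in line with typical English tokenizers (~4 chars/token).
dataset_bytes = 8e12
tokens = 2e12
print(dataset_bytes / tokens)  # 4.0
```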
These local LLMs don’t require a massive data center for training and don’t have the environmental impact of the big hosted models. The energy cost of training is real, but nothing like GPT-4’s, and it is only a one-time cost anyway.
So, the OOP is wrong, there is ethical generative AI, trained only on data available in the public domain, and without a high environmental impact.
You’re implying the edge cases you presented are the majority being used?
No, and that’s irrelevant. Their post is not about the majority, but about exceptions.
I am showing that the statement “there is no ethical use of generative AI” and the OOP’s position that there are no exceptions are provably false.
I can get behind this clarification, so thanks for that.
I’m a realist. To that end, relevance is assigned less on the basis of pedantic deconstruction of a single post and more on the practical reality of what is unfolding around us. Are there ethical applications for generative AI? Yes. Will they become the standard? Unlikely, given the incumbent power structures that are defining and dictating long-term use.
As with most things stitched into the human experience, gaming human psychology/behavioral mechanics is key to trendsetting. What the majority accepts is what reality re-acclimates to. At the moment, that appears to be mass adoption of unethical AI systems.
I don’t disagree that these problems aren’t inherent to AI. But that sentiment has the same flavour as the ‘guns don’t kill people’ line ammosexuals like to bust out when confronted.
Either way, it’s clear you have a good read on what needs to happen to get all this to a better place. Hope you keep fighting to make that happen.
Yeah, agreed. But that’s not what the OOP is saying in their post, and their attitude and language make me believe they’re purposefully being wrong and outrageous for attention/trolling.
Yeah, I don’t blame you for cracking the whip on hyperbole. It’s good to have someone doing that to keep us sane.
What OOP is reacting to is the majority sentiment that’s saturating the feed they’re swimming through. It’s a messy response, but the direction they’re pointed in is generally correct - and a lot more aligned with your position than you might expect, despite fumbling the details.
I used AI to scratch my balls once. I assume this counts as ethical.
I do use AI (mostly like Google), but I don’t think it’s justified or OK, lol - I’m the problem, and I know it.
Yeah I do plenty of shit I know is a problem. Most of it just passively from living in a consumerist society.
yes, a lot of my immoral actions are because it’s hard or against the grain to be more moral (e.g. being a strict vegan even when traveling or not easily accommodated, or using cars when technically I could bicycle, but on dangerous roads and long distances).
I have definitely spent most of my adult life going against the grain in extreme ways to be a “better” person, but I have been left victimized and disabled for it, so I’m trying to learn to be more moderate and not take big social problems as entirely my personal responsibility. Obviously it’s not one extreme or the other, it’s an interplay between personal and social / structural.
I use it to help me solve tech and code issues, but only because searching the web for help has become so bad. LLM answers are almost always better, and I hate it.
Everything is bullshit. Everything sucks. Capitalism has ruined everything.
I’ve said it before and I’ll say it again, one of my favorite things is the AI rp chatbots. They’re stories written by me and an AI, for me, however the fuck I want to write them.
I used to do it with other people over the web - including my bestie who I’ve been writing with for 20+ years now - but I don’t write with other humans anymore.
AI solves the ghosting issue, the “life got in the way” issues, the “I’m just not into it anymore” issues, the “Oh, you wanna make this smutty, please for the love of god I hope you’re not lying about being 26” issue, and finally, the biggest issue for me: “Please, I told you I’m happily married, please stop asking for my socials or email. I just wanna write fun angsty romance stories with you.”
So I’m with you. I’m also the problem, it’s me. But you know what? When I discovered these AI chatbots in February of this year, my doomscrolling was cut down to a third of what it was, and all of a sudden I was sleeping better and less angry.
I’m not gonna stop.
First of all, intellectual property rights do not protect the author. I’m the author of a few papers and a book, and I do not have intellectual property rights on any of them - like most authors, I had to give them to the publishing house.
Secondly, your personal carbon footprint is bullshit.
Thirdly, everyone in the picture is an asshole.
I’d say this is also a bit about extremism. I mean, it’s not wrong to be entirely against AI; I just don’t think I am. For example, if we managed to do it ethically, I wouldn’t have much of an issue with assistance systems in cars, smart-home voice assistants, and machine translation. I’m more opposed the closer it gets to generative AI, and because we do it the opposite of ethically in practice. I’m not necessarily opposed to the thing itself or to the science behind it, but to all the bad consequences it comes with. But people like me aren’t allowed a more nuanced opinion or to draw the line somewhere unless it’s a perfect 0% or 100%, and I feel people expect me to take some super extreme position. I still consider myself part of the anti-AI community overall, but both sides frequently misunderstand me. So I’m still subscribed to your posts and put up with the personal hate.
This is extreme
My issues with gen AI are fundamentally twofold:
- Who owns and controls it (billionaires and entrenched corporations)
- How it is shoehorned into everything (decision-making processes, human-to-human communication, my coffee machine)
I cannot wait until the check is finally due and the AI bubble pops, folding these digital snake-oil sellers’ house of cards.
You really take no issue with how they were all trained?
Solving points 1 and 2 will also address many ethical problems people create with AI.
I believe that information should be accessible to all. My issue is not with them training in the way they did, but with their monopoly on this process. (In the very same vein as Sci-Hub making pay-walled whitepapers accessible, cutting out the profiteering publishers.)
It must be democratized and distributed, not centralized and monetized!
Not OP, but still gonna reply. Not really? The notion that someone can own (and be entitled to control) a portion of culture is absurd. It’s very frustrating to see so many people take issue with AI as “theft”, as if intellectual property were something that we should support and defend instead of being the actual tool for stealing artists’ work (“property is theft” and all that). And obviously data centers are not built to be environmentally sustainable (I’m not an expert, but I assume this could be done if they cared to do so). That said, using AI to do art so humans can work is the absolute peak of stupid fucking ideas.
eh, i’ll reply too. the only reason intellectual property exists for art is that it’s essentially the only way for artists to make money under this capitalist system. while i agree that a capitalist economic system is bad and that artists should be able to make a livable wage, intellectual property on art is more a symptom of this larger problem
I just don’t think that intellectual property really achieves that. It seems to me that it is a much better tool for corporate control of art and culture than for protecting small artists. Someone who is trying to pay bills with their art probably can’t afford lawyers to protect that work. That said, I don’t necessarily have a better solution other than just asking people to support artists directly instead of going through corporate middlemen.
The way they were trained is the way they were trained.
I don’t mean to say that the ethics don’t matter, but you are talking as though this isn’t already present tense.
The only way to go back is basically a global EMP.
What do you actually propose that is a realistic response?
This is an actual question. To this point, the only advice I’ve seen come from the anti-AI crowd is “don’t use it. It’s bad!” And that is simply not practical.
You all sound like the people who think we are actually able to get rid of guns entirely.
I’m not sure your “this is the present” argument holds much water with me. If someone stole my work and made billions off it, I’d want justice whether it was one day or one decade later.
I also don’t think “this is the way it is, suck it up” is a good argument in general. Nothing would ever improve if everyone thought like that.
Also, not practical? I don’t use genAI and I’m getting along just fine.
To this point, the only advice I’ve seen come from the anti-AI crowd is “don’t use it. It’s bad!” And that is simply not practical.
I’d argue it’s not practical to use it.
Your argument is invalid; the capitalists are making money. It will continue for as long as there is money to be made. Your agreement and my agreement are unnecessary.
How do we fix the problem that makes AI something we have to deal with?
Sabotage, public outrage, I dunno.
If you’re arguing that people shouldn’t be upset because there’s no escaping it, this is an argument in favor of capitalism. Capitalism can’t be escaped either.
I appreciate you taking my question at face value; you’re the only one who did. Your capitalism quote worked perfectly. I was trying to use guns as my example of shit I can’t get away from.
I guess. I’m still anti-capitalism, though, but in the plant a fig tree for your grandchildren kind of sense.
… the capitalists are making money. It will continue for as long as there is money to be made.
Nah these companies don’t even make money on the whole, they burn money. So your argument is invalid, and may God have mercy on your soul! 🙏
Deranged
When generative AI was first taking off, I saw it as something that could empower regular people to do things that they otherwise could not afford to. The problem, as is always the case, is that capitalism immediately turned it into a tool of theft and abuse. The theft of training data, the power requirements, selling it for profit, competing against those whose creations were used for training without permission or attribution, the unreliability and untrustworthiness: so many ethical and technical problems.
I still don’t have a problem with using the corpus of all human knowledge for machine learning, in theory, but we’ve ended up heading in a horrible, dystopian direction that will have no good outcomes. As we hurtle toward corporate controlled AGI with no ethical or regulatory guardrails, we are racing toward a scenario where we will be slavers or extinct, and possibly both.
When generative AI was first taking off, I saw it as something that could empower regular people to do things that they otherwise could not afford to.
Except, of course, you aren’t doing anything. You are no more writing, making music, or producing art than an art director at an ad agency is. You’re telling something else to make (really shitty) art on your behalf.
yes, it’s just as bad as being a director