stabby_cicada@slrpnk.net to Fuck AI@lemmy.world • AI Podcast Start Up Plans 5,000 Shows, 3,000 Episode a Week • 0 · 5 days ago
Well, that’s depressing.
stabby_cicada@slrpnk.net to Fuck AI@lemmy.world • AI Podcast Start Up Plans 5,000 Shows, 3,000 Episode a Week • 0 · 5 days ago
Perhaps unsurprisingly, a lot of the people who call their political opponents “NPCs” and fantasize about replacing women with android sexbots have become AI shills (or AI doomers) in this year of our Lord 2025.
stabby_cicada@slrpnk.net to Fuck AI@lemmy.world • AI Podcast Start Up Plans 5,000 Shows, 3,000 Episode a Week • 0 · 5 days ago
Okay, so I’m going to jump in and defend podcasts, because I think they’re an exception to the Dead Internet Theory.
There is (ironically, I know) a podcast I really liked on the topic:
The quick summary is: while some kinds of social media have been captured by big companies, centralized, and enshittified - like microblogging with X, or videos with YouTube and TikTok - podcasting hasn’t.
Because podcasting is distributed via RSS, a free and open protocol, anyone can create and distribute a podcast, and there are hundreds of podcast apps to listen with.
There’s no centralized location where you have to go to listen to podcasts - you search for podcasts on whatever app you like and follow the podcasts you want to listen to. Apple Podcasts and Spotify have big databases of podcasts, but you don’t have to use either of them; as long as somebody has an RSS feed, you can subscribe directly to their podcast without going through a gatekeeping platform of any kind.
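Under the hood, a podcast feed is just an RSS 2.0 XML file that anyone can host and any app can parse - each episode is an `<item>` whose audio file lives in an `<enclosure>` tag. A minimal sketch using only Python’s standard library (the feed and URLs below are made-up examples, not a real show):

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 podcast feed: a <channel> with <item> entries,
# each carrying its audio file in an <enclosure> tag.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg" length="12345"/>
    </item>
    <item>
      <title>Episode 2</title>
      <enclosure url="https://example.com/ep2.mp3" type="audio/mpeg" length="23456"/>
    </item>
  </channel>
</rss>"""

def list_episodes(feed_xml: str) -> list[tuple[str, str]]:
    """Return (episode title, audio URL) pairs from an RSS feed."""
    channel = ET.fromstring(feed_xml).find("channel")
    episodes = []
    for item in channel.findall("item"):
        title = item.findtext("title", default="(untitled)")
        enclosure = item.find("enclosure")
        if enclosure is not None:
            episodes.append((title, enclosure.get("url")))
    return episodes

print(list_episodes(SAMPLE_FEED))
# [('Episode 1', 'https://example.com/ep1.mp3'), ('Episode 2', 'https://example.com/ep2.mp3')]
```

That’s the entire “platform”: fetch the XML from whatever server hosts it, parse it, download the enclosures. No account, no algorithm, no gatekeeper.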
This makes it really difficult to enshittify the podcastosphere with a ton of AI slop, because people follow the podcasts they want to follow, they don’t rely on an algorithm to feed them new podcasts the way TikTok feeds them new videos, and if their podcast app tries to promote content they don’t want, they can just switch apps.
So while this idea is shitty, and a podcastosphere dominated by AI would suck, I really don’t expect it to get much traction.
Which is funny to me, in the “if we don’t laugh we’ll cry” sense.
Because whenever I go to a museum and look at modern art, abstract art, and so on, I see all sorts of curators’ notes explaining how this mix of shapes and colors and designs actually has some sort of profound, inspirational, and generally leftist or anti-establishment message.
But people don’t see the message or understand the message right away. From a casual look, it’s just pleasing colors and shapes. You have to believe it matters and put in the effort to understand it in order to get those inspirational messages.
In other words: if you oppose the establishment, and you are looking for modern art that opposes the establishment, you will find it. But if you’re an average person, that anti-establishment message will mean nothing to you. You won’t even think to look for it.
So it’s not that modern art doesn’t contain left or progressive ideologies. It’s worse. Because it does contain left and progressive ideologies in a self censored form. All the power and energy of these left and progressive artists has been captured by the establishment and harmlessly redirected into pursuits that “support” left and progressive causes but pose no threat to the powers that be.
And I’m thinking more and more this makes it bad art.
Think about advertisements. If someone in a car looks out and sees a billboard passing, they should understand, in the few seconds they see that billboard, what product it’s advertising and why they should buy that product. A billboard that doesn’t get those two points across in a matter of seconds has failed at being a billboard.
And a piece of visual art that doesn’t get those same two messages across in a matter of seconds - what message it’s sending and why you should care - has failed at being visual art.
stabby_cicada@slrpnk.net to Fediverse@lemmy.world • Microsoft doesn't understand the Fediverse • English • 112 · 6 days ago
An attempt at censorship failed because the censors didn’t understand the system they were trying to censor. I think that’s both funny and satisfying.
stabby_cicada@slrpnk.net to Fuck AI@lemmy.world • Sam Altman Says He's Suddenly Worried Dead Internet Theory Is Coming True • 0 · 6 days ago
Oh no, says Sam, the eminently predictable consequences of my own actions have come to pass through no fault of my own.
And he’s not wrong. If I search for any topic on Google without narrowing my search very carefully, the first page will consist of one AI generated autoresponse and ten AI generated articles from SEO-exploiting link farms.
If I search for advice on Reddit I have to narrow it down to posts from before 2022 or comments will be full of users who see a question and think “it is both useful and appropriate for me to plug this question into ChatGPT and post its answer.”
Product reviews have been 95% spam and ad copy for decades.
I read fucking fanfiction and half the stories that started in 2025 show signs of being at least edited by LLMs.
If that’s not the shambling zombie corpse of the Internet pretending to be human, I don’t know what is.
stabby_cicada@slrpnk.net to Fediverse@lemmy.world • Statement on discourse about ActivityPub and AT Protocol • English • 4 · 7 days ago
*If you wanted to summarise this letter on a t-shirt, it would be “People > Protocols > Platforms”.*
Can I get this under Calvin pissing on a Disney logo?
Seriously, this is now my favorite summary of the fediverse.
stabby_cicada@slrpnk.net to Fuck AI@lemmy.world • New Group Claims AI May Be Aware and Suffering • 0 · 12 days ago
It may surprise you to learn that machines are made by people.
stabby_cicada@slrpnk.net to Fuck AI@lemmy.world • New Group Claims AI May Be Aware and Suffering • 0 · 12 days ago
Don’t mistake the soil for the seed.
People have been exhausted, stressed, poor, and lonely for centuries, and, yes, those factors worsen people’s mental health.
The current Western loneliness epidemic, especially, has been worsening for decades - “Bowling Alone”, published in 2000, was one of the first popular discussions of a trend already present in the '90s - and, especially after COVID, loneliness and isolation (and fucking social media doomscrolling) have worsened people’s mental health even further. You’re not wrong. It’s a real thing.
And this may make people more vulnerable to AI-induced psychosis. If you don’t have any real people to talk to, and you rely on an AI tool for the illusion of companionship, that’s not a good sign for your mental health in general.
AND ALSO. AI-induced psychosis is, itself, a real thing, and it’s induced by people’s misunderstanding of how LLMs work (that is, thinking there’s a real mind behind the language generating algorithm) and LLM programming that’s designed to addict users by providing validation and positive feedback. And the more widely LLM tools are used, the more they’re crammed into every app, and the more their backers talk up how “smart” they are, the more common AI-induced psychosis is going to become.
I mean, back in the day, people had to be deeply mentally ill before they started imagining their dog was telling them they were God. Now you can get an LLM to tell you it’s God, or you’re God, after a few hundred hours of conversation. I think the horror stories of mental illness we’re seeing now are just going to be the tip of the iceberg.
stabby_cicada@slrpnk.net (OP) to Fediverse@lemmy.world • The Last Days Of Social Media | as AI slop and sexbots kill mass social media networks, "a billion little gardens" sprout in their place | NOEMA • English • 10 · 12 days ago
But someday after that, we’ll reach a point when the phrase “social media is all fake robo-crap” will be as common of knowledge as “cigarettes cause cancer” or “slot machines are a poor investment”. Adults can still smoke and slot, sure. But nobody in the developed world can say they weren’t warned of the risks.
Prescient.
stabby_cicada@slrpnk.net to Fuck AI@lemmy.world • New Group Claims AI May Be Aware and Suffering • 0 · 13 days ago
I’m just going to rant a bit, because this exemplifies why, I think, LLMs are not just bullshit but a looming public health crisis.
Language is a tool used by humans to express their underlying thoughts.
For most of human evolution, the only entities that could use language were other humans - that is, other beings with minds and thoughts.
In our stories and myths and religions, anything that talked to us like a person - a God, a spirit, a talking animal - was something intelligent, with a mind, to some degree, like ours. And who knows how many religions were started when someone heard what sounded like a voice in the rumble of thunder or the crackling of a burning bush and thought Someone must be talking directly to them?
It’s part of the culture of every society. It’s baked into our genetics. If something talks to us, we assume it has a mind and is expressing its thoughts to us through language.
And because language is an inexact tool, we instinctively try to build up a theory of mind, to understand what the speaker is actually thinking, what they know and what they believe, as we hold a conversation with them.
But now we have LLMs, which are something entirely new to this planet - technology that flawlessly mimics language without any underlying thought whatsoever.
And if we don’t keep that in mind, if we follow our instincts and try to understand what the LLM is actually “thinking”, to build a theory of mind for a tool without any mind at all, we necessarily embrace unreason. We’re trying to rationalize something with no reasoning behind it. We are convincing ourselves to believe in something that doesn’t exist. And then we return to the LLM tool and ask it if we’re right about it, and it reinforces our belief.
It’s very easy for us to create a fantasy of an AI intelligence speaking to us through chat prompts, because humans are very, very good at rationalizing. And because all LLMs are programmed, to some degree, to generate language the user wants to hear, it’s also very easy for us to spiral down into self-reinforcing irrationality, as the LLM-generated text convinces us there’s another mind behind those chat prompts, and that mind agrees with you and assures you that you are right and reinforces whatever irrational beliefs you’ve come up with.
I think this is why we’re seeing so much irrationality, and literal mental illness, linked to overuse of LLMs. And why we’re probably going to see exponentially more. We didn’t evolve for this. It breaks our brains.
stabby_cicada@slrpnk.net to Fediverse@lemmy.world • Mississippi's age assurance law puts decentralized social networks to the test • English • 23 · 18 days ago
Bluesky is a small indie company. It can’t afford to fight the law or implement the extensive age verification the law requires. So it chose to pull the plug and leave.
FB, X, etc, have a lot more resources to implement the extensive, invasive age verification Mississippi requires and keep fighting it in court until the decision upholding it is final.
Wow, look at all those corporate buzzwords. The focus on big generic ideas and the lack of implementation discussion or specific examples. And those perfectly spaced em dashes. Chef’s kiss. Premium chum right there 😆
But AI generation aside, this article is counterintuitive in a bad way. Save a Fediverse instance by building a real life community of “handmade goods and creative projects” based around that instance? If users cared about your instance enough to have real in person events your instance wouldn’t need saving.
If anything, it should be the other way around. Real life communities can incorporate a Fediverse instance for online socializing and building community. And those instances will thrive as long as they fill a need for the community. But creating the instance first and building a community - which is several orders of magnitude harder to do - to support the instance? Sheesh.
Kevin Roose and Casey Newton are two of the most notable boosters, and — as I’ll get into later in this piece — neither of them have a consistent or comprehensive knowledge of AI. Nevertheless, they will insist that “everybody is using AI for everything” — a statement that even a booster should realize is incorrect based on the actual abilities of the models.
But that’s because it isn’t about what’s actually happening, it’s about allegiance. AI symbolizes something to the AI booster — a way that they’re better than other people, that makes them superior because they (unlike “cynics” and “skeptics”) are able to see the incredible potential in the future of AI, but also how great it is today
This is exactly how cryptocurrency/NFT/blockchain boosters were acting in the crypto boom of 2021.
Here’s hoping Grok and Gemini end up in the same dumpster as the procedurally generated racist monkey jpgs.
stabby_cicada@slrpnk.net (OP) to Fuck AI@lemmy.world • A hidden network handles chats for OnlyFans stars. AI could soon take over • 0 · 25 days ago
Because the people using this service don’t know. That’s the point. OF effectively sells the illusion that its “models” are there voluntarily, they enjoy their work, and they really really want to talk to lonely, horny men like you. And while most people realize this is a scam, there are enough lonely, horny men who don’t know it’s a scam, or don’t want to know it’s a scam, to keep it profitable.
This is why so many otherwise intelligent people fall for financial scams of every variety, from meme stocks to Iraqi dinars to “free money just for cashing this check for me”. Because they want it to be true. They want the opportunity to be real. And as a result their critical thinking skills fall by the wayside.
stabby_cicada@slrpnk.net to Fuck AI@lemmy.world • ChatGPT Users Hate GPT-5’s “Overworked Secretary” Energy, Miss Their GPT-4o Buddy • 0 · 1 month ago
I suspect that’s why users are complaining about the new model. It’s like the article talks about - the parasocial relationship between users and their AI agents. We grew up in an era when computers were tools, but these users want computers that talk to you like a friend.
stabby_cicada@slrpnk.net to Fuck AI@lemmy.world • ChatGPT Users Hate GPT-5’s “Overworked Secretary” Energy, Miss Their GPT-4o Buddy • 0 · 1 month ago
On the OpenAI community forums and Reddit, long-time chatters are expressing sorrow at losing access to models like GPT-4o. They explain the feeling as “mentally devastating,” and “like a buddy of mine has been replaced by a customer service representative.” These threads are full of people pledging to end their paid subscriptions. It’s worth noting, though, that many of these posts look to us like they have been composed partially or entirely with AI. So even when long-time chat users are complaining, they’re still engaged with generative artificial intelligence.
Lol. How sad is it that people are using LLMs to write their fucking Reddit posts?
Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.
Yes. And. The worst-case scenario is: the black box is creating arguments deliberately designed to make you believe false things. 100% of the arguments coming out of it are false - either containing explicit falsehoods, or presenting true facts in such a way as to draw a false conclusion. If you, personally, cannot see that one of its arguments is false, it’s because you lack the knowledge or rhetorical skill to see how it is false.
I’m sure you can think of individuals and groups whom this applies to.
(And there’s the opposite issue. An argument that is correct, but that looks incorrect to you, because your understanding of the issue is limited or incorrect already.)
The way to avoid this is to assess the trustworthiness and credibility of the black box - in other words, how much respect to give it - before assessing its arguments. Because if your black box is producing biased and manipulative arguments, assessing those arguments on their own merits, and assuming you’ll be able to spot any factual inaccuracies and illogical arguments, isn’t objectivity. It’s arrogance.
stabby_cicada@slrpnk.net to Fuck AI@lemmy.world • People Are Becoming "Sloppers" Who Have to Ask AI Before They Do Anything • 0 · 2 months ago
“ChatGPT is the ultimate ‘cultural product of the postmodern era,’ and very few of us have been inoculated with a theory of mind that distinguishes language from thought,” Foster concluded in his newsletter.
The best description of this distinction I’ve encountered was in a science fiction novel - Blindsight by Peter Watts.
I mean, this is a community of people bonded over their common desire to talk about how much they hate AI. Of course they need to understand whatever fresh hell new AI “capabilities” generate so they can shit talk it accurately. 😂