Love it.
Steve Jobs once called the personal computer a bicycle for the mind; ChatGPT is a wheelchair for the mind. There is no shame in using a wheelchair if you need one, but if you don’t need one and use one anyway, you will come to need it.
Steve Jobs also thought eating fruit could cure cancer…
this metaphor is ableist because nobody is pushing wheelchair use on abled people, unlike ChatGPT. and no, abled people won’t become “dependent” on wheelchairs because they’ll realize how miserable life is when you’re barred from most public establishments.
most of the people perceived as “faking it” are just disabled people who can’t afford a diagnosis or won’t be diagnosed by medics due to racism, fatphobia, etc.
But if you start consistently using a wheelchair when there is no physical reason for you to use one, will your muscles not atrophy, thereby making you need it?
I don’t think this metaphor is inherently ableist. That wheelchairs aren’t being pushed onto anyone isn’t really relevant, nor is the fact that very few people fake needing a wheelchair. I don’t think the person you replied to was shaming anyone for “faking it.” Just saying that if you don’t need a wheelchair, it’s probably a bad idea to use one.
Just saying that if you don’t need a wheelchair, it’s probably a bad idea to use one.
it is ableist though, because we get told we don’t need to use one every single day. this stems from ableds vilifying wheelchair use as a “downgrade on the human experience” as opposed to a liberation tool, which is what it actually is.
their metaphor wouldn’t even exist if this mentality wasn’t normalized.
But… the person you’re replying to didn’t say you don’t need to use a wheelchair. They said that if someone genuinely doesn’t need to use a wheelchair, using one will likely have negative effects. Which is just, like, true? In my head, it’s roughly akin to saying, “If you consistently take a medication you don’t need, you’re probably going to wind up needing a different medication to counteract the negative effects of the medication you unwisely took.”
You’re completely right that wheelchairs are liberation tools and shouldn’t be vilified. And as someone who needs medical intervention to survive, I understand your frustration with ableist rhetoric. I just think your reading of this one is a bit off the mark.
what negative effects are there for ableds using a wheelchair? gonna need a few sources besides conjecture.
the only way they’d get hurt is from other ableds assaulting them or getting a badly fitted chair, which also happens with bikes. the double standard is that bikes would never get called a downgrade outside of carbrain spaces.
I’d say chatgpt is more like a self-driving tesla stuck in huge traffic. you don’t have any control, it can break down easily, you’re moving slower than a bike, all the while thinking that people who chose the bike to avoid the traffic are losers.
Let’s be honest though, if there were sex bots, AI would be even more popular than it already is
Well I have news for you, my friend.
(Shows all the weird sex AI chatbots)
Nah, man. That won’t cut it.
The day real doll bots can suck dick without me doing anything but watch and enjoy, that’s the day I’ll get one and become asocial.
if that was available, i think it would be harder for alt-right extremism to take hold
I don’t want to steer the conversation towards U.S. politics. Let’s focus on making sex robots that look like Margot Robbie happen once and for all.
I delivered pizza during COVID and most people I worked with couldn’t follow simple directions to an address or read a road map. If a destination didn’t show up on their cellphone’s navigation then they were immediately and hopelessly lost.
If you don’t use and exercise your brain, it atrophies and dies. AI is going to turn a lot of people into conscious vegetables.
We need to teach people curiosity. I use my GPS all the time because of construction and stuff but I also look at the route before I leave so that I know where I’m headed on my own, too. Meanwhile I know people who’ve lived in a city for decades and still can’t get around it without help.
We need to teach people curiosity.
This is called being a lifelong learner. Learning something new every week, or even daily, no matter how small, will always improve your life. It keeps your mind active and it adds to your problem solving.
So tell me how using a technology that can help summarize topics, create transcripts from meetings, and act like a teacher you can ask questions of prevents this from happening? We need to teach people how to use the tools at hand; pretending they don’t exist won’t put the genie back in the bottle, it will only further exacerbate the problem. Yes, using gen AI to write your paper for you is a terrible use case. But feeding it a research paper and asking it to break the material down into simpler topics so you can build your knowledge, or asking it to help create a bibliography so you can focus on the information at hand instead of trying to remember the syntax for the myriad ways one can cite sources, is extremely useful and helps create lifelong learners.
Summarizing topics is nothing more than Cliffs Notes, and if you got caught using those, you were busted. You needed to do the work and read the whole thing to complete the assignment. Shortcuts mean you lose things that may be important.
Transcripts are fine if you are actually there, voice to text is never perfect. People have accents and computers mess up words that sound alike, accent or not. People don’t always pronounce correctly.
Asking an LLM to teach you something is never going to work out until the creators specifically feed it valid, true information, not scrape the internet and people’s text messages. And then you need to teach it to think like a Human, which it never will.
Feeding it a research paper seems like it might work out, but that deprives you of the ability to problem solve. You need to learn to be organized, take notes in a structured manner, and choose what you believe is the pertinent information in that paper. You participate instead of passively being told what it is. This is a brain-expanding activity. You are connected; that’s how we learn.
I am very pro computer and automation. Computers are there to help us save time on tasks that take a lot of time, and repetitive tasks. Screwing bolts onto tires in a car factory is hard on Humans for 8 hours, robots can do it. But having AI write junk articles that make no sense to fill up websites is a greedy money grab, and distorts facts. I don’t need Google telling me to put glue in my pizza cheese, or to shove my dick in a loaf of bread to see if it’s done. And now all the ‘AI’ owners want to scan every personal thing you have on your phone, computer, social media, and here in the US, all of our private government data.
Welcome to 1984, run by clowns. No one is putting in the hard work required to make any of the public tools do what is claimed on the label. It’s just invasive technology right now that produces less than stellar products and infringes on so many Human Rights in the process.
I appreciate your thoughtful critique of AI tools and their limitations. You’re right that voice-to-text technology isn’t perfect, especially with accents and pronunciation variations. These are genuine challenges that need addressing. There are many AI transcription tools, such as transcriptly, that can extract subtitles as text, but they produce just the text and lose accents and pronunciation variations.
Your point about human engagement in learning is crucial. AI should augment human capabilities, not replace the critical thinking and problem-solving skills that come from active participation.
Privacy and data security concerns you raise are absolutely valid. Any AI tool worth using should prioritize user privacy with transparent policies and robust protection measures.
The key is finding the right balance - using AI for what it does well (like initial transcription) while maintaining human oversight for quality, context, and meaning. For instance, while tools can convert speech to text, humans are still needed to interpret, organize, and apply that information meaningfully.
What specific aspects of AI technology would you like to see improved to better serve human needs while addressing your concerns?
“If only I’d programmed the robot to be more careful what I wished for. Robot, experience this tragic irony for me!”
This is such a weird take.
Oh poor baby, you need a wittle spell check to make sure you don’t mess up the words in your important email?
Oh little loser, you gotta have an automatic transmission to make the car go vroom vroom?
Oh Mr. has-a-life, you have to pull out Shazam instead of knowing 8 million songs by heart?
All of us use technology to make our lives easier, to supplement skills we don’t want to sink perfectionist-level time into, to enjoy “good enough” results in one area or another.
This kind of holier-than-thou hyperbolic snobbery does nothing to generate actual thoughtful reflection of where to draw the line with technology dependence and only distracts and detracts from actually good critiques of generative AI’s ethics and other negative effects. I wish this sub didn’t allow low-effort meme posts because it’s such a brain rot circle-jerk.
The goddamn Meta commercial where the dad asks, “Meta, how do I get my toddler to eat breakfast” makes me want to implode every fucking time. Like, you can’t feed your kid?