- cross-posted to:
- memes@lemmy.world
cross-posted from: https://slrpnk.net/post/12723593
Not all AI is bad, just most of it
“AI”'s best uses are the machine learning applications we were already relying on before we started calling everything AI
It’s so painful watching this tech be forced down our throats by marketing departments despite us already discovering all of its best use cases long before the marketing teams got ahold of the tech.
Exactly this. GenAI is pretty much dogshit all around. Machine learning as a concept is nothing new at all, and shoving it under the same marketing umbrella does so much more harm than good.
People are saying you shouldn’t use AI to identify edible mushrooms, which is absolutely correct, but remember that people forage fruits and greens too. Plants are deadly poisonous at a higher rate than mushrooms, so plant ID AI has the potential to be more deadly too.
And then there’s the issue that these ID models are very America- and/or Europe-centric, and will fail miserably most of the time outside those contexts. And even if they do successfully ID a plant, they won’t tell you that it’s a noxious invasive in the user’s habitat.
Like essentially all AI, even when it works it’s only useful at the most surface level. When it doesn’t work, which is often, it’s actively detrimental.
I actually think AI for mushroom identification is okay, but as a step in the process. Sometimes you see a mushroom and you’re like “what is that?” Do a little scan to get an idea of what it might be. Okay, now you have a candidate, but then comes the next part! At https://mushroomexpert.com/ you can go through the list and see if you get a positive ID.
Like if you’re not 100% positive you know what you’re foraging why would you take the risk.
AI noise reduction and spot removal tools for photo editing get a pass too.
Honestly, a good rule about Machine Learning is just “predicting = good, generating = bad.” The rest are case by case, but usually bad.
Predict inflation in 3 years - cool.
Predict the chance of cancer - cool.
Generate an image or an email or a summary or a tech article - fuck you.
Generating speech from text/image is also cool but it’s kind of a special case there.
+1 for WhoBird
Definitely do not use AI or AI-written guidebooks to differentiate edible mushrooms from poisonous mushrooms
It clearly says “plant identification”~
Could still have difficulty accurately identifying the difference between a delectable tea and a deadly poison though
Or you found Labrador tea and it’s both delectable and poisonous!
Honestly, don’t use any guidebook or advice. If you aren’t 100 percent sure on your own, maybe just walk away.
Me personally… even if I was 99.999 percent sure, it still wouldn’t be worth the risk. I’ll just buy some mushrooms.
It didn’t mention the star finder apps!
Save yourself the money and time. It’s Venus. That cool star you’re looking at? Yeah that’s Venus. Just trust me.
You don’t need AI to show a star map. This is the one and only use for Augmented Reality though.
Yeah I actually paid for the full version of mine… even though it’s always Venus
Plant ID is soooo disappointing - works sometimes though.
Always gotta run the ID, web search for images of the recommendation, compare images to plant.
Semantic search can be helpful.

I guess the OP image could be about e.g. Perplexity repeatedly HAMMERING (no caching?) the beautiful open web and slopping out poor syntheses.
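For anyone wondering what “semantic search” actually means here, a minimal sketch: embed the documents and the query as vectors and rank by similarity instead of keyword overlap. The sentence-transformers library and the all-MiniLM-L6-v2 model are my illustrative picks, not anything endorsed by the thread:

```python
# Minimal semantic search: embed documents and a query into vectors,
# then rank documents by cosine similarity instead of keyword overlap.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

docs = [
    "How to identify Labrador tea in the field",
    "Common poisonous lookalikes of edible mushrooms",
    "Angry Birds level design retrospective",
]
doc_embeddings = model.encode(docs, convert_to_tensor=True)

query = "telling a safe foraged plant from a toxic one"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity gives a score per document; higher = semantically closer.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
for doc, score in sorted(zip(docs, scores.tolist()), key=lambda p: p[1], reverse=True):
    print(f"{score:.3f}  {doc}")
```

Note the first two documents should rank well above the third even though the query shares almost no words with them; that’s the whole point over plain keyword search.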
That’s known as image identification with ML though, not “AI”. The difference? Capitalism.
The difference is that plant identification is no longer an interesting area for AI research. It was “AI” 10 years ago, but now it’s more or less a solved problem.
Primarily, it’s not interesting financially and therefore for marketing.
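For the curious, the “solved problem” here is plain image classification with a pretrained network. A minimal sketch, assuming torchvision and a generic ImageNet-pretrained ResNet as an illustrative stand-in, not an actual plant-ID model:

```python
# Off-the-shelf image classification with a pretrained CNN.
# ImageNet's label set includes some plants and fungi, but a real
# plant-ID app would fine-tune on a dedicated botanical dataset.
import torch
from torchvision import models
from torchvision.io import read_image

weights = models.ResNet50_Weights.DEFAULT        # pretrained ImageNet weights
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                # matching resize/crop/normalize

img = read_image("mystery_plant.jpg")            # hypothetical input photo
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
probs = logits.softmax(dim=1)[0]

# Print the three most likely labels with their confidences.
top = probs.topk(3)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {p.item():.1%}")
```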
They’re all known as “apps” because that’s all they are. Like Angry Birds if we told people the piggies were “AI”.
rage against the machine learning
Good nerdcore band name
It’s a decent evolution of the search engine, but you have to ask it for sources and it’s way too expensive for its use case.
Is it?
I’ve found so many fucking errors in AI summaries that I don’t trust shit from AI searches when a direct link to a source or wiki could give me better summarized info.
I guess it’s an evolution, but I’m really hoping these mutations prove inferior and it dies off already. But capitalism won’t have that, with its sunk-cost-fallacy-driven insistence that I just use the inferior product.
decent
You’ve misspelled “descent”
genAI is the enemy. other kinds are useful.
Just dealt with an AI bot this morning when I called a law office. They try so hard to mimic humans. They even added background sounds of people talking. But it gave itself away 100% when it repeated the same response to my asking to speak with a human: “I will gladly pass on your message to (insert weird pause) ‘Bill’”
There was an interesting story on NPR last week about someone experimenting with AI agent clones of himself. Even his best attempt sounded pretty obvious thanks to stuff like that.
They even add fake keyboard typing sounds now 💀
Honestly I like when it writes for me too. I tend to be very blunt and concise in my messaging and AI just puts that corporate shine and bubbliness on my messages that everyone seems to feel is important.
One of the issues with LLMs is that they attracted all the attention. Classifiers are generally cool and cheap, and have saved us from multiple issues (OK, face recognition aside 🙂)
When the AI bubble bursts (because LLMs are expensive and not good enough to replace a person, even if they are good at pretending to be a person), all AI will slow down… including classifiers, NLP, etc.
All this because the AI community was obsessed with the Turing test/imitation game 🙄
Turing was a genius, but heck, I am upset with him for coming up with this BS 🤣
I am upset with him for coming up with this BS
It made sense in the context it was devised in. Back then we thought the way to build an AI was to build something that was capable of reasoning about the world.
The notion that there’d be this massive amount of text generated by a significant percentage of the world’s population typing their thoughts into networked computers for a few decades, coupled with the digitisation of every book ever written, would have been very difficult for him to predict. Let alone that it could all be stitched together into a 1,000,000,000,000-byte model that just spits out the word with the highest chance of being next, based on what everyone else in the past had written, producing the illusion of intelligence.
Remember, Moore’s Law wasn’t coined for another 15 years, and personal computers didn’t even exist as a sci-fi concept until later still.
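To make the “word with the highest chance of being next” idea concrete, here’s a toy sketch using plain bigram counts over a made-up corpus (my own illustration; real LLMs use huge neural networks over trillions of tokens, not lookup tables, but the generation loop is the same shape):

```python
# Toy "next word" predictor: count which word follows which, then
# repeatedly emit the most frequent successor of the current word.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the log the dog ate the bone"
).split()

# successors["the"] ends up as Counter({"cat": 2, "dog": 2, "mat": 1, ...})
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]  # highest-count next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat sat on the"
```

Fluent-looking output, zero understanding, which is more or less the complaint upthread.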