• SoftestSapphic@lemmy.world · 16 days ago

    “AI”'s best uses are the machine learning applications we were already relying on before we started calling everything AI.

    It’s so painful watching this tech be forced down our throats by marketing departments when we had already discovered all of its best use cases long before the marketing teams got ahold of it.

    • nfreak@lemmy.ml · 16 days ago

      Exactly this. GenAI is pretty much dogshit all around. Machine learning as a concept is nothing new at all, and shoving it under the same marketing umbrella does so much more harm than good.

  • punkfungus@sh.itjust.works · 16 days ago

    People are saying you shouldn’t use AI to identify edible mushrooms, which is absolutely correct, but remember that people forage fruits and greens too. Plants are deadly poisonous at a higher rate than mushrooms, so plant ID AI has the potential to be more deadly too.

    And then there’s the issue that these ID models are very America and/or Europe centric, and will fail miserably most of the time outside of those contexts. And if they do successfully ID a plant, they won’t provide information about it being a noxious invasive in the habitat of the user.

    Like essentially all AI, even when it works it’s only useful at the most surface level. When it doesn’t work, which is often, it’s actively detrimental.

    • greedytacothief@lemmy.dbzer0.com · 16 days ago

      I actually think AI for mushroom identification is okay, but only as one step in the process. Sometimes you see a mushroom and you’re like “what is that?” Do a little scan to get an idea of what it might be. But then comes the next part: https://mushroomexpert.com/, where you can go through the list and see if you get a positive ID.

      Like, if you’re not 100% positive you know what you’re foraging, why would you take the risk?

  • Soapbox@lemmy.zip · 16 days ago

    AI noise reduction and spot removal tools for photo editing get a pass too.

  • Jankatarch@lemmy.world · 16 days ago

    Honestly, a good rule of thumb for machine learning is just “predicting = good, generating = bad.” The rest is case by case, but usually bad.

    Predict inflation in 3 years - cool.
    Predict chance of cancer - cool.

    Generate an image or an email or a summary or a tech article - fuck you.

    Generating speech from text or images is also cool, but that’s kind of a special case.
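
    To make the “predicting” side concrete, here’s a minimal sketch of that kind of ML, assuming scikit-learn and using its bundled breast-cancer dataset purely as an illustration (not a real diagnostic tool): the model estimates a probability from measurements instead of generating anything.

```python
# Minimal sketch of "predicting" ML: estimate a probability from
# measurements instead of generating content.
# Assumes scikit-learn; the bundled dataset is just an illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale the features, then fit a plain logistic-regression classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# predict_proba returns a probability per class for each sample.
print(model.predict_proba(X_test[:5]).round(3))
print("held-out accuracy:", model.score(X_test, y_test))
```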

  • s@piefed.world · 17 days ago

    Definitely do not use AI or AI-written guidebooks to differentiate edible mushrooms from poisonous mushrooms

    • Aeao@lemmy.world · 16 days ago

      Honestly, don’t rely on any guidebook or advice. If you aren’t 100 percent sure on your own, maybe just walk away.

      Me personally… even if I were 99.999 percent sure, it still wouldn’t be worth the risk. I’ll just buy some mushrooms.

  • Aeao@lemmy.world · 16 days ago

    It didn’t mention the star finder apps!

    Save yourself the money and time. It’s Venus. That cool star you’re looking at? Yeah that’s Venus. Just trust me.

  • brbposting@sh.itjust.works · 15 days ago

    Plant ID is soooo disappointing - works sometimes though.

    Always gotta run the ID, web-search for images of whatever it suggests, and compare those images to the actual plant.

    Semantic search can be helpful:

    [Image: search mockup from seobility.net comparing a lexical search for “Daniel Radcliffe” with a semantic search for “how rich is the actor who played Harry Potter”, which gets translated to “net worth Daniel Radcliffe”]
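
    For anyone wondering what “semantic” actually buys you there, a rough sketch of the idea, assuming the sentence-transformers package and its all-MiniLM-L6-v2 model (the example documents are made up): the query and the documents are embedded as vectors and ranked by cosine similarity instead of keyword overlap.

```python
# Rough sketch of semantic search: embed the query and documents as vectors,
# then rank by cosine similarity rather than shared keywords.
# Assumes the sentence-transformers package; example texts are made up.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Daniel Radcliffe's net worth has been estimated in the hundreds of millions.",
    "Daniel Radcliffe played Harry Potter in all eight films.",
    "A care guide for common houseplants and succulents.",
]
query = "how rich is the actor who played Harry Potter"

doc_vectors = model.encode(documents, convert_to_tensor=True)
query_vector = model.encode(query, convert_to_tensor=True)

# Higher cosine similarity = closer in meaning, even with no shared keywords.
scores = util.cos_sim(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores.tolist(), documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```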

    Guess OP image could be about e.g. Perplexity repeatedly HAMMERING (no caching?) the beautiful open web and slopping out poor syntheses.

  • Lucy :3@feddit.org · 17 days ago

    That’s known as image identification with ML though, not “AI”. The difference? Capitalism.

    • magic_lobster_party@fedia.io · 17 days ago

      The difference is that plant identification is no longer an interesting area for AI research. It was “AI” 10 years ago, but now it’s more or less a solved problem.

    • technocrit@lemmy.dbzer0.com · 17 days ago

      They’re all known as “apps” because that’s all they are. Like Angry Birds if we told people the piggies were “AI”.

  • ILikeBoobies@lemmy.ca · 16 days ago

    It’s a decent evolution of the search engine, but you have to ask it for sources, and it’s way too expensive for its use case.

    • odelik@lemmy.today · 16 days ago

      Is it?

      I’ve found so many fucking errors in AI summaries that I don’t trust shit from AI searches when a direct link to a source or wiki could give me better summarized info.

      I guess it’s an evolution, but I’m really hoping these mutations prove inferior and it dies off already. But capitalism won’t have that, with its sunk-cost-fallacy-driven insistence that I just use the inferior product.

  • ThePantser@sh.itjust.works · 17 days ago

    Just dealt with an AI bot this morning when I called a law office. They try so hard to mimic humans. They even added background sounds of people talking. But it was given away instantly when it repeated the exact same response every time I asked to speak with a human: “I will gladly pass on your message to (insert weird pause) ‘Bill.’”

  • artyom@piefed.social · 17 days ago

    Honestly I like when it writes for me too. I tend to be very blunt and concise in my messaging and AI just puts that corporate shine and bubbliness on my messages that everyone seems to feel is important.

  • lavander@lemmy.dbzer0.com · 16 days ago

    One of the issues with LLMs is that they’ve attracted all the attention. Classifiers are generally cool, cheap, and have saved us from plenty of problems (ok, face recognition aside 🙂).

    When the AI bubble bursts (because LLMs are expensive and not good enough to replace a person, even if they’re good at pretending to be one), all of AI will slow down… including classifiers, NLP, etc.

    All this because the AI community was obsessed with the Turing test / imitation game 🙄

    Turing was a genius, but heck if I am upset with him for coming up with this BS 🤣

    • skisnow@lemmy.ca · 16 days ago (edited)

      I am upset with him for coming up with this BS

      It made sense in the context it was devised in. Back then we thought the way to build an AI was to build something that was capable of reasoning about the world.

      The notion that a significant percentage of the world’s population would spend a few decades typing their thoughts into networked computers, that every book ever written would be digitised, and that all of it could be stitched together into a 1,000,000,000,000-byte model that just spits out the word with the highest chance of being next based on what everyone else had already written, producing the illusion of intelligence, would have been very difficult for him to predict.

      Remember, Moore’s Law wasn’t coined for another 15 years, and personal computers didn’t even exist as a sci-fi concept until later still.
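
      For anyone curious what “the word with the highest chance of being next” looks like at its absolute simplest, here’s a toy sketch: a bigram counter over a made-up corpus that always emits the most frequent successor. Real LLMs are incomparably more sophisticated, but the generation loop has the same basic shape.

```python
# Toy illustration of next-word prediction: count which word follows which
# in a (made-up) corpus, then always emit the most frequent successor.
# Real language models do far more, but the loop has the same shape.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the cat chased the mouse . "
    "the mouse ran under the mat ."
).split()

# For each word, count every word that follows it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        # Greedily pick the successor seen most often in the corpus.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # quickly falls into a loop, illustrating the limits
```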