Got into a work ‘argument’ yesterday with someone from CyberSec who would not believe a tool we use could not do the thing he wanted me to have it do. I’d researched it and had direct links from the vendor, but Copilot told him otherwise, so I had to spend half an hour rehashing the same thing over and over as he adjusted his stupidass input data until Copilot basically told him ‘whoops, I lied about this.’
Their manager was using Copilot to check the latest version of iPadOS and also arguing with me that 18.7.1 wasn’t getting security updates anymore, because Copilot told them only 26.0.1 was current. It’s a bottom-to-top issue on that whole side of the business right now, and it’s driving me nuts.
I’ve run into this twice now. For two different products I support, two different people sent me Claude AI slop answers where it hallucinated functionality into the product that doesn’t exist. And management still says to use AI for research but verify its responses. What’s the point? That doesn’t save me any time. If anything, it’s wasting time.
I don’t know how these people don’t experience crippling embarrassment. I had a few people try to help me solve their issue by using ChatGPT, and of course it hallucinated options in the software, so I had to tell them that no, this does not exist. At least they apologized.
Our entire meeting was him just feeding different prompts in for stuff while I pulled up vendor pages and found the relevant info quicker and without hallucinations. There’s got to be a breaking point where people realize it’s trash, right? ……right?
And frankly even that is not a credible statement if you had to change your prompt 11 times to get it lmao. It’s all bullshit unless you already know the facts around your answer.
Report this incident to their manager.