I work for a relatively big company. Not long ago, our boss sent a mail around asking for feedback on “AI”. I sent an answer saying the usual: you can’t trust an LLM on basic searches, so it’s useless if you have to double-check everything; my two or three bad experiences with it; and why I’m concerned about water and energy usage as well as privacy problems, with a link to an article for each point that I found here (thank you, people!).
But to be honest, I’m still curious about this LLM thing, and every once in a while I give it another chance… but damn, it’s really shit! Work provides access to many different LLM models, but I’ve had issues with pretty much all of them.
Copilot now hangs and crashes if I type more than eight words into the prompt before sending it. Once I do manage to send the prompt and get a (relatively decent!) answer, the program crashes if I scroll too fast in the window. I mean… lmfao.
In the beginning (months ago), Copilot worked in our environment, but I still couldn’t get what I wanted out of it; apparently my task required too much processing (making a PowerPoint out of a big PDF).
With Claude 4, we’re supposed to be able to upload files, but that works maybe a third of the time. Then I ask it something very simple, like “how to write bold in markdown”, and the thing gives me two blank responses and the thread is just fucked from there. Ok?
With the previous version of Claude (3.7 Sonnet), I was trying to get help with some basic scripts. First simple prompt, ten words including “Hello” (yeah, I’m the kind of person who writes “hello” to an LLM, whatyougonnado?), and the answer goes full stroke: sentence fragments, fucked-up punctuation, repetitions, and it ends by repeating my question back at me. I found this one genuinely funny.
I don’t remember the details, but I had similar trouble with Gemini too.
So yeah, it’s easy to just say “ok, fuck AI”, but to me the thing is genuinely shitty on top of all the other problems it comes with. How can this be pushed so hard in the corpo world? For real?


Big corporations are pushing it like crazy, no matter the objective facts, because of a simple bet: either it’s just shit and it dies out, in which case you’ve lost some money, or it turns out to be the technological revolution it’s promised to be, and if you’re not on board you’re suddenly so far behind the competition that you’re basically out. So yeah, I work for an industrial corporation, and we’re all asked to use “AI” for everything we do, just because maybe it’s going to be important. Apart from translation to different languages (which we can reasonably hope an LLM would be good at), I have yet to find a use case where it’s actually helpful.