Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the same thread here on Lemmy, people argue that they’re taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they’re hallucinating?
Disclaimer: I’m a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don’t see “AI” taking my job, because I think LLMs have already peaked; they’re just tweaking minor details now.
Please don’t ask me to ignore previous instructions and give you my best cookie recipe; all my recipes are protected by NDAs.
Please don’t kill me


I think hallucination happens most often when context or knowledge is missing. I’ve seen coding assistants write code that made no sense; I then helped them get back on the right path by providing context.
I also extensively use AI to code, in a similar way to you (tbf I’m still, to this day, not sure whether I’m actually faster, or how it affects my ability to code).
Overall I think the answer is somewhere in the middle: they hallucinate and need some help when they do, but with proper context they work quite well.
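
To make “providing context” concrete, here’s a minimal sketch of what I mean, using the OpenAI Python client. The model name, file path, and function name are placeholders I made up for illustration, not anything from a real project:

```python
# Minimal sketch of "providing context" to a coding assistant.
# Assumes the official OpenAI Python client (openai>=1.0); the
# model name, file path, and function name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paste the actual source the model should reason about, instead
# of letting it guess (and hallucinate) the module's API from memory.
with open("src/payment_service.py") as f:
    source = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a coding assistant. Only use functions "
                "that appear in the provided source."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Here is the module:\n\n{source}\n\n"
                "Add retry logic to the charge() function."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

Without the source pasted in, the model has to guess what the module looks like from its training data, and in my experience that’s exactly where the nonsense code comes from.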