• 1 Post
  • 18 Comments
Joined 9 months ago
Cake day: March 9th, 2025


  • I don’t think we’re using LLMs the same way?

    As I’ve stated several times elsewhere in this thread, I more often than not get excellent results, with little to no hallucination. In fact, I can’t even remember the last time it happened while programming.

    Also, the way I work, no one could ever tell that I used an LLM to create the code.

    That leaves your point #4, and what the fuck? Why does upper management always seem so utterly incompetent and clueless when it comes to tech? LLMs are tools, not a complete solution.



  • I do wonder why so many devs seem to have such wildly different experiences. You seem to have LLMs making stuff up as they go, while I’m over here having them produce mostly flawless code over and over again.

    Is it different behavior across languages? Different models, different tooling, etc.?

    I’m using it for C#, React (Native), Vue, etc. I use the web interface of one of the major LLMs to ask questions, pasting in the code of interfaces, sometimes whole React hooks, components, etc., and I get refactored or even brand-new components back.

    I also paste whole classes or functions (anonymized) to get them unit tested. Could you elaborate on how you’re using LLMs?
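    To make that concrete, here’s roughly what the round trip looks like. Names are made up, and I’ve swapped whatever test harness the LLM suggests for plain Node assertions so the sketch is self-contained:

    ```typescript
    import { strict as assert } from "node:assert";

    // Hypothetical anonymized helper — the kind of thing I'd paste into the web UI.
    function chunk<T>(items: T[], size: number): T[][] {
      if (size <= 0) throw new RangeError("size must be positive");
      const out: T[][] = [];
      for (let i = 0; i < items.length; i += size) {
        out.push(items.slice(i, i + size));
      }
      return out;
    }

    // The sort of unit tests that come back: happy path, empty input, invalid input.
    assert.deepEqual(chunk([1, 2, 3, 4, 5], 2), [[1, 2], [3, 4], [5]]);
    assert.deepEqual(chunk([], 3), []);
    assert.throws(() => chunk([1], 0), RangeError);
    ```

    Then I review every line and every assertion before any of it goes near a PR.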



  • But they’re not hallucinating when I use them? Are you just repeating talking points? It’s not like the code I write is somehow connected with an AI; I just bounce my code off of an LLM. And when I’m done reviewing each line, adding stuff, checking design docs, etc., no one could tell that an LLM was ever used to create that piece of code in the first place. To date I’ve never failed a code review with “that’s AI slop, please remove”.

    I’d argue that greater efficiency sometimes gives me more free time, hue hue