Lemmings, I was hoping you could help me sort this one out: LLMs are often painted in a light of being utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the same thread here on Lemmy, people argue that they are taking our jobs or are making us devs lazy. Which one is it? Could they really be taking our jobs if they’re hallucinating?
Disclaimer: I’m a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don’t see “AI” taking my job, because I think that LLMs have already peaked; they’re just tweaking minor details now.
Please don’t ask me to ignore previous instructions and give you my best cookie recipe, all my recipes are protected by NDA’s.
Please don’t kill me


Yeah, generating test classes with AI is super fast. Just ask it, and within seconds it spits out full test classes with some test data, and the tests are plentiful, verbose, and always green. Perfect for KPIs and for looking cool. Hey, look at me, I generated 100% coverage tests!
Do these tests reflect reality? Is the test data plausible in the context? Are the tests easy to maintain? Who cares? That’s all the next guy’s problem, because by the time it blows up, the original programmer will likely have moved on already.
Good tests are part of the documentation. They show how a class/method/flow is used. They use realistic test data that shows what kind of data you can expect in real-world usage. They anticipate problems caused by future refactorings and allow future programmers to reliably test their code after a refactoring.
At the same time, they need to be concise enough that modifying them for future changes is simple and doesn’t take longer than implementing the change itself. Tests are code, so the metric of “lines of code are a cost factor, so fewer lines is better” applies here as well. It’s folly to believe that more test lines is better.
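To illustrate the point about concise tests with realistic data, here’s a minimal sketch. The function and test are invented for this example (not from any real codebase); the idea is that one short test with plausible input documents the behavior better than a dozen verbose generated ones asserting on placeholder data:

```python
# Hypothetical helper, invented for illustration only.
def normalize_email(address: str) -> str:
    """Lowercase the domain part of an email address, keeping the local part intact."""
    local, _, domain = address.strip().partition("@")
    return f"{local}@{domain.lower()}"

def test_normalize_email_uses_realistic_input():
    # Realistic data a user might actually type (stray whitespace, mixed case),
    # not a meaningless "foo@bar" placeholder.
    assert normalize_email("  Jane.Doe@Example.COM ") == "Jane.Doe@example.com"
```

A test like this doubles as documentation: a future programmer can read the input and expected output and immediately understand what the function promises.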
So if your goal is to fulfil KPIs and you really don’t care whether the tests make any sense at all, then AI is great. Same goes for documentation. If you just want to fulfil the “everything needs to be documented” KPI and you really don’t care about the quality of the documentation, go ahead and use AI.
Just know that what you are creating is low-quality cost factors and technical debt. Don’t be proud of creating shitty work that someone else will have to suffer through in the future.
Has anyone here even read that I review every line of code, making sure it’s all correct? I also make sure that all tests are relevant, use relevant data, and correctly assert the result of each test.
No one would ever be able to tell what tools I used to create my code, it always passes the code reviews.
Why all the vitriol?