Excerpt:
“Even within the coding, it’s not working well,” said Smiley. “I’ll give you an example. Code can look right and pass the unit tests and still be wrong. The way you measure that is typically in benchmark tests. So a lot of these companies haven’t engaged in a proper feedback loop to see what the impact of AI coding is on the outcomes they care about. Lines of code, number of [pull requests], these are liabilities. These are not measures of engineering excellence.”
Measures of engineering excellence, said Smiley, include metrics like deployment frequency, lead time to production, change failure rate, mean time to restore, and incident severity. And we need a new set of metrics, he insists, to measure how AI affects engineering performance.
“We don’t know what those are yet,” he said.
One metric that might be helpful, he said, is measuring tokens burned to get to an approved pull request – a formally accepted change in software. That’s the kind of thing that needs to be assessed to determine whether AI helps an organization’s engineering practice.
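For illustration, one way such a metric could be computed, as a purely hypothetical sketch (the PR records, field names, and numbers are all invented):

```python
# Hypothetical sketch: tokens burned per approved pull request.
# PR records, field names, and numbers are invented for illustration.
prs = [
    {"id": 101, "approved": True,  "tokens_used": 180_000},
    {"id": 102, "approved": False, "tokens_used": 950_000},  # abandoned branch
    {"id": 103, "approved": True,  "tokens_used": 60_000},
]

total_tokens = sum(pr["tokens_used"] for pr in prs)
approved = [pr for pr in prs if pr["approved"]]

# Charge all tokens, including dead ends, against the PRs that actually landed.
tokens_per_approved_pr = total_tokens / len(approved)
print(f"{tokens_per_approved_pr:,.0f} tokens per approved PR")  # 595,000
```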
To underscore the consequences of not having that kind of data, Smiley pointed to a recent attempt to rewrite SQLite in Rust using AI.
“It passed all the unit tests, the shape of the code looks right,” he said. “It’s 3.7x more lines of code that performs 2,000 times worse than the actual SQLite. Two thousand times worse for a database is a non-viable product. It’s a dumpster fire. Throw it away. All that money you spent on it is worthless.”
All the optimism about using AI for coding, Smiley argues, comes from measuring the wrong things.
“Coding works if you measure lines of code and pull requests,” he said. “Coding does not work if you measure quality and team performance. There’s no evidence to suggest that that’s moving in a positive direction.”
People delude themselves if they think LLMs are not useful for coding. People also delude themselves that all code will be AI-written in the next 2 years. The reality is that it’s an incredibly useful tool, but with reasonable limits.
I keep trying the various LLMs that people recommend for coding on various tasks, and they don’t just get things wrong, they get them dangerously wrong. I have been doing quite a bit of embedded work recently, and some of the designs they come up with would cause electrical fires; it’s that bad. Where the earlier versions would go “oh yes, that is wrong, let me correct it…” and then often get it wrong again, the new ones will confidently tell you that you are wrong. When you tell them the design set on fire, they just don’t change.
I don’t get it. I feel like all these people claiming success with them are just not very discerning about the quality of the code they produce or, worse, just don’t know any better.
It is possible to get good results; the problem is that you yourself need a very good understanding of the problem and how to solve it, and then you have to accurately convey that to the AI.
Granted, I don’t work on embedded and I’d imagine there’s less code available for AI to train on than other fields.
I got a hot take on this. People are treating AI as a fire-and-forget tool when they really should be treating it like a junior dev.
Now here’s what I think: it’s a force multiplier. Let’s assume a dev has a profile of…
2x feature progress, 2x tech debt removed, 1x tech debt added.
Net tech-debt-adjusted productivity: 2 + 2 − 1 = 3x.
Multiply by 2 for AI and you have a 6x engineer.
Now for another, common case: 1x features, net tech debt −1.5x, so 1 − 1.5 = −0.5x, which with AI comes out as a −1x engineer (toy model in code below).
With AI, the latter engineer will crank out features as fast as the former does without AI, but will make the code base worse way faster.
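To make that arithmetic concrete, here’s the same toy model in code; every number is the invented one above, not a measurement:

```python
# Toy model of "tech-debt-adjusted productivity" from the comment above.
# AI is modeled as a flat 2x multiplier; every number is made up.
def net_productivity(features: float, debt_removed: float,
                     debt_added: float, ai_multiplier: float = 2.0) -> float:
    return (features + debt_removed - debt_added) * ai_multiplier

print(net_productivity(2.0, 2.0, 1.0))  # 6.0  -> the "6x engineer"
print(net_productivity(1.0, 0.0, 1.5))  # -1.0 -> net negative, only faster with AI
```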
Now imagine that the latter engineer really leans into AI, gets really good at cranking out features, gets commended for it, and continues. He’ll end up just creating bad code at an alarming pace until the code base becomes brittle and unwieldy. This is what I’m guessing is going to happen over the next few years: more experienced devs will see a massive benefit, but more junior devs will need to be reined in a lot.
Going forward, architecture and isolation of concerns will become more important, so we can throw away garbage and rewrite it way faster.
Junior software developers understand the task. They improve their skill in understanding the code and writing better code. They can read the documentation.
Large language models just generate code based on what it looked like in previous examples.
It’s not even a junior dev. It might “understand” a wider and deeper set of things than a junior dev does, but at least junior devs might have a sense of coherency to everything they build.
I use gen AI at work (because they want me to) and holy shit is it “deceptive”. In quotes because it has no intent at all, but it is just good enough to make it seem like it mostly did what was asked; look closer, though, and you’ll see it isn’t following any kind of paradigm, it’s still just predicting text.
The amount of context it can include in those predictions is impressive, don’t get me wrong, but it has zero actual problem solving capability. What it appears to “solve” is just pattern matching the current problem to a previous one. Same thing with analysis, brainstorming, whatever activity can be labelled as “intelligent”.
Hallucinations are just cases where it matches a pattern that isn’t based on truth (either mispredicting or predicting a lie). But it also goes the other way, where it misses patterns that are there, which is horrible for programming if you care at all about efficiency and accuracy.
It’ll do things like write a great helper function that it uses once but never again, maybe even writing a second copy of it the next time it would use it. Or it’ll forget instructions (in a context window of 200k, a few lines can easily get drowned out).
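A contrived illustration of that duplicate-helper pattern (the names are made up):

```python
# Contrived illustration: the model writes a perfectly good helper,
# then re-implements the same logic instead of reusing it.

def normalize_email(addr: str) -> str:
    """Helper written for the first call site... and used exactly once."""
    return addr.strip().lower()

def register_user(email: str) -> str:
    return normalize_email(email)

def update_user(email: str) -> str:
    # Second call site: a fresh inline copy appears instead of a reuse.
    return email.strip().lower()
```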
Code quality is going to suffer as AI gets adopted more and more. And I believe the problem is fundamental to the way LLMs work. The LLM-based patches I’ve seen so far aren’t going to fix it.
Also, as much as it’s nice to not have to write a whole lot of code, my software dev skills aren’t being used very well. It’s like I’m babysitting an expert programmer with Alzheimer’s who thinks they’re still in their prime and doesn’t realize they’ve forgotten what they did 5 minutes ago. But my company pays them big money, gets upset if we don’t use their expertise, and probably intends to use my AI chat logs to train my replacement, because everything I know can be parsed out of those conversations.
Or maybe don’t try and drive a screw in with a hammer?
It’s just not good for 99% of the shit it’s marketed for. Sorry.
A WALL OF TEXT that inadvertently says junior devs should be treated like machines, not people.
It IS working well for what it is: a word processor that’s super expensive to run. It’s just that idiots thought the world was gonna end and that we were gonna have flying cars going around.
Businesses were failing even before AI. If I cannot eventually speak to a human on a telephone then the whole human layer is gone and I no longer want to do business with that entity.
Yes, it does not work right! Also, there are no new discoveries made by AI; we only see chatbots, self-driving cars, automation in the workplace, yet no discoveries. At some point I thought AI would help us solve cancer or find a way to travel in space, yet billionaires think only of money.
Call me negative, call me an idiot, but the only thing I see is people profiting now while they can, and later on nothing will happen.
I agree.
also, there are no new discoveries made by AI; we only see chatbots, self-driving cars, automation in the workplace, yet no discoveries. At some point I thought AI would help us solve cancer or find a way to travel in space, yet billionaires think only of money.
We aren’t there yet. AI and the research around it started, or rather really took off, around 2018 (at least relating to what we mean by AI today; rule-based approaches existed much longer). It is very much a new field, considering most other fields have existed for over 30 years at this point. And, to be pedantic, large language models aren’t really AI, because there is no intelligence: they just generate the output that is the most probable continuation of the input and context provided. So yeah, “AI” cannot really research or make new discoveries yet. There may very well be a time where AI helps us solve cancer. It definitely isn’t today nor tomorrow.
I also don’t think that billionaires make money with AI. I mean, look at OpenAI: they are actually burning money, at a rate measured in billions, and are believed to turn a profit in 2030. Without others investing in it, they would be long gone already. The people with money believe that OpenAI and other AI companies will someday make the world-changing discovery. That could very well lead to AI making discoveries on its own AND to lots of money. Until then, they are obviously willing to burn a tremendous amount of money, and that is what’s keeping OpenAI in particular alive at this moment. Only time will tell what happens next. I’ll keep my popcorn ready for when the bubble bursts :D
Edit: Connected AI making discoveries to lots of money gained or rather saved. That is the sole reason for investments from people with big money.
Neural networks have existed since the 1970s.
Yes, I meant the current state-of-the-art architecture by the term “AI” and partly the boom thereafter. The field “AI” is obviously much older. Sorry for that and thanks for pointing it out.
I took a class in what is ultimately the current approach in AI and Machine learning in 2002 using textbooks that had their first editions in the 90s. The field is in reality 30 years old.
I wrote that part from memory and meant the current state-of-the-art architecture, which most of the models are based on now, rather than the whole field. The field is actually a bit older than that: AI as an academic discipline was established in 1956, so it is about 70 years old, though you would not consider much of it useful for independently making discoveries. I should have read up on it beforehand. Sorry for that, and thanks for pointing it out.
I find this hard to believe, unless it’s talking about 100% vibecoding
Yeah it is, it brings up a lot of good points that often don’t get talked about by the anti-AI folks (the sky is falling/AI is horrible) and extreme pro-AI folks (“we’re going to replace all the workers with AI”)
You absolutely have to know what the AI is doing at least somewhat to be able to call it out when it’s clearly wrong/heading down a completely incorrect path.
recent attempt to rewrite SQLite in Rust using AI
I think it is talking about 100% vibe coding. And yea, it’s pretty useful if you don’t abuse it.
Yeah, it’s really good at short bursts of complicated things. Give me a curl statement to post this file as a snippet into slack. Give me a connector bot from Ollama to and from Meshtastic, it’ll give you serviceable, but not perfect code.
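For instance, a minimal sketch of that first kind of task, written in Python rather than curl to keep the examples in one language; the token and channel are placeholders, and a true Slack snippet would go through the file-upload endpoints rather than chat.postMessage:

```python
# Minimal sketch: post a file's contents into a Slack channel as a code
# block. Token and channel are placeholders; a true "snippet" would use
# Slack's file-upload endpoints, while this shortcut uses chat.postMessage.
import requests

fence = "`" * 3  # Slack code-block markers

with open("deploy.log") as f:
    body = f.read()

resp = requests.post(
    "https://slack.com/api/chat.postMessage",
    headers={"Authorization": "Bearer xoxb-your-token-here"},
    json={"channel": "#ops", "text": f"{fence}{body}{fence}"},
)
resp.raise_for_status()
```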
When you get to bigger, more complicated things, it needs a lot of instruction, guard rails and architecture. You’re not going to just “Give me SQLite but in Rust, GO” and have a good time.
I’ve seen some people architect some crazy shit. You do this big, long, drawn-out project: tell it to use a small control orchestrator, set up many agents and have each agent do part of the work, have it create full unit tests, be demanding about best practices, post security checks, ouroboros it, and let it go.
But it’s expensive, and we’re still getting venture capital tokens for less than cost, and you’ll still have hard-to-find edge cases. Someone may eventually work out a fairly generic way to set it up to do medium scale projects cleanly, but it’s not now and there are definite limits to what it can handle. And as always, you’ll never be able to trust that it’s making a safe app.
Yea, I find that I need to instruct it comparably to a junior to get any good work out of it… And our junior standard, trust me, is very, very low.
I usually spam the planning mode and check every nook of the plan to make sure it’s right before the AI even touches the code.
I still can’t tell if it’s faster or not compared to just doing things myself… And as long as we aren’t allocated time to compare end to end with two separate devs of similar skill, there’s no point even trying to guess imho. Though I’m not optimistic. I may just be wasting time.
And yea, the true costs per token are probably double what we pay today, if not more…
“Codestrap founders”
Let me guess: they will spearhead the correct way to use AI?
it sure works well for slop marketers taking A/B testing to a new level of pointlessness.
I once saw someone send ChatGPT and Gemini Pro into a constant loop by asking “Is the seahorse emoji real?” The responses just looped forever. I have heard that the “Mandela Effect” theory in this case is not true; they say the emoji existed on Microsoft’s MSN Messenger and in early versions of Skype, though I don’t know how much of that is true. But it was fun seeing artificial intelligence being bamboozled by real intelligence. The guy was proving that AI is just a tool, not a permanent replacement for actual resources.
Ask it which is heavier: 20 pounds of gold or 20 feathers.
They could be dinosaur feathers, weighing a pound each 🤷♂️
GPT was doing the same thing a month ago. Try it, it is incredibly fun to see for yourself.
these types of articles aren’t analyzing the usefulness of the tool in good faith. the tools aren’t meant to do a lot of the things that are often implied. the coding tools are best used by coders who can understand code and make decisions about what to do with the code that comes out of the tool. you don’t need ai to help you be a shitty programmer
they are analyzing the way the tools are being used based on marketing. yes they’re useful for senior programmers who need to automate boilerplate, but they’re sold as complete solutions.
Recently had to call out a coworker for vibecoding all her unit tests. How did I know they were vibe coded? None of the tests had an assertion, so they literally couldn’t fail.
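In other words, something like this (a contrived reconstruction, not her actual code):

```python
def calculate_total(items: list[int]) -> int:
    return sum(items)

# Vibe-coded style: the result is computed and thrown away. With no
# assertion, this "test" passes no matter what calculate_total returns.
def test_total_vibe_coded():
    calculate_total([1, 2, 3])

# What it should look like:
def test_total():
    assert calculate_total([1, 2, 3]) == 6
```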
That’s weird. I’ve made it write a few tests once, and it pretty much made them in the style of other tests in the repo. And they did have assertions.
My company is pushing LLM code assistants REALLY hard (like, you WILL use it but we’re supposedly not flagging you for termination if you don’t… yet). My experience is the same as yours - unit tests are one of the places where it actually seems to do pretty good. It’s definitely not 100%, but in general it’s not bad and does seem to save some time in this particular area.
That said, I did just remove a test it created that verified that IMPORTED_CONSTANT is equal to localUnitTestConstantWithSameHardcodedValueAsImportedConstant. It passed ;)
Trust, with verification. I’ve had it do everything right, and I’ve had it do things so incredibly stupid that even a cursory glance at the code was more than enough to /clear and start back over.
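Reconstructed, that deleted test looked roughly like this (the names come from my description above; the value is an invented stand-in):

```python
# Roughly what the deleted test did; the names come from the comment
# above, the value is an invented stand-in.
IMPORTED_CONSTANT = 42  # in the real code this came from an import

localUnitTestConstantWithSameHardcodedValueAsImportedConstant = 42

def test_imported_constant():
    # Tautological: this checks that 42 == 42. It says nothing about how
    # the constant is used, so it can never catch a behavioral bug.
    assert (IMPORTED_CONSTANT
            == localUnitTestConstantWithSameHardcodedValueAsImportedConstant)
```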
Claude Code is capable of producing code and unit tests, but it doesn’t always get it right. It’s smart enough that it will keep trying until it gets the result, but if you start running low on context it’ll start getting worse at it.
I wouldn’t have it contribute a lot of code AND unit tests in the same session. New session: read this code and make unit tests. New session: read these unit tests and give me advice on any problems or edge cases that might be missed.
To be fair, if you’re not reading what it’s doing and guiding it, you’re fucking up.
I think it’s better as a second set of eyes than a software architect.
I think it’s better as a second set of eyes than a software architect.
A rubber ducky that talks back is also a good analogy for me.
I wouldn’t have it contribute a lot of code
Yeah, I tried that once, for a tedious refactoring. It would’ve been faster if I did it myself tbh. Telling it to do small tedious things, and keeping the interesting things for yourself (cause why would you deprive yourself of that …) is currently where I stand with this tool
Vibe coding guy wrote unit tests for our embedded project. Of course, the hardware peripherals aren’t available for unit tests on the dev machine/build server, so you sometimes have to write mock versions (like an “adc” function that just returns predetermined values in the format of the real analog-to-digital converter).
Claude wrote the tests and mock hardware so well that it forgot to include any actual code from the project. The test cases were just testing the mock hardware.
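A minimal sketch of what that looks like (names invented, with Python standing in for the real embedded test harness):

```python
# Minimal sketch of the failure mode above: the "unit test" never calls
# any project code; it only round-trips through the mock. Names are
# invented, and Python stands in for the real embedded test harness.

def mock_adc_read(channel: int) -> int:
    """Stand-in for the hardware ADC: canned 12-bit readings per channel."""
    canned = {0: 512, 1: 1023}
    return canned.get(channel, 0)

def test_adc_reading():
    # Passes every time -- but no firmware function (say, the voltage-
    # scaling routine that should consume this reading) is ever exercised.
    assert mock_adc_read(0) == 512
```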
Not realizing that should be an instant firing. The dev didn’t even glance at the unit tests…
if you reject her pull requests, does she fix them? is there a way for management to see when an employee is pushing bad commits more frequently than usual?
Hahaha 🤣
Guy selling an AI coding platform says other AI coding platforms suck.
This just reads like a sales pitch rather than journalism. Not citing any studies, just some anecdotes about what he hears “in the industry”.
Half of it is:
You’re measuring the wrong metrics for productivity, you should be using these new metrics that my AI coding platform does better on.
I know the AI hate is strong here but just because a company isn’t pushing AI in the typical way doesn’t mean they aren’t trying to hype whatever they’re selling up beyond reason. Nearly any tech CEO cannot be trusted, including this guy, because they’re always trying to act like they can predict and make the future when they probably can’t.
My take exactly. Especially the bits about unit tests. If you cannot rely on your unit tests as a first assessment of your code quality, your unit tests are trash.
And not every company runs GitHub. The metrics he’s talking about are DevOps metrics, not development metrics. For example, in my work, nobody gives a fuck about mean time to production. We have a planning schedule, and we need the OK from our customers before we can update our product.
I love this bit especially
Insurers, he said, are already lobbying state-level insurance regulators to win a carve-out in business insurance liability policies so they are not obligated to cover AI-related workflows. “That kills the whole system,” Deeks said. Smiley added: “The question here is if it’s all so great, why are the insurance underwriters going to great lengths to prohibit coverage for these things? They’re generally pretty good at risk profiling.”
This feels like an exercise in Goodhart’s Law: Any measure that becomes a target ceases to be a useful measure.