Yes. Also, my LLMs can do it too.
Guys, you can laugh at a joke. The AI doesn’t win just because someone upvoted a meme. Maintainability of codebases has been a joke for longer than LLMs have been around because there’s a lot of truth to it.
Even the most well-intentioned design has weaknesses we didn't see coming. Some of its abstractions are wrong. There are changes to the requirements and feature set that its authors didn't anticipate. They over-engineered other parts, making them harder to navigate for no maintainability gain. That's ok. Perfectly maintainable code would require us to be psychics, and none of us are.
I actually laughed out loud at this meme.
Yes, but only I can maintain it.
I can maintain it. But I won’t.
ouch
10 PRINT "Hello World!"
20 GOTO 10
EZ
Infinite loop and hard-coded magic constant; this should have a configurable timeout and a resource file the string is read from so we can internationalize the application. Additionally, the use of a GOTO with a hard-coded line number is a runtime bug waiting to happen after unrelated refactors; it's best to use a looping construct with more deterministic bounds.
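To carry the joke through: here's a tongue-in-cheek "enterprise-grade" sketch of that review's demands in Python. The message table, timeout, and iteration cap are all hypothetical stand-ins for the resource file and configuration the review asks for, not anything from the original post.

```python
import time

# Stand-in for a localized resource file (hypothetical).
MESSAGES = {"en": "Hello World!"}

def greet(locale="en", timeout_s=1.0, max_iterations=3):
    """Print the localized greeting in a bounded loop instead of GOTO 10."""
    message = MESSAGES[locale]
    lines = []
    deadline = time.monotonic() + timeout_s  # configurable timeout
    for _ in range(max_iterations):          # deterministic bounds, no GOTO
        if time.monotonic() > deadline:
            break
        lines.append(message)
    return lines

print("\n".join(greet()))
```

Two lines of BASIC, now a configurable subsystem. Maintainability achieved.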
No, so let’s vibe unmaintainable code together!

yes. yes I can. been doing it for 25 years.
I might not be the best, but I can still do a better job than AI
This is a bold claim I will not make.
A tale as old as time. The US nuclear missile codes were 000000, but it didn’t matter. The chain of command was purpose-built, ironically, so the front line soldier in a cold war scenario had to make the last decision to delete all life on the planet. Chain of command doesn’t matter at that point. You are choosing to kill everyone you know from an order from who knows who. The ultimate checksum.
You will always be better at decisions than an n-dimensional matrix of numbers on an overpriced GPU.
If you're a complete novice then obviously not, but I think anyone reasonably proficient in a language would be able to identify optimisations that an AI just doesn't seem to perceive, largely because humans are better at context.
It’s like that question about whether it’s worth driving your car to the car wash if the car wash is only 10 metres away. AIs have no experience of the real world so they don’t inherently understand that you can’t wash a car if it’s not at the car wash. A human would instantly know that that’s a stupid statement without even thinking about it, and unless you instruct an AI to actually deeply think about something they just give you the first answer they come up with.
I agree with you. But the tool will output basic code that mostly does what you asked in seconds instead of tens of minutes, if not hours. So now we could argue whether the optimizations you'd make are worth the added cost of writing the code yourself, or whether it's better to have the tool generate the code and then optimize it.
That's why they're pushing for the datacenters: they want to make every query that deep. The tech is here, but the ability to sustain it isn't. They build the data centers, kick the developers out, depress the education market for it, and then raise the prices.
Companies will be paying the AI companies 60k per year per seat in a decade.
At that price it would be cheaper to use humans.
In this economy?
Whoever upvoted this needs to read some books.
What are these books you speak of? Do they have special features?
I’m ass at coding and I still can, lmao
Maybe the real slop was the code we wrote along the way
But, I didn’t check any of mine in?
Waiting for Amazon to release an ad like the one Spooner describes to that CEO when he's being sarcastic.
“a carpenter, making a beautiful chair. And then one of your robots comes in and makes a better chair twice as fast. And then you superimpose on the screen, ‘USR: Shittin’ on the Little Guy’”
I could.
I choose not to! Take that, LLM!
Exactly. I’ve been sabotaging the AI with shitty code output since long before LLMs existed. That’s how I play 4D chess. (This is just meant to get a laugh. Some of my code is even quite nice, actually.)
I mean, yes, absolutely I can. So can my peers. I’ve been doing this for a long, long time, as have my peers.
The code we produce is many times more readable and maintainable than anything an LLM can produce today.
That doesn’t mean LLMs are useless, and it also doesn’t mean that we’re irreplaceable. It just means this argument isn’t very effective.
If you’re comparing an LLM to a Junior developer? Then absolutely. Both produce about the same level of maintainable code.
But for Senior/Principal level engineers? I mean this without any humble bragging at all: we run circles around LLMs from the optimization and maintainability standpoint, and it's not even close.
This may change in the future, but today it is true (and I use all the latest Claude Code models).
With LLMs I get work done about 3-5x faster, with the same level of maintainability and readability I'd have gotten writing it myself. Where LLMs fail is architecting things out: they can't see the blind alleys their architecture decisions lead them down. They also can't remember to activate Python virtual environments, like, ever.
The biggest problem with using AI instead of junior developers is that junior developers eventually become senior developers. LLMs … don’t.
They might, but it does not seem likely to me and is definitely not guaranteed.
It’s more likely than it happening with an LLM, though. Without junior developers the number of future senior devs approaches zero.
😞 Sir this is a Wendy’s.
sir, this is programmer_humor
and some jokes just aren’t funny








