German General Kurt von Hammerstein-Equord (chief of the German Army Command during the Reichswehr era):
“I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined.
Some are clever and diligent — their place is the General Staff.
The next lot are stupid and lazy — they make up 90% of every army and are suited to routine duties.
Anyone who is both clever and lazy is qualified for the highest leadership posts, because he possesses the intellectual clarity and the composure necessary for difficult decisions.
One must beware of anyone who is both stupid and diligent — he must not be entrusted with any responsibility because he will always cause only mischief.”
I think we put too much negative emphasis on people who aren’t as gifted intellectually.
In reality, the world works because of human automatons: honest people doing honest work, living their lives in what is hopefully a comforting, complete, and wholesome way, quietly contributing their piece to society.
There is no shame in this, yet we act as though there is.
This is what pains me about how many people respond negatively to the idea of everyone being able to earn an honest living and raise a family. Too often the notion of "deserving it" comes into it, as if doing your small part to contribute to society were not enough.
Similar to bragging about LOC, I have noticed in my own field of computational fluid dynamics that some vibe coders brag about how large or rigorous their test suites are. The problem is that whenever I look more closely, the tests are unremarkable and less rigorous than my own manually written tests. There are often big gaps in vibe-coded tests. I don't care if you have 1 million tests. 1 million easy tests, or 1 million tests that don't cover the right parts of the code, aren't worth much.
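To make the point concrete, here is a toy Python sketch (mine, not from any vibe-coded suite): a pile of "easy" tests on comfortable inputs all pass, while one boundary test exposes the bug they never touch.

```python
def harmonic_mean(xs):
    # Bug: raises ZeroDivisionError when any sample is 0.
    return len(xs) / sum(1.0 / x for x in xs)

# A pile of "easy" tests, all on well-behaved inputs: every one passes.
for xs in ([1.0, 1.0], [2.0, 2.0], [4.0, 4.0]):
    assert harmonic_mean(xs) == xs[0]

# One boundary test worth more than the pile: it exposes the bug.
try:
    harmonic_mean([0.0, 1.0])
    crashed = False
except ZeroDivisionError:
    crashed = True
assert crashed, "boundary input was never exercised by the easy tests"
```

Counting tests tells you nothing here; only the one test that probes the boundary has any real value.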
> Generally, though, most of us need to think about using more abstraction rather than less.
Maybe this was true when Programming Perl was written, but I see the opposite much more often now. I'm a big fan of WET - Write Everything Twice (stolen from comments here) - then, the third time, think about maybe creating a new abstraction.
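A toy Python sketch of that workflow (all names invented for illustration): duplicate the pattern twice, and only extract a helper once a third use proves the abstraction is real.

```python
# First use: just write it inline.
user_input = "  alice "
name = user_input.strip().title()

# Second use: copy it. Still cheap, and we learn how the cases differ.
raw_city = "  new york "
city = raw_city.strip().title()

# Third use: the pattern has now proven itself, so extract the helper.
def clean(text: str) -> str:
    """Normalize whitespace and casing - the pattern duplicated above."""
    return text.strip().title()

assert clean("  bob ") == "Bob"
assert clean(user_input) == name and clean(raw_city) == city
```

The payoff of waiting is that by the third occurrence you know which parts actually vary, so the helper's interface is informed by evidence rather than guesswork.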
Totally agree with this, the beauty of software is the right abstractions have untold impact, spanning many orders of magnitude. I'm talking about the major innovations, things like operating systems, RDBMS, cloud orchestration. But the majority of code in the world is not like that, it's just simple business logic that represents ideas and processes run by humans for human purposes which resist abstraction.
That doesn't stop people from trying, though; platform creation is rife within big tech companies as a technical form of empire building and career-driven development. My rule of thumb in tech reviews is that you can't have a platform until you have three proven use cases and have shown that coupling them together is not a net negative, given the autonomy constraint a shared system imposes.
As dumb as it is to loudly proclaim you wrote 200k LOC last week with an LLM, I don’t think it’s much better to look at the code someone else wrote with an LLM and go “hah! Look at how stupid it is!” You’re making exactly the same error as the other guy, just in the opposite direction: you’re judging the profession of software engineering based on code output rather than value generation.
Now, did Garry Tan actually produce anything of value that week? I dunno, you’ll have to ask him.
> As for the artifact that Tan was building with such frenetic energy, I was broadly ignoring it. Polish software engineer Gregorein, however, took it apart, and the results are at once predictable, hilarious and instructive: A single load of Tan’s "newsletter-blog-thingy" included multiple test harnesses (!), the Hello World Rails app (?!), a stowaway text editor, and then eight different variants of the same logo — one of which with zero bytes.
Do you think any of the... *things* bundled in this software increased its attack surface?
I also struggle with this all the time: the balance between bringing value/joy and the level of craft. Most human-written stuff might look really ugly or be written in a weird way, but as long as it’s useful, that’s okay.
What I don’t like here is the bragging about the LoC. He’s not bragging about the value it could provide. Yes, people also write shitty code, but they don’t brag about it - most of the time they are even ashamed of it.
> Now, did Garry Tan actually produce anything of value that week? I dunno, you’ll have to ask him.
Let’s not be naive. Garry is not a nobody. He absolutely doesn’t care about how many lines of code are produced or deleted. He made that post as an advertisement: he’s advertising AI because he’s the CEO of YC, whose profitability depends on AI.
> You’re making exactly the same error as the other guy, just in the opposite direction: you’re judging the profession of software engineering based on code output rather than value generation.
But the true metric isn't either one, it's value created net of costs. And those costs include the cost to create the software, the cost to understand and maintain it, the cost of securing it and deploying it and running it, and consequential costs, such as the cost of exploited security holes and the cost of unexpected legal liabilities, say from accidental copyright or patent infringement or from accidental violation of laws such as the Digital Markets Act and Digital Services Act. The use of AI dramatically decreases some of these costs and dramatically increases other costs (in expectation). But the AI hypesters only shine the spotlight on the decreased costs.
"Value generation" is a term I would be somewhat wary of.
To me, in this context, it's similar to driving economic growth with fossil fuels.
Whether in the end it can result in a net benefit (the value being larger than the cost of interacting with it plus the cost of sorting out the mess later) is likely impossible to say, but I don't think it can simply be judged by short-sighted value.
It isn't worth the time. I am not going to read the 200k LOC to prove it was a bad idea to generate that much code in a short time and ship it to production; it is on the vibe coder to prove it wasn't. And if it is just tweets being exchanged, and someone is boasting about LOC and aiming to produce more LOC per second? Yep, I'll judge 'em. It is stupid.
Given the framing of the article, I can understand where the "opposite direction" comment is coming from. The author also gives mixed signals by simultaneously suggesting that the "laziness" of both the programmer and the code are virtues. Yet I don't think they are ignoring value generation. Rather, I think they are suggesting that the value lies in the quality of the code rather than in the problem being solved. This seems to be an attitude held by many developers who are interested in the pursuit of programming rather than the end product.
I've had this exact sentiment in the past couple of months after seeing a few PRs that were definitely the wrong solution to a problem. One was implementing its own parsing functions for a problem where well-established solutions like JSON likely existed. Any non-LLM programmer could have thought this up but would then immediately have decided to look elsewhere; their human emotions would have kicked in and said, "that's way too much (likely redundant) work, there must be a better way." But the LLM has no emotion. It isn't lazy, and that can be a problem, because it makes it a lot easier to do the wrong thing.
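Schematically, the contrast looks like this in Python (the `parse_pair` helper is a made-up stand-in for the PR's hand-rolled parser, not the actual code):

```python
import json

# What the PR did, schematically: hand-roll parsing for a format the
# standard library already handles. Fragile by construction - it breaks
# on quoting, escapes, nesting, and anything beyond the happy path.
def parse_pair(line: str) -> tuple[str, int]:
    key, _, value = line.partition(":")
    return key.strip().strip('"'), int(value.strip())

# The lazier (and better) move: reach for the battle-tested parser.
record = json.loads('{"count": 42}')

assert parse_pair('"count": 42') == ("count", 42)
assert record["count"] == 42
```

A lazy human weighs the second option first; an eager LLM happily writes the first one at length.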
LLMs not being lazy enough definitely feels true. But it's unclear to me whether it's a permanent issue, one that will be fixed in the next model upgrade, or just one that your agent framework or CI/CD pipeline takes care of.
E.g., right now when using agents, after I'm "done" with the feature and I commit, I usually prompt, "Check for any bugs or refactorings we should do." I could see a CI/CD step that says, "Look at the last N commits and check if the code in them could be simplified or refactored to have a better abstraction."
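That CI/CD step could be sketched roughly like this in Python (the helper names are invented and the actual model call is left as a placeholder, since it depends on your provider):

```python
import subprocess

def review_prompt(diff: str, n: int) -> str:
    """Wrap a diff in the 'could this be simplified?' review request."""
    return (
        f"Look at the diff of the last {n} commits and check whether the "
        "code in them could be simplified or refactored to have a better "
        f"abstraction:\n\n{diff}"
    )

def last_n_commits_diff(n: int) -> str:
    """Combined diff of the last n commits (must run inside a git repo)."""
    result = subprocess.run(
        ["git", "diff", f"HEAD~{n}..HEAD"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# In CI you would then do something like:
#   prompt = review_prompt(last_n_commits_diff(5), 5)
#   ...send `prompt` to your model and comment/fail based on the reply.
```

The point is only that the "be lazier" nudge can live in automation rather than in every developer's prompting habits.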
I have noticed LLMs have a propensity to create full single-page web applications instead of simpler programs that just print results to the terminal.
I've also struggled with getting LLMs to keep spec.md files succinct. They seem incapable of simplifying documents while doing another task (e.g., "update this doc with xyz and simplify the surrounding content") and really need to be specifically tasked with simplifying/summarizing. If you want something human-readable, you probably just need to write it yourself. Editing LLM output is painful, and writing and understanding something yourself also helps keep you in the loop.
I very much agree; I think laziness / friction is basically a critically important regularizer for what to build and for what to not build. LLMs remove that friction and it requires more discipline now. (Wrote some of this up a while ago here: https://matthiasplappert.com/blog/2026/laziness-in-the-age-o...)
The more people boast about AI while delivering absolute garbage like the example here, the happier I feel toiling away at Nginx configurations and sysadmin busywork. Why worry about AI when it's the same old idiots using it as a crutch, like any new fad?
Great article, I've been saying something similar (much less eloquently) at work for months and will reference this one next time it comes up.
Quite often I see inexperienced engineers trying to ship the dumbest stuff. Before LLMs, these would be projects that would take them days or weeks to research, write, and test, and somewhere along the way they could come to the realization, "hold on, this is dumb or not worth doing." Now they just send a 10k-line PR before lunch and pat themselves on the back.
Oh, this hits all the right notes for me! I am just the demographic that tried to Perl my way into the earliest web server builds, and I read those exact words carefully while looking at the very mixed-quality, cryptic ASCII line noise that is everyday Perl. As someone who had already built multi-thousand-line C++ systems, the "virtues" described by Larry Wall seemed spot on! And now to combine that hindsight with the current snotty Lord Fauntleroy LLM action coming out of San Francisco... perfect!
Disregarding the fact that Bryan runs Oxide, a company with multiple investors and customers (I'd say that proves his knowledge is valuable), the crazier fact is that people think HTML is useless knowledge.
React USES HTML. Understanding HTML is core to understanding React. React does not in any way devalue HTML, any more than driving an automatic devalues knowing how to drive a manual.
Go to Facebook.com, right-click, view source, and tell me HTML is not being devalued. No person who wants to write aesthetic HTML would write that stuff.
When it matters, it matters. Even in Facebook's case, they made React fit their use case. Do you think the React devs didn't understand HTML? Do you think quality frontends can be written without any understanding of HTML?
Like the article says, we've moved up an abstraction level. That does not make HTML knowledge useless.
> Maybe this was true when Programming Perl was written, but I see the opposite much more often now. I'm a big fan of WET - Write Everything Twice (stolen from comments here), then the third time think about maybe creating a new abstraction.
I've always heard this as the "Rule of three": https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra...
> Now, did Garry Tan actually produce anything of value that week? I dunno, you’ll have to ask him.
https://en.wikipedia.org/wiki/Horizon_IT_scandal
Furthermore,
> As for the artifact that Tan was building with such frenetic energy, I was broadly ignoring it. Polish software engineer Gregorein, however, took it apart, and the results are at once predictable, hilarious and instructive: A single load of Tan’s "newsletter-blog-thingy" included multiple test harnesses (!), the Hello World Rails app (?!), a stowaway text editor, and then eight different variants of the same logo — one of which with zero bytes.
Do you think any of the... *things* bundled in this software increased its attack surface?
> What I don’t like here is the bragging about the LoC. He’s not bragging about the value it could provide. Yes people also write shitty code but they don’t brag about it - most of the time they are even ashamed.
I've seen plenty of real code written by real people with multiple test harnesses and multiple mocking libraries.
It's still kind of irrelevant to whether the code does anything useful; it's only a descriptor of the funding model.
> Let’s not be naive. Garry is not a nobody. He absolutely doesn’t care about how many lines of code are produced or deleted. He made that post as advertisement: he’s advertising AI because he’s the ceo of YC which profitability depends on AI.
He’s just shipping ads.
The cautionary/pessimist folks at least don't make money by taking the stance.
It is *exactly* the same as a person who spent years perfecting hand-written HTML, only to face the wrath of React.
Now look up who he actually is.
I recommend you go look at some of his talks on YouTube; his best five talks are probably all in my all-time top-ten list!
He's co-founder and CTO of his own company, so I think he's doing fine in his field.