31 comments

  • leokennis 2 hours ago
    > The obvious objection is that code produced at that speed becomes unmanageable, a liability in itself. That is a reasonable concern, but it largely applies when agents produce code that humans then maintain. Agentic platforms are being iterated upon quickly, and for established patterns and non-business-critical code, which is the majority of what most engineering organizations actually maintain, detailed human familiarity with the codebase matters less than it once did. A messy codebase is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today. The liability argument holds in a human-to-human or agent-to-human world. In an agent-to-agent world, it largely dissolves.

    Then I'd wager it's the same for the courses and workshops this guy is selling... an LLM can probably give me at least 75% of the financial insights for not even 0.1% of what this "agile coach" is asking for his workshops and courses.

    Maybe the "agile coach LLM" can explain to the "coding LLM's" why they're too expensive, and then the "coding LLM's" can tell the "agile coach LLM" to take the next standby shift then, if he knows so much about code?

    And then we actual humans can have a day off and relax at the pool.

    • bonesss 2 hours ago
      Conceding the premise that the AGI is gonna eat my job: my job involves reading the spec to be able to verify the code and output, so that there's a human to fire and sue. There are five layers of fluffy management and corporate BS before we get to that part, and the AGI is more competent at those fungible skills.

      With the annoying process people out of the picture, even reviewing vibeslop full time sounds kinda nice… Feet up, warm coffee, just me and my agents so I can swear whenever I need to. No meetings, no problems.

      • ben_w 1 hour ago
        > my job involves reading the spec to be able to verify the code and output so that there's a human to fire and sue.

        So, you're the programmer (verify code) and the QA (verify output) and the project manager (read the spec)?

        • virgilp 1 hour ago
          QA long ago merged with programming into "unified engineering". Also with SRE ("devops"), and now the trend is to merge with CSE and product management too ("product mindset", forward-deployed engineers). So yeah, pretty much, that's the trend. What would you trust more: an engineer doing project management too, or a project manager doing the engineering job?
          • ben_w 1 hour ago
            The PMs and QAs I know would disagree with that assessment.

            > What would you trust more - an engineer doing project management too - or a project manager doing the engineering job?

            If one of the three, {PM, QA, coder}, was replaced by AI, as a customer I'd prefer to pick the team missing the coder. But for teams replacing two roles with AI, I'd rather keep the coder.

            But a deeper problem now is, as a customer, perhaps I can skip the team entirely and do it all myself? That way, no game of telephone from me to the PM to the coder and QA and back to me saying "no" and having another expensive sprint.

        • catmanjan 36 minutes ago
          I mean, yes?

          Maybe it's different where you live but QA pretty much disappeared a few years ago and project managers never had anything to do with the actual software

      • taurath 1 hour ago
        There’s gonna be one guy in charge of you, and he’s going to expect you to be putting out 20x output while thanking him for the privilege of being employed, assuming all goes the way every management team seems to want

        I don't think this will happen, because AI has become a straight-up cult, and things that are going well don't need so many people performatively telling each other how well things are going.

    • MattGaiser 42 minutes ago
      In general, there’s very little info that costs much to learn nowadays. The human standing in the front is a disciplinarian to force you to learn it.
    • pydry 2 hours ago
      Exactly. It's been a while since I've read an LLM hot take that couldn't have been written by an LLM, and this one is no exception.

      There's a 99% chance that the training materials on sale are equally replaceable with a prompt.

      • kaon_2 2 hours ago
        True. And yet, as an organization, when you buy OP's training you don't buy the material. You buy the feeling that you are making your organization more productive. You buy the signal to your boss that you are innovative and working to make your organization more productive. And you buy the time and headspace of your engineers, so that they are thinking, if only for two hours, about making the organization more productive. The latter can be well worth the cost, and the former surely too.
        • pydry 1 hour ago
          They're buying a defensible (or laudable) justification when the training company's fee appears as a line item in the company budget.

          This doesn't mean the training has to be good, useful, or original in the slightest, but the provider does need credentials that a fellow executive would recognize, not just "some dev with a hot take".

  • boron1006 2 hours ago
    > A messy codebase is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today.

    I’ve been on two failed projects that were entirely AI-generated, and it’s not that the agents slow down and you can just send more agents to work for longer; it’s that they become completely unable to make any progress whatsoever, and whatever progress they do make is wrong.

    • jwpapi 13 minutes ago
      Same here. I have now deleted 43k (and counting) lines of my codebase. There is no point in putting AI code into production anymore, as it almost always uses no abstractions or the wrong ones.

      When you try to throw more agents at the problem, or add more verification layers, you just kill your agility, even if they would still be able to work.

    • nishantjani10 2 hours ago
      this is the part of the article that did not sit well with me either. Code can be agent-generated, and agents can debug it, but it will always be human-owned.

      unless Anthropic comes in tomorrow and takes ownership of all the code Claude generates, that is not changing.

    • iamflimflam1 2 hours ago
      Very much like humans when they drown in technical debt. I think the idea that a messy codebase can be magically fixed is laughable.

      What I might believe, though, is that agents might make rewrites a lot easier.

      “Now we know what we were trying to build - let’s do it properly this time!”

      • Cthulhu_ 2 hours ago
        Potentially, yes, but as with other software, you need to know AND have (automated) verifications on what it does, exactly.

        And of course, make the case that it actually needs a rewrite, instead of maintenance. See also second-system effect.

        • ben_w 1 hour ago
          > Potentially, yes, but as with other software, you need to know AND have (automated) verifications on what it does, exactly.

          Yes, but even here one needs some oversight.

          My experiments with Codex (on Extra High, even) showed that a non-zero percentage of the "tests" involved opening the source code (not running it, opening it) and regexing for a bunch of substrings.

        • tonyedgecombe 1 hour ago
          >And of course, make the case that it actually needs a rewrite, instead of maintenance.

          "The AI said so ..."

  • jwpapi 15 minutes ago
    I thought it was a good article, till I saw the Slack example.

    The copy doesn’t even remotely grasp the scale of what the actual Slack software does in terms of scale, reliability, observability, monitoring, and maintainability, and pretty surely functionality too.

    The author only mentions the non-dev work as the difference, which makes it seem like he doesn’t know what he’s talking about at all, or what running an application at that scale actually means.

    This "clone" doesn’t get you any closer to an actualy Slack copy than a white piece of paper

  • InfinityByTen 2 hours ago
    When I see someone just throwing a lot of numbers and graphs at me, I see that they are in it to win an argument, not to propose an idea.

    Of late, I've come across a lot of ideas from Rory Sutherland, and my conclusion from listening to them is that there are some people who're obsessed with numbers, because to them it's a way to find certainty and win arguments. He calls them "Finance People" (him being a Marketing one). Here's an example:

    "Finance people don’t really want to make the company money over time. They just thrive on certainty and predictability. They try to make the world resemble their fantasy of perfect certainty, perfect quantification, perfect measurement.

    Here’s the problem. A cost is really quantifiable and really visible. And if you cut a cost, it delivers predictable gains almost instantaneously."

    > Choosing to spend three weeks on a feature that serves 2% of users is a €60,000 decision.

    I'd really want to hire the oracle of a PM/analyst who can give me that 2% accurately even 75% of the time, and promise that nothing non-linear can come from the exercise.

    • necovek 1 hour ago
      As with any attempt to become more precise (see software estimation, eg. Mythical Man Month), we've long argued that we are doing it for the side effects (like breaking problems down into smaller, incremental steps).

      So when you know that you are spending €60k to directly benefit a small number of your users, and understand that this potentially increases your maintenance burden by up to 10 customer issues a quarter requiring 1 bug fix a month, you will want to make sure you are extracting at least equal value in specified gains, and a lot more in unspecified gains (e.g. the fact that this serves 2% of your customers might mean you'll open up a market where this was a critical need and suddenly grow by 25%, with 22% [27/125] of your users making use of it).

      You can plan for some of this, but ultimately when measuring, a lot of it will be throwing things at the wall to see what sticks according to some half-defined version of "success".

      But really you conquer a market by having a deep understanding of a particular problem space, a grand vision of how to solve it, and then actually executing on both. Usually, it needs to be a problem you have felt yourself in order to address it best!
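
      The 2% to 22% jump in the parenthetical above can be made explicit; the base of 100 customers below is an assumed figure chosen to make the percentages concrete:

```python
# Assumed illustration of the comment's 2% -> 22% [27/125] arithmetic.
base = 100                                 # assumed current customer count
niche_users = 2                            # the 2% the feature serves today
new_market = 25                            # 25% growth from the newly opened market
total = base + new_market                  # 125 customers after the growth
using_feature = niche_users + new_market   # 27 of them now use the feature
print(using_feature / total)               # 0.216, i.e. roughly the 22% cited
```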

    • jascha_eng 1 hour ago
      None of his math really checks out. Building a piece of software is, or at least was, orders of magnitude more expensive than maintaining it. But how much money it can make is potentially unbounded (until it gets replaced).

      So investing e.g. 10 million this year to build a product that produces maybe 2 million ARR will have amortized after 5 years, if you can reduce engineering spend to zero. You can also use the same crew to build another product and repeat that process over and over. That's why an engineering team is an asset.

      It's also a gamble, if you invest 10 million this year and the product doesn't produce any revenue you lost the bet. You can decide to either bet again or lay everyone off.

      It is incredibly hard or maybe even impossible to predict if a product or feature will be successful in driving revenue. So all his math is kinda pointless.
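
      A minimal sketch of the amortization arithmetic above, using the comment's figures plus an assumed ongoing maintenance cost:

```python
# Payback period for the example: 10M build cost, 2M ARR,
# assuming engineering spend drops to zero after launch.
build_cost = 10_000_000
arr = 2_000_000
print(build_cost / arr)  # 5.0 years to amortize

# If instead a maintenance team costing 1M/year is kept (assumed figure),
# net cash flow halves and the payback period doubles.
maintenance = 1_000_000
print(build_cost / (arr - maintenance))  # 10.0 years
```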

    • tweetle_beetle 2 hours ago
      As with most things, isn't the truth somewhere in the middle? True cost/value is very hard to calculate, but we could all benefit by trying a bit harder to get closer to it.

      It's all too common to frame the tension as binary: bean counters vs pampered artistes. I've seen it many times and it doesn't lead anywhere useful.

      • SpicyLemonZest 2 hours ago
        Here I think the truth is pretty far to one side. Most engineering teams work at a level of abstraction where revenue attribution is too vague and approximate to produce meaningful numbers. The company shipped 10 major features last quarter and ARR went up $1m across 4 new contracts using all of them; what is the dollar value of Feature #7? Well, each team is going to internally attribute the entire new revenue to themselves, and I don’t know what any other answer could possibly look like.
    • diatone 2 hours ago
      You’re illustrating one of the points of TFA: a team equipped with the right tools to measure feature usage (or reliably correlate it to overall userbase growth or retention) and hold that against sane guardrail metrics (product and technical) is going to outperform, over the long term, a team that relies on a wizardly individual PM or analyst making promises over the wall to engineering.
  • jaccola 2 hours ago
    I think the only thing that matters is whether the people on the team care deeply about the product; whether they care more about the product than their own careers (in the short term). Without that, any metric or way of thinking can and will be gamed.

    Unfortunately, even with all the management techniques in the world, there are just some projects that are impossible to care about. There’s simply a significantly lower cap on productivity on these projects.

  • lknuth 2 hours ago
    Making it solely about the extraction of dollars is a great recipe to make something mediocre. See Hollywood or Microslop.

    It's like min-maxing a Diablo build where you want the quality of the product to be _just_ above the "acceptable" threshold but no higher, because that's wasting money. Then you're free to use all remaining points to spec into revenue.

    • cmarot 2 hours ago
      Exactly. In addition, sometimes good software "only" saves you 1% of your time, but that 1% was a terrible burden that induced mental fatigue, made you take bad decisions, etc. It can even make a great engineer stay when they would have left with the previous version.
  • kcexn 27 minutes ago
    I feel like there is a lot of nuance around this topic that is getting lost in the noise.

    The direct and indirect financial impacts of technical decisions are indeed hard to measure. But some technical decisions definitely have greater financial impact than others, and even if it's hard to precisely quantify the costs and benefits of every decision, it is possible to order them relatively: X is likely to make more money than Y, so we do X first and Y later.

    There is a significant amount of chance involved in whether a product/feature will even make money at all. So even good plans with measurably positive expected value could end up losing money.

    Just because it's impossible to be 100% certain of the outcome of any decision doesn't mean we should throw the baby out with the bathwater.

  • sdevonoes 2 hours ago
    I still don’t understand what regular people (like the author) gain from selling how wonderful AI is. I get that the folks at Anthropic and OpenAI shove AI down our throats every day, but why nobodies?
    • csomar 2 hours ago
      He is selling consulting around AI/LLM.
      • febusravenga 57 minutes ago
        In other words, he's cutting off the branch he's sitting on.
  • mlazos 1 hour ago
    Look! A guy built 95% of Slack in 2 weeks! I'm very skeptical of that, btw, but an organization that justifies every single team by exactly how much dollar value it generates also sounds like hell. How would you ever innovate or try out new ideas? It's important to quantify what impact your team is generating, but there are some things (e.g. UX) that are really hard to quantify in dollars yet are still very important for the product.
  • consp 2 hours ago
    The estimated cost number is for very large companies with massive overhead. Dump the management overhead, the HR machine, and other things smaller companies do not have, and this number comes down massively.
  • watsonjs 51 minutes ago
    I've been a software engineer for more than ten years and never cared about these kinds of topics. But lately, I've found them genuinely interesting. Could someone recommend books on the economics of software businesses? I can't take this author's content seriously.
  • barrkel 2 hours ago
    The argument against platform teams needs to be balanced with the compounding nature of technical debt.

    The argument to always go for the biggest return works OK for the first few years of high growth (though the timeline is probably greatly compressed the more you use AI), but it turns into a kind of quicksand later.

  • willvarfar 1 hour ago
    With a long time in the industry and seeing how so many big software companies work, this really really chimed with me. Many/most teams and projects and busy work are not actually moving the bottom line, at massive opportunity cost! And there's so little awareness that most people in squads and their managers will think they are the exception.

    Whereas WhatsApp with its 30 software engineers was the exception, etc.

    A chat with friends showed how there are parallels between how LLMs will play out in the short-term future, say the next 5 years, and the whole MapReduce mess. Back when Hadoop came along, you built operators and these operators communicated through disk. It took years, even after Spark was around, for the Hadoop userbase as a whole to realise that it is orders of magnitude more efficient to communicate through disk only when two operators are not colocatable on the same machine, and that most operators in most pipelines can be fused together.

    So for a while LLMs will be in the Hadoop phase, acting like junior devs and making more islands that communicate in bigger, bloated codebases; then there might be a realisation around 2030 that the LLMs could have been used to clean up, streamline, and fuse software, approaching the WhatsApp style of business impact.

  • TheLudd 1 hour ago
    One interesting factor that I rarely see discussed is this: let's say a DevOps person makes some improvement to internal tooling, and a task that devs had to oversee manually is now automated. Every dev spent about 2 hours per week on this task and now they don't have to anymore. Have we now saved 2 hours of salary per dev per week?

    Not sure. Because it totally depends on what they do instead. Are they utilizing two hours more every week now doing meaningful work? Or are they just taking things a bit more easy? Very hard to determine and it just makes it harder to reason about the costs and wins in these cases.

    • viktorianer 24 minutes ago
      The freed-up time question is answerable when the work has clear metrics. A model test suite dropping from 6 minutes to 66 seconds saves developer time on every single run. Ten developers running tests five times a day, the math is straightforward.

      The problem is that most engineering work lacks that kind of before/after measurement. Not because it is unmeasurable, but because nobody set up the baseline. Profile before you optimize and the return on investment calculates itself.
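
      The "straightforward" math for that test-suite example can be spelled out; the per-day figure below follows directly from the comment's numbers:

```python
# Team-wide saving from the speedup described above:
# 6 minutes -> 66 seconds per run, 10 devs, 5 runs each per day.
before_s = 6 * 60                   # 360 s per run
after_s = 66                        # 66 s per run
saved_per_run = before_s - after_s  # 294 s saved per run

devs, runs_per_day = 10, 5
daily_hours = devs * runs_per_day * saved_per_run / 3600
print(round(daily_hours, 2))        # 4.08 hours saved per day across the team
```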

    • radiator 1 hour ago
      In such a clear-cut example, I think we have saved the two hours.
      • TheLudd 1 hour ago
        Yes. You work 2 hours less, but what do you produce in those two extra hours? Can you say that your company now spends X dollars less or earns X dollars more? I don't think it can be that clear.
  • bob1029 2 hours ago
    I don't understand the urgency around quantifying every aspect of the software process. Surely, we are in agreement that money in must at least equal money out if the company is to be viable? This is a simple quickbooks report, is it not?

    Why don't we instead focus our energies on the customer and then work our way backward into the technology. There are a lot of ways to solve problems these days. But first you want to make sure you are solving the right problem. Whether or not your solution represents a "liability" or an "asset" is irrelevant if the customer doesn't even care about it.

  • petetnt 2 hours ago
    > This does not mean that Slack’s engineering investment was wasted, because Slack also built enterprise sales infrastructure, compliance capabilities, data security practices, and organizational resilience that a fourteen-day prototype does not include.

    The LLM-agent-team argument also misses the core point that the engineering investment (which actually encompasses business decisions, design, and much more than just programming) is what got Slack (or any other software product) to where it is now and where it's going, and that creating a snapshot of the current state is, while maybe not absolutely trivial, still just a tiny fraction of the progress made over the years.

  • ares623 2 hours ago
    The "author" used someone's vibecoded Slack clone to justify his conclusions. I think he believes that the majority of Slack's value lies in the slick CSS animations.

    I do agree with his thesis in the middle, about how the ZIRP decade and the cultures that were born from that period were outrageous and cannot survive the current era. It's a brave new world, and it's not because of AI. It's because there's just not enough money flowing anymore, and what little is left is sucked up by the big boys (AI).

  • ozim 2 hours ago
    Then let's disregard the cost of running and maintaining a system, for the sake of having exact financial feedback.

    We do proxy measurements because having exact data is hard, since there is more to any feature than just code.

    A feature is not only code; it is also customer training and marketing. A feature might be perfectly viable from a code perspective but then utterly fail in adoption for reasons beyond the Product Owner's control.

    What I see in the comments: the author is selling his consultancy/coaching, and people with real-world experience are not buying it.

  • jiusanzhou 2 hours ago
    The 3-5x return threshold is the part most eng leaders never internalize. I've seen teams spend entire quarters on internal tooling that saves maybe 20 minutes per developer per week — nowhere near break-even, let alone a healthy return. The uncomfortable truth is that most prioritization frameworks (RICE, WSJF, etc.) deliberately avoid dollar amounts because nobody wants to see the math on their pet project. Once you attach real costs to sprint decisions, half the roadmap becomes indefensible.
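
    A break-even sketch in the spirit of the comment; the team sizes and hours below are assumed figures, not from the thread:

```python
# All figures assumed: a team of 4 spends one quarter (13 weeks,
# 40 h/week) on tooling that saves 30 devs 20 minutes per week each.
team, weeks, hours_per_week = 4, 13, 40
cost_hours = team * weeks * hours_per_week   # 2080 hours invested

devs, saved_min = 30, 20
yearly_saving = devs * saved_min / 60 * 52   # 520 hours saved per year

print(cost_hours / yearly_saving)            # 4.0 years just to break even
```

Nowhere near a 3-5x return on any reasonable time horizon, which is the comment's point.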
    • cpinto 2 hours ago
      You’re absolutely right, but just to a point. It should be easy to clearly quantify the desired financial outcome of a sprint, but not of its components. I don’t want to spend a single minute figuring out the financial outcome of a single ticket.
  • tgdn 2 hours ago
    I get "This site can’t be reached"
  • groby_b 53 minutes ago
    I see we're once again missing the existence of indirect impact. There's a reason organizations look at revenue/engineer overall instead of trying to attribute it directly to specific teams.

    I guess his students get to relearn that on their own.

    Also, any post talking about building software that then suggests "cost per unit" as an efficiency metric needs to come to the red courtesy phone; Taylorism would like to have a chat about times gone by.

  • SpicyLemonZest 2 hours ago
    > The obvious objection is that code produced at that speed becomes unmanageable, a liability in itself. That is a reasonable concern, but it largely applies when agents produce code that humans then maintain. Agentic platforms are being iterated upon quickly, and for established patterns and non-business-critical code, which is the majority of what most engineering organizations actually maintain, detailed human familiarity with the codebase matters less than it once did. A messy codebase is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today. The liability argument holds in a human-to-human or agent-to-human world. In an agent-to-agent world, it largely dissolves.

    I keep seeing this assumption that "unmanageable" caps out at "kinda hard to reason about", and anyone with experience in large codebases can tell you that's not so. There are software components I own today which require me to routinely explain to junior engineers (and indeed to my own instances of Claude) why their PR is unsound and I won't let them merge it no matter how many tests they add.

    • snowe2010 2 hours ago
      Yeah this really breaks down when you put the logic up against ANY sort of compliance testing. Ok you don’t meet compliance, your agents have spent weeks on it and they’re just adding more bugs. Now what are you going to do? You have to go into the code yourself. Uh oh.
  • danpalmer 2 hours ago
    > even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today

    Citation needed. A human engineer can grok a lot in 10 days, and an agent can spend a lot of tokens in 10 days.

  • DeathArrow 1 hour ago
    >Given that software teams are expensive

    In many companies there are 3 to 5 other people per developer (QA, agile masters, PO, PM, BA, marketing, sales, customer support etc.). The costs aren't driven just by the developer salaries.

    A CEO can cost as much as 10 developers, sometimes more.

    • dude250711 1 hour ago
      That is why I respect Zuckerberg: he did not participate in Google's and Apple's salary fixing and he is willing to pay new tech hires insane money.

      There is something different about CEOs that came from tech.

  • jillesvangurp 1 hour ago
    If you want to understand the economics, I recommend watching some of Don Reinertsen's videos on Lean 2.0. He goes deeply into a few concepts that are quite intuitive.

    Cost of delay: calculating the cost of delaying by a few weeks in terms of lost revenue (you aren't shipping whatever it is you are building), the total lifetime value of the product (your feature won't deliver value forever), and extra staffing cost. You can slap a number on it. It doesn't have to be a very accurate number, but it will give you a handle on being mindful that you are delaying the moment revenue is made, and taking on team cost at the expense of other things on your backlog.

    Option value: calculating the payoff for some feature you add to your software as having a non-linear payoff. It costs you n when it doesn't work out and might deliver 10*n in value if it does. Lean 1.0 would have you stay focused and toss out the option for that potential 10x payoff. But if you do a bit of math, there is probably a lot of low-hanging fruit that you might want to think about picking, because it has a low cost and a potential high payoff. In the same way, variability is a good thing because it gives you the option to do something with it later. A little bit of overengineering can buy you a lot of option value, whereas having tunnel vision and only doing what was asked might opt you out of all that extra value.

    A bad estimation is better than no estimation: even if you are off by 3x, at least you'll have a number and you can learn and adapt over time. Getting wildly varying estimates from different people means you have very different ideas about what is being estimated. Do your estimates in time. Because that allows you to slap a dollar value on that time and do some cost calculations. How many product owners do you know that actually do that or even know how to do that?

    Don't run teams at 100% capacity. Work piles up in queues and causes delays when teams are pushed hard. The more work you pile on the worse it gets. Worse, teams start cutting corners and take on technical debt in order to clear the queue faster. Any manufacturing plant manager knows not to plan for more than 90% capacity. It doesn't work. You just end up with a lot of unfinished work blocking other work. Most software managers will happily go to 110%. This causes more issues than it solves. Whenever you hear some manager talking about crunch time, they've messed up their planning.

    Stretching a team like that will just cause cycle times to increase. Also, see cost of delay: queues aren't actually free. If you have a lot of work in progress with interdependencies, any issue will cause your plans to derail and cause costly delays. It's actually very risky if you think about it like that. If you've ever been on a team that seemingly doesn't get anything done anymore, this might be what is going on.

    I like this back of the envelope math; it's hard to argue with.

    I used to be a salaried software engineer in a big multinational. None of us had any notion of cost. We were doing stuff that we were paid to do. It probably cost millions. Most decision making did not have $ values on them. I've since been in a few startups. One where we got funded and subsequently ran out of money without ever bringing in meaningful revenue. And another one that I helped bootstrap where I'm getting paid (a little) out of revenue we make. There's a very direct connection between stuff I do and money coming in.
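
    Several of these concepts reduce to one-line arithmetic. The numbers below are assumed for illustration, and the queueing formula is the standard M/M/1 approximation rather than something from the comment:

```python
# Cost of delay (assumed numbers): a feature expected to earn
# 50k/month, shipped 6 weeks late.
weekly_revenue = 50_000 * 12 / 52
print(round(weekly_revenue * 6))   # rough revenue lost to the delay

# Option value: a bet costing n that pays 10*n with probability p.
n, p = 5_000, 0.2
print(p * 10 * n - (1 - p) * n)    # 6000.0: positive EV even at a 20% hit rate

# Queues: in an M/M/1 model, average wait grows like u/(1-u) with
# utilization u, which is why running teams near 100% blows up cycle time.
for u in (0.7, 0.9, 0.99):
    print(u, round(u / (1 - u), 1))
```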

    • tome 1 hour ago
      Do you have any recommendations? I find his book Principles of Product Development Flow very interesting.
  • lynx97 2 hours ago
    Using ‘blind’ to mean ‘ignorant’ is like using any disability label as a synonym for ‘bad’—it turns a real condition into an insult.
    • Smaug123 2 hours ago
      "Flying blind" is a completely standard idiom originating from flying while blinded by e.g. cloud or darkness. Its meaning is a figurative transplant of a literal description.
      • lynx97 2 hours ago
        I know it’s an idiom. The point is that it still uses blindness as a stand-in for incompetence and unsafe guessing. Being common doesn’t make it harmless; common just means we’ve normalized it. And your defending it shows that we’ve normalized it to the point where the double meaning is apparently visible only to blind people.
        • anonymous908213 2 hours ago
          It absolutely does not use blindness as a stand-in for incompetence, that is your own outrage-seeking interpretation of it. A neutral interpretation would be that "flying blind" is to "operate without perfect information". It is a simple description of operating conditions, not a derogatory term in any way. Your reply is worded in such a way as to indicate that you think the person you're replying to deserves to be shamed for 'defending' it, but having a disability does not entitle you to browbeat the world into submission and regulate all usage of any words associated with your disability as you see fit. This is quite benign and people are perfectly well within their right to object to somebody trying to police plainly descriptive language.
        • Terr_ 1 hour ago
          You are equivocating. Blindness as a personal chronic medical condition is not the same as a situational difficulty.

          The pilot who is "flying blind" has perfectly normal eyeballs. They are not necessarily a member of any minority group, except for their chosen profession.

          _____

          As for "blind" being a word that appears more frequently in a negative rather than positive way... Well, I'm not sure what to tell you, that's just 10,000+ years of language from a species that evolved to prefer seeing.

          To offer an example of the positive case, there is the idiom "justice is blind". Yes, there is a popular cultural mascot wearing a strip of fabric over her eyes, but again: the justice doesn't actually involve any (real) personal medical condition, and it's considered a positive feature for the job.

        • srdjanr 2 hours ago
          Well, flying blind is unsafe guessing (ignoring modern instruments), that's a fact. But only "flying" and "blind" together. No one thinks this gives the word "flying" a negative connotation here, and the same goes for "blind".

          Like "drinking" and "driving". On their own, they're both neutral, but "drinking and driving" is really bad.

        • LegNeato 2 hours ago
          No, it means not being able to see what is going on. Which is literally what the word blind means. You can be blinded by many things (blindfold, clouds/fog, bright lights, darkness, accidents, genetics, etc), permanently and temporarily. Non-humans can be blind and blinded. YOU are making it about a specific situation and projecting value judgements on it.

          The author specifically says FLYING blind. Not "stumbling around like a blind person" or some such. If you are offended, that is on you. It's your right to be offended of course, but don't expect people to join in your delusion.

    • srdjanr 1 hour ago
      Why is "ignorant" a synonym for "bad" (as a moral judgement, like "bad person")?

      It just means you don't know something, which is usually a relatively bad situation for you, but it doesn't make you a bad person.

      If you think otherwise, that's on you.
