AI is great at writing code. It's terrible at making decisions

(untangle.work)

15 points | by kdbgng 21 hours ago

5 comments

  • adampunk 19 hours ago

    This is an ad written by a robot. It just says "AI bad at code. Hire us; we're so good at knowing tough things like that."

  • scuff3d 19 hours ago

    In this same vein, plan mode is a trap. It makes you feel like you're actually engineering a solution. Like you're making measured choices about implementation details. You're not, you're just vibe coding with extra steps.

    I come from an electrical engineering background originally, and I've worked in aerospace most of my career. Most software devs don't know what planning is. The mechanical, electrical, and aerospace engineering teams plan for literal years. Countless reviews and re-reviews, trade studies, down-selects, requirement derivations, MBSE diagrams, and God knows what else before anything that will end up in the final product is built. It's meticulous, detailed, time-consuming work, and bloody expensive.

    That's the world software engineering has been trying to leave behind for at least two decades, and now with LLMs people think they can move back to it with a weekend of "planning", answering a handful of questions, and a task list.

    Even if LLMs could actually execute on a spec to the degree people claim (they can't), it would take as long to properly define as it would to just write it with AI assistance in the first place.

    • xxwink 4 hours ago

      Your aerospace analogy is fair, but I'd push back on one thing: the problem isn't that developers don't plan, it's that most planning tools for software are too lightweight to actually constrain AI output. "Plan mode" is indeed vibe coding with extra steps if your plan is a bullet list.

      I've been building a Go web framework using AI as the primary code writer. What made it work wasn't a task list; it was locking architectural decisions upfront in a document the AI reads before touching any file. Not guidelines. Decisions. Closed. With rationale, rejected alternatives, and consequences documented. Any change that crosses a decision boundary gets stopped. Any change touching more than one file requires an explicit Amendment: numbered, approved, then implemented.

      If you've worked with formal change control in project management, it's exactly that mental model applied to AI-assisted development. The AI writes code. It does not decide what gets built or how the pieces fit together. That's closer to your requirements-derivation and down-select model than to anything most software teams do. The difference is that the tooling forces it: the AI won't proceed without the context, and the context is the spec.
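      To make the "decision boundary" gate concrete, here's a minimal sketch of what such a check could look like. Everything here is hypothetical and illustrative (the decision IDs, file paths, and `requires_amendment` helper are made up, not from any real tool); the point is just that a changeset can be mechanically tested against locked decisions before the AI proceeds.

```python
# Hypothetical sketch of a "locked decisions" gate for AI-assisted changes.
# All names (DECISIONS, requires_amendment, the ADR IDs and paths) are
# illustrative assumptions, not part of any published tool.

DECISIONS = {
    "ADR-001": {
        "title": "Router is tree-based; no regex routes",
        "locked_paths": ["router/"],   # edits under here cross a decision boundary
        "rejected": ["regex route table", "linear scan"],
    },
    "ADR-002": {
        "title": "Context carries request-scoped dependencies",
        "locked_paths": ["context.go"],
        "rejected": ["package-level globals"],
    },
}

def requires_amendment(changed_files):
    """Return the IDs of every decision boundary a changeset crosses.

    A change needs an explicit, numbered Amendment if it touches any
    locked path, or if it spans more than one file.
    """
    crossed = set()
    for path in changed_files:
        for dec_id, dec in DECISIONS.items():
            if any(path.startswith(p) for p in dec["locked_paths"]):
                crossed.add(dec_id)
    if len(changed_files) > 1:
        crossed.add("MULTI-FILE")
    return sorted(crossed)
```

      A single-file change outside any locked path returns an empty list and proceeds; anything else is stopped until an Amendment is approved.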

      • scuff3d 33 minutes ago

        It's more cultural than anything. 90% of the work done in those other fields is Excel, PowerPoint, and Visio. Wouldn't be hard to convert to text and images in a repo. Though I'm sure the planning tools will get better.

        One of the last things I did before switching to software was design a PCB. I spent a year (not just on this one board, of course) researching parts, making sure they were compatible, making sure they met the requirements, doing the math to make sure we'd make timing based on the delays through each IC, working with mechanical engineers to ensure it would fit in the enclosure. Did three reviews for that one board (one of hundreds in the larger system). After a year it finally went to a vendor for fab, where I spent another 3 months reviewing circuit traces, verifying layout, and then testing once the boards got delivered. I didn't lay a trace or do any of the actual work myself until it landed in my lab for testing. And all of that was after other engineers had spent God knows how long defining all the requirements.

        When people talk about "spec driven development" that's the world they're talking about. Understanding every API call, every abstraction, every interaction, all the various behaviors of a language, and all the requirements. Then writing comprehensive design docs to describe all of that in terms an LLM could understand. Then finally letting the LLM do the work, and spending weeks or months more on testing. And that's assuming LLMs can execute on a spec, which they can't.

        Your approach sounds much more reasonable. Lay out the structure and have the LLM fill in the gaps with your oversight. That's how I hear a lot of people doing actual good work describe their workflow. I suspect most talented engineers are going to land somewhere on the spectrum between what you describe and personally hand-writing each line of code.

        What's never going to work is spending a few days writing a several-hundred-line spec, shoving it through more LLMs until you get a plan out, and then setting the LLM loose on it. And expecting people with no domain knowledge to get something decent out is even worse.

  • yesensm 20 hours ago

    [dead]