Write the damn code

(antonz.org)

161 points | by walterbell 6 hours ago

59 comments

  • fusslo 5 hours ago

    About two months ago, my company got a license for Cursor/Claude AI access.

    At first it was really cool getting an understanding of what it can do. It can be really powerful, especially for things like refactoring.

    Then, I found it to be in the way. First, I had to rebind the auto-insert from TAB to ctrl+space because I would try tabbing code over and blamo: lines inserted, resulting in more work deleting them.

    Second, I found that I'd spend more time reading the ai generated autocomplete that pops up. It would pop up, I'd shift focus to read what it generated, decide if it's what I want, then try to remember what the hell I was typing.

    So I turned it all off. I still have access to context aware chats, but not the autocomplete thing.

    I have found that I'm remembering more and understanding the code more (shocking). I also find that I'm engaging with the code more: making more of an effort to understand it.

    Maybe some people have the memory/attention span/ability to context switch better than me. Maybe younger people, more used to distractions and attention-stealing content, handle it better.

    • hatefulmoron 5 hours ago

      I remember discussing with some coworkers a year(?) ago about autocomplete vs chat, and we were basically in agreement that autocomplete was the better feature of the two.

      Since we've had Claude Code for a few months I think our opinions have shifted in the opposite direction. I believe my preference for autocomplete was driven by the weaknesses of Chat/Agent Mode + Claude Sonnet 3.5 at the time, rather than the strengths of autocomplete itself.

      At this point, I write the code myself without any autocomplete. When I want the help, Claude Code is open in a terminal to lend a hand. As you mentioned, autocomplete has this weird effect where instead of considering the code, you're sort of subconsciously trying to figure out what the LLM is trying to tell you with its suggestions, which is usually a waste of time.

      • wongarsu 3 hours ago

        LSP giving us high-quality autocomplete for nearly every language has made simple LLM-driven autocomplete less magical. Yes, it has good suggestions some of the time, but it's not really revolutionary.

        On the other hand, I love Cursor's autocomplete implementation. It doesn't just provide suggestions for the current cursor location; it also suggests where the cursor should jump next within the file. You change a function name and just press tab a couple of times to change the name in the docstring and everywhere else. Granted, refactoring tools have done that forever for function names, but now it works for everything. And if you do something repetitive, it picks up on what you are doing and turns it into a couple of quick keypresses.

        It's still annoying sometimes

    • hansonkd 5 hours ago

      I think the worst part of the autocomplete is when you actually just want to tab to indent a line and it tries to autocomplete something at the end of the line.

      • dingnuts 5 hours ago

        ok call me a spoiled Go programmer but I have had an allergy to manually formatting code since getting used to gofmt on save. I highly recommend setting up an autoformatter so you can write nasty, unindented code down the left margin and have it snap into place when you save the file like I do, and never touch tab for indent. Unless you're writing Python of course haha
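
        For anyone who wants to set this up in VS Code, a minimal settings sketch (assuming the official golang.go extension provides the formatter; one possible config, not the only way):

            // settings.json: let the Go extension snap code into place on save
            {
              "[go]": {
                "editor.defaultFormatter": "golang.go",
                "editor.formatOnSave": true
              }
            }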

        • justinrubek 5 hours ago

          Format on save is my worst enemy. It may work fine for Go, but you'll eventually run into code that isn't formatted with the formatter your editor is configured for. Then you end up reformatting the whole file or having to remember how to disable save formatting. I check formatting as a git hook on commits instead.
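
          A minimal sketch of that hook (assuming a Go repo with gofmt on the PATH; adapt to your formatter):

              #!/bin/sh
              # .git/hooks/pre-commit: reject the commit if staged Go files are unformatted
              files=$(git diff --cached --name-only --diff-filter=ACM -- '*.go')
              [ -z "$files" ] && exit 0
              unformatted=$(gofmt -l $files)
              [ -z "$unformatted" ] || { echo "gofmt needed on: $unformatted"; exit 1; }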

          • chatmasta 4 hours ago

            If you’re checking it on git hooks then it’s even safer to have format on save. Personally I default to format on save, and if I’m making a small edit to a file that is formatted differently, and it would cause a messy commit to format on save, then I simply “undo” and then “save without formatting” in the VSCode command palette.

            • jvalencia 3 hours ago

              We make separate format-only commits, so the non-logic changes don't have to be agonized over if we find files are all off.

              • chatmasta 2 hours ago

                You can also add those non-logic commits to a .git-blame-ignore-revs file, and any client that supports it will ignore those commits when surfacing git blame annotations. I believe GitHub supports this, but I'm not sure. I think VSCode does…
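
                For reference, wiring it up locally looks roughly like this (the hash is a placeholder for your format-only commit; GitHub does pick up a root-level .git-blame-ignore-revs automatically):

                    # .git-blame-ignore-revs: one full commit hash per line; '#' comment lines are allowed
                    echo "<full hash of the repo-wide format commit>" >> .git-blame-ignore-revs
                    git config blame.ignoreRevsFile .git-blame-ignore-revs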

    • lukevp 5 hours ago

      Autocomplete is a totally different thing from what this article is talking about. The article refers to the loop of prompt refinement, which by definition means Agent Mode-style integrations. Autocomplete has no prompting.

      I agree autocomplete kinda gets in the way, but don't confuse that with all AI coding being bad; they're two totally distinct functions.

    • gopalv 5 hours ago

      > I have found that I'm remembering more and understanding the code more (shocking).

      I feel like what I felt with adaptive cruise control.

      Instead of watching my speed, I was watching traffic flow, watching cars way up ahead instead.

      The syntax part of my brain is turned off, but the "data flow" part is 100% on when reading the code instead.

      • stouset 4 hours ago

        Wait, really? This is kind of surprising to me. Even without adaptive cruise control, I generally spend very few brain cycles paying attention to speed. My speed just varies based on conditions and the traffic flow around me, and I'm virtually never concerned with the number on the dial itself.

        As a result I've never found adaptive cruise control (or self-driving) to be all that big a deal for me. But hearing your perspective suddenly makes me realize why it is so compelling for so many others.

        • ryukafalz 2 hours ago

          That's how it should be ideally, but that can be a problem depending on the infrastructure around you. In my area (South Jersey) the design speed of our roads is consistently much higher than the posted speed limit. This leads to a lot of people consistently going much faster than the posted limit, and to people internalizing the idea that e.g. it's only really speeding if you're going 10+ mph over the limit. Which isn't actually safe in a lot of places!

          If the design speed of your roads is a safe speed for those around you then yeah that works perfectly.

    • javier2 5 hours ago

      Yeah, I also have the autocomplete disabled. To me it's most useful when I'm working in an area I know, but not the details. For example, I know cryptography, but I don't know the cryptography APIs in Node.js, so Claude is very helpful when writing code for that.
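
      As a concrete example of the kind of boilerplate I mean, a minimal AES-256-GCM sketch using Node's built-in crypto module (the function name and return shape are my own):

          import { randomBytes, createCipheriv } from "node:crypto";

          // AES-256-GCM: fresh 96-bit IV per message, auth tag returned alongside
          function encrypt(key: Buffer, plaintext: Buffer) {
            const iv = randomBytes(12);
            const cipher = createCipheriv("aes-256-gcm", key, iv);
            const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
            return { iv, ciphertext, tag: cipher.getAuthTag() };
          }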

    • bogdanoff_2 4 hours ago

      I totally agree with the "attention stealing".

      What you can do is create a hotkey to toggle autocomplete on and off.

    • WesleyJohnson 5 hours ago

      I love Cursor and the autocomplete is so helpful, until it's not. I don't know why I didn't think to rebind the hotkey for that. Thank you.

    • leptons 4 hours ago

      "AI" autocomplete has become a damn nuisance. It always wants to second-guess what I've already done, often making it worse. I try to hit escape to make it go away, but it just instantly suggests yet another thing I don't want. It's cumbersome. It gets in the way to an annoying extent. It's caused so many problems, I am about to turn it off.

      The only time it helps is when I have several similar lines and I make a change to the first line: it offers to change all the rest of the lines. It's almost always correct, but sometimes it is subtly not, and then I waste 5 minutes trying to figure out why it didn't work, only to notice the subtle bug it introduced. I'm not sure how anyone thinks this is somehow better than just knowing what you're doing and doing it yourself.

  • nasretdinov 5 hours ago

    I kinda agree with the author: as a person with more than enough coding experience, I don't get much value (and, certainly, much enjoyment) from using AI to write code for me. However, it's invaluable when you're operating in even a slightly unfamiliar environment. By providing (usually incorrect or incomplete) examples of code that could solve the problem, it helps me overcome the main "energy barrier": navigating, say, the vast standard library of a new programming language, or finding idiomatic examples of how to do things. I usually know _what_ I want to do, but I don't know exactly the syntax to express it in a certain framework or language.

    • dunham 5 hours ago

      Yeah, I don't leverage LLMs much, but I have used them to look up APIs for writing VSCode extensions. The code wasn't usable as-is, but it gave me an example that I could turn into working code, without looking up all of the individual API calls.
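
      The boilerplate involved is small but fiddly to look up; a minimal sketch of the shape (the command id demo.sayHello is hypothetical):

          import * as vscode from "vscode";

          // Entry point: VS Code calls this when the extension activates
          export function activate(context: vscode.ExtensionContext) {
            context.subscriptions.push(
              vscode.commands.registerCommand("demo.sayHello", () => {
                vscode.window.showInformationMessage("Hello from the extension!");
              })
            );
          }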

      I've also used it in the past to look up the Windows API, since I haven't coded for Windows in decades. (For the equivalent of pipe, fork, exec.) The generated code had a resource leak, which I recognized, but it was enough to get me going. I suspect Stack Overflow also had the answer to that one.

      And for fun, I've had copilot generate a monad implementation for a parser type in my own made-up language (similar to Idris/Agda), and it got fairly close.

    • CraigJPerry 5 hours ago

      There's a product called Context7 which among other things provides succinct examples of how to use an API in practice (example of what it does: https://context7.com/tailwindlabs/tailwindcss.com )

      It's supposed to be consumed by LLMs to help prepare them to provide better examples - maybe a newer version of a library than is in the model's training data for example.

      I've often thought that rather than an MCP server my LLM agent can query, maybe I just want to query this high-signal-to-noise resource myself rather than trawl the documentation.

      What additional value does an LLM provide when a good documentation resource exists?

  • alphazard 4 hours ago

    The approach of treating the LLMs like a junior engineer that is uninterested in learning seems to be the best advice, and correctly leverages the existing intuitions of experienced engineers.

    Spend more time on interfaces and test suites. Let the AI toil away making the implementation work according to your spec. Not implementing the interface is a wrong answer, not passing the tests is a wrong answer.
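
    Concretely, the spec you hand over might look like this (a hypothetical RateLimiter example; the model's only job is to produce an implementation that satisfies both the interface and the test):

        import assert from "node:assert";

        // Human-written spec: the interface the implementation must satisfy
        interface RateLimiter {
          // Returns true if a call for this key is allowed right now
          tryAcquire(key: string): boolean;
        }

        // Human-written test: failing it is a wrong answer
        function testLimiter(make: (limit: number) => RateLimiter) {
          const limiter = make(2); // allow two calls per key
          assert.ok(limiter.tryAcquire("user-1"));
          assert.ok(limiter.tryAcquire("user-1"));
          assert.ok(!limiter.tryAcquire("user-1")); // third call must be rejected
          assert.ok(limiter.tryAcquire("user-2")); // other keys are unaffected
        }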

    If you've worked in software long enough you will have encountered people who are uninterested in learning or uncoachable for whatever reason. That is all of the LLMs too. If the LLM doesn't get it, don't waste your time; it will probably never get it. You need to try a different model or get another human involved, same as you would for an incompetent and uncoachable human.

    As an aside: my advice to junior engineers is to show off your wetware, demonstrate learning and adaptation at runtime. The models can't do that yet.

    • giancarlostoro 3 hours ago

      What's really funny is, if you copy its output, start a new prompt, and ask "From the perspective of a Senior/Staff-level engineer, what is wrong with this code?" and paste the code you got from the LLM, it will trash all over its own code with a fresh mind. Technically you can do it in the existing prompt, but sometimes LLMs get a bug up their butts about what they've decided is reality all of a sudden.

      When switching context in any way, I start a new prompt.

      • lomase 3 hours ago

        I never use LLMs but what happens if you use the same code and write:

        "From the perspective of Senior / Staff level engineer, what is good about this code"

        Does it praise it?

        • giancarlostoro 3 hours ago

          Probably points out the bits it got correct I suppose.

        • svieira 3 hours ago

          "This is a clever usage of the too—little—used plus operator to perform high performance addition"

  • manoDev 6 hours ago

    I use AI as a pairing buddy who can lookup APIs and algorithms very quickly, or as a very smart text editor that understands refactoring, DRY, etc. but I still decide the architecture and write the tests. Works well for me.

    Apparently what the article argues against is using it like a software factory: give it a prompt of what you want, and when it gets it wrong, iterate on the prompt.

    I understand why this can be a waste of time: if programming is a specification problem [1], just shifting from programming language to natural language doesn’t solve it.

    1. https://pages.cs.wisc.edu/~remzi/Naur.pdf

    • lukevp 5 hours ago

      Yes, but… The AI has way more context on our industry than the raw programming language does. I can say things like “add a stripe webhook processor for the purchase event” and it’s gonna know which library to import, how to structure the API calls, the shape of the event, the database tables that people usually back Stripe stuff with, idempotency concerns of the API, etc.
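
      The skeleton it reaches for is well-trodden; a minimal sketch with Express and the official stripe package (env var names and the route are mine):

          import express from "express";
          import Stripe from "stripe";

          const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
          const app = express();

          // Stripe signs the raw body, so this route must skip the JSON parser
          app.post("/stripe/webhook", express.raw({ type: "application/json" }), (req, res) => {
            let event: Stripe.Event;
            try {
              event = stripe.webhooks.constructEvent(
                req.body,
                req.headers["stripe-signature"] as string,
                process.env.STRIPE_WEBHOOK_SECRET!
              );
            } catch {
              return res.status(400).send("invalid signature");
            }
            // Idempotency: event.id is stable across retries; store it and skip duplicates
            if (event.type === "checkout.session.completed") {
              // fulfill the purchase here
            }
            res.json({ received: true });
          });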

      So yes, you have to specify things, but there's a lot more implicit understanding and knowledge that can be retrieved relevant to the task you're doing than a regular language would have.

      • lomase 3 hours ago

        Have you deployed any of those Stripe integrations to prod?

        Can you show it to us?

  • wduquette 5 hours ago

    Unless you're solving the same old problem for the Nth time for a new customer, you don't really understand the problem fully until you write the code.

    If it's a new problem, you need to write the code so that you discover all the peculiar corner cases and understand them.

    If it's the (N+M)th time, and you've been using AI to write the code for the last M times, you may find you no longer understand the problem.

    Fair warning. Write the damn code.

  • pkdpic 3 hours ago

    > Ask AI for an initial version and then refactor it to match your expectations.

    > Write the initial version yourself and ask AI to review and improve it.

    > Write the critical parts and ask AI to do the rest.

    > Write an outline of the code and ask AI to fill the missing parts.

    So well put. I'm writing these on a post it note and putting it above my monitor. I held off on using agents to generate code for a long time and finally was forced to really make use of them and this is so in line with my experience.

    My biggest surprises have been how much the model doesn't seem to matter (?) when I'm making the prompts appropriately narrow. Also surprised at how hard it is to pair program in something like cursor. If your prompting is even slightly off it seems like it can go from 10xing a build process to making it a complete waste of time with nothing to show but spaghetti code at the end.

    Anyway long live the revolution, glad this was so technically on point and not just a no-ai rant (love those too tho).

  • 0x696C6961 an hour ago

    This is exactly how I work, and I feel like the tools don't accommodate this workflow. I shouldn't have to tell the agent to explicitly re-read a file after every edit.

  • bryanrasmussen 6 hours ago

    currently on HN's front page we have write the damn code, and write the stupid code, but we don't have write the good code.

    • kragen 2 hours ago

      The first five times you solve a problem, you don't know enough about it to write good code for it.

    • recursive 4 hours ago

      Good code is a hoax.

  • dayvster 2 hours ago

    Yes! I could not agree more with this sentiment.

    We over-analyse, over-discuss, over-plan and over-optimize before we even write the first import or include.

    Some of my best ideas came to me as I was busy programming away at my vision. There's almost a zen-like state there.

  • larodi an hour ago

    Blessings, brother, but this insight will never get through to the masses. I'd bet on it, so no rage.

  • zkmon 5 hours ago

    Precisely. That's the optimal way to use AI code assistants right now.

    If you keep refining the prompts, you are just buying into the hype that is designed to be sold to the C-suite.

    • bityard 4 hours ago

      I don't care much about hype one way or the other, but I find that continually asking for changes/improvements past the first prompt or two almost always sends the AI off into the weeds except for all of the simplest use cases.

      • stocksinsmocks 2 hours ago

        New prompts in the same session are dangerous because the undesired output (including nonsense reasoning) is getting put back into the context. Unless you’re brainstorming and need the dialogue to build up toward some solution, you are much better off removing anything that is not essential to the problem. If the last attempt was wrong, clear the context, feed in the spec, what information it must have like an error log and source, and write your instructions.

  • tarwich 5 hours ago

    Yup. It's not that learning AI or prompt engineering is bad in any way. A similar writeup (https://news.ycombinator.com/item?id=45405177) mentions the problem I see: when AI does most of the work, I have to work hard to understand what the AI wrote.

    In your model, I give enough guidance to generally know what the AI is doing, and the AI finishes what I started.

  • righthand 5 hours ago

    IMO no one is taking even the first bit of software development advice with LLMs.

    Today my teammate laughed off generating UI components to quickly solve a ticket, knowing full well that no one will review it now that it's LLM-generated, and that it will probably make our application slower because the unreviewed code gets merged. The consensus is that anything they make worse can be pushed off onto me to fix, because I'm the expert on our small team. I have been extremely vocal about this. However, it is more important to push stuff through for release and make my life miserable than to make sure the code is right.

    Today I now refuse to fix anymore problems on this team and might quit tomorrow. This person tells me weekly they always want to spend more time writing and learning good code and then always gets upset when I block a PR merge.

    Today I realized I might hate my current job. I think all LLMs have done is enable my team to collect a paycheck and embrace disinterest.

    • nlcs 4 hours ago

      The job market is currently really bad; it has never been worse. Two years ago, it was almost impossible to find an expert in a more specialized domain like computer vision or RTOS. Now it's impossible not to receive applications from multiple experts for a single role that isn't even specialized and is at best "just a simple" senior role (and that's only counting experts; senior and junior software developers and architects aren't even included).

      • kragen 2 hours ago

        That's surprising! Thanks for letting us know.

    • leptons 4 hours ago

      Do Not Quit until you have accepted an offer from another job. I'm serious. Don't do it. It's fucking hell out there right now for tech jobs.

    • OutOfHere 5 hours ago

      I am in the minority who agrees with you that the code should be right.

      Don't quit. Get fired instead (strictly without cause). In this way you can at least collect some severance and also unemployment. You will also absolve yourself of any regrets for having quit. Actually, just keep doing what you're doing, and you will get fired soon enough.

      The other thing you can try is to ask for everyone to have their own project that they own, with the assigned owner fully responsible for it, so you can stop reviewing other people's work.

      • hunterpayne 2 hours ago

        This is good advice. If you quit, you don't get severance, nor do you get unemployment insurance. If they let you go, you do.

  • econ 2 hours ago

    Write the code, deploy.

    the end

  • MangoCoffee 5 hours ago

    AI is pretty good at CRUD web apps for me. I worked on a web page for creating something, and since the next page was similar, I just told the AI to use the previous page as a template. It cut down a lot of typing.

    AI is just another tool; use it or turn it off. It shouldn't matter much to a developer.

  • g42gregory 3 hours ago

    Yes, you could write the code yourself, but keep in mind that this activity is going away for most engineers (but not for all) in 1-2 years.

    I think better advice would be to learn to read and review an inordinate amount of code, very fast. Also a heavy focus on patterns, extremely detailed SDLC processes, TDD, DDD, debugging, QA, security reviews, etc...

    Kinda the opposite advice from the blog. :-)

    Edit: Somebody pointed out that, in order to read/review code, you have to write it. Very true. It raises the question of how you acquire and extend your skills in the age of AI coding assistance. I'm not sure I have an answer. Claude Code now has /output-style: Learning, which forces you to write part of the code. That's a good start.

    • mhuffman 2 hours ago

      >Yes, you could write the code yourself, but keep in mind that this activity is going away for most engineers (but not for all) in 1 - 2 years.

      I'm not saying that it definitely isn't going to happen, but there is a loooong way to go for non-FAANG medium and small companies to let their livelihoods ride on AI completely.

      >I think a better advice would be to learn reading/reviewing an inordinate amount of code, very fast. Also heavy focus on patterns, extremely detailed SDLC processes, TDD, DDD, debugging, QA, security reviews, etc...

      If we get to a point in 1-2 years where AI is vibe-coding at a high mostly error-free level, what makes you think that it couldn't review code as well?

      • g42gregory 2 hours ago

        I can't see into the future, but I think that AI, at any level, will not excuse people from the need to acquire top professional skills. Software engineers will need to know Software Engineering and CS, AI or not. Marketers will have to understand marketing, AI or not. And so on... I could be wrong, but that's what I think.

        AI-assistance is a multiplier, not an addition. If you have zero understanding before AI, you will get zero capabilities with AI.

    • rileymichael 2 hours ago

      > keep in mind that this activity is going away for most engineers (but not for all) in 1 - 2 years

      sure thing. we've been '6 months' away from AI taking our jobs for years now

      • g42gregory 2 hours ago

        Not saying AI will take anybody's job. It's just that the nature of the job is changing, and we have to acknowledge that. It will still be competitive. It will still require strong SE/CS knowledge and skills. It will still require/favor CS/EE degrees, which NVIDIA's CEO told us not to get anymore. :-)

        Also, it looks like OpenAI and Anthropic have completed their fundraising cycles. So AGI "has been cancelled" for now. :-)

    • kragen 2 hours ago

      Nobody has any idea what will happen in 1–2 years. Will AI still be just as incompetent at writing code as it is today? Will AI wipe out biological humanity? Nobody has any idea.

      • g42gregory 2 hours ago

          Very true. One thing we could do is take a positive, constructive view of the future and drive towards it. People could all lose their jobs, OR we could write 1,000x more software. Let's give corporate developers the tools to write 1,000x more software instead of buying it from outside vendors, as one example.

        • kragen 2 hours ago

          It might work!