Vibing a non-trivial Ghostty feature

(mitchellh.com)

194 points | by skevy 9 hours ago

98 comments

  • tptacek 6 hours ago

    Tip: I very often use AI for inspiration. In this case, I ended up keeping a lot (not all) of the UI code it made, but I will very often prompt an agent, throw away everything it did, and redo it myself (manually!). I find the "zero to one" stage of creation very difficult and time consuming and AI is excellent at being my muse.

    This right here is the single biggest win for coding agents. I see and directionally agree with all the concerns people have about maintainability and sprawl in AI-mediated projects. I don't care, though, because the moment I can get a project up on its legs, to where I can interact with some substantial part of its functionality and refine it, I'm off to the races. It's getting to that golden moment that constitutes 80% of what's costly about programming for me.

    This is the part where I simply don't understand the objections people have to coding agents. It seems so self-evidently valuable --- even if you do nothing else with an agent, even if you literally throw all the code away.

    PS

    Put a weight on that bacon!

    • jebarker 5 hours ago

      Just this week I watched an interview with Mitchell about his dev setup and when asked about using neovim instead of an IDE he said something along the lines of "I don't want something that writes code for me". I'm not pointing this out as a criticism, but rather that it's worth taking note that an accomplished developer like him sees value in LLMs that he didn't see in previous intellisense-like tooling.

      • mitchellh 4 hours ago

        Not sure exactly what you're referring to, but I'm guessing it may be this interview I did 2 years ago: https://youtu.be/rysgxl35EGc?t=214 (timestamp linked to LLM-relevant section) I'm genuinely curious because I don't quite remember saying the quote you're saying I did. I'm not denying it, but I'd love to know more of the context. :)

        But, if it is the interview from 2 years ago, it revolved more around autocomplete and language servers. Agentic tooling was still nascent so a lot of what we were seeing back then was basically tab models and chat models.

        As the popular quote goes, "When the Facts Change, I Change My Mind. What Do You Do, Sir?"

        The facts and circumstances have changed considerably in recent years, and I have too!

        • jebarker 3 hours ago

          It was this one: https://sourcegraph.com/blog/dev-tool-time-mitchell-hashimot...

          They even used the quote as the title of the accompanying blog post.

          As I say, I didn’t mean this as a gotcha or anything- I totally agree with the change and I have done similarly. I’ve always disabled autocomplete, tool tips, suggestions etc but now I am actively using Cursor daily.

          • moderation 6 minutes ago

            The CEO of Sourcegraph, Quinn, was pretty negative on coding agents and agentic development only about 10 months ago [0]. He had 'agentic stuff' in the Deader category (used rarely, reviewing it ain't worth it). In fairness, he did say it was the future but 'is not there yet'. Since then, Sourcegraph's code assistant plugin Cody has been deprecated and they are all in on agents and agentic development with Amp.

            [0] https://youtu.be/Up6WVA07QdE?si=xU_iu2rQAWoHXPpO&t=898

          • mitchellh 3 hours ago

            Yeah understood, I'm not taking it negatively, I just genuinely wanted to understand where it came from.

            Yeah this is from 2021 (!!!) and is directly related to LSPs. ChatGPT didn't even get launched until Nov 2022. So I think the quote doesn't really work in the context of today, it's literally from an era where I was looking at faster horses when cars were right around the corner and I had not a damn clue. Hah.

            Off topic: I still dislike [most] LSPs and don't use them.

            • jebarker an hour ago

              Are you still using the NixOS setup you showed in that interview or do you now use the MacOS native ghostty?

            • Fraterkes 2 hours ago

              What do you not like about LSPs? When you do e.g. refactoring, isn't it nice to do operations on something that actually reflects the structure of your code?

              • mitchellh 2 hours ago

                I use agents for that, and it does a shockingly good job. LSPs constantly take up resources, most are poorly written, and I have to worry about version compatibility, editor compatibility, etc. It's just a very annoying ecosystem to me.

                An external agent where I can say "rename this field from X to Y" or "rewrite this into a dedicated class and update all callers" and so on works way better. Obviously, you have to be careful reviewing it since it's not working at the same level of guarantee an LSP is, but it's worth it.

                • underdeserver 2 hours ago

                  Hmm, most LSPs don't give you a very strong guarantee either (when you e.g. rename a variable).

                  I suppose in some languages it's undecidable in the worst case, but it should work in reasonably hygienic codebases.

                  Also, they tend to freeze or crash for no reason.

      • scns 4 hours ago

        Cognitive Dissonance. Still there, even in the best of us.

    • ttiurani 5 hours ago

      > the moment I can get a project up on its legs, to where I can interact with some substantial part of its functionality and refine it, I'm off to the races. [...] This is the part where I simply don't understand the objections people have to coding agents.

      That's what's valuable to you. For me the zero to one part is the most rewarding and fun part, because that's when the possibilities are near endless, and you get to create something truly original and new. I feel I'd lose a lot of that if I let an AI model prime me into one direction.

      • wahnfrieden 5 hours ago

        OP is considering output productivity, but your comment is about personal satisfaction of process

        • ttiurani 5 hours ago

          That's true, but when the work is rewarding, I also do it quite fast. When it's tedious tweaking, I have to force myself to keep on typing.

          Also: productivity is for machines, not for people.

          • simonw 4 hours ago

            Tedious tweaking is my favorite thing to outsource to coding agents these days.

      • srcreigh 4 hours ago

        Surely there are some things which you can’t be arsed to take from zero to one?

        This isn’t selling your soul; it is possible to let AI scaffold some tedious garbage while also dreaming up cool stuff the old fashioned way.

        • ttiurani 4 hours ago

          > Surely there are some things which you can’t be arsed to take from zero to one?

          No, not really: https://news.ycombinator.com/item?id=45232159

          > This isn’t selling your soul;

          There is a plethora of ethical reasons to reject AI even if it was useful.

    • stavros an hour ago

      I'm the opposite, I find getting started easy and rewarding, I don't generally get blocked there. Where I get blocked, after almost thirty years of development, is writing the code.

      I really like building things, but they're all basically putting the same code together in slightly different ways, so the part I find rewarding isn't the coding, it's the seeing everything come together in the end. That's why I really like LLMs, they let me do all the fun parts without any of the boring parts I've done a thousand times before.

      • tptacek an hour ago

        It's funny because the part you find challenging is exactly the thing LLM skeptics tend to say accounts for almost none of the work (writing the code). I personally find that once my project is up on its legs and I'm in flow state, writing code is easy and pleasant, but one thing clear from this thread is everyone experiences programming a little differently.

        • stavros an hour ago

          Yeah, definitely. I do agree with the skeptics to a point, as I don't let the LLM write code without reviewing (it makes many mistakes that compound), but I'd still rather have it write a function, review and steer, have it write another, and so on, than write database models myself for the millionth time.

          It's not that I find it hard, I've just done it so many times that it's boring. Maybe I should be solving different/harder problems, but I don't mind having the LLM write the code, and I'm doing what I like and I'm more productive than ever, so eh!

          • tptacek an hour ago

            I was just talking about this in a chat today, because 'simonw had at some point talked about getting to the point where he was letting go of reviewing every line of LLM code, and I am nowhere close to that point --- I'll take Claude's word on Tailwind classes as long as the HTML looks right, but actual code, I review line-by-line, token-by-token, and usually rewrite things, even if just for cosmetic reasons.

            • stavros an hour ago

              Yeah, there's a definite continuum for where LLMs are most to least expert. They seem to be fairly OK with JS, less so with Python, and C is just a crapshoot.

              It depends on the project as well, for throwaway things I'm fine to just let it do whatever it wants, but for projects that I need to last more than a few days, I review everything.

              • tptacek 31 minutes ago

                A friend of mine has been doing embedded C stuff (making some kind of LCD wall) and has been blown away by how well it's been doing with C --- he went in an LLM skeptic (I've been trying to sell him for months, it finally clicked).

                • stavros 21 minutes ago

                  Huh, interesting. I wanted to turn an old rotary phone into a meeting headset, and I tried to get Opus to make me a sound card, but $35 in API costs later, I had no sound card.

    • roughly 4 hours ago

      I was talking about this the other day with someone - broadly I agree with this, they're absolutely fantastic for getting a prototype so you can play with the interactions and just have something to poke at while testing an idea. There's two problems I've found with that, though - the first is that it's already a nightmare to convince management that something that looks and acts like the thing they want isn't actually ready for production, and the vibe coded code is even less ready for production than my previous prototyping efforts.

      The second is that a hand-done prototype still teaches you something about the tech stack and the implementation - yes, the primary purpose is to get it running quickly so you can feel how it works, but there's usually some learning you get on the technical side, and often I've found my prototypes inform the underlying technical direction. With vibe coded prototypes, you don't get this - not only is the code basically unusable, but you really are back to starting from scratch if you decide to move forward - you've tested the idea, but you haven't really tested the tech or design.

      I still think they're useful - I'm a big proponent of "prototype early," and we've been able to throw together some surprisingly large systems almost instantly with the LLMs - but I think you've gotta shift your understanding of the process. Non-LLM prototypes tend to be around step 4 or 5 of a hypothetical 10-step production process, LLM prototypes are closer to step 2. That's fine, but you need to set expectations around how much is left to do past the prototype, because it's more than it was before.

    • wilsonnb3 3 hours ago

      > This is the part where I simply don't understand the objections people have to coding agents. It seems so self-evidently valuable --- even if you do nothing else with an agent, even if you literally throw all the code away.

      It sounds like the blank page problem is a big issue for you, so tools that remove it are a big productivity boost.

      Not everyone has the same problems, though. Software development is a very personal endeavor.

      Just to be clear, I am not saying that people in category A or category B are better/worse programmers. Just that everyone’s workflow is different so everyone’s experience with tools is also different.

      The key is to be empathetic and trust people when they say a tool does or doesn’t work for them. Both sides of the LLM argument tend to assume everyone is like them.

    • harrall 4 hours ago

      People get into this field for very different reasons.

      - People who like the act and craftsmanship of coding itself. AI can encourage slop from other engineers and it trivializes the work. AI is a negative.

      - People who like general engineering. AI is positive for reducing the amount of (mundane) code to write, but still requires significant high-level architectural guidance. It’s a tool.

      - People who like product. AI can be useful for prototyping but won't be able to make a good product on its own. It's a tool.

      - People who just want to build a MVP. AI is honestly amazing at making something that at least works. It might be bad code but you are testing product fit. Koolaid mode.

      That’s why everyone has a totally different viewpoint.

      • tptacek 4 hours ago

        Real subtle. Why not just write "there are good programmers and bad programmers and AI is good for bad programmers and only bad programmers"? Think about what you just said about Mitchell Hashimoto here.

        • roughly 4 hours ago

          I'm not sure that's a fair take.

          I don't think it's an unfair statement that LLM-generated code typically is not very good - you can work with it and set up enough guard rails and guidance and whatnot that it can start to produce decent code, but out of the box, speed is definitely the selling point. They're basically junior interns.

          If you consider an engineer's job to be writing code, sure, you could read OP's post as a shot, but I tend to switch between the personas they're listing pretty regularly in my job, and I think the read's about right.

          To the OP's point, if the thing you like doing is actually crafting and writing the code, the LLMs have substantially less value - they're doing the thing you like doing and they're not putting the care into it you normally would. It's like giving a painter an inkjet printer - sure, it's faster, but that's not really the point here. Typically, when building the part of the system that's doing the heavy lifting, I'm writing that myself. That's where the dragons live, that's what's gotta be right, and it's usually not worth the effort to incorporate the LLMs.

          If you're trying to build something that will provide long-term value to other people, the LLMs can reduce some of the boilerplate stuff (convert this spec into a struct, create matching endpoints for these other four objects, etc) - the "I build one, it builds the rest" model tends to actually work pretty well and can be a real force multiplier (alternatively, you can wind up in a state where the LLM has absolutely no idea what you're doing and its proposals are totally unhinged, or worse, where it's introducing bugs because it doesn't quite understand which objects are which).

          If you've got your product manager hat on, being able to quickly prototype designs and interactions can make a huge, huge difference in what kind of feedback you get from your users - "hey try this out and let me know what you think" as opposed to "would you use this imaginary thing if I built it?" The point is to poke at the toy, not build something durable.

          Same with the MVP/technical prototyping - usually the question you're trying to answer is "would this work at all", and letting the LLM crap out the shittiest version of the thing that could possibly work is often sufficient to find out.

          The thing is, I think these are all things good engineers _do_. We're not always painting the Sistine Chapel, we also have to build the rest of the building, run the plumbing, design the thing, and try to get buy-in from the relevant parties. LLMs are a tool like any other - they're not the one you pull out when you're painting Adam, but an awful lot of our work doesn't need to be done to that standard.

          • tptacek 2 hours ago

            I can't get past the framing that "people who like the act and craftsmanship" feel AI is negative, which implicitly defines whatever Mitchell Hashimoto is doing as not craftsmanship, which: ghostty is pure craftsmanship (the only reason anyone would spend months writing a new terminal).

            No, I think my response was fair, if worded sharply. I stand by it.

    • sjdjsin 6 hours ago

      > This is the part where I simply don't understand the objections people have to coding agents

      Because I have a coworker who is pushing slop at unsustainable levels, and proclaiming to management how much more productive he is. It’s now even more of a risk to my career to speak up about how awful his PRs are to review (and I’m not the only one on the team who wishes to speak up).

      The internet is rife with people who claim to be living in the future where they are now a 10x dev. Making these claims costs almost nothing, but it is negatively affecting my own and many others' day-to-day work.

      I’m not necessarily blaming these internet voices (I don’t blame a bear for killing a hiker), but the damage they’re doing is still real.

      • tptacek 6 hours ago

        I don't think you read the sentence you're responding to carefully enough. The antecedent of "this" isn't "coding agents" generally: it's "the value of an agent getting you past the blank page stage to a point where the substantive core of your feature functions well enough to start iterating on". If you want to respond to the argument I made there, you have to respond to the actual argument, not a broader one that's easier (and much less interesting) to take swipes at.

        • sjdjsin 6 hours ago

          My understanding of your argument is:

          Because agents are good on this one specific axis (which I agree with and use fwiw), there’s no reason to object to them as a whole

          My argument is:

          The juice isn’t worth the squeeze. The small win (among others) is not worth the amounts of slop devs now have to deal with.

          • tptacek 6 hours ago

            Sounds like a very poorly managed team.

            • JasonSage 6 hours ago

              I have to agree. My experience working on a team with mixed levels of seniority and coding experience is that everybody got some increase in productivity and some increase in quality.

              The ones who spend more time developing their agentic coding as a skillset have gotten much better results.

              In our team people are also more willing to respond to feedback because nitpicks and requests to restructure/rearchitect are evaluated on merit instead of how time-consuming or boring they would have been to take on.

            • roughly 4 hours ago

              In tech? Say it ain't so.

              • dingnuts 3 hours ago

                in any organization???

      • wilg 6 hours ago

        Not sure what to tell you, if there's a problem you have to speak up.

        • fn-mote 6 hours ago

          And the longer you wait, the worse it will be.

          Also, update your resume and get some applications out so you’re not just a victim.

      • j45 6 hours ago

        Maybe it's possible to use AI to help review the PRs and claim it's the AI making the PRs hyperproductive?

        • XenophileJKO 6 hours ago

          Yes, this. If you can describe why it is slop, an AI can probably identify the underlying issues automatically.

          Done right you should get mostly reasonable code out of the "execution focused peer".

          • austinjp 5 hours ago

            In climate terms, or even simply in terms of $cost, this very much feels like throwing failing code on a bonfire.

            Should we really advocate for using AI to both create and then destroy huge amounts of data that will never be used?

            • XenophileJKO 4 hours ago

              I don't think it is a long term solution. More like training wheels. Ideally the engineers learn to use AI to produce better code the first time. You just have a quality gate.

              Edit: Do I advocate for this? 1000%. This isn't crypto burning electricity to make a ledger. This objectively will make the life of the craftsmanship focused engineer easier. Sloppy execution oriented engineers are not a new phenomenon, just magnified with the fire hose that an agentic AI can be.

            • wahnfrieden 5 hours ago

              The environmental cost of AI is mostly in training afaik. The inference energy cost is similar to the google searches and reddit etc loads you might do during handwritten dev last I checked. This might be completely wrong though

              • pastel8739 4 hours ago

                I hear this argument a lot, but it doesn't hold water for me. The use of the AI is the thing that makes it worthwhile to do the training, so you obviously need to amortize the training cost over the inference. I don't know whether or not doing so makes the environmental cost substantially higher, though.

    • vunderba 4 hours ago

      > the moment I can get a project up on its legs, to where I can interact with some substantial part of its functionality and refine it, I'm off to the races

      AI is an absolute boon for "getting off the ground" by offloading a lot of the boilerplate and scaffolding that one tends to lose enthusiasm for after having to do it for the 99th time.

      > AI is excellent at being my muse.

      I'm guessing we have a different definition of muse. Though I admit I'm speaking more about writing than coding here, for myself a muse is the veritable fount of creation - the source of ideas.

      Feel free to crank the "temperature" on your LLM until the literal and figurative oceans boil off into space, at the end of the day you're still getting the ultimate statistical distillation.

      https://imgur.com/a/absqqXI

    • SatvikBeri 4 hours ago

      Agree – my personal rule is that I throw away any branches where I use LLM-generated code, and I still find it very helpful because of the speed of prototyping various ideas.

    • chrisweekly 5 hours ago

      100% agreed.

      > "Put a weight on that bacon!" ?

      • simonw 4 hours ago

        Mitchell included a photograph of his breakfast preparation, and the bacon was curling up on the frying pan.

    • Xss3 5 hours ago

      My trouble is that the remaining 20% of the work takes 80% of my time, AI assistance or not. The edge

      • shinycode 4 hours ago

        100% agree. LLMs do have many blind spots and high confidence, which makes it hard to really trust them without checking.

    • ttul 3 hours ago

      I concur here and would like to add that I worry less about sprawl when I know I can ask the agent to rework things in future. Yes, it will at times implement the same thing twice in two different files. Later, I’ll ask it to abstract that away to a library. This is frankly how a lot of human coding effort goes too.

    • ramon156 6 hours ago

      It's invaluable if you don't know how to work with it

    • ipaddr 5 hours ago

      This is an artefact of a language ecosystem that does not prioritize getting started. If you pick PHP/Laravel, with a few commands you are ahead of the days of plumbing work that Golang or Node requires to get to a starting point.

      • trenchpilgrim 5 hours ago

        I guess it depends on your business. I rarely start new professional projects, but I maintain them for 5+ years - a few pieces of production software I started are now in the double digits. Ghostty definitely aims to be in that camp of software.

  • commandersaki 6 hours ago

    I really respect Mitchell's response to the OpenAI accident, even if it is seen in a positive light for Ghostty. I can't think of any other software vendor that actively tries to eliminate nags and annoyances (thinking specifically of MS Auto Update), so this is welcome.

    Also this article shows responsible use of AI when programming; I don't think it fits the original definition of vibe coding that caused hysterics.

    • dlvhdr 6 hours ago

      yeah the usage of "vibe coding" here doesn't fit at all. That term is so overused.

      • simonw 4 hours ago

        Worth noting that Mitchell didn't actually use the term "vibe coding" anywhere in this article.

        He called it "vibing" in the headline, which matches my suspicion that the term "vibe" is evolving to mean anything that uses generative AI, see also Microsoft "vibe working": https://www.microsoft.com/en-us/microsoft-365/blog/2025/09/2...

        • mitchellh 3 hours ago

          I'll add the title is a bit of bait. I don't use the word "vibe" (in any of its forms) anywhere outside of the title.

          I'm not baiting for general engagement, I don't really care about that. I'm baiting for people who are extremist on either side of the "vibe" spectrum such that it'd trigger them to read this, because either way I think it could be good for them.

          If you're an extreme pro-vibe person, I wanted to give an example of what I feel is a positive usage of AI. There's a lot of extreme vibe hype boys who are... sloppy.

          And if you're an extreme anti-vibe person, I wanted to give an example that clearly refutes many criticisms. (Not all, of course, e.g. there's no discussion here one way or another about say... resource usage).

          So, sorry!

      • wahnfrieden 5 hours ago

        It has simply evolved from its initial meaning, as language does.

    • afro88 5 hours ago

      > Also this article shows responsible use of AI when programming; I don't think it fits the original definition of vibe coding that caused hysterics.

      Yep. It's vibe engineering, which simonw coined here: https://simonwillison.net/2025/Oct/7/vibe-engineering/

  • ColinEberhardt 7 hours ago

    As an aside, the Ghostty project recently made it mandatory to disclose the use of AI coding tools:

    https://github.com/ghostty-org/ghostty/pull/8289

  • senderista 6 hours ago

    OT but I can't imagine leaving my laptop open next to a pan of sizzling bacon

  • Shadowmist 7 hours ago

    Ghostty is awesome and I almost dropped iTerm for it until I hit cmd-f and nothing happened.

    https://github.com/ghostty-org/ghostty/issues?q=is%3Aissue%2...

    • sevg 6 hours ago

      I wonder what people would discuss in all these ghostty posts if cmd-f had been present from the start. It’s getting a little boring hearing about it in every post!

      There’s interesting things to discuss here about LLM tooling and approaches to coding. But of course we’d rather complain about cmd-f ;)

      • ghosty141 3 hours ago

        Missing scrollbars. That and ctrl-f are my two annoyances; apart from that, I love Ghostty and use it daily at work.

      • wahnfrieden 5 hours ago

        People don’t care about advanced use cases when fundamentals are missing

        • sevg 5 hours ago

          Advanced use cases? Did you read the article? Making the update modal less intrusive isn't an "advanced use case", or even prime material for discussion ;) The article is mainly about LLM-assisted coding, regardless of the terminal being used.

      • do_not_redeem 4 hours ago

        Since we're here, I'm just waiting for them to implement drag-and-drop on KDE. Right now it only works on GNOME, and although Ghostty is great I'm not going to switch to GNOME just for a terminal emulator.

    • nevon 4 hours ago

      Funnily enough, I spent the last weekend implementing search in Ghostty using Claude. There's already a kinda working implementation of the actual searching, so most of the job was just wiring it up to the UI. After two sessions of maybe 10 hours in total, I had basic searching with highlighting and go to next/previous match working in the Linux frontend. The search implementation is explicitly a work in progress though, so not something that's ready for general use.

      That said, it certainly made me appreciate the complexity of such a "basic" feature when I started thinking about how to make this work when tailing a stream of text.

    • jumploops 6 hours ago

      Yeah I quickly (and unfortunately!) switched back to Warp, as Ghostty was a little too barebones for my use case.

      Word to the wise: Ghostty’s default scrollback buffer is only ~10MB, but it can easily be changed with a config option.
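
      For reference, a sketch of the relevant setting (the config file lives at ~/.config/ghostty/config; the key name and byte-based value here are from memory, so verify against the Ghostty docs before relying on them):

      ```
      # Raise scrollback from the ~10MB default; the value is in bytes.
      # 104857600 bytes = 100 MiB.
      scrollback-limit = 104857600
      ```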

    • senderista 6 hours ago

      This is the blocker for me as well.

    • uzername 6 hours ago

      Missing search and weird ssh control character issues are my blockers. It's great otherwise!

      • PyWoody 6 hours ago

        Reposting my comment from [0]:

            Have you tried the suggestions in https://ghostty.org/docs/help/terminfo#ssh? I don't know what issue you may be experiencing but this solved my issue with using htop in an ssh session.

        [0] https://news.ycombinator.com/item?id=45359239
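
        If it's the terminfo issue, the workaround those docs suggest (quoted from memory, so double-check the page; "YOUR-SERVER" is a placeholder) is to copy Ghostty's terminfo entry to the remote host, roughly:

        ```shell
        # Export the xterm-ghostty terminfo entry locally and compile/install
        # it into the remote host's terminfo database via ssh.
        infocmp -x xterm-ghostty | ssh YOUR-SERVER -- tic -x -
        ```

        After that, programs like htop should render correctly under TERM=xterm-ghostty in the ssh session.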
  • WD-42 6 hours ago

    This post demonstrates one area where AI agents are a huge win: UI frameworks. I have a very similar workflow on an app I'm currently developing in Rust and GTK.

    It's not that I don't know how to implement something, it's that the agent can do so much of the tedious searching and trial and error that accompanies UI framework code.

    Notice that Mitchell maintains understanding of all the code through the session. It’s because he already understands what he needs to do. This is a far cry from the definition of “vibe coding” I think a lot of people are riding on. There’s no shortcut to becoming an expert.

    Loving Ghostty!

  • nextworddev 6 hours ago

    Haven’t used Ghostty, but why does HN put it on the front page every other week? What’s the main attraction?

    • wahnfrieden 5 hours ago

      Product merits aside… Its developer is a celebrity figure and people here are captivated by the story of a billionaire who writes open source without a business model, like we regular folks do when we don’t have a hustle to focus on

  • piazz 5 hours ago

    Such a useful walkthrough.

    It looks like Mitchell is using an agentic framework called Amp (I’d never heard of it) - does anybody else here use it or tried it? Curious how it stacks up against Claude Code.

    • simonw 4 hours ago

      I haven't yet spent any time with it myself, but the impression I have been getting is that it is the most credible of the vendor-independent terminal coding agents right now.

      Claude Code, Codex CLI and Gemini CLI are all (loosely) locked to their own models.

      • qudat 15 minutes ago

        As far as I know it only uses sonnet 4.5

      • piazz an hour ago

        This is good to know. I’ll probably play around with it sometime in the future.

        BTW, appreciate your many great write-ups - they’ve been invaluable for keeping up to date in this space.

    • qudat 16 minutes ago

      I’m using it. It’s expensive but it’s awesome

  • chrisweekly 5 hours ago

    > "You can see in chats 11 to 14 that we're entering the slop zone. The code the agent created has a critical bug, and it's absolutely failing to fix it. And I have no idea how to fix it, either.

    I'll often make these few hail mary attempts to fix a bug. If the agent can figure it out, I can study it and learn myself. If it doesn't, it costs me very little. If the agent figures it out and I don't understand it, I back it out. I'm not shipping code I don't understand. While it's failing, I'm also tabbed out searching the issue and trying to figure it out myself."

    Awesome characterization ("slop zone"), pragmatic strategy (let it try; research in parallel) and essential principle ("I'm not shipping code I don't understand.")

    IMHO this post is gold, for real-world project details and commentary from an expert doing their thing.

  • hoppp 6 hours ago

    I think as long as a human audit passes, it's good. I've also generated some pretty great code before, but I went in to review every single line to make sure.

  • xlii 3 hours ago

    Today, for the first time ever, I had to kill a terminal with all the tabs because it became unresponsive.

    Welp, that explains it. I haven't changed terminal in a while anyway...

  • chrsig 5 hours ago

    > You can see in chats 11 to 14 that we're entering the slop zone. The code the agent created has a critical bug, and it's absolutely failing to fix it. And I have no idea how to fix it, either.

    This definitely relaxes my AI-hype anxiety

  • dlvhdr 6 hours ago

    People are really bad at evaluating whether ai speeds them up or slows them down. The main question is, do you enjoy this kind of process of working with ai. I personally don't, so I don't use it. It's hard for me to believe any claims about productivity gains.

    • CurleighBraces 5 hours ago

      This is the crux of the discussion. For me the output is a greater reward than the input. The faster I can reach the output the better.

      And to clarify, I don't mean output as "this feature works, awesome", but "this feature works, it's maintainable and the code looks as beautiful as I can make it"

    • jebarker 5 hours ago

      I like it when it works but I literally had to take a break yesterday due to the rage I was feeling from Claude repeatedly declaring "I found it!" or "Perfect - found the issue!" before totally breaking the code.

  • intended 5 hours ago

    This is exactly what I hoped for when someone talks about their LLM-enabled coding experience:

    - language
    - product
    - level of experience / seniority

  • dismalaf 5 hours ago

    > Tip: I very often use AI for inspiration. In this case, I ended up keeping a lot (not all) of the UI code it made, but I will very often prompt an agent, throw away everything it did, and redo it myself (manually!). I find the "zero to one" stage of creation very difficult and time consuming and AI is excellent at being my muse.

    This is pretty much how I use AI. I don't have it in my editor, I always use it in a browser window, but I bounce ideas off it, use it like a better search engine and even if I don't use the exact code it produces I do feel there's some value.