11 comments

  • begemotz 8 hours ago

    I am not a professional coder. However, I have typically seen people respond more favorably to Claude Code than to other systems, including Codex (percentage-wise). I'm curious to know what exactly you feel is inferior.

    On the other hand, recently the usage limits in Claude have been inconsistent and frustrating for me. I seem to get a lot less out of it than I have in the past, and I'm considering trying one of the other big-3 subscription plans to see whether it suits my use case better.

    • blinkbat 8 hours ago

      We're talking models, not systems.

  • dangus 7 hours ago

    This will win my award for the lowest value post I’ve read for the month of March 2026.

    Unless someone would like to post something like “I prefer Twix over KitKat bars.”

    • add-sub-mul-div 7 hours ago

      The only useful thing that came out of this is that I learned there are people paying price points above ~$20/month for this stuff. Incredible.

      • andriy_koval 7 hours ago

        Some people generate and push lots of code; nothing surprising about that.

  • blinkbat 8 hours ago

    It does seem to wildly fluctuate lately. Codex is much more consistent. Must be that DoW money... :/

  • MeetingsBrowser 7 hours ago

    > faithfully pay my $100/mo ... the literal most idiotic thing I've ever seen ... causes me to almost get physically angry ... I'm so dumbfounded

    I think maybe you should consider stepping back from LLMs for a while. Take a break. The models and tooling will improve and you can try again later.

    Keeping up with latest trends is not worth your health.

  • claytongulick 7 hours ago

    Serious question here.

    Have you taken a moment to step back and truly evaluate your productivity when using LLMs for code generation?

    I don't mean the obvious confirmation-bias tickling stuff like "create a form with these fields and validation".

    I mean from a whole-system, total-effort analysis: from idea to production, support, and maintenance.

    I'm curious what you find.

    My current theory is that the industry will land in a place where LLMs for code generation are frowned upon for non-trivial work, but that they are embraced for tooling, summarization and explanation.

    I think these things have real, concrete value, but that it's a mistake to substitute them for human reasoning - and human reasoning is a crucial characteristic of quality code.

    The thing I'm not sure of is whether the current "good enough is good enough" approach to vibe coded solutions will be sticky, or in what contexts.

    MS Access still powers entire business departments, because good enough is good enough.

    • harvenstar 6 hours ago

      This resonates. The problem I keep running into isn't that the model is bad — it's that the feedback loop is too thin. A y/n in the terminal isn't enough to catch when the model does something subtly wrong.

      • harvenstar 6 hours ago

        I've been building a review UI layer for coding agents (Claude Code, Codex) that lets you actually inspect and edit what the agent is about to do before it executes: https://github.com/agentlayer-io/AgentClick

        Turns out most of the "dumb" mistakes OP is talking about are catchable — you just need to actually see them before they ship.

    • blinkbat 6 hours ago

      For frontend code it's fine tbh. Perfectly capable with oversight.