7 comments

  • uberman 4 hours ago

    I always look at the suggested changes. Always. Not infrequently, LLMs want to put new methods in what I consider the wrong place. Not infrequently, the LLM requires pushback and admits it was over-engineering something. Finally, if you can't understand how or where something is implemented, then when it comes time to fix it, you will likely have a hard time finding what to fix. Just my perspective.

    • blinkbat 4 hours ago

      Do you ever use the LLM itself to fix the code? Does the over-engineering ever create actual problems, or just perceived ones? Just trying to cut to the core of the topic, not trying to say you're right or wrong.

  • missmoss 4 hours ago

    When working on an existing project, I always take a close look at what the LLM changed. Sometimes I explicitly tell it which line to change and nothing else. As many people have found, the LLM sometimes, if not always, does redundant, over-engineered work in trying to be useful, and much of the time it actually breaks something, e.g. code quality and readability. In new projects, I focus more on the plan, spec, and test scheme. I review how modules are set up and sometimes functions, but pay less attention to implementation details.

    • blinkbat 4 hours ago

      Interesting that you draw the line for "breaking something" at code quality -- your customers almost certainly don't see it this way, and don't care about the code at all. So your viewpoint is team-centric, which is admirable, but loses steam outside of that circle, no?

  • rvz 4 hours ago

    Let's say you are driving on the highway, and there is an autopilot button, so you switch it on.

    Does that mean you don't need to watch the road or keep your hands on the wheel? (It does not. You need to keep your hands on the wheel and your eyes on the road at all times.)

    What happens when the system fails in the middle of the motorway and you don't know how to drive? (You would be completely stuck.)

    Just like in the above analogy, you still need to look at the code when any entity (human or AI) makes a mistake [0] and then judge whether the fix is sound.

    [0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...

    • blinkbat 4 hours ago

      Isn't the ultimate project of driverless cars to be, well, driverless?

      Granted, we are not there with LLMs quite yet, but are you insisting that this is not the endgame?

      • rvz 4 hours ago

        > Isn't the ultimate project of driverless cars to be, well, driverless?

        That has been the desired endgame for decades. But as you can see, it is far more complicated than imagined.

        That is because of the regulatory landscape, safety concerns, and what happens when these systems fail or get hacked, which is why I gave that analogy.

        The future is likely to be somewhere in the middle.