7 comments

  • chris_money202 13 hours ago

    LLMs are really good at classical programming because they have plenty of examples to go off of. But what about quantum languages? What if those languages require drastically different syntaxes that we can't reasonably generate from primitives of classical computer languages? Won't we need humans to be trained to write them?

    • salawat 9 hours ago

      I do not find that to be the case. Most of the things I'm getting spit out are straight up broken out of the box. Like, missing imports, syntax errors. Directing an LLM feels like having a junior who's gaslighting you while they think you're gaslighting them. Spending so much time working on prompts to generate code seems foolhardy, because even for the exact same prompt, my code generation result is so ill-conditioned that the prompt isn't source code to the degree of reliability that actual source code is. A model may see the same prompt, then generate two entirely different APIs as a solution. It's maddening. Made even worse, I guess, by the fact that most hosted setups want to bill you by token. Makes me wonder if I should start billing by LOC to prove a point.

      • chris_money202 4 hours ago

        This almost sounds like you could have a setup issue or are working in a legacy codebase and the APIs are not available as context.

        You need to make sure it has access to the information it needs by providing docs as context for any imported code; otherwise it will likely hallucinate, or try to ill-fit a solution into what it does know / can see.
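
        A minimal sketch of what that might look like; `build_prompt` and the doc snippet are hypothetical names for illustration, not any real tool's API:

        ```python
        # Hypothetical sketch: prepend relevant API docs to the prompt so the
        # model doesn't have to guess at imports or signatures.
        def build_prompt(task: str, doc_snippets: list[str]) -> str:
            context = "\n\n".join(doc_snippets)
            return (
                "Reference documentation:\n"
                f"{context}\n\n"
                f"Task: {task}\n"
                "Use only the APIs shown in the documentation above."
            )

        prompt = build_prompt(
            "add a retry wrapper around fetch_user()",
            ["fetch_user(id: int) -> User  # raises TimeoutError on network failure"],
        )
        ```

        Agentic tools do roughly this automatically when they can see the repo; for code behind an import boundary you have to paste the docs in yourself.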

      • munksbeer 3 hours ago

        > I do not find that to be the case. Most of the things I'm getting spit out are straight up broken out of the box. Like, missing imports, syntax errors.

        How is this even possible? You tell the agent to write such-and-such a feature and it will edit the source files, run the compiler, check for issues, fix them, run tests, etc. If there are missing imports or syntax errors it won't even compile, and the agent will keep fixing it. Not once since I started using claude have I had an issue with this.

        Are you just typing into a chat and copy-pasting code? That was a terrible experience for me, don't do it.
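
        The generate/compile/fix loop described above can be sketched roughly like this; the model call is a stub standing in for an LLM request, and `fake_model`/`agent_loop` are made-up names for illustration:

        ```python
        # Minimal sketch of an agent's generate -> compile -> fix loop.
        def fake_model(source: str, error: str | None) -> str:
            # Stand-in for an LLM: "fixes" a missing import when told about it.
            if error and "os" in error:
                return "import os\n" + source
            return source

        def try_compile(source: str) -> str | None:
            # Return an error message, or None if the source looks OK.
            try:
                compile(source, "<generated>", "exec")  # syntax check only
            except SyntaxError as exc:
                return str(exc)
            # Crude stand-in for a name-resolution check.
            if "os." in source and "import os" not in source:
                return "NameError: name 'os' is not defined"
            return None

        def agent_loop(task_source: str, max_rounds: int = 3) -> str:
            source, error = task_source, try_compile(task_source)
            for _ in range(max_rounds):
                if error is None:
                    return source
                source = fake_model(source, error)
                error = try_compile(source)
            raise RuntimeError(f"still failing: {error}")

        fixed = agent_loop("print(os.getcwd())")
        ```

        The point is that the error output is fed back to the model each round, so missing imports and syntax errors rarely survive to the final answer.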

  • GianFabien 13 hours ago

    Even before the recent advances in AI, writing software was table stakes; now it certainly is.

    Deep domain knowledge and expertise are essential. Until you actually work at the coal face in a given industry, you don't know the complexity or see the opportunities for improvement. Talking to the workers is good, but you never get the complete picture.

  • dhruv3006 14 hours ago

    What about operating the software over time?

    • GianFabien 13 hours ago

      Perhaps you could be more specific.

      For example, with architectural 3D modeling software, operating the software means being the architect who visualises, designs and refines the building.