Deterministic Programming with LLMs

(mcherm.com)

37 points | by todsacerdoti 3 days ago

19 comments

  • nemo1618 5 hours ago

    > But like humans — and unlike computer programs — they do not produce the exact same results every time they are used. This is fundamental to the way that LLMs operate: based on the "weights" derived from their training data, they calculate the likelihood of possible next words to output, then randomly select one (in proportion to its likelihood).

    This is emphatically not fundamental to LLMs! Yes, the next token is selected randomly; but "randomly" could mean "chosen using an RNG with a fixed seed." Indeed, many APIs used to support a "temperature" parameter that, when set to 0, would result in fully deterministic output. These parameters were slowly removed or made non-functional, though, and the reason has never been entirely clear to me. My current guess is that it is some combination of A) 99% of users don't care, B) perfect determinism would require not just a seeded RNG, but also fixing a bunch of data races that are currently benign, and C) deterministic output might be exploitable in undesirable ways, or lead to bad PR somehow.
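
    To make the mechanism concrete, here's a toy sketch of the sampling step (the distribution is made up, not real model output): temperature 0 is just argmax, and even a temperature above 0 replays identically with a fixed seed.

        import random

        # Toy next-token distribution (illustrative numbers only).
        probs = {"cat": 0.6, "dog": 0.3, "fish": 0.1}

        # "Temperature 0": always take the most likely token -> deterministic.
        greedy = max(probs, key=probs.get)

        # Temperature > 0 with a seeded RNG: still random, but fully replayable.
        rng = random.Random(42)
        sampled = rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

        print(greedy, sampled)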

    • pavpanchekha 5 hours ago

      Deterministic output is incompatible with batching, which in turn is critical to high utilization on GPUs, which in turn is necessary to keep costs low.
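
      The mechanism, roughly: floating-point addition isn't associative, so when your request is reduced alongside different batch-mates the logits can differ in their last bits, and a near-tie between tokens can flip. A tiny illustration with contrived numbers:

          # Floating-point addition is not associative: summing the same
          # values in a different order (as a different batch/reduction
          # order would) gives a different result.
          xs = [0.1, 1e16, -1e16, 0.3]
          print(sum(xs))            # 0.3
          print(sum(reversed(xs)))  # 0.1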

    • willj 3 hours ago

      The temperature parameters largely went away when we moved towards reasoning models, which output lots of reasoning tokens before you get to the actual output tokens. I don’t know if it was found that reasoning works better with a higher temperature, or that having separate temperatures for reasoning vs. output wasn’t practical, but that’s my observation of the timing, anyway. And to the other commenter’s point, even a temperature of 0 is not deterministic if the batches are not invariant, which they’re not in production workloads.

    • valenterry an hour ago

      > This is emphatically not fundamental to LLMs! Yes, the next token is selected randomly; but "randomly" could mean "chosen using an RNG with a fixed seed."

      This. Thanks for saying that, because now I don't need to read the article, since if the author doesn't even get that, I'm not interested in the rest.

    • jrmg 3 hours ago

      LLMs are, fundamentally, compressed lookup tables that map input -> input + next token. Or, if you like, input -> input + list of possible next tokens with probabilities.
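
      In type-signature terms, something like:

          # Conceptually: a context goes in, a distribution over next tokens
          # comes out; the "table" is compressed into the model weights.
          def next_token_probs(context: list[str]) -> dict[str, float]:
              ...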

  • andyfilms1 7 hours ago

    At what point does this just wrap all the way back around to being genetic algorithms?

    I'm also reminded of the old software called Formulize, which could take in a set of arbitrary data and find a function that described it. http://nutonian.wikidot.com/

    • galaxy_tx 3 hours ago

      The genetic algorithm comparison is actually pretty apt. Generate variations, evaluate fitness, keep the survivors. The main difference is that LLMs have a much richer prior about what "good" looks like, so the search space is dramatically smaller than random mutation.

      But it raises an interesting question about where the fitness function comes from. In traditional GAs you define it explicitly. With LLM-generated code, the fitness function is often just "does it pass the tests" - which means the quality of your tests becomes the actual bottleneck, not the quality of the code generation.

      I wonder if that shifts the core skill of programming from "write correct code" to "write correct specifications." And if so, is that actually a new problem, or is it the same problem formal methods people have been working on for decades, just wearing a different hat?
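
      Roughly, the loop looks like this; generate_candidate and run_tests are hypothetical stand-ins for an LLM call and a test runner:

          # Hypothetical GA-flavored loop: the LLM proposes variants and the
          # test suite acts as the fitness function. Both stubs are stand-ins.
          def generate_candidate(spec: str) -> str:
              return "def solve(): ..."   # would be an LLM call

          def run_tests(code: str) -> bool:
              return False                # would invoke the real test suite

          def evolve(spec: str, generations: int = 5) -> str | None:
              for _ in range(generations):
                  candidates = [generate_candidate(spec) for _ in range(4)]  # "mutation"
                  survivors = [c for c in candidates if run_tests(c)]        # "selection"
                  if survivors:
                      return survivors[0]
                  spec += "\nPrevious attempts failed; try a different approach."
              return None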

    • xyzzy_plugh 5 hours ago

      If you extend this line of thinking far enough, then given that we traditionally author the software, everything kind of boils down to a genetic algorithm.

  • StevenThompson an hour ago

    I wrote a version of this post a while back that gets into a bit more detail as to HOW to bolt on the determinism.

    I'm glad to see others talking about it. One day we'll look back on this era the same way folks look back at the time before we validated inputs.

    https://www.stevenathompson.com/effective-vibe-coding-best-p...

  • dataviz1000 6 hours ago

    > The Solution is Code-Checking Code

    I'm finding that code falls into two categories: code that produces known results, and code that produces results we don't know in advance. For example, a table with a pagination component, backed by a query that loads the first 30 rows ordered by date descending on page 1 and the next 30 rows on page 2: we know what the code is supposed to output and what the result looks like. On the other hand, code that does statistical analysis on those 30 rows is different, because we don't know what the result is.

    The known-result code is easy to use an LLM with. I have a skill that iterates with an OODA loop: observe, act, and validate. In the validate step it takes screenshots and, even without being told to, queries the database from the CLI and compares the rendered row data against the database data. More surprisingly, it will make sure that all the components are responsive and render beautifully on mobile. I'm orders of magnitude past linting here, which is already solved with Biome.
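
    The validate step boils down to something like this (table, column, and endpoint names are made up here):

        # Hypothetical check: page 1 of the UI/API must match the first 30
        # rows ordered by date descending in the database.
        import sqlite3
        import requests

        def test_page_one_matches_db():
            db_ids = [r[0] for r in sqlite3.connect("app.db").execute(
                "SELECT id FROM orders ORDER BY created_at DESC LIMIT 30")]
            api_ids = [row["id"] for row in requests.get(
                "http://localhost:3000/api/orders?page=1").json()["rows"]]
            assert db_ids == api_ids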

    The statistical analysis is different. The only way I can know the result for sure is by writing the code painstakingly by hand. The LLM will always produce specious lies: it will fabricate and show me what I want to see, not the truth. That's because until the code is written by hand, there is no ground truth. In this case, there is no code-checking code.

  • jrecyclebin 3 hours ago

    > There is no need for determinism to guarantee the job will be done identically every time if we only plan to do it once.

    So can't you just save the conversation transcript and replay it with the tools? That seems a lot more efficient than regenerating the whole thing, and there's no risk of branching when a tool reply is slightly different. (Of course, errors can occur on subsequent runs.)
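
    Something like this, where the transcript shape is invented for illustration:

        import json

        # Hypothetical transcript of one agent run.
        transcript = {
            "messages": [{"role": "user", "content": "build the report"}],
            "tool_calls": [{"name": "write_file", "args": {"path": "report.md"}}],
        }

        # Save the run once...
        with open("transcript.json", "w") as f:
            json.dump(transcript, f)

        # ...then replay the recorded tool calls instead of regenerating them.
        saved = json.load(open("transcript.json"))
        for call in saved["tool_calls"]:
            print("replaying", call["name"], call["args"])  # dispatch to real tools here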

  • avaer 4 hours ago

    Or, we could just use deterministic seeds in our LLM calls and solve the problem at the root.

    Obviously this won't work if your tools are not deterministic, but reproducible builds is a well-trodden discipline.

    • mapontosevenths an hour ago

      This is actually a feature that OpenAI offers via the API. It doesn't work the way you want it to, though: it makes the output less random, not deterministic, and they even warn you of that in the docs.
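
      For reference, it's the seed parameter on chat completions; something like this with their Python SDK (model name just an example). The docs describe it as best-effort, and system_fingerprint is there so you can tell when the backend changed underneath you.

          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set
          resp = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": "Write a haiku about seeds."}],
              seed=12345,      # best-effort reproducibility, not a guarantee
              temperature=0,
          )
          print(resp.system_fingerprint)
          print(resp.choices[0].message.content)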

  • computersuck 5 hours ago

    this is a long article that doesn't say much at all. likely generated by AI?

    it goes on for ages just to reach the point of "write the tests first"

  • ares623 2 hours ago

    Is English deterministic and/or predictable?

  • nkel1028 5 hours ago

    How does writing tests (or, in the new fashion, stealing tests from somewhere else) make anything deterministic?

    LLMs really cause diminished reasoning, or in terms that LLM people might understand: Your minds have been quantized!

  • yogthos 3 hours ago

    I'd argue that another key aspect is to break programs up into small independent units that can be verified in isolation, and to compose them into larger programs with contracts between them. I've had a pretty good experience using Claude with a framework where I express the program as a state graph, and each node is treated like a microservice that gets some input and produces some output. Then the workflow engine verifies that the output matches the declared schema and then decides which step to execute next. https://github.com/yogthos/mycelium
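
    A minimal sketch of the node-with-a-contract idea (this is not mycelium's actual API, just the shape of it, using pydantic for the schema check):

        from pydantic import BaseModel

        # Each node declares what it must produce; the engine validates the
        # output against that contract before deciding the next step.
        class FetchUserOutput(BaseModel):
            user_id: int
            email: str

        def fetch_user(state: dict) -> dict:
            return {"user_id": state["id"], "email": "user@example.com"}

        def run_node(node, schema, state):
            out = node(state)
            schema.model_validate(out)  # raises if the contract is violated
            return out

        result = run_node(fetch_user, FetchUserOutput, {"id": 1})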

    As the state travels across the graph, I keep a trace of the steps that were executed, which means that when an error happens the agent has a lot more information than it normally would: it can see which decision points the code has already passed through, cross-reference that with the declared workflow, and quickly find where it screwed up.

    The idea of workflow engines has been around for a long time, but they feel too awkward to use when you're writing code by hand. Writing conditional logic directly in the code keeps you in your flow, and having to jump out and declare it in config somewhere feels awkward. Coding agents completely change the dynamic though because they don't have that problem. If the LLM is writing the code, then I can just focus on ensuring the code meets the contract, while the agent can deal with the implementation details.

  • 4b11b4 6 hours ago

    soon