Formal methods only solve half my problems

(brooker.co.za)

75 points | by signa11 6 days ago

27 comments

  • NovemberWhiskey a day ago

    Outside of a very narrow range of safety- or otherwise ultra-critical systems, no-one is designing for actual guarantees of performance attributes like throughput or latency. The compromises involved in guarantees are just too high in terms of over-provisioning, cost to build and so on.

    In large, distributed systems the best we're looking for is statistically acceptable. You can always tailor a workload that will break a guarantee in the real world.

    So you engineer with techniques that make it likely the workloads you have characterized as realistic can be handled with headroom, and you worry about graceful degradation under oversubscription (i.e. maintaining goodput). In my experience, that usually comes down to good load balancing, auto-scaling and load shedding.

    Virtually all of the truly bad incidents I've seen in large-scale distributed systems are caused by an inability to recover to steady state after some kind of unexpected perturbation.

    If I had to characterize problem number one, it's bad subscriber-service request patterns: subscribers that don't know how to back off properly, and services that don't apply back-pressure. The classic example is a subscriber that retries on a static schedule and gives up on requests that have been in flight "too long", coupled with a service that keeps accepting requests when oversubscribed.
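
    As a sketch of the well-behaved client side (Python, with hypothetical names and made-up numbers): retry with capped exponential backoff plus jitter and a finite budget, instead of a static schedule.

        import random, time

        def call_with_backoff(request, base=0.1, cap=5.0, budget=4):
            # Capped exponential backoff with full jitter, and a finite retry
            # budget so the client eventually backs off for good rather than
            # hammering an oversubscribed service on a fixed schedule.
            for attempt in range(budget):
                try:
                    return request()
                except TimeoutError:
                    if attempt == budget - 1:
                        raise
                    time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))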

    • amw-zero a day ago

      I think this is less about guarantees and more about understanding behavioral characteristics in response to different loads.

      I personally couldn't care less about proving that an endpoint always responds in less than 100ms, say, but I care very much about understanding where the various saturation points are in my systems, what values I should set for limits like the number of database connections, or what the effect of sporadic timeouts is, etc. I think that's more the point of this post (which you see him talk about in other posts on his blog).

      • NovemberWhiskey a day ago

        I am not sure that static analysis is ever going to give answers to those questions. I think the best you can hope to do is surface the tacit assumptions about dependencies in order to explore their behaviors through simulation or testing.

        I think it often boils down to "know when you're going to start queuing, and how you will design the system to bound those queues". If you're not using that principle at the design stage then I think you're already cooked.
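
        A sketch of what "bound those queues" can look like in practice (Python, invented numbers): fix the queue depth up front and shed anything past it.

            import queue

            requests = queue.Queue(maxsize=100)  # bound chosen at the design stage

            def accept(req):
                try:
                    requests.put_nowait(req)  # admit while there's headroom
                    return True
                except queue.Full:
                    return False              # shed load instead of queuing unboundedly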

        • amw-zero a day ago

          Who brought up static analysis?

          I think simulation is definitely a promising direction.

    • AlotOfReading a day ago

      It's just realtime programming. I wouldn't say that realtime techniques are limited to a very narrow range of ultra-critical systems, given that they encompass everything from the code on your SIM card to games in your Steam library.

          In large, distributed systems the best we're looking for is statistically acceptable. You can always tailor a workload that will break a guarantee in the real world.
      
      This is called "soft" realtime.

    • NovemberWhiskey a day ago

        "Soft" realtime just means that you have a time-utility function that doesn't step-change to zero at an a priori deadline. Virtually everything in the real world is at least a soft realtime system.

        I don't disagree with you that it's a realtime problem; I do, however, think that "just" is doing a lot of work there.

        • AlotOfReading a day ago

          There are multiple ways to deal with deadline misses for soft systems. Only some of them actually deliver the correct data, just late. A lot of systems will abort the execution and move on with zeros/last computed data instead, or drop the data entirely. A modern network AQM system like CAKE uses both delayed scheduling and intelligent dropping.
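
          One common policy, sketched in Python (names and deadline made up): serve the stale value when the fresh one misses its deadline; dropping the sample entirely would be the other option.

              from concurrent.futures import ThreadPoolExecutor, TimeoutError

              pool = ThreadPoolExecutor(max_workers=1)
              last_result = 0.0  # stale-but-usable fallback

              def sample_with_deadline(compute, deadline_s=0.010):
                  # Return the fresh value if it beats the deadline,
                  # otherwise keep serving the last computed value.
                  global last_result
                  future = pool.submit(compute)
                  try:
                      last_result = future.result(timeout=deadline_s)
                  except TimeoutError:
                      pass  # deadline miss: fall back to the stale value
                  return last_result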

          Agreed though, "just" is hiding quite a deep rabbit hole.

    • bluGill a day ago

      While you don't need performance guarantees for most things, you still need performance. You can safely let "a small number" of requests "take too long", but if "too many" do, your users will start to complain and go elsewhere. Of course everything in quotes is fuzzy (though sometimes we have very accurate measures for specific things), but you need to meet those requirements even if they are not formal.

  • chrisaycock a day ago

    The article points out that tools like TLA+ can prove that a system is correct, but can't demonstrate that a system is performant. The author asks for ways to assess latency et al., which is currently handled by simulation. While this has worked for one-off cases, OP requests more generalized tooling.
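
    (For a sense of what that simulation can look like: even a crude single-server queue model with made-up arrival and service rates gives useful latency percentiles.)

        import random

        def simulate_queue(arrival_rate=90.0, service_rate=100.0, n=100_000, seed=1):
            # Crude M/M/1-style simulation: exponential interarrival and service
            # times, FIFO single server; returns p50 and p99 sojourn time in seconds.
            rng = random.Random(seed)
            t = server_free_at = 0.0
            latencies = []
            for _ in range(n):
                t += rng.expovariate(arrival_rate)    # next arrival
                start = max(t, server_free_at)        # wait if the server is busy
                server_free_at = start + rng.expovariate(service_rate)
                latencies.append(server_free_at - t)  # queueing + service time
            latencies.sort()
            return latencies[n // 2], latencies[int(n * 0.99)]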

    It's like the quote attributed to Don Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."

    • pjmlp a day ago

      From my point of view, they cannot even prove that, because in most cases there is no validation that the TLA+ model actually maps to the (e.g. C) code that was written.

      I only believe in formal methods where we always have a machine-validated path from model to implementation.

    • throw-qqqqq a day ago

      There are methods of determining Worst-Case Execution Time (WCET). I’ve been involved in real-time embedded systems development, where that was a thing.
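
      (A measurement-based high-water mark, sketched below, is the poor man's version; real WCET tools rely on static analysis of worst-case paths and hardware state, which measurement alone can't guarantee.)

          import time

          def observed_hwm(task, iterations=10_000):
              # Worst *observed* execution time over many runs: an empirical
              # high-water mark, not a proven WCET bound.
              worst = 0.0
              for _ in range(iterations):
                  start = time.perf_counter()
                  task()
                  worst = max(worst, time.perf_counter() - start)
              return worst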

      But one tool (like TLA+) can’t realistically support all formalisms for all types of analyses ¯\_(ツ)_/¯

  • amw-zero a day ago

    This is the single most impactful blog post I've read in the last 2-3 years. It's so obvious in retrospect, but it really drove the point home for me that functional correctness is only the beginning. I personally had been over-indexing on functional correctness, which is understandable since a reliable but incorrect system isn't valuable.

    But, in practice, I've spent just as much time on issues introduced by perf / scalability limitations. And the post's thesis is correct: we don't have great tools for reasoning about this. This has been pretty much all I've been thinking about recently.

    • adamddev1 a day ago

      There could be more linear and "resource-aware" type systems coming down the pipeline from research. These would allow the type checker to surface performance / resource information. Check out Resource Aware ML.

      https://www.raml.co/about/

      https://arxiv.org/abs/2205.15211

      • amw-zero a day ago

        Super interesting, but I think this will be very difficult in practice due to the gigantic effect of nondeterminism at the hardware level (caches, branch prediction, out-of-order execution, etc.).

  • adamddev1 a day ago

    There is a bunch of research happening around "Resource-Aware" type theory. This kind of type theory checks performance, not just correctness. Just like the compiler can show correctness errors, the compiler could show performance stats/requirements.

    https://arxiv.org/abs/2205.15211

    Already we have Resource Aware ML which

    > automatically and statically computes resource-use bounds for OCaml programs

    https://www.raml.co/about/

  • HPsquared a day ago

    Maybe they solve the first 90%, but not the other 90%.

  • whinvik a day ago

    Nice, I actually understood a lot of that post since I am trying to teach myself formal methods. Wrote up a bit here - https://vikramsg.github.io/introduction-to-formal-methods-pa...

  • jadbox a day ago

    Are there any good formal method tools that work well with Node.js/Bun/Deno projects?

  • Ericson2314 a day ago

    The author should try some more modern formal methods.

    Tools like Lean and Rocq can do arbitrary math — the limit is your time and budget, not the tool.

    These performance questions can be mathematically defined, so it is possible.

    • ted_dunning a day ago

      Indeed.

      And the seL4 kernel has latency guarantees based on similar proofs (at considerable cost).

  • NooneAtAll3 a day ago

    what is P?

    • aw1621107 a day ago

      Looks like it's this [0]:

      > Distributed systems are notoriously hard to get right (i.e., guaranteeing correctness) as the programmer needs to reason about numerous control paths resulting from the myriad interleaving of events (or messages or failures). Unsurprisingly, programmers can easily introduce subtle errors when designing these systems. Moreover, it is extremely difficult to test distributed systems, as most control paths remain untested, and serious bugs lie dormant for months or even years after deployment.

      > The P programming framework takes several steps towards addressing these challenges by providing a unified framework for modeling, specifying, implementing, testing, and verifying complex distributed systems.

      It was last posted on HN about 2 years ago [1].

      [0]: https://p-org.github.io/P/whatisP/

      [1]: https://news.ycombinator.com/item?id=34273979