Three types of LLM workloads and how to serve them

(modal.com)

75 points | by charles_irl 4 days ago

5 comments

  • rippeltippel 3 days ago

    > Gallia est omnis divisor in partes tres.

    OCD-driven fix: The correct Latin quote is "Gallia est omnis divisa in partes tres".

  • ZsoltT 3 days ago

    > we recommend using SGLang with excess tensor parallelism and EAGLE-3 speculative decoding on live edge Hopper/Blackwell GPUs accessed via low-overhead, prefix-aware HTTP proxies

    lord

    • charles_irl 3 days ago

      Sorry to lead with a bunch of jargon! Wanted to make it obvious that we'd give concrete recommendations instead of palaver.

      The technical terms are explained and diagrammed later in the post, and the recommendations are derived from something close to first principles (e.g. roofline analysis).
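
      If it's useful, the gist of the roofline argument fits in a few lines. This is only a back-of-envelope sketch (H100 SXM spec-sheet numbers, weights-only memory traffic, ignoring KV-cache reads and attention FLOPs), not the full analysis from the post:

          # roofline back-of-envelope for single-token decode, dense weights only
          # hardware numbers are H100 SXM spec-sheet values (assumption)
          PEAK_FLOPS = 989e12    # dense BF16 FLOP/s
          MEM_BW = 3.35e12       # HBM3 bytes/s
          ridge = PEAK_FLOPS / MEM_BW  # ~295 FLOPs per byte moved

          def decode_intensity(batch_size, bytes_per_param=2):
              # ~2 FLOPs per parameter per sequence each step, but the
              # weights are read from HBM once for the whole batch
              return 2 * batch_size / bytes_per_param

          for b in (1, 32, 256, 512):
              ai = decode_intensity(b)
              side = "memory-bound" if ai < ridge else "compute-bound"
              print(f"batch={b:4d}  intensity={ai:6.0f} FLOP/B  -> {side}")

      Decode stays memory-bound until the batch is large enough to cross the ridge point, which is where the batching and parallelism recommendations come from.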

  • omneity 3 days ago

    Very cool insights, thanks for sharing!

    Do you have benchmarks for the SGLang vs vLLM latency and throughput question? Not to challenge your point, but I’d like to reproduce these results and fiddle with the configs a bit, ideally across different model & hardware combos.

    (happy modal user btw)
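
    For concreteness, this is roughly the harness I have in mind — just a sketch, assuming both engines are serving an OpenAI-compatible /v1/completions endpoint (the ports are the vLLM/SGLang defaults; the model name is a placeholder):

        import time, statistics, requests

        def bench(base_url, model, prompt, n=20, max_tokens=128):
            # serial requests: measures per-request latency, not peak throughput
            latencies, tokens = [], 0
            for _ in range(n):
                t0 = time.perf_counter()
                r = requests.post(f"{base_url}/v1/completions", json={
                    "model": model, "prompt": prompt, "max_tokens": max_tokens})
                r.raise_for_status()
                latencies.append(time.perf_counter() - t0)
                tokens += r.json()["usage"]["completion_tokens"]
            return statistics.median(latencies), tokens / sum(latencies)

        for name, url in [("vllm", "http://localhost:8000"),
                          ("sglang", "http://localhost:30000")]:
            p50, tps = bench(url, "meta-llama/Llama-3.1-8B-Instruct",
                             "Explain KV caching in one paragraph.")
            print(f"{name}: p50={p50:.2f}s  ~{tps:.0f} tok/s")

    For a real throughput number you'd want concurrent requests rather than a serial loop, but this is enough to sanity-check the latency side.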