Big GPUs don't need big PCs

(jeffgeerling.com)

189 points | by mikece 14 hours ago

63 comments

  • Waterluvian 4 hours ago

    At what point do the OEMs begin to realize they don’t have to follow the current mindset of attaching a GPU to a PC and instead sell what looks like a GPU with a PC built into it?

    • pjmlp 20 minutes ago

      So basically going back to the old days of Amiga and Atari, in a certain sense, when PCs could only display text.

    • nightshift1 3 hours ago

      Exactly. With the Intel-Nvidia partnership signed this September, I expect to see some high-performance single-board computers being released very soon. I don't think the ATX form factor will survive another 30 years.

  • pjmlp 19 minutes ago

    Of course. Just go to any computer store: most gamer setups on affordable budgets go with the combo "beefy GPU + an i5" instead of an i7 or i9 Intel CPU.

  • numpad0 12 hours ago

    Not sure what was unexpected about the multi GPU part.

    It's very well known that most LLM frameworks, including llama.cpp, split models by layers, which have sequential dependencies, so multi GPU setups are completely stalled unless there are n_gpu users/tasks running in parallel. It's also known that some GPUs are faster at "prompt processing" and some at "token generation", so combining Radeon and NVIDIA cards sometimes helps. Reportedly the inter-layer transfer sizes are in the kilobyte range and PCIe x1 is plenty, or something.

    It takes appropriate backends with "tensor parallel" mode support, which split the neural network parallel to the direction of data flow and which obviously benefit substantially from a good interconnect between GPUs like PCIe x16, NVLink/Infinity Fabric bridge cables, and/or inter-GPU DMA over PCIe (called GPU P2P, GPUDirect, or some lingo like that).
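
    A toy PyTorch sketch of the difference (assuming two CUDA devices are visible; the shapes and the two-layer "model" are made up for illustration, not how llama.cpp or vLLM actually implement it):

        import torch

        d = 4096
        x = torch.randn(1, d, device="cuda:0")   # one token's hidden state

        # Layer split (pipeline style): layer 0 lives on GPU 0, layer 1 on GPU 1.
        # For a single request, GPU 1 idles while GPU 0 runs, and only the small
        # hidden-state tensor crosses the link between them.
        w0 = torch.randn(d, d, device="cuda:0")
        w1 = torch.randn(d, d, device="cuda:1")
        h = x @ w0                       # GPU 0 busy, GPU 1 stalled
        y_pipe = h.to("cuda:1") @ w1     # GPU 1 busy, GPU 0 stalled

        # Tensor parallel: the same weight matrix is split column-wise across both
        # GPUs, so both work on every token, at the cost of a gather per layer.
        wa = torch.randn(d, d // 2, device="cuda:0")
        wb = torch.randn(d, d // 2, device="cuda:1")
        ya = x @ wa                      # half the output columns, on GPU 0
        yb = x.to("cuda:1") @ wb         # other half, on GPU 1, concurrently
        y_tp = torch.cat([ya, yb.to("cuda:0")], dim=-1)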

    Absent those, I've read somewhere that people can sometimes see GPU utilization spikes walking across the GPUs in nvtop-style tools.

    Looking for a way to break up LLM tasks so that there are multiple tasks to run concurrently would be interesting, maybe by creating one "manager" and a few "delegated engineer" personalities. Or simulating multiple domains of the brain, such as the speech center, visual cortex, language center, etc., communicating in tokens might be an interesting way to work around this problem.

    • syntaxing 5 hours ago

      There are some technical implementations that make it more efficient, like EXO [1]. Jeff Geerling recently did a review of a 4-Mac Studio cluster with RDMA support, and you can see that EXO has a noticeable advantage [2].

      [1] https://github.com/exo-explore/exo [2] https://www.youtube.com/watch?v=x4_RsUxRjKU

    • zozbot234 12 hours ago

      > Looking for a way to break up LLM tasks so that there are multiple tasks to run concurrently would be interesting, maybe by creating one "manager" and a few "delegated engineer" personalities.

      This is pretty much what "agents" are for. The manager model constructs prompts and contexts that the delegated models can work on in parallel, returning results when they're done.
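
      A minimal sketch of that pattern, assuming a hypothetical complete(model, prompt) coroutine that calls whatever local inference endpoint you have (the model names and the "split into sub-tasks" step are placeholders):

          import asyncio

          async def complete(model: str, prompt: str) -> str:
              # Placeholder for a call to a local inference server (llama.cpp, vLLM, ...).
              await asyncio.sleep(0.1)   # stand-in for network/GPU time
              return f"[{model}] answer to: {prompt!r}"

          async def run_job(task: str) -> str:
              # "Manager" model breaks the task into independent sub-prompts.
              await complete("manager-model", f"Split into sub-tasks: {task}")
              subtasks = [f"{task} (part {i})" for i in range(3)]  # pretend these came from the manager

              # "Delegated engineer" models run concurrently; this is what actually
              # keeps several GPUs busy at the same time.
              results = await asyncio.gather(*(complete("engineer-model", s) for s in subtasks))
              return await complete("manager-model", "Merge these results: " + " | ".join(results))

          print(asyncio.run(run_job("summarize this repo")))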

    • nodja 10 hours ago

      > Reportedly the inter-layer transfer sizes are in kilobyte ranges and PCIe x1 is plenty or something.

      Not an expert, but napkin math tells me that more often than not this will be on the order of megabytes, not kilobytes, since it scales with sequence length.

      Example: Qwen3 30B has a hidden state size of 5120; even if quantized to 8 bits, that's 5120 bytes per token. It would pass the MB boundary with just a little over 200 tokens. Still not much of an issue when a single PCIe lane is ~2 GB/s.
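
      Spelling that out (hidden size and quantization are the figures above, the per-lane bandwidth is the rough ~2 GB/s just mentioned, and the 2048-token prompt is an assumed example):

          hidden_size = 5120        # Qwen3 30B hidden state width
          bytes_per_value = 1       # 8-bit activations
          lane_bytes_per_s = 2e9    # ~2 GB/s for a single PCIe lane (rough)

          seq_len = 2048            # assumed prompt length during prefill
          transfer = seq_len * hidden_size * bytes_per_value
          print(f"{transfer / 1e6:.1f} MB per layer boundary")               # ~10.5 MB
          print(f"{transfer / lane_bytes_per_s * 1e3:.1f} ms over PCIe x1")  # ~5 ms per hop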

      I think device to device latency is more of an issue here, but I don't know enough to assert that with confidence.

      • remexre 8 hours ago

        For each token generated, you only send one token’s worth between layers; the previous tokens are in the KV cache.
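
        So during decode the per-hop transfer is tiny, e.g.:

            hidden_size = 5120                  # same Qwen3 30B figure as above
            per_token = hidden_size * 1         # 8-bit: bytes crossing the link per generated token
            print(f"{per_token / 1e3:.1f} KB")  # ~5 KB; latency matters more than bandwidth here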

  • yjftsjthsd-h 13 hours ago

    I've been kicking this around in my head for a while. If I want to run LLMs locally, a decent GPU is really the only important thing. At that point, the question becomes, roughly: what is the cheapest computer to tack on the side of the GPU? Of course, that assumes that everything does in fact work; unlike OP, I am barely in a position to understand e.g. BAR problems, let alone try to fix them, so what I actually did was build a cheap-ish x86 box with a half-decent GPU and call it a day :) But it's still stuck in my brain: there must be a more efficient way to do this, especially if all you need is just enough computer to shuffle data to and from the GPU and serve that over a network connection.

    • binsquare 12 hours ago

      I run a crowd-sourced website that collects data on the best and cheapest hardware setups for local LLMs here: https://inferbench.com/

      Source code: https://github.com/BinSquare/inferbench

    • seanmcdirmid 12 hours ago

      And you don’t want to go the M4 Max/M3 Ultra route? It works well enough for most mid sized LLMs.

    • zeusk 13 hours ago

      Get the DGX Spark computers? They’re exactly what you’re trying to build.

      • Gracana 5 hours ago

        They’re very slow.

        • geerlingguy 4 hours ago

          They're okay, generally, but slow for the price. You're paying more for the ConnectX-7 networking than for inference performance.

          • Gracana 3 hours ago

            Yeah, I wouldn’t complain if one dropped in my lap, but they’re not at the top of my list for inference hardware.

            Although... Is it possible to pair a fast GPU with one? Right now my inference setup for large MoE LLMs has shared experts in system memory, with KV cache and dense parts on a GPU, and a Spark would do a better job of handling the experts than my PC, if only it could talk to a fast GPU.

            [edit] Oof, I forgot these have only 128GB of RAM. I take it all back, I still don’t find them compelling.

    • tcdent 12 hours ago

      We're not yet to the point where a single PCIe device will get you anything meaningful; IMO 128 GB of RAM available to the GPU is essential.

      So while you don't need a ton of compute on the CPU, you do need the ability to address multiple PCIe lanes. A relatively low-spec AMD EPYC processor is fine if the motherboard exposes enough lanes.

      • p1necone 9 hours ago

        I'm holding out for someone to ship a GPU with DIMM slots on it.

        • tymscar 7 hours ago

          DDR5 is roughly an order of magnitude slower than really good VRAM: dual-channel DDR5 tops out around 100 GB/s, while high-end GDDR/HBM is in the 1-3 TB/s range. That's one big reason.

          • dawnerd 6 hours ago

            But it would still be faster than splitting the model up across a cluster, right? I've also wondered why they haven't just shipped GPUs the way CPUs are shipped.

            • cogman10 5 hours ago

              Man, I'd love to have a GPU socket. But it'd be pretty hard to get a standard going that everyone would support. Look at CPU sockets: we barely get crossover for two generations.

              But boy, a standard GPU socket so you could easily BYO cooler would be nice.

          • cogman10 5 hours ago

            For AI, really good isn't really a requirement. If a middle ground memory module could be made, then it'd be pretty appealing.

        • anon25783 7 hours ago

          Would that be worth anything, though? What about the overhead of the extra clock cycles needed for loading from and storing to RAM? It might not amount to a net performance benefit, and I bet it could also complicate heat management.

        • kristianp 7 hours ago

          A single CAMM might suit better.

      • skhameneh 12 hours ago

        There is plenty that can run within 32/64/96 GB of VRAM. IMO models like Phi-4 are underrated for many simple tasks. Some quantized Gemma 3 models are quite good as well.

        There are larger/better models as well, but those tend to really push the limits of 96 GB.

        FWIW, once you start pushing into 128 GB+, the ~500 GB models really start to become attractive, because at that point you're probably wanting just a bit more out of everything.

        • tcdent 12 hours ago

          IDK, all of my personal and professional projects involve pushing the SOTA to the absolute limit. Using anything other than the latest OpenAI or Anthropic model is out of the question.

          Smaller open source models are a bit like 3D printing in the early days: fun to experiment with, but not that valuable for anything other than making toys.

          Text summarization, maybe? But even then I want a model that understands the complete context and does a good job. Even for things like "generate one sentence about the action we're performing", I usually find I can just incorporate it into the output schema of a larger request instead of making a separate request to a smaller model.

          • xyzzy123 11 hours ago

            It seems to me like the use case for local GPUs is almost entirely privacy.

            If you buy a 15k AUD RTX 6000 96GB, that card will _never_ pay for itself on a gpt-oss:120b workload vs just using OpenRouter, no matter how many tokens you push through it, because the cost of residential power in Australia means you cannot generate tokens cheaper than the cloud even if the card were free.

            • girvo 10 hours ago

              > because the cost of residential power in Australia

              This doesn't really matter for your overall point, which I agree with, but:

              The rise of rooftop solar and home battery storage flips this a bit in Australia now, IMO. At least where I live, every house has solar panels on it.

              Not worth it just for local LLM usage, but an interesting change to energy economics IMO!

            • joefourier 10 hours ago

              There’s a few more considerations:

              - You can use the GPU for training and run your own fine tuned models

              - You can have much higher generation speeds

              - You can sell the GPU on the used market in ~2 years time for a significant portion of its value

              - You can run other types of models like image, audio or video generation that are not available via an API, or cost significantly more

              - Psychologically, you don’t feel like you have to constrain your token spending and you can, for instance, just leave an agent to run for hours or overnight without feeling bad that you just “wasted” $20

              - You won’t be running the GPU at max power constantly

            • 15155 10 hours ago

              Or censorship avoidance

          • popalchemist 10 hours ago

            This is simply not true. Your heuristic is broken.

            The recent Gemma 3 models, which are produced by Google (a little startup - heard of em?) outperform the last several OpenAI releases.

            Closed does not necessarily mean better. Plus the local ones can be finetuned to whatever use case you may have, won't have any inputs blocked by censorship functionality, and you can optimize them by distilling to whatever spec you need.

            Anyway, all that is extraneous detail - the important thing is to decouple "open" and "small" from "worse" in your mind. The most recent Gemma 3 model specifically is incredible, and it makes sense given that Google has access to many times more data than OpenAI for training (something like a factor of 10 at least). Which is of course a very straightforward idea to wrap your head around: Google was scraping the internet for decades before OpenAI even entered the scene.

            So just because their Gemma model is released in an open-source (open weights) way, doesn't mean it should be discounted. There's no magic voodoo happening behind the scenes at OpenAI or Anthropic; the models are essentially of the same type. But Google releases theirs to undercut the profitability of their competitors.

    • dist-epoch 12 hours ago

      This problem was already solved 10 years ago - crypto mining motherboards, which have a large number of PCIe slots, a CPU socket, one memory slot, and not much else.

      > Asus made a crypto-mining motherboard that supports up to 20 GPUs

      https://www.theverge.com/2018/5/30/17408610/asus-crypto-mini...

      For LLMs you'll probably want a different setup, with some more memory and some M.2 storage.

      • jsheard 12 hours ago

        Those only gave each GPU a single PCIe lane though, since crypto mining barely needed to move any data around. If your application doesn't fit that mould then you'll need a much, much more expensive platform.

        • dist-epoch 12 hours ago

          After you load the weights into the GPU and keep the KV cache there too, you don't need any other significant traffic.

          • numpad0 12 hours ago

            Even in tensor parallel modes? I thought it could only work if you're fine stalling all but n GPUs for n users at any given moment.

      • skhameneh 12 hours ago

        In theory, it's only sufficient for pipeline parallelism due to limited lanes and interconnect bandwidth.

        Generally, scalability on consumer GPUs falls off somewhere between 4 and 8 GPUs. Those running more GPUs are typically using a larger number of smaller GPUs for cost effectiveness.

      • zozbot234 12 hours ago

        M.2 is mostly just a different form factor for PCIe anyway.

    • Eisenstein 10 hours ago

      There is a whole section in here on how to spec out a cheap rig and what to look for:

      * https://jabberjabberjabber.github.io/Local-AI-Guide/

  • omneity 5 hours ago

    I wish for a hardware + software solution that enables direct PCIe interconnect using lanes independent of the chipset/CPU. A PCIe mesh of sorts.

    With the right software support from, say, PyTorch, this could suddenly turn old GPUs and underpowered PCs like the one in TFA into very attractive and competitive solutions for training and inference.

    • snuxoll 4 hours ago

      PCIe already allows DMA between peers on the bus, but, as you pointed out, the traces for the lanes have to terminate somewhere. However, it doesn't have to be the CPU (which is, of course, the PCIe root in modern systems) handling the traffic - a PCIe switch may be used to facilitate DMA between devices attached to it, if it supports routing DMA traffic directly.
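
      A quick way to check whether a given pair of cards can actually do peer access, assuming PyTorch and at least two CUDA devices:

          import torch

          if torch.cuda.device_count() >= 2:
              # Ask the driver whether GPU 0 can map GPU 1's memory directly
              # (over a shared switch or root complex) instead of staging through host RAM.
              print("peer access 0 -> 1:", torch.cuda.can_device_access_peer(0, 1))

              # A device-to-device copy; with peer access it can stay on the PCIe
              # fabric, otherwise the driver bounces it through system memory.
              x = torch.randn(1024, 1024, device="cuda:0")
              y = x.to("cuda:1")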

  • 3eb7988a1663 13 hours ago

    Data points like this really make me reconsider my daily driver. I should be running one of those $300 mini PCs at <20 W. With ~flat CPU performance gains, it would be fine for the next 10 years. Just remote into my beefy workstation when I actually need to do real work. Browsing the web, watching videos, even playing some games is easily within their wheelhouse.

    • samuelknight 12 hours ago

      Switching from my 8-core Ryzen mini PC to an 8-core Ryzen desktop makes my unit tests run way faster. TDP limits can tip you off to very different performance envelopes in otherwise similarly specced CPUs.

      • adrian_b 7 hours ago

        A full-size desktop computer will always be much faster for any workload that fully utilizes the CPU.

        However, a full-size desktop computer seldom makes sense as a personal computer, i.e. as the computer that interfaces to a human via display, keyboard and graphic pointer.

        For most of the activities done directly by a human, i.e. reading & editing documents, browsing Internet, watching movies and so on, a mini-PC is powerful enough. The only exception is playing games designed for big GPUs, but there are many computer users who are not gamers.

        In most cases the optimal setup is to use a mini-PC as your personal computer and a full-size desktop as a server on which you can launch any time-consuming tasks, e.g. compilation of big software projects, EDA/CAD simulations, testing suites etc.

        The desktop used as server can use Wake-on-LAN to stay powered off when not needed and wake up whenever it must run some task remotely.
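
        A Wake-on-LAN magic packet is simple enough to send from the mini-PC yourself (a minimal sketch; the MAC address and broadcast address are placeholders):

            import socket

            def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
                # Magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times.
                payload = bytes.fromhex("ff" * 6 + mac.replace(":", "").lower() * 16)
                with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                    s.sendto(payload, (broadcast, port))

            wake("aa:bb:cc:dd:ee:ff")   # placeholder MAC of the desktop's NIC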

      • loeg 8 hours ago

        Even if you could cool the full TDP in a micro PC, in a full size desktop you might be able to use a massive AIO radiator with fans running at very slow, very quiet speeds instead of jet turbine howl in the micro case. The quiet and ease of working in a bigger space are mostly a good tradeoff for a slightly larger form factor under a desk.

    • ekropotin 13 hours ago

      As an experiment, I decided to try using a Proxmox VM, with an eGPU and the USB bus passed through to it, as my main PC for browsing and working on hobby projects.

      It's just 1 vCPU with 4 GB of RAM, and you know what? It's more than enough for these needs. I think hardware manufacturers have falsely convinced us that every professional needs a beefy laptop to be productive.

    • reactordev 12 hours ago

      I went with a beelink for this purpose. Works great.

      Keeps the desk nice and tidy while “the beasts” roar in a soundproofed closet.

    • jasonwatkinspdx 7 hours ago

      For just basic windows desktop stuff, a $200 NUC has been good enough for like 15 years now.

  • jonahbenton 14 hours ago

    So glad someone did this. I have been running big GPUs in eGPU enclosures connected to spare laptops and thinking, why not Pis?

  • Wowfunhappy 13 hours ago

    I really would have liked to see gaming performance, although I realize it might be difficult to find an AAA game that supports ARM. (Forcing the Pi to emulate x86 with FEX doesn't seem entirely fair.)

    • 3eb7988a1663 13 hours ago

      You might have to thread the needle to find a game which does not bottleneck on the CPU.

  • kgeist 11 hours ago

    What about constrained decoding (with JSON schemas)? I noticed my vLLM instance is pegging one CPU core at 100%.

  • kristjansson 12 hours ago

    Really why have the PCI/CPU artifice at all? Apple and Nvidia have the right idea: put the MPP on the same die/package as the CPU.

    • bigyabai 12 hours ago

      > put the MPP on the same die/package as the CPU.

      That would help in latency-constrained workloads, but I don't think it would make much of a difference for AI or most HPC applications.

  • jauntywundrkind 10 hours ago

    PCIe 3.0 is the nice, easy, convenient generation where 1 lane = 1 GB/s. Given the overhead, that's pretty close to 10Gb Ethernet speeds (with much lower latency, though).
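
    The rough numbers (nominal encoding overheads only; real-world throughput is a bit lower):

        pcie3_lane = 8e9 * 128 / 130 / 8   # 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane
        ten_gbe = 10e9 / 8                 # 10GbE line rate before framing overhead -> 1.25 GB/s
        print(f"PCIe 3.0 x1 ~ {pcie3_lane / 1e9:.2f} GB/s, 10GbE ~ {ten_gbe / 1e9:.2f} GB/s")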

    I do wonder how long the cards are going to need host systems at all. We've already seen GPUs with M.2 SSDs attached; the Radeon Pro SSG dates back to 2016! You still need a way to get the model onto the card in the first place and to get work in and out, but a 1GbE port and a small RISC-V chip (which Nvidia already uses for management cores) could suffice. Maybe even an RPi on the card. https://www.techpowerup.com/224434/amd-announces-the-radeon-...

    Given the gobs of memory these cards have, they probably don't even need storage; they just need big pipes. Intel had 100GbE on their Xeon & Xeon Phi parts (10x what we saw here!) in 2016! GPUs that just plug into the switch and talk over 400GbE or Ultra Ethernet or switched CXL, and run semi-independently, feel so sensible, not at all outlandish. https://www.servethehome.com/next-generation-interconnect-in...

    It's far off for now, but flash makers are also looking at radically many-channel flash, which can provide absurdly high GB/s: High Bandwidth Flash. Potentially with some extremely parallel tensor cores integrated on each channel. Switching from DRAM to flash for AI processing could be a colossal win for fitting large models cost-effectively (and perhaps power-efficiently) while still having ridiculous gobs of bandwidth, with the possible win of doing processing and filtering extremely near the data too. https://www.tomshardware.com/tech-industry/sandisk-and-sk-hy...

  • lostmsu 12 hours ago

    Now compare batched training performance. Or batched inference.

    Of course prefill is going to be GPU bound. You only send a few thousand bytes to it, and don't really ask it to return much. But after prefill is done, unless you use batched mode, you aren't really using your GPU for anything more than its VRAM bandwidth.
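
    Back-of-the-envelope version of that, with assumed (not measured) numbers: at batch size 1, every generated token has to stream the active weights out of VRAM once, so memory bandwidth sets the ceiling.

        vram_bandwidth = 1000e9   # ~1 TB/s, high-end consumer card (assumed)
        active_weights = 20e9     # ~20 GB of weights read per token (assumed)
        print(f"~{vram_bandwidth / active_weights:.0f} tokens/s ceiling at batch size 1")
        # Batching amortizes each weight read over many requests, which is when
        # the GPU's compute throughput starts to matter again.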

  • Avlin67 6 hours ago

    tired of jeff glinglin everywhere...