7 comments

  • sandwichsphinx a day ago

    For local large language models, my current setup is Ollama running on my M1 Mac Mini with 8GB of RAM, using whatever SOTA 8B model comes out. I used to have a more powerful workstation I built in 2016 with three GTX 1070s, but the capacitors were falling off, and I could not justify replacing it when Claude and ChatGPT subscriptions are more than enough for me. I plan on building a new dedicated workstation once the early-adopter premium comes down. Today's hardware is still too early and too expensive to warrant any significant personal investment, in my opinion.
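
    (Back-of-envelope for why an 8B quant fits in 8GB of unified memory; the bits-per-weight and overhead figures below are rough assumptions, not measurements:)

      params = 8e9                  # 8B parameters
      bits_per_weight = 4.5         # Q4_K_M averages roughly 4.5 bits/weight
      weights_gb = params * bits_per_weight / 8 / 1e9
      overhead_gb = 1.5             # KV cache + runtime overhead, a guess
      print(f"~{weights_gb:.1f} GB weights + ~{overhead_gb} GB overhead of 8 GB")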

  • ActorNightly a day ago

    At work, we have access to AWS bedrock, so we use that.
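
    A minimal sketch of a Bedrock call from Python with boto3 (credentials assumed configured; the model ID and region are just examples):

      import boto3

      client = boto3.client("bedrock-runtime", region_name="us-east-1")
      response = client.converse(
          modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
          messages=[{"role": "user", "content": [{"text": "Hello!"}]}],
          inferenceConfig={"maxTokens": 256},
      )
      print(response["output"]["message"]["content"][0]["text"])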

    At home, I did the math, and it's cheaper for me to buy OpenAI credits and use GPT-4 than to invest in graphics cards. I use maybe $5 a month, max.
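
    The break-even math is short (the card price below is a hypothetical figure):

      gpu_price = 600          # hypothetical mid-range 12GB card, in dollars
      monthly_api_spend = 5    # dollars per month, from above
      months = gpu_price / monthly_api_spend
      print(f"{months:.0f} months (~{months / 12:.0f} years) to break even")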

  • roosgit a day ago

    I have a separate PC that I access through SSH. I recently bought a GPU for it, before that I was running it on CPU alone.

    - B550MH motherboard

    - Ryzen 3 4100 CPU

    - 32GB (2x16) RAM cranked up to 3200MHz (prompt generation is memory bound)

    - 256GB M.2 NVMe (helps with loading models faster)

    - Nvidia 3060 12GB

    Software-wise, I use llamafile because, on CPU, its prompt processing is 10-20% faster than llama.cpp's.

    Performance with Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf:

    CPU-only: 23.47 t/s (processing), 8.73 t/s (generation)

    GPU: 941.5 t/s (processing), 29.4 t/s (generation)
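
    Since llamafile serves an OpenAI-compatible API (port 8080 by default), a sketch of querying it from another machine; the hostname and model name are placeholders:

      from openai import OpenAI

      client = OpenAI(
          base_url="http://llm-box:8080/v1",  # placeholder hostname for the PC
          api_key="sk-no-key-required",       # llamafile doesn't check the key
      )
      reply = client.chat.completions.create(
          model="Meta-Llama-3.1-8B-Instruct-Q4_K_M",
          messages=[{"role": "user", "content": "Say hi in five words."}],
      )
      print(reply.choices[0].message.content)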

  • lysace a day ago

    Is anyone doing a local Copilot? What's your setup? Is it competitive with GitHub Copilot?

    I just realized that my 32 GB Mac M2 Max Studio is pretty good at running relatively large models using Ollama. And there's the Continue.dev VS Code plugin that can use it, but I feel the suggested defaults aren't well tuned for this config.
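
    Continue talks to Ollama's local HTTP API, so a quick sanity check of the server and model before wiring up the plugin looks like this (the model name is just an example):

      import requests

      resp = requests.post(
          "http://localhost:11434/api/chat",  # Ollama's default endpoint
          json={
              "model": "qwen2.5-coder:7b",    # example local coding model
              "messages": [{"role": "user", "content": "Write hello world in Go."}],
              "stream": False,                # one JSON reply instead of a stream
          },
      )
      print(resp.json()["message"]["content"])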

    • kingkongjaffa 20 hours ago

      You can connect a local Ollama instance to the Zed editor to chat with your open files and get inline prompting.

  • p1esk a day ago

    8xA6000

  • talldayo a day ago

    RTX 3070