DeepSeek-v3.1 Release

(api-docs.deepseek.com)

193 points | by wertyk 3 hours ago

36 comments

  • danielhanchen 11 minutes ago

    For local runs, I made some GGUFs! You need around RAM + VRAM >= 250 GB for good performance with the dynamic 2-bit quant (2-bit MoE, 6-8 bit for the rest). You can also do SSD offloading, but it'll be slow.

    ./llama.cpp/llama-cli -hf unsloth/DeepSeek-V3.1-GGUF:UD-Q2_K_XL -ngl 99 --jinja -ot ".ffn_.*_exps.=CPU"

    More details on running + optimal params here: https://docs.unsloth.ai/basics/deepseek-v3.1
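
    If you'd rather have it behind an HTTP endpoint than the interactive CLI, llama-server takes the same flags (a sketch with the same quant; adjust the port to taste):

    # serves an OpenAI-compatible API on localhost:8080 with the same MoE-expert offload
    ./llama.cpp/llama-server -hf unsloth/DeepSeek-V3.1-GGUF:UD-Q2_K_XL -ngl 99 --jinja -ot ".ffn_.*_exps.=CPU" --port 8080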

  • hodgehog11 3 hours ago

    For reference, here is the terminal-bench leaderboard:

    https://www.tbench.ai/leaderboard

    Looks like it doesn't get close to GPT-5, Claude 4, or GLM-4.5, but still does reasonably well compared to other open weight models. Benchmarks are rarely the full story though, so time will tell how good it is in practice.

    • segmondy an hour ago

      Garbage benchmark: an inconsistent mix of "agent tools" and models. If you wanted to present a meaningful benchmark, the agent tool would stay the same, so we could really compare the models.

      There are plenty of other benchmarks that disagree with this one. That said, from my experience most of these benchmarks are trash. Use the model yourself, apply your own set of problems, and see how well it fares.

    • coliveira 2 hours ago

      My personal experience is that it produces high quality results.

      • amrrs 2 hours ago

        Any example or prompt you used to make this statement?

        • imachine1980_ 2 hours ago

          I remember asking for quotes about the Spanish conquest of South America because I couldn't remember who said a specific thing. The GPT model started hallucinating quotes on the topic, while DeepSeek responded with something like, "I don't know a quote about that specific topic, but you might mean this other thing," and then cited a real quote on the same topic, after acknowledging that it couldn't find the one I had read in an old book. I don't use it for coding, but for things that are more unique I feel it is more precise.

          • mycall 18 minutes ago

            I wonder if something like Conway's law is at all responsible for that: regionally trained data carries its own concept biases, which the model sends back in its responses.

    • tonyhart7 37 minutes ago

      Yeah, but the pricing is insane. I don't care about SOTA if it breaks my bank.

    • guluarte 2 hours ago

      Tbh, companies like Anthropic and OpenAI create custom agents for specific benchmarks.

    • seunosewa 2 hours ago

      The DeepSeek R1 in that list is the old model that's been replaced. Update: Understood.

      • yorwba 2 hours ago

        Yes, and 31.3% is given in the announcement as the performance of the new v3.1, which would put it in sixteenth place.

    • YetAnotherNick 2 hours ago

      Depends on the agent. Ranks 5 and 15 are both Claude 4 Sonnet, and this stands close to 15th.

  • seunosewa 2 hours ago

    It's a hybrid reasoning model. It's good with tool calls and doesn't overthink everything, but it randomly falls back to outdated tool-call formats instead of the standard JSON format. I guess the V3 training set has a lot of those.

    • darrinm 18 minutes ago

      Did you try the strict (beta) function calling? https://api-docs.deepseek.com/guides/function_calling
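
      For anyone curious what that looks like on the wire, here's a minimal sketch of an OpenAI-style tools request against the DeepSeek endpoint (the execute_shell tool definition is made up for illustration; the strict beta reportedly needs extra settings, so check the linked guide):

      curl https://api.deepseek.com/chat/completions \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
        -d '{
          "model": "deepseek-chat",
          "messages": [{"role": "user", "content": "Show me the current directory"}],
          "tools": [{
            "type": "function",
            "function": {
              "name": "execute_shell",
              "description": "Run a shell command and return its output",
              "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"]
              }
            }
          }]
        }'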

    • ivape 2 hours ago

      What formats? I thought a JSON schema is what allows these LLMs to enforce structured outputs at the decoder level? I guess you can do it with any format, but why stray from JSON?

      • seunosewa an hour ago

        Sometimes it will randomly generate something like this in the body of the text:

          <tool_call>executeshell <arg_key>command</arg_key> <arg_value>echo "" >> novels/AI_Voodoo_Romance/chapter-1-a-new-dawn.txt</arg_value> </tool_call>

        or this:

          <|toolcallsbegin|><|toolcallbegin|>executeshell<|toolsep|>{"command": "pwd && ls -la"}<|toolcallend|><|toolcallsend|>

        Prompting it to use the right format doesn't seem to work. Claude, Gemini, GPT-5, and GLM 4.5 don't do that. To accommodate DeepSeek, the tiny agent that I'm building will have to support all these weird formats.

  • esafak 2 hours ago

    It seems to be behind Qwen3 235B 2507 Reasoning (which I like) and gpt-oss-120B: https://artificialanalysis.ai/models/deepseek-v3-1-reasoning

    Pricing: https://openrouter.ai/deepseek/deepseek-chat-v3.1

    • bigyabai 2 hours ago

      Those Qwen3 2507 models are the local crème de la crème right now. If you've got any sort of GPU and ~32 GB of RAM to play with, the A3B one is great for pair-programming tasks.
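
      Getting it running is roughly one command with llama.cpp (a sketch; the unsloth repo/quant name is from memory, so double-check it on Hugging Face):

      # offloads the MoE experts to system RAM if the GPU is small, same trick as the DeepSeek command elsewhere in this thread
      ./llama.cpp/llama-server -hf unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF:Q4_K_M -ngl 99 --jinja -c 32768 -ot ".ffn_.*_exps.=CPU"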

      • decide1000 an hour ago

        I use it on a 24 GB Tesla P40 GPU. Very happy with the results.

        • hkt an hour ago

          Out of interest, roughly how many tokens per second do you get on that?

          • edude03 an hour ago

            Like 4. Definitely single digit. The P40s are slow af

      • pdimitar 2 hours ago

        Do you happen to know if it can be run via an eGPU enclosure with, e.g., an RTX 5090 inside, under Linux?

        I've been considering buying a Linux workstation lately, and I want it to be all AMD. But if I could just plug in an NVIDIA card via an eGPU enclosure for self-hosting LLMs, that would be amazing.

        • oktoberpaard an hour ago

          I’m running Ollama on 2 eGPUs over Thunderbolt. Works well for me. You're still dealing with an NVIDIA device, of course. The connection type is not going to change that hassle.

          • pdimitar an hour ago

            Thank you for the validation. As much as I don't like NVIDIA's shenanigans on Linux, having a local LLM is very tempting and I might put my ideological problems to rest over it.

            Though I have to ask: why two eGPUs? Is the LLM software smart enough to be able to use any combination of GPUs you point it at?

            • arcanemachiner 11 minutes ago

              Yes, Ollama is very plug-and-play when it comes to multi-GPU.

              llama.cpp probably is too, but I haven't tried it with a bigger model yet.
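
              For llama.cpp the relevant knobs look something like this (a sketch, not verified over Thunderbolt):

              # split layers across all visible GPUs (the default), with --tensor-split setting the per-device ratio
              ./llama.cpp/llama-server -m model.gguf -ngl 99 --split-mode layer --tensor-split 1,1
              # or pin everything to a single GPU
              ./llama.cpp/llama-server -m model.gguf -ngl 99 --split-mode none --main-gpu 0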

        • gunalx 2 hours ago

          You would still need drivers and all the stuff that's difficult with NVIDIA on Linux; an eGPU doesn't change that. (It's not necessarily terrible, just suboptimal.) I'd rather just add the second GPU inside the workstation, or run the LLM on your AMD GPU.

          • pdimitar 2 hours ago

            Oh, we can run LLMs efficiently with AMD GPUs now? Pretty cool, I haven't been following, thank you.

            • DarkFuture an hour ago

              I've been running LLMs on my Radeon 7600 XT 16 GB for the past 2-3 months without issues (Windows 11), using llama.cpp only. The only thing from AMD I installed (apart from the latest Radeon drivers) is the "AMD HIP SDK" (a very straightforward installer). After unzipping the llama.cpp release (the zip from the GitHub releases page must contain "hip-radeon" in the name), all I do is this:

              llama-server.exe -ngl 99 -m Qwen3-14B-Q6_K.gguf

              Then I connect to llama.cpp in the browser at localhost:8080 for the WebUI (it's basic but does the job; screenshots can be found on Google). You can also connect more advanced interfaces to it, because llama.cpp actually has an OpenAI-compatible API.
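
              For example, anything that speaks the OpenAI chat completions format can point at it (sketch):

              curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "hello"}]}'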

        • bigyabai 2 hours ago

          Sure, though you'll be bottlenecked by the interconnect speed if you're splitting the model between system memory and the dGPU's memory. That shouldn't be an issue for the 30B model, but it would definitely be an issue for the 480B-sized models.

      • tomr75 2 hours ago

        With qwen code?

  • abtinf an hour ago

    Unrelated, but it would really be nice to have a chart breaking down Price Per Token Per Second for various model, prompt, and hardware combinations.

  • theuurviv467456 an hour ago

    Sweet. I wish these guys weren't bound by the idiotic "nationalist" () bans so that they could do their work unrestricted.

    Only idiots who are completely drowned in the US's dark propaganda would think this is about anything but keeping China down.

    • tonyhart7 26 minutes ago

      Every country acts in its own best interest; the US is not unique in this regard.

      Wait until you find out that China acts the same way toward the rest of the world (surprised Pikachu face).

    • simianparrot an hour ago

      As if the CCP needs help keeping its own people down. Please.