DeepSeek-v3.2-Exp

(github.com)

305 points | by meetpateltech 2 days ago

51 comments

  • awongh 2 days ago

    The second-order effect that not a lot of people talk about is price: it's amazing that model scaling at this pace also comes with falling prices.

    I think this is just as important to the distribution of AI as model intelligence is.

    AFAIK there are no fundamental "laws" that prevent price from continuing to fall, at least in step with Moore's law (or whatever the current AI/Nvidia chip development cycle is called right now). Each new generation of hardware is significantly faster and cheaper than the last, so will we see a GPT-5-class model at half the price in a year? (Yes, I know thinking models cost more, but I mean on a per-token basis.)

    • samuelknight 2 days ago

      You are vastly underestimating the price decline. To cherry-pick one article: in the first two years since GPT-3.5, inference price for the same amount of intelligence has decreased 10x per year, according to a study by Andreessen Horowitz (https://a16z.com/llmflation-llm-inference-cost/). So even in a stark slowdown scenario, we could still see a 1000x decrease in the next 5 years.
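
      For scale, a quick back-of-the-envelope on those two rates (my own illustrative Python, not figures from the a16z piece):

        # Back-of-the-envelope on the two rates above (illustrative only).
        full_trend = 10 ** 5                        # 10x cheaper per year, sustained for 5 years
        slowdown_total = 1000                       # the "stark slowdown" figure: 1000x over 5 years
        implied_annual = slowdown_total ** (1 / 5)  # annual rate that slowdown implies

        print(f"10x/year for 5 years -> {full_trend:,}x cheaper")
        print(f"1000x over 5 years   -> ~{implied_annual:.1f}x cheaper per year")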

      Price deflation is not tied to Moore's law right now, because much of the performance gain comes from model optimization, high-bandwidth memory supply chains, and electrical capacity build-out, not FLOP density.

      • awongh 2 days ago

        True! I just know that model optimization gains are much less guaranteed than, say, FLOP density, even though model optimization has so far provided way more gains than hardware advancements.

        Part of me is optimistic that when the AI bubble bursts the excess data center capacity is going to be another force driving the cost of inference down.

        • naasking 2 days ago

          > I just know that model optimization gains are much less guaranteed than say, FLOP density, even though model optimization has so far provided way more gains than hardware advancements.

          Performance gained from model improvements has outpaced performance gained from hardware improvements for decades.

        • NemoNobody 2 days ago

          Haha, I love how delusional everyone is about AI.

          Yeppers, when that bubble bursts - that's hilarious. This is the kinda stuff grandkids won't believe someday.

      • throwaway314155 2 days ago

        > has decreased 10x per year according to a study by Andreessen Horowitz

        I believe you but that's not exactly an unbiased source of information.

    • Alex_1729 a day ago

      We are heading into the future of very low-cost AI inference. It's a good thing, and expected.

  • mythz 2 days ago

    Happy to see Chinese OSS models keep getting better and cheaper. It also comes with a 50% API price drop for an already cheap model, now at:

    $0.28/M input ($0.028/M cache hit), $0.42/M output

    • manishsharan 2 days ago

      This price drop is nice, but I wonder how long it will last. Their prices used to be very low, then they almost doubled, and now they've dropped again.

      • nacs 2 days ago

        I don't know if it will stay this low, but the whole point of v3.2 is to be cheaper to run than v3.1 and earlier.

        (Their inference cost now grows much more slowly as the context grows, because of the sparse attention mechanism.)

      • guluarte 2 days ago

        I was using it daily, but after the price jump, using Codex and Claude was much cheaper than using DeepSeek.

    • dizhn 2 days ago

      What was the price before? I thought they had just increased their prices.

  • esafak 2 days ago
    • nacs 2 days ago

      Strange - the model is marked as "Trains on data" ("To our knowledge, this provider may use your prompts and completions to train new models. This provider is disabled, but it can be re-enabled by changing your data policy.").

      This is usually not the case for paid models -- is OpenRouter just marking this model incorrectly, or does DeepSeek actually train on submitted data?

    • echelon 2 days ago

      Is OpenRouter really open? I see their "main" repo is archived, alongside various smaller projects.

      Is it just the API client bindings that are open, while the core routing service is closed?

      • esafak 2 days ago

        I don't know why they need to claim to be open. Their job is to connect you to providers on the basis of price and the various metrics they track. Open or closed makes no difference to me.

        • wongarsu 2 days ago

          I always interpreted it as "open" as in "open market".

          It's a frictionless marketplace connecting inference providers and customers, creating a more competitive market. Or a more "open" market, if you play a bit fast and loose with the terminology.

        • echelon 2 days ago

          It's in the name. Why not name themselves ModelRouter or something similar?

          If they lead the market, they'll extract value in lots of ways that an open company could at least be compelled not to. Plus there won't be competition.

          They're probably selling your data to LLM companies and you don't even see what they're doing.

          Without competition, they'll raise their rates.

          If they were open, you could potentially run the offering on-prem. You could bolt on new providers or use it internally for your own routing.

          Lots of reasons.

          • burkaman 2 days ago

            Here's an open source alternative you can self-host: https://llmgateway.io/

            I think it's just called OpenRouter because the founder previously started OpenSea (an NFT marketplace), and also probably to sound a bit similar to OpenAI. It's like companies calling their products "natural" or "organic" or "artisan" when they can get away with it, just a marketing strategy of using words that conjure up vaguely positive connotations in your mind.

            • smakosh 2 days ago

              Fun fact: we own closedrouter.ai and it redirects to llmgateway.io

          • esafak 2 days ago

            They can't raise their prices much because providers have the upper hand, so users will always be able to go directly to the source. I use OpenRouter as well as OpenAI, Anthropic, Google, etc.

  • eric15342335 2 days ago

    Not sure if I'm getting this right:

    They trained a lightweight "indexer" to mimic the full attention distribution and keep only the top-k (k=2048) most important tokens, so the expensive query-key attention no longer scales with the context window. Total cost still grows roughly linearly in the graph, because the indexer still has to scan the entire context, which is the O(L) part, but that scan is very rough and cheap, which is where the speed-up comes from.
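
    A minimal numpy sketch of that top-k idea, as I read it (the names and the stand-in indexer scores are illustrative, not DeepSeek's actual DSA code):

      import numpy as np

      def topk_sparse_attention(q, K, V, indexer_scores, k=2048):
          # indexer_scores: cheap O(L) relevance scores from the lightweight "indexer".
          # Only the top-k scored tokens get full softmax attention, so the expensive
          # part works on k tokens instead of the whole L-token context.
          k = min(k, K.shape[0])
          top_idx = np.argpartition(indexer_scores, -k)[-k:]  # O(L) selection
          logits = K[top_idx] @ q / np.sqrt(q.shape[-1])      # attention over k tokens only
          weights = np.exp(logits - logits.max())
          weights /= weights.sum()
          return weights @ V[top_idx]

      # Toy usage: 8192-token context, 64-dim head, keep only the top 2048 tokens.
      rng = np.random.default_rng(0)
      L, d = 8192, 64
      q, K, V = rng.normal(size=d), rng.normal(size=(L, d)), rng.normal(size=(L, d))
      out = topk_sparse_attention(q, K, V, indexer_scores=K @ q, k=2048)  # stand-in scores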

  • Havoc 2 days ago

    wow...gigantic reduction in cost while holding the benchmarks mostly steady. Impressive.

  • grim_io 2 days ago

    One huge problem with these "cheap" models is that they happen to be more expensive in the typical agent workflow if the provider does not support caching.

    Input and output costs are peanuts compared to the order-of-magnitude (or more) larger number of tokens that hit the cache.

    At that point you might as well use GPT-5. It will be the same price or cheaper, and more capable.

    • JimDabell 2 days ago

      > One huge problem with these "cheap" models is that they happen to be more expensive in the typical agent workflow if the provider does not support caching.

      DeepSeek supports caching and cache hits are a tenth of the cost.

      $0.028/M for cache hit

      $0.28/M for cache miss

      $0.42/M for output

      https://api-docs.deepseek.com/news/news250929
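
      At these prices the cache-hit rate dominates agent-style bills. A rough illustration with made-up token counts (my own sketch, not DeepSeek's numbers):

        # Cost of a hypothetical agent session at the prices quoted above.
        PRICE_HIT, PRICE_MISS, PRICE_OUT = 0.028, 0.28, 0.42  # $ per million tokens

        def session_cost(cached_in, fresh_in, out):
            return (cached_in * PRICE_HIT + fresh_in * PRICE_MISS + out * PRICE_OUT) / 1e6

        # e.g. 5M input tokens re-read from cache, 0.5M fresh input, 0.2M output
        with_cache = session_cost(cached_in=5_000_000, fresh_in=500_000, out=200_000)
        no_cache = session_cost(cached_in=0, fresh_in=5_500_000, out=200_000)
        print(f"with caching:    ${with_cache:.2f}")   # ~$0.36
        print(f"without caching: ${no_cache:.2f}")     # ~$1.62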

      • grim_io 2 days ago

        I auto-disqualify the Chinese first-party endpoints.

        If they are okay for you, then sure go ahead. Enjoy the caching.

        What other provider is going to support it?

        • JimDabell 2 days ago

          > I auto-disqualify the Chinese first-party endpoints.

          Why?

        • guluarte 2 days ago

          By your logic, you'd then have to disqualify the OpenAI and Anthropic first-party endpoints for testing GPT and Claude...

          • grim_io 2 days ago

            There is no bug in my logic. Anthropic and OpenAI are not Chinese first-party providers.

    • segmondy 2 days ago

      You declared a huge problem and followed it up with an "if".

      The DeepSeek API supports caching; stop manufacturing problems where there are none.

      https://api-docs.deepseek.com/guides/kv_cache

      • grim_io 2 days ago

        Sure. But there is no way I'm going to use the DeepSeek endpoint.

        OpenRouter says they might use your data for training.

        • cheema33 2 days ago

          First you complained about the lack of caching. When you were informed that the model supports caching, instead of admitting your error you switched to an unrelated complaint. I hope you do not use similar discussion strategies in your personal and work life.

          • grim_io 2 days ago

            Your broad attack on me as a person is unnecessary.

            If you read my post carefully, you will realize that I did not make any contradictory statements.

            • llllm 2 days ago

              Not a broad attack, it is specifically targeted at your proud xenophobia.

              • grim_io 20 hours ago

                Absolutely ridiculous.

                My wife is Chinese.

        • segmondy 2 days ago

          Caching is not a function of the model but of the provider; any model can be cached. The provider serving the model decides whether to cache it. OpenRouter is not a provider but a middleman between providers, so some of their DeepSeek providers might offer caching and some might not; if you just use whichever one you're routed to, you might run into the issue. Likewise, some of their providers might use your data for training and some might not. You have to look at the list and cherry-pick the ones that won't train on your data and that also provide caching.
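
          For what it's worth, OpenRouter lets you express provider preferences per request. A hedged Python sketch (the exact "provider" fields are my recollection of their docs and may have changed -- treat them as assumptions and verify):

            import requests

            # Sketch: ask OpenRouter to route only to providers matching your data policy.
            resp = requests.post(
                "https://openrouter.ai/api/v1/chat/completions",
                headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
                json={
                    "model": "deepseek/deepseek-chat",
                    "messages": [{"role": "user", "content": "hello"}],
                    "provider": {
                        "data_collection": "deny",  # skip providers that may train on prompts
                        "order": ["DeepSeek"],      # or pin specific providers you trust
                        "allow_fallbacks": False,
                    },
                },
            )
            print(resp.json())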

    • NotMichaelBay 2 days ago

      I was under the impression that this model does support caching. The pricing page says the cost of input tokens (cache hit) is $0.028/M.

  • mmastrac 2 days ago

    Interesting that models still evolve fast enough that dedicated model-specific hardware isn't a big contender right now. We're still seeing major scaling gains on mostly generic platforms.

    • gunalx 2 days ago

      Google TPUs, Groq, and Cerebras need to be mentioned, even if they are optimized for more general architectures.

  • terespuwash 2 days ago

    Looks like DeepSeek Sparse Attention can help with code (structured and long-file reasoning).

  • impact_sy 2 days ago

    Prices fall, benchmarks remain stable. Maybe in the future, LLM providers will spend most of their money on electricity.

  • wwizo 2 days ago

    You guys rock! I'm very curious how this will perform on real-world data, where small nuances matter. Also, have you tested it beyond the 128K context window?

  • matrix2596 2 days ago

    Awesome to see sparse attention used in a real-world setting.

  • ramshanker 2 days ago

    What happened to Meta's open-weights models? Lately I keep hearing more about DeepSeek than Llama.

    • Alifatisk 2 days ago

      Weren't Llama 4 Maverick and Scout a flop?