76 comments

  • tarruda 5 hours ago

    This is probably one of the most underrated LLM releases in the past few months. In my local testing with a 4-bit quant (https://huggingface.co/ubergarm/Step-3.5-Flash-GGUF/tree/mai...), it surpasses every other LLM I was able to run locally, including Minimax 2.5 and GLM-4.7, though I was only able to run GLM with a 2-bit quant. Some highlights:

    - Very context efficient: SWA by default; on a 128GB Mac I can run the full 256k context or two 128k context streams (rough llama-server invocation below).

    - Good speeds on Macs. On my M1 Ultra I get 36 t/s tg and 300 t/s pp. Also, these speeds degrade very slowly as context increases: at 100k prefill, it has 20 t/s tg and 129 t/s pp.

    - Trained for agentic coding. I think it is trained to be compatible with Claude Code, but it works fine with other CLI harnesses except for Codex (due to the patch edit tool, which can confuse it).
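    For reference, those two context setups map to something like this with llama-server (the GGUF filename here is just an example; --parallel splits the total context across independent slots):

        # one 256k-context stream
        llama-server -m Step-3.5-Flash-IQ4_XS.gguf -c 262144

        # or two independent 128k streams
        llama-server -m Step-3.5-Flash-IQ4_XS.gguf -c 262144 --parallel 2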

    This is the first local LLM in the 200B parameter range that I find usable with a CLI harness. I've been using it a lot with pi.dev, and it has been the best experience I've had with a local LLM doing agentic coding.

    There are a few drawbacks though:

    - It can generate some very long reasoning chains.

    - Current release has a bug where sometimes it goes into an infinite reasoning loop: https://github.com/ggml-org/llama.cpp/pull/19283#issuecommen...

    Hopefully StepFun will do a new release which addresses these issues.

    BTW, StepFun seems to be the same company that released ACEStep (a very good music generation model). At least, StepFun is mentioned in the ComfyUI docs: https://docs.comfy.org/tutorials/audio/ace-step/ace-step-v1

    • ipython 3 minutes ago

      Curious how (or if) changes to the inference engine can fix the issue with infinitely long reasoning loops.

      It's my layman's understanding that this would have to be fixed in the model weights themselves?

    • terhechte 3 hours ago

      Did you try an MLX version of this model? In theory it should run a bit faster. I'm hesitant to download multiple versions though.

      • tarruda 3 hours ago

        Haven't tried. I'm too used to llama.cpp at this point to switch to something else. I like being able to just run a model and automatically get:

        - OpenAI completions endpoint

        - Anthropic messages endpoint

        - OpenAI responses endpoint

        - A slick looking web UI

        Without having to install anything else.
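        To give an idea, once llama-server is up (default port 8080), the OpenAI-style chat endpoint is just there, something like:

            curl http://localhost:8080/v1/chat/completions \
              -H "Content-Type: application/json" \
              -d '{"messages": [{"role": "user", "content": "hello"}]}'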

      • KerrAvon 18 minutes ago

        Is there a reliable way to run MLX models? On my M1 Max, LM Studio sometimes outputs garbage through the API server even when the LM Studio chat with the same model is perfectly fine. llama.cpp variants generally just work.

    • lostmsu 2 hours ago

      gpt-oss 120b and even 20b work OK with Codex.

      • tarruda an hour ago

        Both gpt-oss models are great for coding in a single turn, but I feel that they forget context too easily.

        For example, when I tried gpt-oss 120b with Codex, it would very easily forget something present in the system prompt: "use `rg` command to search and list files".

        I feel like gpt-oss has a lot of potential for agentic coding, but it needs to be constantly reminded of what is happening. Maybe a custom harness developed specifically for gpt-oss could make both models viable for long agentic coding sessions.

  • lm2s an hour ago

    Loved reading the reasoning[0] for the recent "Walk or drive to the carwash" trick.

    [0] https://gist.github.com/lm2s/c4e3260c3ca9052ec200b19af9cfd70...

    Not sure if it's directly accessible, but here's the link: https://stepfun.ai/chats/213451451786883072

  • anentropic 7 hours ago

    > 51.0% on Terminal-Bench 2.0, proving its ability to handle sophisticated, long-horizon tasks with unwavering stability

    I don't know anything about TerminalBench, but on the face of it a 51% score on a test metric doesn't sound like it would guarantee 'unwavering stability' on sophisticated long-horizon tasks

    • networked 5 hours ago

      51% doesn't tell you much by itself. Benchmarks like this are usually not graded on a curve and aren't calibrated so that 100% is the performance level of a qualified human. You could design a superhuman benchmark where 10% was the human level of performance.

      Looking at https://www.tbench.ai/leaderboard/terminal-bench/2.0, I see that the current best score is 75%, meaning 51% is about two-thirds of SOTA.

      • andai 3 hours ago

        This is interesting: TFA lists Opus at 59, which is the same as Claude Code with Opus on the page you linked. But it has the Droid agent with Opus scoring 69, which means the CC harness loses Opus 10 points on this benchmark.

        I'm reminded of https://swe-rebench.com/ where Opus actually does better without CC. (Roughly same score but half the cost!)

    • pitched 5 hours ago

      That score is on par with Gemini 3 Flash, but from scrolling through the results, these scores look much more affected by the agent used than by the model.

      • varispeed 5 hours ago

        Gemini 3 Flash is pure rubbish. It can easily get into loop mode and spout information no different from a Markov chain, repeating it over and over.

    • YetAnotherNick 4 hours ago

      TerminalBench is about the worst-named benchmark. It has almost nothing to do with the terminal, just random tools' syntax. Also, it's not agentic for most tasks if the model has memorized some random tool's command-line flags.

      • esafak 2 hours ago

        What do you mean? It tests whether the model knows the tools and uses them.

        • YetAnotherNick 2 hours ago

          Yeah, it's a knowledge benchmark, not an agentic benchmark.

          • esafak an hour ago

            That's like saying coding benchmarks are about memorizing the language syntax. You have to know what to call when and how. If you get the job done you win.

            • YetAnotherNick an hour ago

              I am saying the opposite. If a coding benchmark just tests the syntax of an esoteric language, it shouldn't be called a coding benchmark.

              For a benchmark named Terminal Bench, I would assume it would require some terminal "interaction", not just giving the code and commands.

  • hedgehog an hour ago

    In a quick test with a few of my standard prompts, some observations: 1) the trace was very verbose and written in an odd style reminiscent of chat or those annoying one-sentence-per-paragraph LinkedIn posts; 2) the token output rate is very high on the hosted version; 3) conformance to instructions and output quality were better than most of the leading models I've tested (e.g. Opus 4.5).

  • danieltanfh95 12 hours ago

    Hallucinates like crazy. Use with caution. Tested it with a simple "Find me championship decks for X pokemon" and "How does Y deck work". Opus 4.6, Deepseek, and Kimi all performed well, as expected.

    • esafak 2 hours ago

      I would use a medium-sized model for execution, not for its knowledge.

    • mickeyp 8 hours ago

      I mean, is it possible the latter models used search? Not saying Stepfun's perfect (it is not). Gemini especially, and unsurprisingly, uses search a lot, and it is ridiculously fast, too.

  • kristianp 15 hours ago

    Recent model released a couple of weeks ago. "Mixture of Experts (MoE) architecture, it selectively activates only 11B of its 196B parameters per token". Beats Kimi K2.5 and GLM 4.7 on more benchmarks than it loses to them.

    Edit: there are 4-bit quants that can be run on a 128GB machine like a GB10 [1], AI Max+ 395, or Mac Studio.

    [1] https://forums.developer.nvidia.com/t/running-step-3-5-flash...

    • Alifatisk 7 hours ago

      > Beats Kimi K2.5 and GLM 4.7 on more benchmarks than it loses to them.

      Does this really mean anything? I, for example, tend to ignore certain benchmarks that are focused on agentic tasks because that is not my use case. Instruction following, long-context reasoning, and non-hallucination carry more weight for me.

    • mycall 4 hours ago

      Q4_K_S @ 116 GB

      IQ4_NL @ 112 GB

      Q4_0 @ 113 GB

      Which of these would be technically better?

      [1] https://huggingface.co/bartowski/stepfun-ai_Step-3.5-Flash-G...

      • KerrAvon 16 minutes ago

        of those, Q4_K_S is better

  • culi 9 hours ago

    It's nice to see more focus on efficiency. All the recent model releases have come with massive jumps on certain benchmarks, but when you dig into it, it's almost always paired with a massive increase in token usage to achieve those results (ahem, Google Deep Think, ahem). For AI to truly be transformational, it needs to solve the electricity problem.

    • tankenmate 9 hours ago

      And not just token usage, expensive token usage; when it comes to tokens/joule not all tokens are equal. Efficient use of MoE architectures does have an impact on tokens/joule and tokens/sec.

      • mzl 5 hours ago

        I like the intelligence-per-watt and intelligence-per-joule framing in https://arxiv.org/abs/2511.07885. It feels like a very useful measure for thinking about long-term sustainable variants of AI build-outs.

  • mohsen1 8 hours ago

    SWE-bench Verified is nice, but we need better SWE benchmarks. Making a fair benchmark is a lot of work, and a lot of money is needed to run it continuously.

    Most of "live" benchmarks are not running enough with recent models to give you a good picture of which models win.

    The idea of a live benchmark is great! There are thousands of GitHub issues that are resolved with a PR every day.

  • janalsncm 8 hours ago

    Number of params isn’t really the relevant metric imo. Top models don’t support local inference. More relevant is tokens per dollar or per second.

    • dakolli 7 hours ago

      It's an open source model; why wouldn't it be relevant for people who want to self-host?

      • janalsncm 5 minutes ago

        This one is open weights but comparing to Gemini/Claude etc. on number of params isn’t relevant outside of a research context imo. Users don’t care how many params Gemini has as long as it’s fast and cheap.

    • qeternity 3 hours ago

      Number of parameters is at least a proxy for model capability.

      You can achieve incredible tok/dollar or tok/sec with Qwen3 0.6b.

      It just won't be very good for most use cases.

      • janalsncm a minute ago

        Model capability is the other axis on their chart. So they could have put Qwen 0.6b there; it would be in the bottom right corner.

        I know what they are trying to do. They are attempting to show a kind of Pareto frontier, but it's a little awkward.

    • lm28469 6 hours ago

      It does, since you can run this model locally on a <$3k machine.

  • tallesborges92 5 hours ago

    I've been using this model for a while, and it's very fast. It spends some time thinking but makes fewer calls. For example, yesterday I asked the agent to find the Gemini quota limit for their API, and it took 27 seconds and just 2 calls; Opus 4.6 took 33 seconds but 5 calls, with less thinking.

  • Mashimo 7 hours ago

    Holy moly, I made a simple coding prompt and the amount of reasoning output could fill a small book.

    > create a single html file with a voxel car that drives in a circle.

    Compared to GLM 4.7 / 5 and Kimi 2.5 it took a while. The output was fast, but because it wrote so much I had to wait longer. Also, the output was... more bare-bones compared to the others.

    • esafak 2 hours ago

      That's how it compensates for its small size. To accomplish a task of a certain difficulty, either you know more and think less, or vice versa.

    • Tepix 6 hours ago

      That's been my experience as well. Huge amounts of reasoning. The model itself is good, but even if you get twice as many tokens per second as with another model, the added amount of reasoning may make it slower in the end.

  • wmf 14 hours ago

    That reversed x-axis sure is confusing.

    • __mharrison__ 18 minutes ago

      Was going to comment the same thing... Not sure what they were thinking there.

    • esafak 11 hours ago

      I imagine they thought they'd look better this way. I don't think they do.

  • prmph 8 hours ago

    Interesting.

    Each time a Chinese model makes the news, I wonder: How come no major models are coming from Japan or Europe?

    • rester324 7 hours ago

      You would be surprised to see how far behind the times the Japanese IT industry is (a decade at least, IMO). There is only a very limited startup culture here (in size, talent pool, and business ideas), there is no real risk-taking venture capital market (maybe Masayoshi Son is the exception, but again, he tends to invest mostly in the US), and most software companies use very, very outdated management practices. On top of that, most software development has been outsourced to India, Vietnam, China, etc., so management sees no value in software talent... SW engineers' social recognition here is mostly on the level of accountants. Under such circumstances, Japan will never have a chance to contribute to AI meaningfully (other than niche academic research).

      • KerrAvon 11 minutes ago

        Seems like the Japanese have had this major blind spot in software engineering since the 90's. Even Sony didn't bother to use what they learned from the PlayStations to produce their own TV OS, outsourcing it to Google. It's as if the 5th generation stuff not working out just burned out that circuit in Japan entirely.

    • Balinares an hour ago

      Model development takes a massive amount of capital. As far as I can tell, capital in Europe is a lot more risk-averse than in other locales.

    • jstummbillig 8 hours ago

      Have you heard of Mistral? I would consider Mistral major, albeit not frontier.

    • WarmWash 3 hours ago

      https://www.businessinsider.com/openclaw-creator-slams-europ...

      Europe is a bad place to try and be successful in tech.

    • citrin_ru 5 hours ago

      1. The US and China are the two biggest economies by GDP. 2. The US is the default destination for worldwide investors (because of historically good returns). China has a huge state economy, and the state can direct investments into this area.

      • lostmsu 2 hours ago

        EU's GDP is higher than China's

    • Tepix 6 hours ago

      The Koreans have released some good models lately. And Mistral is also releasing open-weights models that aren't too shabby.

    • wazoox 6 hours ago

      Have you heard of Pleias? Their SLM Baguettotron is blazingly fast and surprisingly good at reasoning (but it's not programming-oriented).

    • tonis2 7 hours ago

      Cause Europe is only good at writing fines for other tech companies.

  • amelius 6 hours ago

    Does it pass the carwash test?

  • lostmsu 2 hours ago

    Any pelicans from non-quantized variants?

  • sinenomine 8 hours ago

    Works impressively well with pi.dev minimal agent.

  • SilverElfin 12 hours ago

    So who exactly is StepFun? What is their business (how do they make money)? Each time I click “About Stepfun” somewhere on their website, it sends me to a generic landing page in a loop.

    • kristopolous 8 hours ago

      They've been around a couple of years. This is the first model that has really broken into the anglosphere.

      Keep a tab on aihubmix, the Chinese OpenRouter, if you want to stay on top of the latest models. They keep track of things like Baichuan, Doubao, BAAI (Beijing Academy), Meituan, 01.AI (Yi), Xiaomi, etc.

      Much larger Chinese coverage than OpenRouter.

      • tarruda 5 hours ago

        > This is the first model that has really broken into the anglosphere.

        Before Step 3.5 Flash, I'd been hearing a lot about ACEStep as the only open-weights competitor to Suno.

      • Havoc 8 hours ago

        >first model that has really broken into the anglosphere.

        Do you know of a couple of interesting ones that haven't yet?

        • kristopolous 8 hours ago

          Doubao (ByteDance) Seed models are interesting.

          Keep your eye on Baidu's Ernie https://ernie.baidu.com/

          Artificial Analysis is generally on top of everything:

          https://artificialanalysis.ai/leaderboards/models

          Those two are really the new players

          Nanbeige, which they haven't benchmarked, just put out a shockingly good 3B model: https://huggingface.co/Nanbeige - specifically https://huggingface.co/Nanbeige/Nanbeige4.1-3B

          You have to tweak the hyperparameters like they say, but I'm getting quality output, commensurate with maybe a 32B model, in exchange for a huge thinking lag.

          It's the new LFM 2.5

          • admiralrohan 5 hours ago

            Never heard of Nanbeige, thanks for sharing. "Good" is subjective though; for which tasks can I use it, and where should I avoid it?

            • kristopolous 5 hours ago

              It's a 3B model. Fire it up. If you have Ollama, just do this:

                  ollama create nanbeige-custom -f <(curl https://day50.dev/Nanbeige4.1-params.Modelfile)
              
              That has the hyperparameters already in there. Then you can try it out
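              And once that's created, a quick way to poke at it:

                  ollama run nanbeige-custom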

              It's taking up like 2.5GB of RAM.

              My test query is always "compare rust and go with code samples". I'm telling you, the thinking token count is... high...

              Here's what I got https://day50.dev/rust_v_go.md

              I just tried it on a 4GB Raspberry Pi and a 2012-era X230 with an i5-3210. Worked.

              It'll take about 45 minutes on the Pi, which, you know, isn't OOM... so there's that.

          • Havoc 4 hours ago

            Thanks!

    • tarruda 5 hours ago

      They seem to be the same company that released the ACEStep music generation model: https://acestep.io/

      Though the only mention I found was in the ComfyUI docs: https://docs.comfy.org/tutorials/audio/ace-step/ace-step-v1

    • 0x1997 12 hours ago

    • deaux 12 hours ago

      Might want to give it a search.

  • agentifysh 11 hours ago

    what country is behind this one ?