182 comments

  • Tepix 11 hours ago

    Huggingface Link: https://huggingface.co/moonshotai/Kimi-K2.5

    1T parameters, 32B active parameters.

    License: MIT with the following modification:

    Our only modification part is that, if the Software (or any derivative works thereof) is used for any of your commercial products or services that have more than 100 million monthly active users, or more than 20 million US dollars (or equivalent in other currencies) in monthly revenue, you shall prominently display "Kimi K2.5" on the user interface of such product or service.

    • endymi0n 8 hours ago

      One. Trillion. Even on native int4 that’s… half a terabyte of vram?!

      Technical awe aside at this marvel that cracks the 50th percentile of HLE, the snarky part of me says there’s only half the danger in giving something away that nobody can run at home anyway…

      • johndough 6 hours ago

        The model absolutely can be run at home. There even is a big community around running large models locally: https://www.reddit.com/r/LocalLLaMA/

        The cheapest way is to stream it from a fast SSD, but it will be quite slow (one token every few seconds).

        The next step up is an old server with lots of RAM and many memory channels with maybe a GPU thrown in for faster prompt processing (low two digits tokens/second).

        At the high end, there are servers with multiple GPUs with lots of VRAM or multiple chained Macs or Strix Halo mini PCs.

        The key enabler here is that the models are MoE (Mixture of Experts), which means that only a small(ish) part of the model is required to compute the next token. In this case, there are 32B active parameters, which is about 16GB at 4 bit per parameter. This only leaves the question of how to get those 16GB to the processor as fast as possible.
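
        As a rough back-of-the-envelope sketch of that bandwidth bottleneck (the figures below are approximate assumptions, and real decode speed will sit below these upper bounds):

            active_params = 32e9              # active parameters per token
            bytes_per_param = 0.5             # int4 = 4 bits = 0.5 bytes
            bytes_per_token = active_params * bytes_per_param   # ~16 GB streamed per token at batch size 1

            # Rough peak bandwidths (illustrative assumptions, not measurements)
            bandwidths_gb_s = {
                "NVMe SSD (~7 GB/s)": 7,
                "12-channel DDR5 server (~500 GB/s)": 500,
                "Mac Studio M3 Ultra (~800 GB/s)": 800,
                "H100 HBM3 (~3350 GB/s)": 3350,
            }

            for name, bw in bandwidths_gb_s.items():
                print(f"{name}: ~{bw * 1e9 / bytes_per_token:.1f} tokens/s upper bound")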

        • WhitneyLand 4 hours ago

          It's often pointed out in the first sentence of a comment how a model can be run at home, then (maybe) towards the end of the comment it's mentioned how it's quantized.

          Back when 4k movies needed expensive hardware, no one was saying they could play 4k on a home system, then later mentioning they actually scaled down the resolution to make it possible.

          The degree of quality loss is not often characterized. Which makes sense because it’s not easy to fully quantify quality loss with a few simple benchmarks.

          By the time it’s quantized to 4 bits, 2 bits or whatever, does anyone really have an idea of how much they’ve gained vs just running a model that is sized more appropriately for their hardware, but not lobotomized?

          • zozbot234 2 hours ago

            > ...Back when 4k movies needed expensive hardware, no one was saying they could play 4k on a home system, then later mentioning they actually scaled down the resolution to make it possible. ...

            int4 quantization is the original release in this case; it's not been quantized after the fact. It's a bit of a nuisance when running on hardware that doesn't natively support the format (might waste some fraction of memory throughput on padding, specifically on NPU hw that can't do the unpacking on its own) but no one here is reducing quality to make the model fit.

            • WhitneyLand an hour ago

              Good point thanks for the clarification.

              The broader point remains though, which is: “you can run this model at home…” when actually the caveats are potentially substantial.

              It would be so incredibly slow…

          • FuckButtons 3 hours ago

            From my own usage, the former is almost always better than the latter, because it's less like a lobotomy and more like a hangover, though I have run some quantized models that seem still drunk.

            Any model that I can run in 128 GB at full precision is far inferior to the models that I can just barely get to run after REAP + quantization for actually useful work.

            I also read a paper a while back about improvements to model performance in contrastive learning when quantization was included during training as a form of perturbation, to try to force the model to reach a smoother loss landscape. It made me wonder if something similar might work for LLMs, which I think might be what the people over at MiniMax are doing with M2.1, since they released it in fp8.

            In principle, if the model has been effective during its learning at separating and compressing concepts into approximately orthogonal subspaces (and assuming the white box transformer architecture approximates what typical transformers do), quantization should really only impact outliers which are not well characterized during learning.

            • WhitneyLand an hour ago

              Interesting.

              If this were the case however, why would labs go through the trouble of distilling their smaller models rather than releasing quantized versions of the flagships?

          • Gracana an hour ago

            The level of deceit you're describing is kind of ridiculous. Anybody talking about their specific setup is going to be happy to tell you the model and quant they're running and the speeds they're getting, and if you want to understand the effects of quantization on model quality, it's really easy to spin up a GPU server instance and play around.

            • jasonjmcghee an hour ago

              > if you want to understand the effects of quantization on model quality, it's really easy to spin up a GPU server instance and play around

              Fwiw, not necessarily. I've noticed quantized models have strange and surprising failure modes where everything seems to be working well and then the model goes into a death spiral repeating a specific word, or completely fails on one task out of a handful of similar tasks.

              8-bit vs 4-bit can be almost imperceptible or night and day.

              This isn't something you'd necessarily see playing around, but it shows up when trying to do something specific.

          • selfhoster11 3 hours ago

            Except the parent comment said you can stream the weights from an SSD. The full weights, uncompressed. It takes a little longer (a lot longer), but the model at least works without lossy pre-processing.

        • 1dom 5 hours ago

          > The model absolutely can be run at home. There even is a big community around running large models locally

          IMO 1T parameters and 32B active seems like a different scale from what most are talking about when they say local LLMs. Totally agree there will be people messing with this, but the real value in local LLMs is that you can actually use them and get value from them with standard consumer hardware. I don't think that's really possible with this model.

          • zamadatix 3 hours ago

            Local LLMs are just LLMs people run locally. It's not a definition of size, feature set, or what's most popular. What the "real" value is for local LLMs will depend on each person you ask. The person who runs small local LLMs will tell you the real value is in small models, the person who runs large local LLMs will tell you it's large ones, those who use cloud will say the value is in shared compute, and those who don't like AI will say there is no value in any.

            LLMs whose weights aren't available are an example of what's not a local LLM; a model merely being large isn't.

            • 1dom 3 hours ago

              > LLMs which the weights aren't available are an example of when it's not local LLMs, not when the model happens to be large.

              I agree. My point was that most aren't thinking of models this large when they're talking about local LLMs. That's what I said, right? This is supported by the download counts on HF: the most downloaded local models are significantly smaller than 1T, normally 1-12B.

              I'm not sure I understand what point you're trying to make here?

          • zozbot234 5 hours ago

            32B active is nothing special, there's local setups that will easily support that. 1T total parameters ultimately requires keeping the bulk of them on SSD. This need not be an issue if there's enough locality in expert choice for any given workload; the "hot" experts will simply be cached in available spare RAM.

            • spmurrayzzz 4 hours ago

              When I've measured this myself, I've never seen a medium-to-long task horizon that would have expert locality such that you wouldn't be hitting the SSD constantly to swap layers (not to say it doesn't exist, just that in the literature and in my own empirics, it doesn't seem to be observed in a way you could rely on it for cache performance).

              Over any task that has enough prefill input diversity and a decode phase that's more than a few tokens, it's at least intuitive that experts activate nearly uniformly in the aggregate, since they're activated per token. This is why, when you do something more than bs=1, you see forward passes light up the whole network.

              • zozbot234 3 hours ago

                > hitting the SSD constantly to swap layers

                Thing is, people in the local llm community are already doing that to run the largest MoE models, using mmap such that spare-RAM-as-cache is managed automatically by the OS. It's a drag on performance to be sure but still somewhat usable, if you're willing to wait for results. And it unlocks these larger models on what's effectively semi-pro if not true consumer hardware. On the enterprise side, high bandwidth NAND Flash is just around the corner and perfectly suited for storing these large read-only model parameters (no wear and tear issues with the NAND storage) while preserving RAM-like throughput.
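
                A minimal sketch of the mmap idea, with everything (file name, expert count, shapes) made up for illustration; real int4 weights would also need unpacking that numpy can't do natively, so this pretends the experts are fp16:

                    import numpy as np

                    # Hypothetical layout: 384 experts, each a 7168x2048 fp16 matrix, in one flat file.
                    N_EXPERTS, ROWS, COLS = 384, 7168, 2048
                    weights = np.memmap("experts.bin", dtype=np.float16, mode="r",
                                        shape=(N_EXPERTS, ROWS, COLS))

                    def expert_matmul(expert_id: int, x: np.ndarray) -> np.ndarray:
                        # Indexing the memmap faults in only this expert's pages from SSD;
                        # frequently routed ("hot") experts stay resident in the OS page cache,
                        # and cold ones get evicted automatically when RAM runs short.
                        return x @ weights[expert_id]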

            • 1dom 4 hours ago

              I never said it was special.

              I was trying to correct the impression that a lot of people will be running models of this size locally because of the local LLM community.

              The most commonly downloaded local LLMs are normally <30b (e.g. https://huggingface.co/unsloth/models?sort=downloads). The things you're saying, especially when combined together, make it not usable by a lot of people in the local LLM community at the moment.

          • GeorgeOldfield 3 hours ago

            do you guys understand that different experts are loaded PER TOKEN?

        • dev_l1x_be 5 hours ago

          How do you split the model between multiple GPUs?

        • PlatoIsADisease 4 hours ago

          >The model absolutely can be run at home.

          There is a huge difference between "look I got it to answer the prompt: '1+1='"

          and actually using it for anything of value.

          I remember early on people bought Macs (or some marketing team was shoveling them), proposing that people could reasonably run the 70B+ models on them.

          They were talking about 'look it gave an answer', not 'look this is useful'.

          While it was a bit obvious that 'integrated GPU' is not Nvidia VRAM, we did have one Mac laptop at work that validated this.

          It's cool these models are out in the open, but it's going to be a decade before people are running them at a useful level locally.

          • esafak 3 hours ago

            Hear, hear. Even if the model fits, a few tokens per second make no sense. Time is money too.

            • tempoponet 2 hours ago

              Maybe for a coding agent, but a daily/weekly report on sensitive info?

              If it were 2016 and this technology existed but only at 1 t/s, every company would find a way to extract the most leverage out of it.

              • michaellee8 an hour ago

                If they had figured out it could be this useful in 2016 running at 1 t/s, they would have made it run at least 20 t/s by 2019.

              • esafak an hour ago

                But it's 2026 and 'secure' (by executive standards) hosted options exist.

      • wongarsu 7 hours ago

        Which conveniently fits on one 8xH100 machine. With 100-200 GB left over for overhead, kv-cache, etc.
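
        Back-of-the-envelope, assuming the 80GB H100 variant and pure int4 weights (the published checkpoint is somewhat larger since not every tensor is int4, which eats into this margin):

            total_params = 1e12
            weights_gb = total_params * 0.5 / 1e9   # int4 = 0.5 bytes/param -> ~500 GB
            vram_gb = 8 * 80                        # 8x H100 80GB -> 640 GB
            print(vram_gb - weights_gb)             # ~140 GB left for KV cache and overhead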

      • the_sleaze_ 26 minutes ago

        3,998.99 for 500gb of RAM on amazon

        "Good Luck" - Kimi <Taken voice>

      • Davidzheng 7 hours ago

        That's what intelligence takes. Most of intelligence is just compute.

    • redox99 an hour ago

      Cursor devs, who go out of their way to not mention their Composer model is based on GLM, are not going to like that.

      • msp26 19 minutes ago

        Source? I've heard this rumour twice but never seen proof. I assume it would be based on tokeniser quirks?

    • Imustaskforhelp 10 hours ago

      Hey, have they open-sourced all of Kimi K2.5 (thinking, instruct, agent, agent swarm [beta])?

      Because I feel like they mentioned that agent swarm is available on their API, and that made me feel as if it wasn't open (weights)? Please let me know if all are open source or not.

      • XenophileJKO 8 hours ago

        I'm assuming the swarm part is all harness. Well, I mean a harness and a way of thinking that the weights have just been fine-tuned to use.

    • dheera 10 hours ago

      > or more than 20 million US dollars (or equivalent in other currencies) in monthly revenue, you shall prominently display "Kimi K2.5" on the user interface of such product or service.

      Why not just say "you shall pay us 1 million dollars"?

      • vessenes 9 hours ago

        ? They prefer the branding. The license just says you have to say it was them if you make > $250mm a year on the model.

      • viraptor 9 hours ago

        Companies with $20M revenue will not normally have spare $1M available. They'd get more money by charging reasonable subscriptions than by using lawyers to chase sudden company-ending fees.

        • laurentb 7 hours ago

          it's monthly :) $240M-revenue companies will absolutely find a way to fork over $1M if they need to. Kimi most likely sees the eyeballs of free advertising as more profitable in the grander scheme of things

      • clayhacks 10 hours ago

        I assume this allows them to sue for different amounts. And not discourage too many people from using it.

  • bertili 9 hours ago

    The "Deepseek moment" is just one year ago today!

    Coincidence or not, let's just marvel for a second over the amount of magic/technology that's being given away for free... and how liberating and different this is from OpenAI and others that were closed to "protect us all".

    • motoboi 6 hours ago

      What amazes me is why would someone spend millions to train this model and give it away for free. What is the business here?

      • whizzter 6 hours ago

        The Chinese state maybe sees open collaboration as the way to nullify any US lead in the field; concurrently, if the next "search winner" is built upon their model, it will carry the Chinese worldview that Taiwan belongs to China and the Tiananmen Square massacre never happened.

        Also, their license says that if you have a big product you need to promote them. Remember how Google "gave away" site search widgets, and that was perhaps one of the major ways they gained recognition for being the search leader.

        OpenAI/NVidia is the Pets.com/Sun of our generation, insane valuations, stupid spend, expensive options, expensive hardware and so on.

        Sun hardware bought for 50k USD to run websites in 2000 is less capable than perhaps a 5 dollar/month VPS today?

        "Scaling to AGI/ASI" was always a fools errand, best case OpenAI should've squirreled away money to have a solid engineering department that could focus on algorithmic innovations but considering that Antrophic, Google and Chinese firms have caught up or surpassed them it seems they didn't.

        Once things blow up, those closed options that had somewhat sane/solid model research that handles things better will be left, along with a ton of new competitors running modern/cheaper hardware and just using models as building blocks.

        • dev_l1x_be 5 hours ago

          > Taiwan belongs to China

          So they are on the same page as the UN and US?

          The One China policy refers to a United States policy of strategic ambiguity regarding Taiwan.[1] In a 1972 joint communiqué with the PRC, the United States "acknowledges that all Chinese on either side of the Taiwan Strait maintain there is but one China and that Taiwan is a part of China" and "does not challenge that position."

          https://en.wikipedia.org/wiki/One_China https://en.wikipedia.org/wiki/Taiwan_and_the_United_Nations

          • 9cb14c1ec0 3 hours ago

            The One China policy is a fiction of foreign policy statecraft, designed to sideline the issue without having to actually deal with it. It is quite clear that apart from the official fiction there is a real policy that is not One China. This is made clear by the weapons sales to Taiwan that are specifically calibrated to make a Chinese military action harder.

          • pqtyw 2 hours ago

            Existence of an independent and effectively sovereign state on the island of Taiwan (however one calls it) is a fact. Whatever doublespeak governments of other countries or international organizations engage in due to political reasons does not change that.

        • zozbot234 5 hours ago

          > "Scaling to AGI/ASI" was always a fools errand

          Scaling depends on hardware, so cheaper hardware on a compute-per-watt basis only makes scaling easier. There is no clear definition of AGI/ASI but AI has already scaled to be quite useful.

        • two_tasty 2 hours ago

          I love how Tiananmen square is always brought up as some unique and tragic example of disinformation that could never occur in the west, as though western governments don't do the exact same thing with our worldview. Your veneer of cynicism scarcely hides the structure of naivety behind.

          • igneo676 an hour ago

            The difference is that, in the west, there's an acceptable counter narrative. I can tell you that Ruby Ridge and Waco never should've happened and were examples of government overreach and massacre of its own citizens. Or <insert pet issue with the government here>

            You can't with Tiananmen Square in China.

      • deskamess 35 minutes ago

        I think there is a book (Chip War) about how the USSR did not effectively participate in staying at the edge of the semiconductor revolution. And they have suffered for it.

        China has decided they are going to participate in the LLM/AGI/etc revolution at any cost. So it is a sunk cost, and the models are just an end product and any revenue is validation and great, but not essential. The cheaper price points keep their models used and relevant. It challenges the other (US, EU) models to innovate and keep ahead to justify their higher valuations (both monthly plan, and investor). Once those advances are made, they can be brought back to their own models. In effect, the currently leading models are running from a second-place candidate who never gets tired and eventually does what they do at a lower price point.

        • kaibee 22 minutes ago

          In some way, the US won the cold war by spending so much on military that the USSR, in trying to keep up, collapsed. I don't see any parallels between that and China providing infinite free compute to their AI labs, why do you ask?

      • Balinares 5 hours ago

        Speculating: there are two connected businesses here, creating the models, and serving the models. Outside of a few moneyed outliers, no one is going to run this at home. So at worst opening this model allows mid-sized competitors to serve it to customers from their own infra -- which helps Kimi gain mindshare, particularly against the large incumbents who are definitely not going to be serving Kimi and so don't benefit from its openness.

        Given the shallowness of moats in the LLM market, optimizing for mindshare would not be the worst move.

      • tokioyoyo 4 hours ago

        Moonshot’s (Kimi’s owner) investors are Alibaba/Tencent et al. Chinese market is stupidly competitive, and there’s a general attitude of “household name will take it all”. However getting there requires having a WeChat-esque user base, through one way or another. If it’s paid, there’ll be friction and it won’t work. Plus, it undermines a lot of other companies, which is a win for a lot of people.

      • ggdG 6 hours ago

        I think this fits into some "Commoditize The Complement" strategy.

        https://gwern.net/complement

      • testfrequency 6 hours ago

        Curious to hear what “OpenAI” thinks the answer to this is

      • YetAnotherNick 6 hours ago

        Hosting the model gets cheaper per token the more batched tokens you serve. So they have a big advantage here.

      • WarmWash 3 hours ago

        It's another state project funded at the discretion of the party.

        If you look at past state projects, profitability wasn't really considered much. They are notorious for a "money hose until a diamond is found in the mountains of waste" approach.

    • jimmydoe 5 hours ago

      It’s not coincidence. Chinese companies tend to do big releases before Chinese new year. So expect more to come before Feb 17.

    • PlatoIsADisease 3 hours ago

      I am convinced that was mostly just marketing. No one uses DeepSeek as far as I can tell. People are not running it locally. People choose GPT/Gemini/Claude/Grok if they are giving their data away anyway.

      The biggest source of my conspiracy theory is that I made a Reddit thread asking a question: "Why all the DeepSeek hype" or something like that. And to this day, I get odd 'pro-DeepSeek' comments from accounts only used every few months. It's not like this was some highly upvoted topic that is in the 'Top'.

      I'd put that deepseek marketing on-par with an Apple marketing campaign.

      • logicprog 3 hours ago

        I don't use DeepSeek, but I prefer Kimi and GLM to closed models for most of my work.

      • mekpro 3 hours ago

        Except that, on OpenRouter, DeepSeek always stays in the top 10 ranking. Although I don't use it personally, I believe their main advantage over other models is price/performance.

    • catigula 4 hours ago

      I mean, there are credible safety issues here. A Kimi fine-tune will absolutely be able to help people do cybersecurity related attacks - very good ones.

      In a few years, or less, biological attacks and other sorts of attacks will be plausible with the help of these agents.

      Chinese companies aren't humanitarian endeavors.

  • jumploops 12 hours ago

    > For complex tasks, Kimi K2.5 can self-direct an agent swarm with up to 100 sub-agents, executing parallel workflows across up to 1,500 tool calls.

    > K2.5 Agent Swarm improves performance on complex tasks through parallel, specialized execution [..] leads to an 80% reduction in end-to-end runtime

    Not just RL on tool calling, but RL on agent orchestration, neat!

    • storystarling 2 hours ago

      1,500 tool calls per task sounds like a nightmare for unit economics though. I've been optimizing my own agent workflows and even a few dozen steps makes it hard to keep margins positive, so I'm not sure how this is viable for anyone not burning VC cash.

      • zozbot234 2 hours ago

        "tool call" is just a reference to any elementary interaction with the outside system. It's not calling third-party APIs or anything like that.

    • XCSme 6 hours ago

      > Kimi K2.5 can self-direct an agent swarm

      Is this within the model? Or within the IDE/service that runs the model?

      Because tool calling is mostly just the agent outputting "call tool X", and the IDE does it and returns the data back to the AI's context.

      • mzl 6 hours ago

        An LLM only outputs tokens, so this could be seen as an extension of tool calling, where the model has been trained on the knowledge and use cases for "tool-calling" itself as a sub-agent.

        • XCSme 5 hours ago

          Ok, so agent swarm = tool calling where the tool is an LLM call and the argument is the prompt

          • IanCal 3 hours ago

            Yes largely, although they’ve trained a model specifically for this task rather than using the base model and a bit of prompting.

          • dcre 4 hours ago

            Sort of. It’s not necessarily a single call. In the general case it would be spinning up a long-running agent with various kinds of configuration — prompts, but also coding environment and which tools are available to it — like subagents in Claude Code.
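
            As a rough sketch of what the harness side of that might look like; the client and its run_agent_loop method are entirely hypothetical stand-ins, not Kimi's or Claude Code's actual API:

                import asyncio

                async def run_subagent(client, prompt: str, tools: list[str]) -> str:
                    # Each sub-agent gets a fresh, short context and a restricted tool set,
                    # runs its own tool-calling loop, and returns only a compact summary.
                    return await client.run_agent_loop(
                        system="You are a focused sub-agent.",
                        prompt=prompt,
                        allowed_tools=tools,
                    )

                async def spawn_subagents(client, tasks: list[dict]) -> list[str]:
                    # Parallel fan-out; the orchestrator model only ever sees the joined
                    # summaries, which is where the context-length savings come from.
                    return await asyncio.gather(
                        *(run_subagent(client, t["prompt"], t.get("tools", [])) for t in tasks)
                    )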

    • mohsen1 7 hours ago

      Parallel agents are such a simple, yet powerful hack. Using it in Claude Code with TeammateTool and getting lots of good results!

  • vinhnx 9 hours ago

    One thing that caught my eye is that besides the K2.5 model, Moonshot AI also launched Kimi Code (https://www.kimi.com/code), evolved from Kimi CLI. It is a terminal coding agent; I've been using it for the last month with a Kimi subscription, and it is a capable agent with a stable harness.

    GitHub: https://github.com/MoonshotAI/kimi-cli

    • forgotpwd16 5 hours ago

      >Kimi Code CLI is not only a coding agent, but also a shell.

      That's cool. It also has a zsh hook, allowing you to switch to agent mode wherever you are.

      • vinhnx 4 hours ago

        It is. Kimi Code CLI supports Zed's Agent Client Protocol (http://agentclientprotocol.com/), so it can act as an external agent that runs in any ACP-compatible client, e.g. Zed, JetBrains, Toad CLI, Minano Notebook. Also, it supports Agent Skills. The Moonshot AI developers actively update the agent and are very active. I really like their CLI.

    • esafak 3 hours ago

      Does it support the swarm feature? Does Opencode?

    • Imanari 5 hours ago

      How does it fare against CC?

  • Alifatisk 8 hours ago

    Have you all noticed that the latest releases (Qwen3 Max Thinking, now Kimi K2.5) from Chinese companies are benching against Claude Opus now and not Sonnet? They are truly catching up, almost at the same pace.

    • conception 4 hours ago

      https://clocks.brianmoore.com

      K2 is one of the only models to nail the clock face test as well. It’s a great model.

      • DJBunnies 4 hours ago

        Cool comparison, but none of them get both the face and the time correct when I look at it.

    • WarmWash 3 hours ago

      They distill the major western models, so anytime a new SOTA model drops, you can expect the Chinese labs to update their models within a few months.

      • zozbot234 3 hours ago

        This is just a conspiracy theory/urban legend. How do you "distill" a proprietary model with no access to the original weights? Just doing the equivalent of training on chat/API logs has terrible effectiveness (you're trying to drink from a giant firehose through a tiny straw) and gives you no underlying improvements.

      • Balinares an hour ago

        Source?

      • Alifatisk 2 hours ago

        Yes, they do distill. But saying that all they do is distill is not correct and actually kind of unfair. These Chinese labs have done lots of research in this field and publish it to the public, and some if not the majority contribute open-weight models, making a future of local LLMs possible! DeepSeek, Moonshot, MiniMax, Z.ai, Alibaba (Qwen).

        They are not just leeching here; they took this innovation, refined it and improved it further. This is what the Chinese are good at.

    • esafak 3 hours ago

      They are, in benchmarks. In practice Anthropic's models are ahead of where their benchmarks suggest.

      • HNisCIS 2 hours ago

        Bear in mind that lead may be, in large part, from the tooling rather than the model

    • zozbot234 8 hours ago

      The benching is sus, it's way more important to look at real usage scenarios.

  • Reubend 11 hours ago

    I've read several people say that Kimi K2 has a better "emotional intelligence" than other models. I'll be interested to see whether K2.5 continues or even improves on that.

    • mohsen1 3 hours ago

      I'll test it out on mafia-arena.com once it is available on Open Router

    • Alifatisk 6 hours ago

      Yup, I experience the same. I don't know what they do to achieve this but it gives them this edge, really curious to learn more about what makes it so good at it.

    • storystarling 10 hours ago

      Yes, though this is highly subjective - it 'feels' like that to me as well (compared to Gemini 3, GPT 5.2, Opus 4.5).

  • Topfi 10 hours ago

    K2 0905, and K2 Thinking shortly after it, have done impressively well in my personal use cases and were severely slept on. Faster, more accurate, less expensive, more flexible in terms of hosting, and available months before Gemini 3 Flash; I really struggle to understand why Flash got such positive attention at launch.

    Interested in the dedicated Agent and Agent Swarm releases, especially in how that could affect third party hosting of the models.

    • msp26 10 hours ago

      K2 thinking didn't have vision which was a big drawback for my projects.

  • zmmmmm 11 hours ago

    Curious what would be the most minimal reasonable hardware one would need to deploy this locally?

    • NitpickLawyer 10 hours ago

      I parsed "reasonable" as in having reasonable speed to actually use this as intended (in agentic setups). In that case, it's a minimum of 70-100k for hardware (8x 6000 PRO + all the other pieces to make it work). The model comes with native INT4 quant, so ~600GB for the weights alone. An 8x 96GB setup would give you ~160GB for kv caching.

      You can of course "run" this on cheaper hardware, but the speeds will not be suitable for actual use (i.e. minutes for a simple prompt, tens of minutes for high context sessions per turn).

    • simonw 8 hours ago

      Models of this size can usually be run using MLX on a pair of 512GB Mac Studio M3 Ultras, which are about $10,000 each so $20,000 for the pair.

      • PlatoIsADisease 3 hours ago

        You might want to clarify that this is more of a "Look it technically works"

        Not a "I actually use this"

        The difference between waiting 20 minutes to answer the prompt '1+1='

        and actually using it for something useful is massive here. I wonder where this idea of running AI on CPU comes from. Was it Apple astroturfing? Was it Apple fanboys? I don't see people wasting time on non-Apple CPUs. (Although, I did do this for a 7B model)

        • mholm 2 hours ago

          The reason Macs get recommended is the unified memory, which is usable as VRAM for the GPU. People are similarly using the AMD Strix Halo for AI which also has a similar memory architecture. Time to first token for something like '1+1=' would be seconds, and then you'd be getting ~20 tokens per second, which is absolutely plenty fast for regular use. Token/s slows down at the higher end of context, but it's absolutely still practical for a lot of usecases. Though I agree that agentic coding, especially over large projects, would likely get too slow to be practical.

          • zozbot234 2 hours ago

            Not too slow if you just let it run overnight/in the background. But the biggest draw would be no rate limits whatsoever compared to the big proprietary APIs, especially Claude's. No risk of sudden rugpulls either, and the model will have very consistent performance.

          • PlatoIsADisease an hour ago

            We are getting into a debate between particulars and universals. To call the 'unified memory' VRAM is quite a generalization. Whatever the case, we can tell from stock prices that whatever this VRAM is, it's nothing compared to NVIDIA.

            Anyway, we were trying to run a 70B model on a MacBook (can't remember which M model) at a Fortune 20 company; it never became practical. We were trying to compare strings of character length ~200. It was like 400-ish characters plus a pre-prompt.

            I can't imagine this being reasonable on a 1T model, let alone the 400B models of deepseek and LLAMA.

            • Gracana 22 minutes ago

              With 32B active parameters, Kimi K2.5 will run faster than your 70B model.

            • simonw an hour ago

              Here's a video of a previous 1T K2 model running using MLX on a pair of Mac Studios: https://twitter.com/awnihannun/status/1943723599971443134 - performance isn't terrible.

              • PlatoIsADisease 44 minutes ago

                Is there a catch? I was not getting anything like this on a 70B model.

                EDIT: oh, it's a marketing account and the program never finished... who knows the validity.

        • simonw an hour ago

          MLX uses the GPU.

          That said, I wouldn't necessarily recommend spending $20,000 on a pair of Mac Studios to run models like this. The performance won't be nearly as good as the server-class GPU hardware that hosted models run on.

        • tucnak 3 hours ago

          The Mac Studio way is not "AI on CPU," as M2/M4 are complex SoCs that include a GPU with unified memory access.

          • PlatoIsADisease an hour ago

            If it worked IRL for anything useful, I'd be more interested in the technical differences. But it was a mere toy for a few tests at my fortune 20 company.

            Language is full of issues of particulars vs universals, and you could debate whether it's just an integrated GPU with different marketing.

            Whatever the case, we couldn't use it in production, and NVIDIAs stock price reflects the reality on the ground.

    • tosh 8 hours ago

      I think you can put a bunch of Apple Silicon Macs with enough RAM together

      e.g. in an office or coworking space

      800-1000 GB of RAM perhaps?

  • throwaw12 8 hours ago

    Congratulations, great work Kimi team.

    Why is it that Claude is still at the top in coding? Are they heavily focused on training for coding, or is their general training so good that it performs well in coding?

    Someone please beat Opus 4.5 in coding, I want to replace it.

    • pokot0 4 hours ago

      I don't think that kind of difference in benchmarks has any meaning at all. Your agentic coding tool and the task you are working on introduce a lot more "noise" than that small delta.

      Also consider they are all overfitting on the benchmark itself, so there might be that as well (which can go in either direction).

      I consider the top models practically identical for coding applications (just personal experience with heavy use of both GPT5.2 and Opus 4.5).

      Excited to see how this model compares in real applications. It's 1/5th of the price of top models!!

    • Balinares 5 hours ago

      I replaced Opus with Gemini Pro and it's just plain a better coder IMO. It'll restructure code to enable support for new requirements where Opus seems to just pile on more indirection layers by default, when it doesn't outright hardcode special cases inside existing functions, or drop the cases it's failing to support from the requirements while smugly informing you you don't need that anyway.

    • symisc_devel 2 hours ago

      Gemini 3 pro is way better than Opus especially for large codebases.

      • redox99 an hour ago

        My experience is the total opposite.

    • MattRix 6 hours ago

      Opus 4.5 only came out two months ago, and yes Anthropic spends a lot of effort making it particularly good at coding.

  • spaceman_2020 12 hours ago

    Kimi was already one of the best writing models. Excited to try this one out

    • Alifatisk 9 hours ago

      To me, Kimi has been the best with writing and conversing; it's way more human-like!

  • simonw 8 hours ago
  • hmate9 8 hours ago

    About 600GB is needed for the weights alone, so on AWS you need a p5.48xlarge (8× H100), which costs $55/hour.

  • Barathkanna 8 hours ago

    A realistic setup for this would be a 16× H100 80GB with NVLink. That comfortably handles the active 32B experts plus KV cache without extreme quantization. Cost-wise we are looking at roughly $500k–$700k upfront or $40–60/hr on-demand, which makes it clear this model is aimed at serious infra teams, not casual single-GPU deployments. I’m curious how API providers will price tokens on top of that hardware reality.

    • wongarsu 7 hours ago

      The weights are int4, so you'd only need 8xH100

    • a2128 5 hours ago

      You don't need to wait and see, Kimi K2 has the same hardware requirements and has several providers on OpenRouter:

      https://openrouter.ai/moonshotai/kimi-k2-thinking https://openrouter.ai/moonshotai/kimi-k2-0905 https://openrouter.ai/moonshotai/kimi-k2-0905:exacto https://openrouter.ai/moonshotai/kimi-k2

      Generally it seems to be in the neighborhood of $0.50/1M for input and $2.50/1M for output

    • reissbaker 8 hours ago

      Generally speaking, 8xH200s will be a lot cheaper than 16xH100s, and faster too. But both should technically work.

      • pama 3 hours ago

        You can do it, and it may be OK for a single user with idle waiting times, but performance/throughput will be roughly halved (closer to 2/3) and free context will be more limited with 8xH200 vs 16xH100 (assuming decent interconnect). Depending a bit on use case and workload, 16xH100 (or 16xB200) may be a better config for cost optimization. Often there is a huge economy of scale with such large mixture-of-experts models, so that it would even be cheaper to use 96 GPUs instead of just 8 or 16. The reasons are complicated and involve better prefill cache and less memory transfer per node.

    • bertili 8 hours ago

      The other realistic setup is $20k, for a small company that needs a private AI for coding or other internal agentic use, with two Mac Studios connected over Thunderbolt 5 RDMA.

      • Barathkanna 8 hours ago

        That won’t realistically work for this model. Even with only ~32B active params, a 1T-scale MoE still needs the full expert set available for fast routing, which means hundreds of GB to TBs of weights resident. Mac Studios don’t share unified memory across machines, Thunderbolt isn’t remotely comparable to NVLink for expert exchange, and bandwidth becomes the bottleneck immediately. You could maybe load fragments experimentally, but inference would be impractically slow and brittle. It’s a very different class of workload than private coding models.

        • bertili 8 hours ago

          People are running the previous Kimi K2 on 2 Mac Studios at 21 tokens/s or 4 Macs at 30 tokens/s. It's still premature, but not a completely crazy proposition for the near future, given the rate of progress.

          • NitpickLawyer 7 hours ago

            > 2 Mac Studios at 21tokens/s or 4 Macs at 30tokens/s

            Keep in mind that most people posting speed benchmarks try them with basically 0 context. Those speeds will not hold at 32/64/128k context length.

        • zozbot234 8 hours ago

          If "fast" routing is per-token, the experts can just reside on SSD's. the performance is good enough these days. You don't need to globally share unified memory across the nodes, you'd just run distributed inference.

          Anyway, in the future your local model setups will just be downloading experts on the fly from experts-exchange. That site will become as important to AI as downloadmoreram.com.

        • omneity 5 hours ago

          RDMA over Thunderbolt is a thing now.

        • YetAnotherNick 6 hours ago

          Depends on whether you are using tensor parallelism or pipeline parallelism; in the second case you don't need any sharing.

      • embedding-shape 8 hours ago

        I'd love to see the prompt processing speed difference between 16× H100 and 2× Mac Studio.

        • zozbot234 8 hours ago

          Prompt processing/prefill can even get some speedup from local NPU use most likely: when you're ultimately limited by thermal/power limit throttling, having more efficient compute available means more headroom.

        • Barathkanna 8 hours ago

          I asked GPT for a rough estimate to benchmark prompt prefill on an 8,192 token input:

          • 16× H100: 8,192 / (20k to 80k tokens/sec) ≈ 0.10 to 0.41s
          • 2× Mac Studio (M3 Max): 8,192 / (150 to 700 tokens/sec) ≈ 12 to 55s

          These are order-of-magnitude numbers, but the takeaway is that multi H100 boxes are plausibly ~100× faster than workstation Macs for this class of model, especially for long-context prefill.

          • ffsm8 6 hours ago

            You do realize that's entirely made up, right?

            Could be true, could be fake - the only thing we can be sure of is that it's made up with no basis in reality.

            This is not how you use llms effectively, that's how you give everyone that's using them a bad name from association

      • zozbot234 8 hours ago

        That's great for affordable local use but it'll be slow: even with the proper multi-node inference setup, the thunderbolt link will be a comparative bottleneck.

  • teiferer 7 hours ago

    Can we please stop calling those models "open source"? Yes, the weights are open. So, "open weight" maybe. But the source isn't open, the thing that would allow you to re-create it. That's what "open source" used to mean. (Together with a license that allows you to use that source for various things.)

    • Onavo 7 minutes ago

      No major AI lab will admit to training on proprietary or copyrighted data so what you are asking is an impossibility. You can make a pretty good LLM if you train on Anna's Archive but it will either be released anonymously, or with a complete research only non commercial license.

      There aren't enough public domain data to create good LLMs, especially once you get into the newer benchmarks that expect PhD level of domain expertise in various niche verticals.

      It's also a logical impossibility to create a zero-knowledge proof that would allow you to attribute outputs to specific training data without admitting to its usage.

  • Jackson__ 10 hours ago

    As your local vision nut, their claims about "SOTA" vision are absolutely BS in my tests.

    Sure it's SOTA at standard vision benchmarks. But on tasks that require proper image understanding, see for example BabyVision[0] it appears very much lacking compared to Gemini 3 Pro.

    [0] https://arxiv.org/html/2601.06521v1

    • nostrebored 2 hours ago

      Gemini remains the only usable vision fm :(

  • striking 11 hours ago
  • pu_pe 9 hours ago

    I don't get this "agent swarm" concept. You set up a task and they boot up 100 LLMs to try to do it in parallel, and then one "LLM judge" puts it all together? Is there anywhere I can read more about it?

    • vessenes 9 hours ago

      You can read about this basically everywhere - the term of art is agent orchestration. Gas town, Claude’s secret swarm mode, or people who like to use phrases like “Wiggum loop” will get you there.

      If you’re really lazy - the quick summary is that you can benefit from the sweet spot of context length and reduce instruction overload while getting some parallelism benefits from farming tasks out to LLMs with different instructions. The way this is generally implemented today is through tool calling, although Claude also has a skills interface it has been trained against.

      So the idea would be for software development, why not have a project/product manager spin out tasks to a bunch of agents that are primed to be good at different things? E.g. an architect, a designer, and so on. Then you just need something that can rectify GitHub PRs and bob’s your uncle.

      Gas town takes a different approach and parallelizes on coding tasks of any sort at the base layer, and uses the orchestration infrastructure to keep those coders working constantly, optimizing for minimal human input.

      • IanCal 8 hours ago

        I'm not sure whether there are parts of this done for Claude, but those other ones are layers on top of the usual LLMs we see. This seems to be a bit different, in that there's a different model trained specifically for splitting up and managing the workload.

    • Rebuff5007 8 hours ago

      I've also been quite skeptical, and I became even more skeptical after hearing a tech talk from a startup in this space [1].

      I think the best way to think about it is that it's an engineering hack to deal with a shortcoming of LLMs: for complex queries, LLMs are unable to directly compute a SOLUTION given a PROMPT, but are instead able to break down the prompt into intermediate solutions and eventually solve the original prompt. These "orchestrator" / "swarm" agents add some formalism to this and allow you to distribute compute, and then also use specialized models for some of the subproblems.

      [1] https://www.deepflow.com/

    • jonkoops 9 hours ago

      The datacenters yearn for the chips.

    • rvnx 9 hours ago

      You have a team lead that establishes a list of tasks that are needed to achieve your mission

      then it creates a list of employees, each of them is specialized for a task, and they work in parallel.

      Essentially hiring a team of people who get specialized on one problem.

      Do one thing and do it well.

      • XCSme 6 hours ago

        But in the end, isn't this the same idea with the MoE?

        Where we have more specialized "jobs", which the model is actually trained for.

        I think the main difference with agent swarm is the ability to run them in parallel. I don't see how this adds much compared to simply sending multiple API calls in parallel with your desired tasks. I guess the only difference is that you let the AI decide how to split those requests and what each task should be.

        • zozbot234 6 hours ago

          Nope. MoE is strictly about model parameter sparsity. Agents are about running multiple small-scale tasks in parallel and aggregating the results for further processing - it saves a lot of context length compared to having it all in a single session, and context length has quadratic compute overhead so this matters. You can have both.
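
          A toy illustration of the quadratic point, with made-up token counts (it counts only attention pairs and ignores the per-token MLP cost, so it overstates the savings somewhat):

              def attention_pairs(n: int) -> int:
                  # attention cost grows roughly as O(n^2) in context length
                  return n * n

              single_session = 100_000        # one agent holding everything in a single context
              subagents = [10_000] * 10       # ten sub-agents with 10k-token contexts each

              print(attention_pairs(single_session))              # 10,000,000,000
              print(sum(attention_pairs(n) for n in subagents))   # 1,000,000,000 -> ~10x fewer pairs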

          One positive side effect of this is that if subagent tasks can be dispatched to cheaper and more efficient edge-inference hardware that can be deployed at scale (think nVidia Jetsons or even Apple Macs or AMD APU's) even though it might be highly limited in what can fit on the single node, then complex coding tasks ultimately become a lot cheaper per token than generic chat.

          • XCSme 5 hours ago

            Yes, I know you can have both.

            My point was that this is just a different way of creating specialised task solvers, the same as with MoE.

            And, as you said, with MoE it's about the model itself, and it's done at training level so that's not something we can easily do ourselves.

            But with agent swarm, isn't it simply splitting a task in multiple sub-tasks and sending each one in a different API call? So this can be done with any of the previous models too, only that the user has to manually define those tasks/contexts for each query.

            Or is this at a much more granular level than this, which would not be feasible to be done by hand?

            I was already doing this in n8n, creating different agents with different system prompts for different tasks. I am not sure if automating this (with swarm) would work well in most of my cases; I don't see how it fully complements Tools or Skills.

            • zozbot234 5 hours ago

              MoE has nothing whatsoever to do with specialized task solvers. It always operates per token within a single task, you can think of it perhaps as a kind of learned "attention" for model parameters as opposed to context data.

              • XCSme 5 hours ago

                Yes, specific weights/parameters have been trained to solve specific tasks (trained on different data).

                Or did I misunderstand the concept of MoE, and it's not about having specific parts of the model (parameters) do better on specific input contexts?

  • dev_l1x_be 5 hours ago

    I had these weird situations where some models refuse to use SSH as a tool. Not sure if it was a limitation of the coding tool or if it is baked into some of the models.

  • jdeng 7 hours ago

    Glad to see open source models are catching up and treating vision as a first-class citizen (a.k.a. native multimodal agentic models). GLM and Qwen models take a different approach, having a base model and a vision variant (glm-4.6 vs glm-4.6v).

    I guess after Kimi K2.5, other vendors are going to go the same route?

    Can't wait to see how this model performs on computer automation use cases like VITA AI Coworker.

    https://www.vita-ai.net/

  • monkeydust 9 hours ago

    Is this actually good or just optimized heavily for benchmarks? I am hopeful it's the former based on the write-up, but I need to put it through its paces.

  • pplonski86 11 hours ago

    There are so many models; is there any website with a list of all of them and a comparison of their performance on different tasks?

    • Reubend 11 hours ago

      The post actually has great benchmark tables inside of it. They might be outdated in a few months, but for now, it gives you a great summary. Seems like Gemini wins on image and video perf, Claude is the best at coding, ChatGPT is the best for general knowledge.

      But ultimately, you need to try them yourself on the tasks you care about and just see. My personal experience is that right now, Gemini Pro performs the best at everything I throw at it. I think it's superior to Claude and all of the OSS models by a small margin, even for things like coding.

      • Imustaskforhelp 10 hours ago

        I like Gemini Pro's UI over Claude so much but honestly I might start using Kimi K2.5 if it's open source & just +/- Gemini Pro/ChatGPT/Claude because at that point I feel like the results are negligible and we are getting SOTA open source models again.

        • wobfan 8 hours ago

          > honestly I might start using Kimi K2.5 if it's open source & just +/- Gemini Pro/ChatGPT/Claude because at that point I feel like the results are negligible and we are getting SOTA open source models again.

          Me too!

          > I like Gemini Pro's UI over Claude so much

          This I don't understand. I mean, I don't see a lot of difference in both UIs. Quite the opposite, apart from some animations, round corners and color gradings, they seem to look very alike, no?

          • Imustaskforhelp 7 hours ago

            Y'know, I ended up buying Kimi's moderato plan, which is $19, but they had this unique idea where you can talk to a bot and they could reduce the price.

            I made it reduce the price of the first month to $1.49 (it could go to $0.99 and my frugal mind wanted it haha but I just couldn't have it do that lol)

            Anyways, afterwards, for privacy purposes (I am a minor so I don't have a card), I ended up going to G2A to get a $10 Visa gift card essentially and used it. (I had to pay $1 extra but sure.)

            Installed kimi code on my mac and trying it out. Honestly, I am kind of liking it.

            My internal benchmark is creating Pomodoro apps as Go web apps... Gemini 3 Pro has nailed it; I just tried the Kimi version and it does have some bugs, but it feels like it added more features.

            Gonna have to try it out for a month.

            I mean I just wish it was this cheap for the whole year :< (As I could then move from, say using the completely free models)

            Gonna have to try it out more!

    • coffeeri 11 hours ago
      • XCSme 6 hours ago

        There are many lists, but I find all of them outdated or containing wrong information or missing the actual benchmarks I'm looking for.

        I was thinking that maybe it's better to make my own benchmarks with the questions/things I'm interested in, and whenever a new model comes out, run those tests with that model using OpenRouter.

      • pplonski86 10 hours ago

        Thank you! Exactly what I was looking for

  • DeathArrow 12 hours ago

    Those are some impressive benchmark results. I wonder how well it does in real life.

    Maybe we can get away with something cheaper than Claude for coding.

    • oneneptune 11 hours ago

      I'm curious about the "cheaper" claim -- I checked Kimi pricing, and it's a $200/mo subscription too?

      • NitpickLawyer 11 hours ago

          On OpenRouter, K2.5 is at $0.60/$3 per Mtok. That's Haiku pricing.

        • storystarling 10 hours ago

          The unit economics seem tough at that price for a 1T parameter model. Even with MoE sparsity you are still VRAM bound just keeping the weights resident, which is a much higher baseline cost than serving a smaller model like Haiku.

      • mrklol 10 hours ago

        They also have a $20 and $40 tier.

        • esafak 3 hours ago
        • Alifatisk 6 hours ago

          If you bargain with their bot Kimmmmy (not joking), you can even get lower pricing.

          • mohsen1 3 hours ago

            tell me more...

            • Alifatisk 2 hours ago

              Go to Kimi chat and multiple suggestions of use cases will come up. One of them will be the bargaining robot. If you download their mobile app, the challenge to bargain will probably pop up too!

              Depending on how well you bargain with the robot, you can go as low as $0.99 (difficult). Either way, their moderate plan doesn't have to be $20. The agent wants a good reason for why it should lower the price for you.

              Here’s the direct link to Kimmmmy:

              https://www.kimi.com/kimiplus/sale

              I’ll send an invite link too if you don’t mind:

              https://www.kimi.com/kimiplus/sale?activity_enter_method=h5_...

  • mangolie 12 hours ago

    they cooked

  • lrvick 11 hours ago

    Actually open source, or yet another public model, which is the equivalent of a binary?

    URL is down so cannot tell.

    • Tepix 11 hours ago

      It's open weights, not open source.

    • typ 10 hours ago

      The label 'open source' has become a reputation-reaping and marketing vehicle rather than an informative term since the Hugging Face benchmark race started. With the weights only, we cannot actually audit whether a model is a) contaminated by benchmarks, b) built with deliberate biases, or c) trained on copyrighted/private data, let alone allow other vendors to replicate the results. Anyways, people still love free stuff.

      • Der_Einzige 9 hours ago

        Just accept that IP laws don't matter and the old "free software" paradigm is dead. Aaron Swartz died so that GenAI may live. RMS and his model of "copyleft" are so Web 1.0 (not even 2.0). No one in GenAI cares AT ALL about the true definition of open source. Good.

  • billyellow 12 hours ago

    Cool

  • rvz 10 hours ago

    The chefs at Moonshot have cooked once again.