Kimi K2.5 Technical Report [pdf]

(github.com)

321 points | by vinhnx 21 hours ago

122 comments

  • zeroxfe 18 hours ago

    I've been using this model (as a coding agent) for the past few days, and it's the first time I've felt that an open source model really competes with the big labs. So far it's been able to handle most things I've thrown at it. I'm almost hesitant to say that this is as good as Opus.

    • rubslopes 13 hours ago

      Also my experience. I've been going back and forth between Opus and Kimi for the last few days, and, at least for my CRUD webapps, I would say they are both on the same level.

    • armcat 18 hours ago

      Out of curiosity, what kind of specs do you have (GPU / RAM)? I saw the requirements and it's beyond my budget, so I am "stuck" with smaller Qwen coders.

      • zeroxfe 17 hours ago

        I'm not running it locally (it's gigantic!) I'm using the API at https://platform.moonshot.ai

        • HarHarVeryFunny a minute ago

          It is possible to run locally though ... I saw a video of someone running one of the heavily quantized versions on a Mac Studio, and it performed pretty well in terms of speed.

          I'm guessing a 256GB Mac Studio, costing $5-6K, but that wouldn't be an outrageous amount to spend for a professional tool if the model capability justified it.

        • BeetleB 17 hours ago

          Just curious - how does it compare to GLM 4.7? Ever since they gave the $28/year deal, I've been using it for personal projects and am very happy with it (via opencode).

          https://z.ai/subscribe

          • InsideOutSanta 17 hours ago

            There's no comparison. GLM 4.7 is fine and reasonably competent at writing code, but K2.5 is right up there with something like Sonnet 4.5. It's the first time I can use an open-source model and not immediately tell the difference between it and top-end models from Anthropic and OpenAI.

          • Alifatisk 3 hours ago

            Kimi K2.5 is a beast: it speaks very human-like (K2 was also good at this) and completes whatever I throw at it. However, the GLM quarterly coding plan is too good of a deal. The Christmas deal ends today, so I'd still suggest sticking with it. There will always be a better model coming.

          • zeroxfe 17 hours ago

            It's waaay better than GLM 4.7 (which was the open model I was using earlier)! Kimi was able to quickly and smoothly finish some very complex tasks that GLM completely choked on.

          • segmondy 16 hours ago

            The old Kimi K2 is better than GLM4.7

          • cmrdporcupine 17 hours ago

            From what people say, it's better than GLM 4.7 (and I guess DeepSeek 3.2)

            But it's also like... 10x the price per output token on any of the providers I've looked at.

            I don't feel it's 10x the value. It's still much cheaper than paying by the token for Sonnet or Opus, but if you have a subscription plan from the Big 3 (OpenAI, Anthropic, Google), that's much better value for the $$.

            Comes down to ethical or openness reasons to use it I guess.

            • esafak 17 hours ago

              Exactly. For the price it has to beat Claude and GPT, unless you have budget for both. I just let GLM solve whatever it can and reserve my Claude budget for the rest.

          • akudha 17 hours ago

            Is the Lite plan enough for your projects?

            • BeetleB 16 hours ago

              Very much so. I'm using it for small personal stuff on my home PC. Nothing grand. Not having to worry about token usage has been great (previously was paying per API use).

              I haven't stress tested it with anything large. Both at work and home, I don't give much free rein to the AI (e.g. I examine and approve all code changes).

              Lite plan doesn't have vision, so you cannot copy/paste an image there. But I can always switch models when I need to.

        • jgalt212 44 minutes ago

          What's the point of using an open source model if you're not self-hosting?

          • dimava 34 minutes ago

            Open-source model costs are determined only by electricity usage, as anyone can rent a GPU and host them. Closed-source models cost 10x more just because they can. A simple example is Claude Opus, which costs ~1/10 as much (if not less) in Claude Code, which doesn't have that price multiplier.

          • elbear 37 minutes ago

            * It's cheaper than proprietary models

            * Maybe you don't want to have your conversations used for training. The providers listed on OpenRouter mention whether they do that or not.

        • rc1 16 hours ago

          How long until this can be run on consumer grade hardware or a domestic electricity supply I wonder.

          Anyone have a projection?

          • johndough 16 hours ago

            You can run it on consumer-grade hardware right now, but it will be rather slow. NVMe SSDs these days have a read speed of 7 GB/s (EDIT: or even faster than that! Thank you @hedgehog for the update), so you'll get one token roughly every three seconds while crunching through the 32 billion active parameters, which are natively quantized to 4 bits each (roughly 16 GB of weights that have to be read per token). If you want to run it faster, you have to spend more money.
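
            Back-of-the-envelope version of that math, treating SSD read bandwidth as the only bottleneck (ignoring compute, caching, and KV-cache reads):

                active_params = 32e9       # active parameters per token (MoE)
                bits_per_weight = 4        # native 4-bit quantization
                ssd_gb_per_s = 7.0         # PCIe gen 4 NVMe read speed

                bytes_per_token = active_params * bits_per_weight / 8      # ~16 GB of weights per token
                seconds_per_token = bytes_per_token / (ssd_gb_per_s * 1e9)
                print(f"~{seconds_per_token:.1f} s/token (~{1/seconds_per_token:.2f} tok/s)")
                # -> about 2.3 s/token at 7 GB/s, i.e. just under one token every three seconds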

            Some people in the localllama subreddit have built systems which run large models at more decent speeds: https://www.reddit.com/r/LocalLLaMA/

            • hedgehog 16 hours ago

              High-end consumer SSDs can do closer to 15 GB/s, though only with PCIe gen 5. On a motherboard with two M.2 slots that's potentially around 30 GB/s from disk. Edit: How fast everything runs depends on how much data needs to be loaded from disk, which is not always everything with MoE models.

              • greenavocado 13 hours ago

                Would RAID zero help here?

                • hedgehog 12 hours ago

                  Yes, RAID 0 or 1 could both work in this case to combine the disks. You would want to check the bus topology for the specific motherboard to make sure the slots aren't on the other side of a hub or something like that.

          • segmondy 16 hours ago

            You can run it on a Mac Studio with 512GB RAM; that's the easiest way. I run it at home on a multi-GPU rig with partial offload to RAM.

            • johndough 16 hours ago

              I was wondering whether multiple GPUs make it go appreciably faster when limited by VRAM. Do you have some tokens/sec numbers for text generation?

          • heliumtera 16 hours ago

            You need 600GB of VRAM + RAM (+ disk) to fit the full model, or ~240GB for the heavily quantized version. Of course this will be slow.

            Through the Moonshot API it is pretty fast (much, much faster than Gemini 3 Pro and Claude Sonnet, probably faster than Gemini Flash), though. To get a similar experience locally they say you need at least 4× H200.

            If you don't mind running it super slow, you still need around 600GB of combined VRAM + fast RAM.

            It's already possible to run 4× H200 in a domestic environment (it would be instantaneous for most tasks, unbelievable speed). It's just very, very expensive and probably challenging for most users, though manageable/easy for the average Hacker News crowd.

            High-end GPUs are expensive AND hard to source. If you can source them at the old prices, it's around $200k to get maximum speed, I guess; you could probably run it decently (but slowly) on a bunch of high-end machines for, let's say, $40k.

      • Carrok 18 hours ago

        Not OP, but OpenCode and DeepInfra seem like an easy way.

      • observationist 14 hours ago

        API costs for these big models through independent hosts tend to be a lot less than API calls to the big 4 American platforms. You definitely get more bang for your buck.

      • tgrowazay 17 hours ago

        Just pick up any >240GB VRAM GPU off your local BestBuy to run a quantized version.

        > The full Kimi K2.5 model is 630GB and typically requires at least 4× H200 GPUs.

        • CamperBob2 16 hours ago

          You could run the full, unquantized model at high speed with 8 RTX 6000 Blackwell boards.

          I don't see a way to put together a decent system of that scale for less than $100K, given RAM and SSD prices. A system with 4x H200s would cost more like $200K.

          • ttul 8 hours ago

            That would be quite the space heater, too!

    • timwheeler 8 hours ago

      Did you use Kimi Code or some other harness? I used it with OpenCode and it was bumbling around through some tasks that Claude handles with ease.

      • zedutchgandalf 7 hours ago

        Are you on the latest version? They pushed an update yesterday that greatly improved Kimi K2.5’s performance. It’s also free for a week in OpenCode, sponsored by their inference provider

        • ekabod 4 hours ago

          But it may be a quantized model for the free version.

    • thesurlydev 18 hours ago

      Can you share how you're running it?

      • eknkc 17 hours ago

        I've been using it with opencode. You can either use your kimi code subscription (flat fee), moonshot.ai api key (per token) or openrouter to access it. OpenCode works beautifully with the model.

        Edit: as a side note, I only installed opencode to try this model and I gotta say it is pretty good. Did not think it'd be as good as Claude Code but it's just fine. Been using it with codex too.

        • Imustaskforhelp 17 hours ago

          I tried to use opencode for Kimi K2.5 too, but recently they changed their pricing from 200 tool requests per 5 hours to token-based pricing.

          I can only speak to the tool-request-based pricing, but anecdotally opencode took something like 10 requests in 3-4 minutes where Kimi CLI took 2-3.

          So I personally like/stick with the Kimi CLI for Kimi coding. I haven't tested opencode again with the new token-based pricing, but I do think it might use more tokens.

          Kimi Cli's pretty good too imo. You should check it out!

          https://github.com/MoonshotAI/kimi-cli

          • nl 14 hours ago

            I like Kimi-cli but it does leak memory.

            I was using it for multi-hour tasks scripted via a self-written orchestrator on a small VM and ended up switching away from it because it would run slower and slower over time.

      • JumpCrisscross 13 hours ago

        > Can you share how you're running it?

        Not OP, but I've been running it through Kagi [1]. Their AI offering is probably the best-kept secret in the market.

        [1] https://help.kagi.com/kagi/ai/assistant.html

        • deaux 9 hours ago

          Doesn't list Kimi 2.5 and seems to be chat-only, not API, correct?

          • lejalv 7 minutes ago

            > Doesn't list Kimi 2.5 and seems to be chat-only, not API, correct?

            Yes, it is chat only, but that list is out of date - Kimi 2.5 (with or without reasoning) is available, as are ChatGPT 5.2, Gemini 3 Pro (Preview), etc

      • zeroxfe 17 hours ago

        Running it via https://platform.moonshot.ai -- using OpenCode. They have super cheap monthly plans at kimi.com too, but I'm not using it because I already have codex and claude monthly plans.

        • esafak 17 hours ago

          Where? https://www.kimi.com/code starts at $19/month, which is the same as the big boys.

        • UncleOxidant 17 hours ago

          so there's a free plan at moonshot.ai that gives you some number of tokens without paying?

      • explorigin 18 hours ago

        • KolmogorovComp 17 hours ago

          To save everyone a click

          > The 1.8-bit (UD-TQ1_0) quant will run on a single 24GB GPU if you offload all MoE layers to system RAM (or a fast SSD). With ~256GB RAM, expect ~10 tokens/s. The full Kimi K2.5 model is 630GB and typically requires at least 4× H200 GPUs. If the model fits, you will get >40 tokens/s when using a B200. To run the model in near full precision, you can use the 4-bit or 5-bit quants. You can use any higher just to be safe. For strong performance, aim for >240GB of unified memory (or combined RAM+VRAM) to reach 10+ tokens/s. If you’re below that, it'll work but speed will drop (llama.cpp can still run via mmap/disk offload) and may fall from ~10 tokens/s to <2 token/s. We recommend UD-Q2_K_XL (375GB) as a good size/quality balance. Best rule of thumb: RAM+VRAM ≈ the quant size; otherwise it’ll still work, just slower due to offloading.
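
          If it helps, the rule of thumb at the end boils down to a trivial check. A quick sketch using the quant sizes and speed figures quoted above (the RAM/VRAM numbers are just an example config, and the 1.8-bit size is approximate):

              # Rule of thumb from the quote: if RAM + VRAM covers the quant file size,
              # expect roughly 10+ tok/s; otherwise llama.cpp still runs via mmap/disk
              # offload, just much slower (can drop below 2 tok/s).
              def expected_speed(quant_gb, ram_gb, vram_gb):
                  if ram_gb + vram_gb >= quant_gb:
                      return "fits -> roughly 10+ tok/s"
                  return "offloading to disk -> may fall below 2 tok/s"

              for name, size_gb in [("UD-TQ1_0 (1.8-bit, ~240GB)", 240),
                                    ("UD-Q2_K_XL (375GB)", 375),
                                    ("full model (630GB)", 630)]:
                  print(name, "->", expected_speed(size_gb, ram_gb=256, vram_gb=24))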

          • Gracana 17 hours ago

            I'm running the Q4_K_M quant on a Xeon with 7× A4000s and I'm getting about 8 tok/s with small context (16k). I need to do more tuning, I think I can get more out of it, but it's never gonna be fast on this suboptimal machine.

            • segmondy 16 hours ago

              You can add 1 more GPU so you can take advantage of tensor parallelism. I get the same speed with 5 3090s with most of the model in 2400MHz DDR4 RAM, 8.5 tok/s almost constant. I don't really do agents, just chat, and it holds up to 64k.

              • Gracana 16 hours ago

                That is a very good point and I would love to do it, but I built this machine in a desktop case and the motherboard has seven slots. I did a custom water cooling manifold just to make it work with all the cards.

                I'm trying to figure out how to add another card on a riser hanging off a SlimSAS port, or maybe I could turn the bottom slot into two vertical slots... the case (Fractal Meshify 2 XL) has room for a vertically mounted card that wouldn't interfere with the others, but I'd need to make a custom riser with two slots on it to make it work. I dunno, it's possible!

                I also have an RTX Pro 6000 Blackwell and an RTX 5000 Ada... I'd be better off pulling all the A4000s and throwing both of those cards in this machine, but then I wouldn't have anything for my desktop. Decisions, decisions!

            • esafak 17 hours ago

              The pitiful state of GPUs. $10K for a sloth with no memory.

      • indigodaddy 12 hours ago

        Been using K2.5 Thinking via Nano-GPT subscription and `nanocode run` and it's working quite nicely. No issues with Tool Calling so far.

      • gigatexal 18 hours ago

        Yeah, I too am curious. Because Claude Code is so good and the ecosystem so "just works" that I'm willing to pay them.

        • epolanski 17 hours ago

          You can plug another model in place of Anthropic ones in Claude Code.

          • zeroxfe 17 hours ago

            That tends to work quite poorly because Claude Code does not use standard completions APIs. I tried it with Kimi, using litellm[proxy], and it failed in too many places.
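
            For reference, the non-Claude-Code half of that setup is simple enough -- litellm talking to an OpenAI-compatible endpoint looks roughly like this (the model ID and base URL are guesses, check Moonshot's docs). The part that broke was the proxy having to translate Claude Code's Anthropic-style requests into this shape:

                import litellm

                # Plain OpenAI-style chat completion routed through litellm.
                # The "openai/" prefix tells litellm to treat the backend as an
                # OpenAI-compatible server at the given api_base.
                resp = litellm.completion(
                    model="openai/kimi-k2.5",               # hypothetical model ID
                    api_base="https://api.moonshot.ai/v1",  # assumed base URL
                    api_key="sk-...",
                    messages=[{"role": "user", "content": "Review this function for bugs."}],
                )
                print(resp.choices[0].message.content)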

            • AnonymousPlanet 17 hours ago

              It worked very well for me using Qwen3 Coder behind a litellm proxy. Most other models just fail in weird ways though.

            • samtheprogram 16 hours ago

              opencode is a good alternative that doesn't flake out in this way.

          • miroljub 15 hours ago

            If you don't use Anthropic models there's no reason to use Claude Code at all. Opencode gives so much more choice.

        • Imustaskforhelp 17 hours ago

          I tried Kimi K2.5 and at first I didn't really like it. I was critical of it, but then I started liking it. The model has also kind of replaced how I use ChatGPT, and I really love Kimi K2.5 the most right now (although the Gemini models come close too).

          To be honest, I do feel like Kimi K2.5 is the best open-source model. It's not the best model overall right now, but it's really price-performant and might be a good fit for many use cases.

          It might not be the complete SOTA that people say, but it comes pretty close, and it's open source. I trust the open-source part because I feel like other providers can also run it, among a lot of other things (also considering that, iirc, ChatGPT recently cut some old models).

          I really appreciate Kimi for still open-sourcing their frontier model and then releasing research papers on top of it, unlike Qwen, which has closed-sourced its frontier model.

          Thank you Kimi!

  • tomaskafka 27 minutes ago

    It is amazing, but "open source model" means "model I can understand and modify" (= all the training data and processes).

    Open weights are the equivalent of the binary driver blobs everyone hates. "Here is an opaque thing, you have to put it on your computer and trust it, and you can't modify it."

  • unleaded 13 hours ago

    Seems that K2.5 has unfortunately lost a lot of the personality from K2; it talks in a more ChatGPT/Gemini/C-3PO style now. It's not explicitly bad, and I'm sure most people won't care, but it was something that made it unique, so it's a shame to see it go.

    examples to illustrate

    https://www.kimi.com/share/19c115d6-6402-87d5-8000-000062fec... (K2.5)

    https://www.kimi.com/share/19c11615-8a92-89cb-8000-000063ee6... (K2)

    • Grosvenor 8 hours ago

      Both models of Kimi are shit. A NeXT cube is a perfectly cromulent computing device. Where else can you run Lotus Improv, Framemaker, and Mathematica at once?

      Plus it looks boss - The ladies will be moist.

      • Grimblewald 6 hours ago

        Disagree. I've found Kimi useful in solving creative coding problems that Gemini, Claude, ChatGPT, etc. failed at. It is also far better at verifying, augmenting, and adding to human reviews of resumes for positions. It catches missed details that humans and other LLMs routinely miss. There is something special to K2.

    • zozbot234 12 hours ago

      It's hard to judge from this particular question, but the K2.5 output looks at least marginally better AIUI, the only real problem with it is the snarky initial "That's very interesting" quip. Even then a British user would probably be fine with it.

    • logicprog 13 hours ago

      I agree. K2 was blunt, straightforward, pretty... rational? K2.5 has a much stronger slop vibe.

    • orbital-decay 7 hours ago

      K2 in your example is using the GPT reply template (tl;dr - terse details - conclusion, with contradictory tendencies); there's nothing unique about it. That's exactly how GPT-5.0 talked. The only model with a strong "personality" vibe was Claude 3 Opus.

  • extr 7 hours ago

    I tried this today. It's good - but it was significantly less focused and reliable than Opus 4.5 at implementing some mostly-fleshed-out specs I had lying around for some needed modifications to an enterprise TS node/express service. I was a bit disappointed tbh; the speed via fireworks.ai is great, and they're doing great work on the hosting side. But I found the model had to double back to fix type issues, broken tests, etc. far more than Opus 4.5, which churned through the tasks with almost zero errors. In fact, I gave the resulting code to Opus, simply said it looked "sloppy", and Opus cleaned it up very quickly.

  • Imanari 14 hours ago

    I have been very impressed with this model and also with the Kimi CLI. I have been using it with the 'Moderato' plan (7 days free, then $19). A true competitor to Claude Code with Opus.

  • eager_learner 2 hours ago

    I tried the Kimi K2.5 Swarm Agent version and it was way better than any AI model I've tried so far.

  • zzleeper 15 hours ago

    Do any of these models do well with information retrieval and reasoning from text?

    I'm reading newspaper articles through a MoE of gemini3flash and gpt5mini, and what made it hard to use open models (at the time) was a lack of support for pydantic.
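
    (By "support for pydantic" I mean structured extraction validated against a pydantic schema through an OpenAI-compatible client, roughly like the sketch below. Whether a given open-model endpoint actually honors the schema is exactly the part that used to be missing; the model name and endpoint here are placeholders.)

        from openai import OpenAI
        from pydantic import BaseModel

        # Schema the extraction must conform to.
        class ArticleFacts(BaseModel):
            headline: str
            people_mentioned: list[str]
            is_opinion_piece: bool

        # Placeholder endpoint/model; this only works if the server supports structured outputs.
        client = OpenAI(base_url="https://example-provider.com/v1", api_key="...")

        completion = client.beta.chat.completions.parse(
            model="some-open-model",
            messages=[{"role": "user", "content": "Extract the facts from this article: ..."}],
            response_format=ArticleFacts,
        )
        facts = completion.choices[0].message.parsed   # a validated ArticleFacts instance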

    • jychang 15 hours ago

      That roughly correlates with tool calling capabilities. Kimi K2.5 is a lot better than previous open source models in that regard.

      You should try out K2.5 for your use case, it might actually succeed where previous generation open source models failed.

  • logicprog 12 hours ago

    Kimi K2T was good. This model is outstanding, based on the time I've had to test it (basically since it came out). It's so good at following my instructions, staying on task, and not getting context poisoned. I don't use Claude or GPT, so I can't say how good it is compared to them, but it's definitely head and shoulders above the open weight competitors

  • derac 18 hours ago

    I really like the agent swarm thing, is it possible to use that functionality with OpenCode or is that a Kimi CLI specific thing? Does the agent need to be aware of the capability?

    • zeroxfe 17 hours ago

      It seems to work with OpenCode, but I can't tell exactly what's going on -- I was super impressed when OpenCode presented me with a UI to switch the view between different sub-agents. I don't know if OpenCode is aware of the capability, or the model is really good at telling the harness how to spawn sub-agents or execute parallel tool calls.

    • esafak 16 hours ago

      Has anyone tried it and decided it's worth the cost? I've heard it's even more profligate with tokens.

      • swyx 10 hours ago

        Yes. https://x.com/swyx/status/2016381014483075561?s=20 It's not crazy; they cap it to 3 credits. Also, YSK the agent swarm is a closed-source product.

        Would I use it again compared to Deep Research products elsewhere? Maybe, probably not, but only because it's hard to switch apps.

  • epolanski 17 hours ago

    It's interesting to note that OpenAI is valued almost 400 times more than Moonshot AI, despite their models being surprisingly close.

    • famouswaffles 16 hours ago

      OpenAI is a household name with nearly a billion weekly active users. Not sure there's any reality where they wouldn't be valued much more than Kimi regardless of how close the models may be.

    • moffkalast 17 hours ago

      Well to be the devil's advocate: One is a household name that holds most of the world's silicon wafers for ransom, and the other sounds like a crypto scam. Also estimating valuation of Chinese companies is sort of nonsense when they're all effectively state owned.

      • epolanski 16 hours ago

        There isn't a single % of Moonshot AI that is state-owned.

        And don't get me started with the "yeah, but if the PRC..." line, because it's gross when the US can de facto ban and impose conditions even on European companies, let alone the control it has over US ones.

        • moffkalast 4 hours ago

          I'm not sure if that is accurate; most of the funding they've got is from Tencent and Alibaba, and we know what happened to Jack Ma the second he went against the party line. Those two are de facto state-owned enterprises. Moonshot is unlikely to be for sale in any meaningful way, so its valuation is moot.

          [0] https://en.wikipedia.org/wiki/Moonshot_AI#Funding_and_invest...

        • swyx 10 hours ago

          Funny because that's how us Americans feel about your European cookie banner litter and unilateral demands on privacy

    • m3kw9 14 hours ago

      Unless they can beat their capabilities by a clear, magical step up and have the infrastructure to capture the users.

  • threethirtytwo 7 hours ago

    When will hardware get cheap enough so people can run this locally? That’s the world I’m waiting for.

    • vanviegen 3 hours ago

      2042. But by then you won't want to run this model anymore.

  • oxqbldpxo 13 hours ago

    This Kimi K2 is so far the best. Gemini is also great, but Google is stuck in the academic bias of Stanford and MIT and can't think outside the box. China is definitely ahead on AI. I wish somehow someone here in the US would think different.

    • dfsegoat 13 hours ago

      > but Google is stuck in the academic bias of Stanford and MIT and can't think outside the box

      Can you clarify what you mean? I am not sure I follow.

  • sreekanth850 9 hours ago

    Claude gives a 100% pass mark for code generated by Kimi, and sometimes it says it's better than what Claude proposed. Absolutely the best OS model.

  • miroljub 17 hours ago

    I've been quite satisfied lately with MiniMax M-2.1 in opencode.

    How does Kimi 2.5 compare to it in real world scenarios?

    • viraptor 17 hours ago

      A lot better in my experience. M2.1 to me feels between haiku and sonnet. K2.5 feels close to opus. That's based on my testing of removing some code and getting it to reimplement based on tests. Also the design/spec writing feels great. You can still test k2.5 for free in OpenCode today.

      • miroljub 17 hours ago

        Well, MiniMax was the equivalent of Sonnet in my testing. If Kimi approaches Opus, that would be great.

        • samtheprogram 16 hours ago

          Kimi K2.5 approaches Sonnet as well from what I can tell, it's just slower to get to the result.

  • tallesborges92 8 hours ago

    I've added API key support for Kimi to my agentic coding tool: https://github.com/tallesborges/zdx

  • margorczynski 18 hours ago

    I wonder how K2.5 + OpenCode compares to Opus with CC. If it is close I would let go of my subscription, as would probably a lot of people.

    • eknkc 17 hours ago

      It is not Opus. It is good, works really fast, and is surprisingly thorough about its decisions. However, I've seen it hallucinate things.

      Just today I asked for a code review and it flagged a method that could be `static`. The problem is it was already static. That kind of stuff never happens with Opus 4.5 as far as I can tell.

      Also, in opencode's Plan mode (read-only): it generated a plan and, instead of presenting it and stopping, decided to implement it. It could not use the edit and write tools because the harness was in read-only mode, but it had bash and started using bash to edit stuff. It wouldn't just fucking stop, even though the error messages it received from opencode stated why. Its plan and the resulting code were OK, so I let it go crazy though...

      • esafak 16 hours ago

        Some models have a mind of their own. I keep them on a leash with `permission` blocks in OC -- especially for rm/mv/git.

    • naragon 17 hours ago

      I've been using K2.5 with OpenCode to do code assessments/fixes and Opus 4.5 with CC to check the work, and so far so good. Very impressed with it so far, but I don't feel comfortable canceling my Claude subscription just yet. Haven't tried it on large feature implementations.

    • ithkuil 17 hours ago

      I also wonder if CC can be used with k2.5 with the appropriate API adapter

    • jauntywundrkind 12 hours ago

      I've been drafting plans/specs in parallel with Opus and Kimi, then asking each to review the other's plan.

      I still find Opus is "sharper" technically, tackles problems more completely & gets the nuance.

      But man, Kimi K2.5 can write. Even if I don't have a big problem description, just a bunch of specs, Kimi is there, writing good intro material, with good text that more than elaborates, that actually explains. Opus and GLM-4.7 have both complimented Kimi on its writing.

      Still mainly using my z.ai glm-4.7 subscription for the work, so I don't know how capable it really is. But I do tend to go for some Opus in sticky spots, and especially given the 9x price difference, I should try some Kimi. I wish I was set up for better parallel evaluation; feels like such a pain to get started.

  • storus 14 hours ago

    Do I need to have two M3 Ultra 512GB Mac Studios to run this?

  • syndacks 14 hours ago

    How do people evaluate creative writing and emotional intelligence in LLMs? Most benchmarks seem to focus on reasoning or correctness, which feels orthogonal. I've been playing with Kimi K2.5 and it feels much stronger on voice and emotional grounding, but I don't know how to measure that beyond human judgment.

  • cmrdporcupine 15 hours ago

    DeepSeek is likely to release a new model soon, and judging from the past it's likely to be more cost effective and just as or more powerful than Kimi 2.5.

    DeepSeek 3.2 was already quite compelling. I expect its successor will be competitive.

  • gedy 16 hours ago

    Sorry if this is an easily answerable question - but by "open", do they mean we can download this and use it totally offline, now or in the future, if we have capable hardware? Seems like a great thing to archive if the world falls apart (said half-jokingly).

    • fancy_pantser 8 hours ago

      Sure. Someone on /r/LocalLLaMA was seeing 12.5 tokens/s on dual Strix Halo 128GB machines (would run you $6-8K total?) with 1.8 bits per parameter. It performs far below the unquantized model, so it would not be my personal pick for a one-local-LLM-forever, but it is compelling because it has image and video understanding. You lose those features if you choose, say, gpt-oss-120B.

      Also, that's with no context, so it would be slower as it filled (I don't think K2.5 uses the Kimi-Linear KDA attention mechanism, so it's sub-quadratic but not their lowest).

    • Tepix 14 hours ago

      You could buy five Strix Halo systems at $2000 each, network them and run it.

      Rough estimate: 12.5 / 2.2, so you should get around 5.5 tokens/s.

      • j-bos 14 hours ago

        Is the software/drivers for networking LLMs on Strix Halo there yet? I was under the impression a few weeks ago that it's veeeery early stages and terribly slow.

        • Tepix 8 hours ago

          Llama.cpp with its rpc-server

    • fragmede 14 hours ago

      Yes, but the hardware to run it decently is gonna cost you north of $100k, so hopefully you and your bunkermates allocated the right amount to this instead of to guns or ammo.

    • Carrok 16 hours ago

      Yes.

    • cmrdporcupine 15 hours ago

      Yes, but you'll need some pretty massive hardware.

  • llmslave 17 hours ago

    The benchmarks on all these models are meaningless

    • alchemist1e9 17 hours ago

      Why and what would a good benchmark look like?

      • moffkalast 17 hours ago

        30 people trying out all models on the list for their use case for a week and then checking what they're still using a month after.

  • behnamoh 18 hours ago

    It's a decent model but works best with kimi CLI, not CC or others.

    • rubslopes 12 hours ago

      I haven't used Kimi CLI, but it works very well with OpenCode.

    • alansaber 17 hours ago

      Why do you think that is?

      • chillacy 17 hours ago

        I heard it's because the labs fine-tune their models for their own harness. Same reason why Claude does better in Claude Code than in Cursor.

      • segmondy 16 hours ago

        read the tech report

  • firasd 16 hours ago

    I'm not very familiar with this model series, but I tried it when prototyping a potential 'Vibesbench' (my eval) conversation. Here's Kimi on a RoboCop scene:

    kimi k2 thinking turbo

    "He’s not being poetic; he’s weaponizing nostalgia. Quoting Casablanca—the ultimate “beautiful sacrifice” romance—to taunt Morton about corporate backstabbing is pure executive sleaze. It’s the same move as his mahogany desk and flag pin: draping sentimental Americana over naked power. He’s essentially saying, “Isn’t it romantic, Bob? You fought the good fight, now you die.” "

    grok 4.1

    "So when Dick Jones smirks and says, “Time goes by, it’s the same old story… the fight for love and glory, huh, Bob?” that’s pure Ed Neumeier flexing his love of old Hollywood while simultaneously mocking how these psychopathic ’80s execs think they’re in some grand cinematic drama instead of just murdering a guy for market share."

    chatgpt-4o

    "The reason the Casablanca reference matters is RoboCop itself is partially a noir—and also a tragedy built inside a genre engine. [… ] So yeah, calling back to Casablanca is not just some meta-joke—it’s a knowing wink that RoboCop is a noir trapped in cybernetic skin. // Even Clarence Boddicker, for all his cartoonish sadism, plays like a twisted stand-in for a noir heavy—talking smooth, dropping nicknames, throwing out casual violence like it’s a punchline."

    opus 4.5

    "Jones is performing avuncular menace. He's the kind of executive who'd quote old movies while having you murdered, because it's all performance to him, all style. The substance is just power."

  • tonychang430 6 hours ago

    Love to see Open source models doing better than SOTA