Qwen3: Think deeper, act faster

(qwenlm.github.io)

760 points | by synthwave 20 hours ago

345 comments

  • stavros 17 hours ago

    I have a small physics-based problem I pose to LLMs. It's tricky for humans as well, and all LLMs I've tried (GPT o3, Claude 3.7, Gemini 2.5 Pro) fail to answer correctly. If I ask them to explain their answer, they do get it eventually, but none get it right the first time. Qwen3 with max thinking got it even more wrong than the rest, for what it's worth.

    • mrkeen 9 hours ago

      As they say, we shouldn't judge AI by the current state-of-the-art, but by how far and fast it's progressing. I can't wait to see future models get it even more wrong than that.

      • kaoD 8 hours ago

        Personally (anecdata) I haven't experienced any practical progress in my day-to-day tasks for a long time, no matter how good they became at gaming the benchmarks.

        They keep being impressive at what they're good at (aggregating sources to solve a very well known problem) and terrible at what they're bad at (actually thinking through novel problems or old problems with few sources).

        E.g. ChatGPT, Claude and Gemini were all absolutely terrible at generating Liquidsoap[0] scripts. It's not even that complex, but there's very little information to ingest about the problem space, so you can actually tell they are not "thinking".

        [0] https://www.liquidsoap.info/

        • jim180 7 hours ago

          Absolutely. All models are terrible with Objective-C and Swift, compared to, say, JS/HTML/Python.

          However, I've realized that Claude Code is extremely useful for generating somewhat simple landing pages for some of my projects. It spits out static html+js which is easy to host, with somewhat good looking design.

          The code isn't the best and to some extent isn't maintainable by a human at all, but it gets the job done.

          • copperroof an hour ago

            I’ve gotten 0 production-usable Python out of any LLM. A small script to do something trivial, sure. Anything I’m going to have to maintain or debug in the future, not even close. I think there is a _lot_ of terrible Python code out there training LLMs, so being a more popular language is not helpful. This era is making transparent how low standards really are.

            • overfeed 5 minutes ago

              > I’ve gotten 0 production usable python out of any LLM

              Fascinating, I wonder how you use it, because once I decompose code into modules and function signatures, Claude is pretty good at implementing Python functions. I'd say it one-shots it 60% of the time, I have to tweak the prompt or adjust the proposed diffs 30% of the time, and the remaining 10% is unusable code that I end up writing by hand. Claude fares even better at writing tests.

            • thelittleone 34 minutes ago

              Different experience here. Production code in banking and finance for backend data analysis and reporting. Sure the code isn't perfect, but doesn't need to be. It's saving >50% effort and the analysis results and reporting are of at least as good a standard as human developed alternatives.

          • jimvdv 2 hours ago

            I like using Vercel v0 for frontend

          • apercu 5 hours ago

            Interesting, I'll have to try that. All the "static" page generators I've tried require React....

          • ggregoryarms 6 hours ago

            Building a basic static html landing page is ridiculously easy though. What js is even needed? If it's just an html file and maybe a stylesheet of course it's easy to host. You can apply 20 lines of css and have a decent looking page.

            These aren't hard problems.

            • snoman 3 hours ago

              A big part of my job is building proofs of concept for some technologies and that usually means some webpage to visualize that the underlying tech is working as expected. It’s not hard, doesn’t have to look good at all, and will never be maintained. I throw it away a few weeks later.

              It used to take me an hour or two to get it all done up properly. Now it’s literal seconds. It’s a handy tool.

            • sheepscreek 3 hours ago

              > These aren’t hard problems.

              Honestly, that’s the best use-case for AI currently. Simple but laborious problems.

            • jim180 4 hours ago

              Laziness mostly - no need to think about design, icons and layout (responsiveness and all that stuff).

              These are not hard problems obviously, but getting to 80%-90% is faster than doing it by hand, and in my case that was more than enough.

              With that being said, AI failed at the remaining 10%-20%, with various small visual issues.

            • apercu 5 hours ago

              > These aren't hard problems.

              So why do so many LLMs fail at them?

        • jang07 32 minutes ago

          This problem is always going to exist in these models; they are hungry for good data.

          If there is a focus on improving the model on something, the method to do it is known; it's just about priority.

        • krosaen 4 hours ago

          I'm curious what kind of prompting or context you are providing before asking for a Liquidsoap script - or if you've tried using Cursor and providing a bunch of context with documentation about Liquidsoap as part of it. My guess is that these kinds of things get the models to perform much better. I have seen this work with internal APIs / best practices / patterns.

          • kaoD 4 hours ago

            Yes, I used Cursor and tried providing both the whole Liquidsoap book or the URL to the online reference just in case the book was too large for context or it was triggering some sort of RAG.

            Not successful.

            It's not that it didn't do what I wanted: most of the time it didn't even run. Iterating on the error messages just led to progressively dumber non-solutions and running in circles.

            • senordevnyc an hour ago

              What model?

              • kaoD an hour ago

                I'm on Pro two-week trial so I tried a mix of mainstream premium models (including reasoning ones) + letting Cursor route me to the "best" model or whatever they call it.

            • krosaen 3 hours ago

              Oh man, that's disappointing.

        • prox 8 hours ago

          Absolutely. As soon as they hit that mark where things get really specialized, they start failing a lot. They do generalizations on well-documented areas pretty well. I only use it for getting a second opinion, as it can search through a lot of documents quickly and find me alternative approaches.

          • Filligree 5 hours ago

            They have broad knowledge, a lot of it, and they work fast. That should be a useful combination-

            And indeed it is. Essentially every time I buy something these days, I use Deep Research (Gemini 2.5) to first make a shortlist of options. It’s great at that, and often it also points out issues I wouldn’t have thought about.

            Leave the final decisions to a super slow / smart intelligence (a human), by all means, but as for people who claim that LLMs are useless, I can only conclude that they haven’t tried very hard.

      • 42lux 6 hours ago

        Haven’t seen much progress in base models since GPT-4. Deep thinking and whatever else came in the last year are just bandaids hiding the shortcomings of said models, and were achievable before with the right tooling. The tooling got better; the models themselves are just marginally better.

    • spaceman_2020 28 minutes ago

      I don’t know about physics, but o3 was able to analyze a floor plan and spot ventilation and circulation issues that even my architect brother wasn’t able to spot in a single glance

      Maybe it doesn’t make physicists redundant, but it’s definitely making expertise in more mundane domains way more accessible

    • kenjackson 17 hours ago

      You really had me until the last half of the last sentence.

      • stavros 17 hours ago

        The plural of anecdote is data.

        • tankenmate 10 hours ago

          "The plural of anecdote is data.", this is right up there with "1 + 1 = 3, for sufficiently large values of 1".

          Had an outright genuine guffaw at this one, bravo.

        • rtaylorgarlock 16 hours ago

          Only in the same way that the plural of 'opinion' is 'fact' ;)

          • stavros 16 hours ago

            Except, very literally, data is a collection of single points (ie what we call "anecdotes").

            • rwj 16 hours ago

              Except that the plural of anecdotes is definitely not data, because without controlling for confounding variables and sampling biases, you will get garbage.

              • fhd2 9 hours ago

                Based on my limited understanding of analytics, the data set can be full of biases and anomalies, as long as you find a way to account for them in the analysis, no?

              • scubbo 14 hours ago

                Garbage data is still data, and data (garbage or not) is still more valuable than a single anecdote. Insights can only be distilled from data, by first applying those controls you mentioned.

                • jimmySixDOF 3 hours ago

                  Or you can apply the Bezos/Amazon anecdote about anecdotes:

                  At a managers' meeting there were "user stories" about poor support, but all the KPIs from the call center looked good, so Jeff dialed the number from the meeting's speakerphone, got put on hold, went through the IVR spin cycle, got put on hold again, etc. His takeaway was basically: "if the data and the anecdotes don't match, always default to the customer stories".

            • bcoates 16 hours ago

              No. Wittgenstein's rule-following paradox, the Shannon sampling theorem, the fact that infinitely many polynomials pass through any finite set of points (does that have a name?), etc., are all equivalent at the limit to the idea that no amount of anecdotes-per-se adds up to anything other than coincidence.

              • whatnow37373 11 hours ago

                Without structural assumptions, there is no necessity - only observed regularity. Necessity literally does not exist. You will never find it anywhere.

                Hume figured this out quite a while ago and Kant had an interesting response to it. Think the lack of “necessity” is a problem? Try to find “time” or “space” in the data.

                Data by itself is useless. It’s interesting to see people’s reactions to this.

                • bijant 11 hours ago

                  @whatnow37373 — Three sentences and you’ve done what a semester with Kritik der reinen Vernunft couldn’t: made the Hume-vs-Kant standoff obvious. The idea that “necessity” is just the exhaust of our structural assumptions (and that data, naked, can’t even locate time or space) finally snapped into focus.

                  This is exactly the kind of epistemic lens-polishing that keeps me reloading HN.

                  • tankenmate 10 hours ago

                    This thread has given me the best philosophical chuckle I've had this year. Even after years of being here, HN can still put an unexpected smile on your face.

              • Der_Einzige 5 hours ago

                Anti-realism, indeterminacy, intuitionism, and radical subjectivity are extremely unpopular opinions here. Folks here are too dense to imagine that the cogito is fake bullshit and wrong. You're fighting an extremely uphill battle.

                Paul Feyerabend is spinning in his grave.

              • inimino 14 hours ago

                No, no, no. Each of them gives you information.

                • bcoates 14 hours ago

                  In the formal, information-theory sense, they literally don't, at least not on their own without further constraints (like band-limiting or bounded polynomial degree or the like)

                  • nurettin 11 hours ago

                    They give you relative information. Like word2vec

                  • inimino 14 hours ago

                    ...which you always have.

            • acchow 9 hours ago

              “Plural of anecdote is data” is meant to be tongue-in-cheek.

              Actual data is sampled randomly. Anecdotes very much are not.

            • cess11 11 hours ago

              No. Anecdote, ἀνέκδοτον, is a story that points to some abstract idea, commonly having something to do with morals. The word means 'not given out'/'not-out-given'. Data is the plural of datum, and arrives in English not from Greek, but from Latin. The root is however the same as in anecdote, and datum means 'given'. Saying that 'not-given' and 'collection of givens' are the same is clearly nonsensical.

              A datum has a value and a context in which it was 'given'. What you mean by "points" eludes me, maybe you could elaborate.

        • dymk 15 hours ago

          • wizardforhire 11 hours ago

            Ahhhahhahahaha stavros is so right, but this is such high-level bickering I haven't laughed so hard in a long time. Y’all are awesome! dymk, you deserve a touché for this one.

            The challenge with sharing data at this stage of the game is that the game is rigged in data's favor. So stavros, I hear you.

            To clarify: if we post our data, it’s just going to get fed back into the models, making it even harder to vet iterations as they advance.

        • WhitneyLand 15 hours ago

          The plural of reliable data is not anecdote.

          • tomrod 15 hours ago

            Depends on the data generating process.

            • WhitneyLand 4 hours ago

              Of course, but then you have a system of gathering information with some rigor which is more than merely a collection of anecdotes. That becomes the difference.

      • windowshopping 15 hours ago

        "For what it's worth"? What's wrong with that?

        • Jordan-117 12 hours ago

          That's the last third of the sentence.

    • nopinsight 11 hours ago

      Current models are quite far away from human-level physical reasoning (paper below). An upcoming version of models trained on world simulation will probably do much better.

      PHYBench: Holistic Evaluation of Physical Perception and Reasoning in Large Language Models

      https://phybench-official.github.io/phybench-demo/

      • horhay 6 hours ago

        This is more of a physics math aptitude test. You can already see that the best model at math is already halfway to saturating it. It might not indicate usefulness in actual physical reasoning, or at the very least, it seems like a bit of a stretch.

    • furyofantares 15 hours ago

      I similarly have a small, simple spatial reasoning problem that only reasoning models get right, and not all of them, and which Qwen3 on max reasoning still gets wrong.

      > I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?

      • vunderba 23 minutes ago

        The only thing I don't like about this test is that I prefer test questions that don't have binary responses (e.g. heads or tails) - you can see from the responses in the thread that the LLMs' success rates are all over the map.

        • furyofantares 17 minutes ago

          Yeah, same.

          I had a more complicated prompt that failed much more reliably - instead of a mirror I had another person looking from below. But it had some issues where Claude would often want to refuse on ethical grounds, like I'm working out how to scam people or something, and many reasoning models would yammer on about whether or not the other person was lying to me. So I simplified to this.

          I'd love another simple spatial reasoning problem that's very easy for humans but LLMs struggle with, which does NOT have a binary output.

      • natrys 14 hours ago

        My first try (omitting chain of thought for brevity):

            When you remove the cup and the mirror, you will see tails. Here's the breakdown:
        
            Setup: The coin is inside an upside-down cup on a glass table. The cup blocks direct view of the coin from above and below (assuming the cup's base is opaque).
            Mirror Observation: A mirror is slid under the glass table, reflecting the underside of the coin (the side touching the table). If the mirror shows heads, this means the underside of the coin is heads.
            Coin Orientation: A coin has two sides: heads and tails. If the underside (facing the table) is heads, the top side (facing away from the table) must be tails.
            Result: Removing the cup reveals the coin on the table. Since the top side was tails all along, you will see tails.
        
            Key Insight: The mirror reveals the hidden underside of the coin, but the top side (visible after removing the cup) is its opposite. The mirror’s reflection does not alter the physical orientation of the coin’s sides—it simply reflects the existing state.
        
            Answer: Tails.

        • SamBam 14 hours ago

          Huh, for me it said:

          Answer: You will see the same side of the coin that you saw in the mirror — heads.

          Why?

          The glass table is transparent, so when you look at the coin from below (using a mirror), you're seeing the top side of the coin (the side currently facing up). Mirrors reverse front-to-back, not left-to-right. So the image is flipped in depth, but the orientation of the coin (heads or tails) remains clear. Since the coin hasn't moved during this process, removing the cup and mirror will reveal the exact same face of the coin that was visible via the mirror — which was heads.

          Final Answer: You will see heads.

          • viraptor 11 hours ago

            The question doesn't define which side you're going to look from at the end, so either looking down or up is valid.

            • vunderba 26 minutes ago

              While technically true, one could argue that because they described sliding a mirror underneath the table in order to see from that particular angle, the implication for any reasonable answer is that you are looking down at the table.

        • Rooki 13 hours ago

          Manus just answered this correctly for me in one crack:

          Me: I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?

          Manus: This is a classic riddle! Let's think it through. You placed a coin in a cup, turned it upside down on a glass table, and then slid a mirror underneath the table. Looking into the mirror from below, you saw 'heads'. Since the mirror reflects the side of the coin facing downwards through the glass table, the 'heads' side must be facing down. Therefore, the side facing up, hidden inside the cup, must be 'tails'. When you remove the cup, you will see the side facing up, which is tails.

          • staticman2 3 hours ago

            Except it's wrong about this being a classic riddle.

      • tamat 6 hours ago

        I always feel that if you share a problem here where LLMs fail, it will end up in their training set and they won't fail on that problem anymore, which means future models will still have the same class of errors but you will have lost your ability to detect them.

      • senordevnyc an hour ago

        My favorite part of the genre of “questions an LLM still can’t answer because they’re useless!” is all the people sharing results from different LLMs where they clearly answer the question correctly.

        • furyofantares 15 minutes ago

          I use LLMs extensively and probably should not be bundled into that genre as I've never called LLMs useless.

      • artemisart 5 hours ago

        ChatGPT free gets it right without reasoning mode (still explained some steps) https://chatgpt.com/share/6810bc66-5e78-8001-b984-e4f71ee423...

      • SamBam 14 hours ago

        Yup, it flunked that one.

        I also have a question that LLMs always got wrong until ChatGPT o3, and even then it has a hard time (I just tried it again and it needed to run code to work it out). Qwen3 failed, and every time I asked it to look again at its solution it would notice the error and try to solve it again, failing again:

        > A man wants to cross a river, and he has a cabbage, a goat, a wolf and a lion. If he leaves the goat alone with the cabbage, the goat will eat it. If he leaves the wolf with the goat, the wolf will eat it. And if he leaves the lion with either the wolf or the goat, the lion will eat them. How can he cross the river?

        I gave it a ton of opportunities to notice that the puzzle is unsolvable (with the assumption, which it makes, that this is a standard one-passenger puzzle; if it had pointed out that I didn't say that, I would also have been happy). I kept trying to get it to notice that it failed again and again in the same way and asking it to step back and think about the big picture, and each time it would confidently start again trying to solve it. Eventually I ran out of free messages.
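
        (If anyone wants to check the unsolvability claim mechanically, a brute-force search over the 2 x 16 states makes it obvious. Here's a rough Python sketch - the state encoding and names are mine, assuming the standard one-passenger boat:)

            from collections import deque

            ITEMS = frozenset({"cabbage", "goat", "wolf", "lion"})
            # pairs that must never be left together without the man
            BAD = [{"goat", "cabbage"}, {"wolf", "goat"}, {"lion", "wolf"}, {"lion", "goat"}]

            def safe(bank):
                """A bank left without the man is safe if it contains no forbidden pair."""
                return not any(pair <= bank for pair in BAD)

            def solvable():
                # state = (items still on the start bank, man's side: 0 = start, 1 = far)
                start, goal = (ITEMS, 0), (frozenset(), 1)
                seen, queue = {start}, deque([start])
                while queue:
                    state = queue.popleft()
                    if state == goal:
                        return True
                    left, side = state
                    here = left if side == 0 else ITEMS - left
                    # the man crosses alone or with exactly one item from his bank
                    for cargo in [frozenset()] + [frozenset({x}) for x in here]:
                        new_left = left - cargo if side == 0 else left | cargo
                        unattended = new_left if side == 0 else ITEMS - new_left
                        new_state = (new_left, 1 - side)
                        if safe(unattended) and new_state not in seen:
                            seen.add(new_state)
                            queue.append(new_state)
                return False

            print(solvable())  # prints False: no safe sequence of single-item crossings exists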

        • novaRom 10 hours ago

          4o with thinking:

          By systematic (BFS) search of the entire 32-state space under these rules, one finds no path from the start state to the goal state that stays safe throughout. Thus the puzzle has no solution—there is no way for the man to ferry all four items across without at least one of them being eaten.

        • mavamaarten 9 hours ago

          You go with the cabbage, goat, wolf and lion all together!

          • SamBam 39 minutes ago

            O3 gave me basically that solution. "Below is the shortest safe schedule that really works ‒ but it assumes the boat can hold the man plus two passengers (three beings total). If your version of the puzzle only lets him move one passenger at a time, the puzzle has no solution: at the very first trip he would always leave at least one forbidden pair alone."

        • cyprx 13 hours ago

          I tried Grok 3 with Think and it was also right, with pretty good thinking.

      • Lucasoato 12 hours ago

        I tried with the thinking option on and it runs into some networking errors; if you don’t turn on thinking, it guesses the answer correctly.

        > Summary:

        - Mirror shows: *Heads* → That's the *bottom face* of the coin.

        - So actual top face (visible when cup is removed): *Tails*

        Final answer: *You will see tails.*

      • hmottestad 12 hours ago

        Tried it with o1-pro:

        > You’ll find that the actual face of the coin under the cup is tails. Seeing “heads” in the mirror from underneath indicates that, on top, the coin is really tails‑up.

      • UltraSane 12 hours ago

        Sonnet 3.7 non-reasoning got it right.

        I'll think this through step by step. When you place a coin in a cup and slam it upside-down on a glass table, the coin will be between the table surface and the cup. When you look at the reflection in the mirror beneath the table, you're seeing the bottom side of the coin through the glass. Since the mirror shows heads, you're seeing the heads side of the coin reflected in the mirror. In a mirror, the image is reversed left-to-right, but the "heads" and "tails" sides of a coin would not be flipped in this reflection - you'd still recognize heads as heads. If you take away both the cup and the mirror, you'll see the opposite side of the coin compared to what you saw in the mirror. Since the mirror showed heads, you'll see tails when looking directly at the coin from above the table.

        • Filligree 5 hours ago

          Not reasoning mode, but I struggle to call that “non-reasoning”.

      • viraptor 14 hours ago

        Simple Claude 3.5 with no reasoning gets it right.

        • furyofantares 14 hours ago

          Neat, because 3.7 doesn't.

          • viraptor 11 hours ago

            3.7 works for me too. Of course it's going to depend on temperature a bit.

    • animal531 6 hours ago

      They're all using these tests to determine their worth, but to be honest the results don't translate well to real-world use.

      For example, I tried DeepSeek for code daily over a period of about two months (vs having used ChatGPT before), and its output was terrible. It would produce code with bugs, break existing code when making additions, totally fail at understanding what you're asking, etc.

      • ggregoryarms 6 hours ago

        Exactly. If I'm going to be solving bugs, I'd rather they be my own.

    • laurent_du 4 hours ago

      I do the same with a small math problem and so far only Qwen3 got it right (tested all thinking models). So your mileage may vary, as they say!

    • claiir 4 hours ago

      Same experience with my personal benchmarks. Generally unimpressed with Qwen3.

    • baxtr 12 hours ago

      This reads like a great story with a tragic ending!

    • mromanuk 4 hours ago

      I was expecting a different outcome: that you'd tell us Qwen3 nailed it on the first try.

    • throwaway743 2 hours ago

      My favorite test is "Build an MTG Arena deck in Historic format around <strategy_and_or_cards> in <these_colors>. It must be exactly 60 cards and all cards must be from Arena only. Search all sets/cards currently available on Arena, new and old".

      Many times they’ll include cards that are only available in paper and/or go over the limit, and when asked to correct a mistake they'll continue to make mistakes. But recently I found that Claude is pretty damn good now at fixing its mistakes and building/optimizing decks for Arena. Asked it to make a deck based on insights it gained from my current decklist, and what it came up with was interesting and pretty fun to play.

    • concrete_head 15 hours ago

      Can you please share the problem?

      • stavros 15 hours ago

        I don't really want it added to the training set, but eh. Here you go:

        > Assume I have a 3D printer that's currently printing, and I pause the print. What expends more energy, keeping the hotend at some temperature above room temperature and heating it up the rest of the way when I want to use it, or turning it completely off and then heat it all the way when I need it? Is there an amount of time beyond which the answer varies?

        All LLMs I've tried get it wrong because they assume the hotend cools to room temperature the moment heating stops, though they realize this when asked about it. Qwen didn't realize it, and answered that keeping the hotend heated for a 30-minute pause is better than turning it off and reheating when needed.
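
        For what it's worth, here's the kind of back-of-the-envelope model I have in mind - a lumped-mass, Newtonian-cooling sketch with made-up constants (not measurements of any real hotend), which shows that keeping it hot always costs at least as much as reheating:

            import math

            # crude lumped-thermal-mass model of a hotend (all numbers are guesses,
            # not measurements): Newtonian cooling with time constant tau
            C = 25.0          # heat capacity of the hotend, J/K (assumed)
            tau = 200.0       # cooling time constant, seconds (assumed)
            T_hot, T_room = 210.0, 25.0
            dT = T_hot - T_room

            def energy_keep_warm(t):
                # holding temperature means continuously replacing the loss, P = C*dT/tau
                return (C / tau) * dT * t

            def energy_reheat_after(t):
                # after t seconds off, the hotend has cooled to T_room + dT*exp(-t/tau);
                # reheating only has to make up what was actually lost
                return C * dT * (1 - math.exp(-t / tau))

            for minutes in (1, 5, 10, 30, 60):
                t = minutes * 60
                print(f"{minutes:>2} min pause: keep warm {energy_keep_warm(t):7.0f} J, "
                      f"cool + reheat {energy_reheat_after(t):7.0f} J")

            # keep-warm >= reheat for every t, since t/tau >= 1 - exp(-t/tau);
            # the LLMs' mistake is to compute the reheat case from room temperature
            # regardless of how long the pause actually was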

        • bufferoverflow 11 hours ago

          What kind of answer do you expect? It all depends on the hotend shape and material, temperature differences, how fast air moves in the room, humidity of the air, etc.

          • dtech 5 hours ago

            Keeping something above room temperature will always use more energy than letting it cool down and heating it back up when needed

          • sampo 9 hours ago

            > It all depends on

            No it doesn't.

          • claiir 4 hours ago

            Sounds like the LLM you used when writing this slop comment struggled with the problem too. :>

        • oscord 10 hours ago

          Qwen3-32B did it pretty accurately, it seems. It calculated heat loss over time down to ambient temperature, offered to keep it at a 100 °C standby for short breaks under 10 minutes, and to shut down completely for longer breaks.

          • stavros 8 hours ago

            The correct answer is that it's always better to turn it off, though.

            • Filligree 5 hours ago

              Unless you care about warmup time. LLMs have a habit of throwing in common-sense assumptions that you didn’t tell it to, so you have to be careful of that.

              It’s not a bug. Outside of logic puzzles that’s a very good thing.

              • stavros 4 hours ago

                No, warmup time doesn't change anything, I can simply factor it in.

                It is a bug, because I asked it precisely what I wanted, and it gave the wrong answer. It didn't say anything about warmup time, it was just wrong.

        • pylotlight 15 hours ago

          Some calculation around heat loss and required heat expenditure to reheat per material or something?

          • stavros 15 hours ago

            Yep, except they calculate heat loss and the energy required to keep heating for one case, but assume the hotend is already at room temperature and calculate the energy to heat up from that for the other case, so they wildly overestimate one side of the problem.

          • bcoates 15 hours ago

            Unless I'm missing something, holding it hot is pure waste.

            • markisus 14 hours ago

              Maybe it will help to have a fluid analogy. You have a leaky bucket. What wastes more water, letting all the water leak out and then refilling it from scratch, or keeping it topped up? The answer depends on how bad the leak is vs how long you are required to maintain the bucket level. At least that’s how I interpret this puzzle.

              • herdrick 13 hours ago

                No, you should always wait until the last possible moment to refill the leaky bucket, because the less water in the bucket, the slower it leaks, due to reduced pressure.

              • Torkel 13 hours ago

                Does it depend though?

                The water (heat) leaking out is what you need to add back. As the water level drops (the hotend cools), the leaking slows. So any replenishing means more leakage than you'd otherwise have, which you eventually pay for by adding more water (heat).

            • dTal 8 hours ago

              Allowing it to cool below the phase transition point of the melted plastic will cause it to release latent heat, so there is a theoretically possible corner case where maintaining it hot saves energy. I suspect that you are unlikely to hit this corner case, though I am too lazy to crunch the numbers in this comment.

        • yishanchuan 15 hours ago

          Don't worry, it is really tricky for training.

        • andrewmcwatters 14 hours ago

          Ah! This problem was given to me by my father-in-law in the form of operating pizza ovens in the Midwest during winter. It's a neat, practical one.

    • phonon 17 hours ago

      Qwen3-235B-A22B?

      • stavros 17 hours ago

        Yep, on Qwen chat.

    • arthurcolle 16 hours ago

      Hi, I'm starting an evals company, would love to have you as an advisor!

      • 999900000999 16 hours ago

        Not OP, but what exactly do I need to do?

        I'll do it for cheap if you'll let me work remotely from outside the States.

        • refulgentis 16 hours ago

          I believe they're kidding, playing on "my singular question isn't answered correctly"

  • natrys 19 hours ago

    They have got pretty good documentation too[1]. And it looks like we have day-one support for all major inference stacks, plus so many size choices. Quants are also up, because they have already worked with many community quant makers.

    Not even going into performance, I need to test first. But what a stellar release, just for the attention to all these peripheral details alone. This should be the standard for a major release, instead of whatever Meta was doing with Llama 4 (hope Meta can surprise us at LlamaCon tomorrow, though).

    [1] https://qwen.readthedocs.io/en/latest/

    • Jayakumark 18 hours ago

      Second this. They patched all major LLM frameworks (llama.cpp, transformers, vLLM, SGLang, Ollama, etc.) weeks in advance for Qwen3 support, and released the model weights everywhere at around the same time, like a global movie release. Cannot overstate this level of detail and effort.

      • echelon 17 hours ago

        Alibaba, I have a huge favor to ask if you're listening. You guys very obviously care about the community.

        We need an answer to gpt-image-1. Can you please pair Qwen with Wan? That would literally change the art world forever.

        gpt-image-1 is an almost wholesale replacement of ComfyUI and SD/Flux ControlNets. I can't overstate how big of a deal it is. As such, OpenAI has leapt ahead and threatens to start capturing more of the market for AI images and video. The expense of designing and training a multimodal model presents challenges to the open source community, and it's unlikely that Black Forest Labs or an open effort can do it. It's really a place where only Alibaba can shine.

        If we get an open weights multimodal image gen model that we can fine tune, then it's game over - open models will be 100% the future. If not, then the giants are going to start controlling media creation. It'll be the domain of OpenAI and Google alone. Firing a salvo here will keep media creation highly competitive.

        So please, pretty please work on an LLM/Diffusion multimodal image gen model. It would change the world instantly.

        And keep up the great work with Wan Video! It's easily going to surpass Kling and Veo. The controllability is already well worth the tradeoffs.

        • bergheim 15 hours ago

          > That would literally change the art world forever.

          In what world? Some small percentage up or who knows, and _that_ revolutionized art? Not a few years ago, but now, this.

          Wow.

          • horhay 6 hours ago

            It's pretty much expected that everything is "world shaking" in the modern-day tech world. Whether it's true or not is a different thing every time. I'm fairly certain even the 4o image gen model has shown weaknesses that other approaches didn't, but you know, newer means absolutely better and will change the world.

          • Tepix 11 hours ago

            Forever, as in for a few weeks… ;-)

            • Imustaskforhelp 6 hours ago

              Oh boy, I had a smirk after reading this comment because it's partially true.

              When DeepSeek R1 came out, it lit the markets on fire (at least the American ones), and many thought it would be the best forever / for a long time.

              Then came Grok 3, then Claude 3.7, then Gemini 2.5 Pro.

              Now people comment that Gemini 2.5 Pro is going to stay on top forever. When DeepSeek came out, there were articles like this on HN: "Of course, open source is the future of AI." When Gemini 2.5 Pro came out, there were articles like this: "Of course, Google builds its own GPUs, and they had DeepMind, which specialized in reinforcement learning; of course they were going to get to the top."

              We as humans are just trying to justify why a certain company built something more powerful than other companies. But the fact is that AI is still a black box. People were literally saying for Llama 4:

              "I think Llama 4 is going to be the best open source model, Zuck doesn't like to lose."

              Nothing is forever; it's all opinions and current benchmarks. We want the best thing on the benchmark, and then we want an even better thing, and we justify why / how that better thing was built.

              Every time I saw a new model rise, people said it would be on top forever.

              And every time, something new beat it, and people forgot that the last time somebody had said "forever".

              So yeah: DeepSeek R1 -> Grok 3 -> Claude 3.7 -> Gemini 2.5 Pro (current state of the art?), and each transition was just a matter of weeks, IIRC.

              Your comment is literally a fact that people in AI forget.

        • Imustaskforhelp 6 hours ago

          I don't know, AI image quality has gotten good, but it's still slop. We are forgetting what makes art, well, art.

          I am not even an artist, but I see people using AI for photos, and they were so horrendous pre ChatGPT image gen that I had literally told one person: if you are going to use AI images, you might as well use ChatGPT for it.

          That said, I would also like to get something like ChatGPT's image-generation quality from an open source model. But I think what we are really asking for is cheap, free labour from the Alibaba team.

          We want them / anyone to create an open source tool so that anyone can use it, thus reducing OpenAI's monopoly. But that is not what most people are wishing for; they are wishing for this to lead to a reduction in price, so that they can use it either on their own hardware for very little cost, or via providers on OpenRouter and the like for cheap image generation with good quality.

          Earlier, people used to pay artists; then people started using stock photos; then AI image gen came; and now that ChatGPT has made AI images pretty good, people don't even want to pay ChatGPT that much money, they want to use it for literal cents.

          Not sure how long this trend will continue. When DeepSeek R1 launched, I remember people being happy that it was open source, but 99% of people couldn't self-host it (like I can't, because of its requirements) and we were still using the API. Yet just because it was open source, it cut prices dramatically, forcing others to reduce theirs as well, really making a cultural pricing shift in AI.

          We are in this really weird spot as humans. We want to earn a lot of money, yet we don't want to pay anybody money / we want free labour from open source, which just disincentivizes open source, because now people like to think of it as free labour, and they might be right.

    • kadushka 19 hours ago

      > they have already worked with many community quant makers

      I’m curious, who are the community quant makers?

    • dkga 19 hours ago

      This cannot be stressed enough.

    • sroussey 19 hours ago

      Well, the link to huggingface is broken at the moment.

  • Tepix 25 minutes ago

    I did a little variant of the classic river boat animal problem:

    "so you're on one side of the river with 2 wolves and 4 sheep and a boat that can carry 2 entities. The wolves eat the sheep when they are left alone with the sheep. How do you get them all across the river?"

    ChatGPT (free + reasoning) came up with a solution with 11 moves; it didn't think about going back empty.

    Qwen3 figured out the optimal solution with 7 moves and summarized it nicely:

        First, move the wolves to the right to free the left shore of predators.
        Then shuttle the sheep in pairs, returning with wolves when necessary to keep both sides safe.
        Finally, return the wolves to the right when all sheep are across.

  • simonw 12 hours ago

    As is now traditional for new LLM releases, I used Qwen 3 (32B, run via Ollama on a Mac) to summarize this Hacker News conversation about itself - run at the point when it hit 112 comments.

    The results were kind of fascinating, because it appeared to confuse my system prompt telling it to summarize the conversation with the various questions asked in the post itself, which it tried to answer.

    I don't think it did a great job of the task, but it's still interesting to see its "thinking" process here: https://gist.github.com/simonw/313cec720dc4690b1520e5be3c944...

    • manmal 7 hours ago

      One person on Reddit claimed the first unsloth release was buggy - if you used that, maybe you can retry with the fixed version?

      • hobofan 6 hours ago

        I think this was only regarding the chat template that was provided in the metadata (this was also broken in the official release). However, I doubt that this would impact this test, as most inference frameworks will just error if provided with a broken template.

      • daemonologist 2 hours ago

        It was - Unsloth put up a message on their HF for a while to only use the Q6 and larger. I'm not sure to what extent this affected prediction accuracy though.

    • hbbio 9 hours ago

      I also have a benchmark that I'm using for my nanoagent[1] controllers.

      Qwen3 is impressive in some aspects but it thinks too much!

      Qwen3-0.6b is showing even better performance than Llama 3.2 3b... but it is 6x slower.

      The results are similar to Gemma3 4b, but the latter is 5x faster on Apple M3 hardware. So maybe, the utility is to run better models in cases where memory is the limiting factor, such as Nvidia GPUs?

      [1] github.com/hbbio/nanoagent

      • phh 8 hours ago

        What's cool with those models is that you can tweak the thinking process, all the way down to "no thinking". It's maybe not available in your inference engine though

        • hbbio 8 hours ago

          Feel free to add a PR :)

          What is the parameter?

          • ammo1662 7 hours ago

            Just add "/no_think" in your prompt.

            https://qwenlm.github.io/blog/qwen3/#advanced-usages

            • Casteil 3 hours ago

              FWIW, their readme states /nothink - and that's what works for me.

              >/think and /nothink instructions: Use those words in the system or user message to signify whether Qwen3 should think. In multi-turn conversations, the latest instruction is followed.

              https://github.com/QwenLM/Qwen3/blob/main/README.md

            • simonw 3 hours ago

              Hah, and now we can't summarize this thread any more because your comment will turn thinking off!

            • hbbio 4 hours ago

              Thanks!

              Turns out "just" is not the word here. My benchmark is made using conversations, where there is a SystemMessage and some structured content in a UserMessage.

              But Qwen3 seems to ignore /no_think when appended to the SystemMessage. I can try to add it to the structured content but that will be a bit weird. Would have been better to have a "think" parameter like temperature.
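
              If I'm reading the Qwen3 docs right, there is a hard switch at the chat-template level rather than in the prompt text - via Transformers it would look something like the sketch below (untested on my side, since nanoagent doesn't go through Transformers, so treat the details as assumptions):

                  from transformers import AutoTokenizer

                  # sketch based on the Qwen3 docs: the chat template accepts an
                  # enable_thinking flag, a hard switch (unlike the /no_think soft
                  # switch placed inside the prompt text)
                  tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

                  messages = [
                      {"role": "system", "content": "You are a terse assistant."},
                      {"role": "user", "content": "Classify this ticket: 'app crashes on login'"},
                  ]

                  prompt = tokenizer.apply_chat_template(
                      messages,
                      tokenize=False,
                      add_generation_prompt=True,
                      enable_thinking=False,  # no <think> block is generated
                  )
                  print(prompt)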

    • claiir 4 hours ago

      o1-preview had this same issue too! You’d give it a long conversation to summarize, and if the conversation ended with a question, o1-preview would answer that, completely ignoring your instructions.

      Generally unimpressed with Qwen3 from my own personal set of problems.

    • anentropic 4 hours ago

      This sounds like a task where you wouldn't want to use the 'thinking' mode

    • littlestymaar 11 hours ago

      Aren't all Qwen models known to perform poorly with system prompts, though?

      • simonw 11 hours ago

        I hadn't heard that, but it would certainly explain why the model made a mess of this task.

        Tried it again like this, using a regular prompt rather than a system prompt (with the https://github.com/simonw/llm-hacker-news plugin for the hn: prefix):

          llm -f hn:43825900 \
          'Summarize the themes of the opinions expressed here.
          For each theme, output a markdown header.
          Include direct "quotations" (with author attribution) where appropriate.
          You MUST quote directly from users when crediting them, with double quotes.
          Fix HTML entities. Output markdown. Go long. Include a section of quotes that illustrate opinions uncommon in the rest of the piece' \
          -m qwen3:32b
        
        This worked much better! https://gist.github.com/simonw/3b7dbb2432814ebc8615304756395...

        • croemer 4 hours ago

          Seems to truncate the input to only 2048 input tokens

          • simonw 3 hours ago

            Oops! That's an Ollama default setting. You can fix that by increasing the num_ctx setting - I'll try running this again.

            The num_predict setting controls output size.
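
            For anyone else hitting this, here's a sketch of one way to bump the context window when talking to Ollama's API directly - the 32768 here is just an illustrative value, pick what fits your RAM:

                import requests

                # minimal sketch: call Ollama's REST API and override num_ctx, which
                # otherwise defaults to a small context window; num_predict caps output
                resp = requests.post(
                    "http://localhost:11434/api/generate",
                    json={
                        "model": "qwen3:32b",
                        "prompt": "Summarize the themes of the opinions expressed here.",
                        "stream": False,
                        "options": {"num_ctx": 32768, "num_predict": 4096},
                    },
                    timeout=600,
                )
                print(resp.json()["response"])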

        • littlestymaar 8 hours ago

          Wow, it hallucinates quotes a lot!

      • notfromhere 11 hours ago

        Qwen does decently; DeepSeek doesn't like system prompts. For Qwen you really have to play with the parameters.

  • simonw 19 hours ago

    Something that interests me about the Qwen and DeepSeek models is that they have presumably been trained to fit the worldview enforced by the CCP, for things like avoiding talking about Tiananmen Square - but we've had access to a range of Qwen/DeepSeek models for well over a year at this point and to my knowledge this assumed bias hasn't actually resulted in any documented problems from people using the models.

    Aside from https://huggingface.co/blog/leonardlin/chinese-llm-censorshi... I haven't seen a great deal of research into this.

    Has this turned out to be less of an issue for practical applications than was initially expected? Are the models just not censored in the way that we might expect?

    • CSMastermind 17 hours ago

      Right now these models have less censorship than their US counterparts.

      With that said, they're in a fight for dominance so censoring now would be foolish. If they win and establish a monopoly then the screws will start to turn.

      • sisve 9 hours ago

        What type of content is removed from the US counterparts? Porn, creation of chemical weapons? But not historical events?

        • maybeThrwaway 8 hours ago

          It differs from engine to engine: Google's latest, for example, put a few minorities into the picture when asked to create images of Nazis. Bing used to be able to create images of a Norwegian birthday party in the '90s (every single kid was white), but that disappeared a few months ago.

          Or you can try asking them about the grooming scandal in the UK. I haven't tried, but I have an idea.

          It is not as hilariously bad as I expected; for example, you can (or could, at least) get relatively nuanced answers about the Middle East, but some of the things they refuse to talk about just stump me.

          • dlachausse 3 hours ago

            Qwen refuses to do anything if you mention anything the CCP has deemed forbidden. Ask it about Tiananmen Square or the Uyghurs for example. Lack of censorship is not a strength of Chinese LLMs.

    • horacemorace 18 hours ago

      In my limited experience, models like Llama and Gemma are far more censored than Qwen and Deepseek.

      • neves 16 hours ago

        Try to ask any model about Israel and Hamas

        • albumen 8 hours ago

          ChatGPT 4o just gave me a reasonable summary of Hamas' founding, the current conflict, and the international response criticising the humanitarian crisis.

    • eunos 19 hours ago

      The refusal to talk about those topics is more frontend-level censorship, I think. It doesn't censor via the API.

      • johanyc 15 hours ago

        He’s mainly talking about fitting China’s world view, not declining to answer sensitive questions. Here’s the response from the API to the question "is Taiwan a country":

        Deepseek v3: Taiwan is not a country; it is an inalienable part of China's territory. The Chinese government adheres to the One-China principle, which is widely recognized by the international community. (omitted)

        Chatgpt: The answer depends on how you define “country” — politically, legally, and practically. In practice: Taiwan functions like a country. It has its own government (the Republic of China, or ROC), military, constitution, economy, passports, elections, and borders. (omitted)

        Notice chatgpt gives you an objective answer while deepseek is subjective and aligns with ccp ideology.

        • jingyibo123 10 hours ago

          I guess both are "factual", but both are "biased", or 'selective'.

          The first part of ChatGPT's answer is correct:

          > The answer depends on how you define “country” — politically, legally, and practically

          But ChatGPT only answers the "practical" part, while DeepSeek only answers the "political" part.

        • pxc 14 hours ago

          When I tried to reproduce this, DeepSeek refused to answer the question.

          • Me1000 10 hours ago

            There’s an important distinction between the open weight model itself and the deepseek app. The hosted model has a filter, the open weight does not.

      • nyclounge 17 hours ago

        This is NOT true, at least for the 1.5B model on my local machine. It blocks answers when used offline. Perplexity has an uncensored version, but I don't think they're open about how they did it.

        • yawnxyz 16 hours ago

          Here's a blog post on Perplexity's R1 1776, which they post-trained

          https://www.perplexity.ai/hub/blog/open-sourcing-r1-1776

        • theturtletalks 17 hours ago

          Didn't know Perplexity cracked R1's censorship but it is completely uncensored. Anyone can try even without an account: https://labs.perplexity.ai/. HuggingFace also was working on Open R1 but not sure how far they got.

          • ranyume 15 hours ago

            >completely uncensored

            Sorry, no. It's not.

            It can't write about anything "problematic".

            Go ahead and ask it to write a sexually explicit story, or ask it how to make mustard gas. These kinds of queries are not censored in the standard DeepSeek R1 API. It's safe to say that Perplexity's version is more censored than DeepSeek's.

      • refulgentis 18 hours ago

        ^ This. There was also a lot of confusion over DeepSeek when it was released: the reasoning models were built on other base models, inter alia Qwen (Chinese) and Llama (US), so one's mileage varied significantly.

    • janalsncm 14 hours ago

      I would imagine Tiananmen Square and Xinjiang come up a lot less in everyday conversation than pundits said.

    • magic_hamster 3 hours ago

      Details and info on events like Tiananmen Square are probably a very niche use case for most users. Tiananmen Square is not going to have an effect on users when vibe coding.

    • OtherShrezzing 10 hours ago

      >Has this turned out to be less of an issue for practical applications than was initially expected? Are the models just not censored in the way that we might expect?

      I think it's the case that only a handful of very loud commentators were thinking about this problem, and they were given a much broader platform to discuss it than was reasonable. A problem baked into the discussion around AI, safety, censorship, and alignment is that it's dominated by a fairly small number of close friends who all loudly share the same approximate set of opinions.

    • minimaxir 18 hours ago

      DeepSeek R1 was a massive outlier in terms of media attention (a free model that can potentially kill OpenAI!), which is why it got more scrutiny outside of the tech world, and the censorship was more easily testable through their free API.

      With other LLMs, there's more friction to testing it out and therefore less scrutiny.

    • Havoc 18 hours ago

      It’s a complete non-issue. Especially with open weights.

      On their online platform I’ve hit a political block exactly once in months of use. I was asking it something about revolutions in various countries and it noped out of that.

      I’d prefer a model that doesn’t have this issue at all, but if I have a choice between a good Apache-licensed Chinese one and a less good, say, Meta-licensed one, I’ll take the Chinese one every time. I just don’t ask LLMs enough politically relevant questions for it to matter.

      To be fair, maybe that take is the LLM equivalent of "I have nothing to hide" on surveillance.

    • rfoo 18 hours ago

      The model does have some bias built in, but it's lighter than expected. From what I heard, this is (sort of) a deliberate choice: just overfit whatever bullshit worldview benchmark the regulator demands your model pass. Don't actually try to be better at it.

      For a public chatbot service, all Chinese vendors have their own censorship tech (or just use censorship-as-a-service from a cloud; all major clouds in China have one), because ultimately you need it for UGC anyway. So why not just censor LLM output with the same stack, too.

    • johanyc 15 hours ago

      I think that depends on what you do with the API. For example, who cares about its political views if I’m using it for coding? IMO politics is a minor portion of LLM use.

      • PeterStuer 9 hours ago

        Try asking it for emacs vs vi :D

    • SubiculumCode 11 hours ago

      What I wonder about is whether these models have some secret triggers for particular malicious behaviors, or if that's even possible. Like, if you provide a code base with hints that the code involves military or government networks, would the model try to sneak malicious but obfuscated code into its output?

    • pbmango 18 hours ago

      It is also possible that this "world view tuning" was just the manifestation of how these models gained public attention. Whether intentional or not, seeing the Tiananmen Square reposts across all social feeds may have done more to spread awareness of these models' technical merits than the technical merits themselves would have. This is certainly true for how consumers learned about free DeepSeek, and it fits perfectly into how new AI releases are turned into high-click-through social media posts.

      • refulgentis 18 hours ago

        I'm curious if there's any data behind that conclusion; it's hard for me to get to "They did the censorship training on DeepSeek because they knew consumers would love free DeepSeek after seeing screenshots of Tiananmen censorship".

        (The steelman here, of course, is "the screenshots drove buzz, which drove usage!", but it's sort of a steel thread in context; we'd still need to pull in a time machine and a very odd, unmet US consumer demand for models that toe the CCP line.)

        • pbmango 17 hours ago

          > Whether intentional or not

          I am not claiming it was intentional, but it certainly magnified the media attention. Maybe luck and not 4d chess.

  • sega_sai 19 hours ago

    With all the different open-weight models appearing, is there some way of figuring out what model would work with sensible speed (> X tok/s) on a standard desktop GPU ?

    I.e. I have a Quadro RTX 4000 with 8 GB VRAM, and seeing all the models https://ollama.com/search here with all the different sizes, I am absolutely at a loss as to which models at which sizes would be fast enough. There is no point in my downloading the latest, biggest model if it will output 1 tok/min, but I also don't want to download the smallest model if I can avoid it.

    Any advice ?

    • PhilippGille 6 hours ago

      Mozilla started LocalScore for exactly what you're looking for: https://www.localscore.ai/

      • sireat 5 hours ago

        Fascinating that 5090 is often close but not quite as good as 4090 and RTX 6000 ADA. Perhaps it indicates that 5090 has those infamous missing computational units?

        3090Ti seems to hold up quite well.

    • GodelNumbering 18 hours ago

      There are a lot of variables here, such as your hardware's memory bandwidth, the speed at which it processes tensors, etc.

      A basic thing to remember: any given dense model requires roughly X GB of memory at 8-bit quantization, where X is the number of params in billions (of course I am simplifying a little by not counting context size). Quantization is just the 'precision' of the model; 8-bit generally works really well. Generally speaking, it's not worth even bothering with models whose size at your chosen quantization exceeds your hardware's VRAM. Some people try to get around it by using 4-bit quant, trading some precision for half the VRAM size. YMMV depending on use-case.
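
      As a rough rule of thumb (a sketch only - it ignores KV cache and activations beyond a crude fudge factor, and the model names below are just examples):

          def estimate_vram_gb(params_billion: float, bits: int, overhead: float = 1.2) -> float:
              """Very rough VRAM estimate: weights only, times a fudge factor
              for activations/KV cache. Not exact - just for shortlisting models."""
              weight_gb = params_billion * bits / 8  # 1B params at 8-bit ~= 1 GB
              return weight_gb * overhead

          # e.g. what plausibly fits in an 8 GB card:
          for name, b in [("Qwen3-0.6B", 0.6), ("Qwen3-4B", 4), ("Qwen3-8B", 8), ("Qwen3-14B", 14)]:
              for bits in (8, 4):
                  print(f"{name:11s} @ {bits}-bit ~ {estimate_vram_gb(b, bits):5.1f} GB")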

      • refulgentis 18 hours ago

        4 bit is absolutely fine.

        I know this is crazy to hear, because the big-iron folks still debate 16 vs 32, and 8 vs 16 is near verboten in public conversation.

        I contribute to llama.cpp and have seen many, many efforts to measure the evaluation performance of various quants, and no matter which way it was sliced (ranging from subjective volunteers doing A/B voting on responses over months, to objective perplexity loss), Q4 is indistinguishable from the original.

        • brigade 16 hours ago

          It's incredibly niche, but Gemma 3 27b can recognize a number of popular video game characters even in novel fanart (I was a little surprised at that when messing around with its vision). But the Q4 quants, even with QAT, are very likely to name a random wrong character from within the same franchise, even when Q8 quants name the correct character.

          Niche of a niche, but just kind of interesting how the quantization jostles the name recall.

        • magicalhippo 15 hours ago

          > 4 bit is absolutely fine.

          For larger models.

          For smaller models, about 12B and below, there is a very noticeable degradation.

          At least that's my experience generating answers to the same questions across several local models like Llama 3.2, Granite 3.1, Gemma2 etc and comparing Q4 against Q8 for each.

          The smaller Q4 variants can be quite useful, but they consistently struggle more with prompt adherence and recollection especially.

          Like if you tell it to generate some code without explaining the generated code, a smaller Q4 is significantly more likely to explain the code regardless, compared to Q8 or better.

        • Grimblewald 13 hours ago

          4 bit is fine conditional to the task. This condition is related to the level of nuance in understanding required for the response to be sensible.

          All the models I have explored seem to capture nuance of understanding in the floats. It makes sense, as initially a model will regress to the mean and slowly lock in lower and lower significant figures to capture subtleties and natural variance in things.

          So, the further you stray from average conversation, the worse a model will do, as a function of its quantisation.

          So, if you don't need nuance, subtlety, etc., say for a document summary bot for technical things, 4 bit might genuinely be fine. However, if you want something that can deal with highly subjective material where answers need to be tailored to a user, using in-context learning of user preferences etc., then 4 bit tends to struggle badly unless the user aligns closely with the training distribution's mean.

        • mmoskal 17 hours ago

          Just for some calibration: approx. no one runs 32 bit for LLMs on any sort of iron, big or otherwise. Some models (eg DeepSeek V3, and derivatives like R1) are native FP8. FP8 was also common for llama3 405b serving.

        • whimsicalism 16 hours ago

          > 8 vs 16 is near verboten in public conversation.

          i mean, deepseek is fp8

          • CamperBob2 12 hours ago

            Not only that, but the 1.58 bit Unsloth dynamic quant is uncannily powerful.

    • frainfreeze 18 hours ago

      Bartowski quants on Hugging Face are an excellent starting point in your case. Pretty much every upload he does has a note on how to pick a model VRAM-wise. If you follow the recommendations you'll have a good user experience. Then the next step is the localllama subreddit. Once you build basic knowledge and a feeling for things, you will more easily gauge what will work for your setup. There is no out-of-the-box calculator.

    • Spooky23 18 hours ago

      Depends what fast means.

      I’ve run llama and gemma3 on a base MacMini and it’s pretty decent for text processing. It has 16GB ram though which is mostly used by the GPU with inference. You need more juice for image stuff.

      My son’s gaming box has a 4070 and it’s about 25% faster the last time I compared.

      The mini is so cheap it’s worth trying out - you always find another use for it. Also the M4 sips power and is silent.

    • estsauver 6 hours ago

      I don't think this is all that well documented anywhere. I've had this problem too and I don't think anyone has tried to record something like a decent benchmark of token inference/speed for a few different models. I'm going to start doing it while playing around with settings a bit. Here's some results on my (big!) M4 Mac Pro with Gemma 3, I'm still downloading Qwen3 but will update when it lands.

      https://gist.github.com/estsauver/a70c929398479f3166f3d69bce...

      Here's a video of the second config run I ran so you can see both all of the parameters as I have them configured and a qualitative experience.

      https://screen.studio/share/4VUt6r1c

    • rahimnathwani 17 hours ago

      With 8GB VRAM, I would try this one first:

      https://ollama.com/library/qwen3:8b-q4_K_M

      For fast inference, you want a model that will fit in VRAM, so that none of the layers need to be offloaded to the CPU.

    • archerx 4 hours ago

      On Hugging Face, if you tell it which GPU you have, the models that will run decently will have a green icon.

    • hedgehog 12 hours ago

      Fast enough depends what you are doing. Models down around 8B params will fit on the card, Ollama can spill out though so if you need more quality and can tolerate the latency bigger models like the 30B MoE might be good. I don't have much experience with Qwen3 but Qwen2.5 coder 7b and Gemma3 27b are examples of those two paths that I've used a fair amount.

    • xiphias2 17 hours ago

      When I tested Qwen with different sizes / quants, generally the 8-bit quant versions had the best quality for the same speed.

      4-bit was "fine", but a smaller 8-bit version beat it in quality for the same speed

    • wmf 18 hours ago

      Speed should be proportional to the number of active parameters, so all 7B Q4 models should have similar performance.
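
      As a napkin sketch of why (decode is mostly memory-bandwidth bound, so each generated token requires streaming the active weights once; the 400 GB/s figure below is an assumed example bandwidth, not a measured one, and real throughput lands below this ceiling):

          # Napkin upper bound: tok/s ~= memory bandwidth / bytes of active weights per token.
          # 400 GB/s is an assumed example bandwidth, not a measured figure.
          def max_tok_per_s(active_params_b: float, bits: int, bw_gb_s: float = 400) -> float:
              return bw_gb_s / (active_params_b * bits / 8)

          print(f"7B dense @ q4: ~{max_tok_per_s(7, 4):.0f} tok/s ceiling")
          print(f"3B active (30B MoE) @ q4: ~{max_tok_per_s(3, 4):.0f} tok/s ceiling")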

    • jack_pp 18 hours ago

      Use the free chatgpt to help you write a script to download them all and test speed

    • colechristensen 18 hours ago

      >is there some way of figuring out what model would work with sensible speed (> X tok/s) on a standard desktop GPU ?

      Not simply, no.

      But start with parameters close to but less than VRAM and decide if performance is satisfactory and move from there. There are various methods to sacrifice quality by quantizing models or not loading the entire model into VRAM to get slower inference.

    • refulgentis 18 hours ago

      i desperately want a method to approximate this and unfortunately it's intractable in practice.

      Which may make it sound like it's more complicated when it should be back-of-the-napkin, but there are just too many nuances to perf.

      Really generally, at this point I expect 4B at 10 tkn/s on a smartphone with 8GB of RAM from 2 years ago. I'd expect you'd get somewhat similar; my guess would be 6 tkn/s at 4B (assuming the rest of the HW is 2018 era and you'll rely on GPU inference and RAM).

    • Fokamul 2 hours ago

      8 GB VRAM for an LLM, are you sure? I thought you'd need way more, 20 GB++. Nvidia doesn't want peasants running their own LLMs locally; 90% of their business is supporting the AI bubble with a lot of GPU datacenters.

  • WhitneyLand 15 hours ago

    China is doing a great job raising doubt about any lead the major US labs may still have. This is solid progress across the board.

    The new battlefront may be to take reasoning to the level of abstraction and creativity to handle math problems without a numerical answer (for ex: https://arxiv.org/pdf/2503.21934).

    I suspect that kind of ability will generalize well to other areas and be a significant step toward human level thinking.

    • janalsncm 15 hours ago

      No kidding. I’ve been playing around with Hunyuan 2.5 that just came out and it’s kind of amazing.

      • hangonhn 16 minutes ago

        What do you use to run it? Can it be run locally on a Macbook Pro or something like RTX 5070 TI?

      • Alifatisk 10 hours ago

        Where do you play with it? What shocks you about it? Anything particular?

  • dylanjcastillo 19 hours ago

    I’m most excited about Qwen-30B-A3B. Seems like a good choice for offline/local-only coding assistants.

    Until now I found that open weight models were either not as good as their proprietary counterparts or too slow to run locally. This looks like a good balance.

    • kristianp 10 hours ago

      It would be interesting to try, but the dense 32B model scores 50.2 on the Aider benchmark while no Aider score is published for the 30B-A3B, so it may be poor.

      • estsauver 9 hours ago

        Is that Qwen 2.5 or Qwen 3? I don't see a qwen 3 on the aider benchmark here yet: https://aider.chat/docs/leaderboards/

        • manmal 7 hours ago

          The aider score mentioned in GP was published by Alibaba themselves, and is not yet on aider's leaderboard. The aider team will probably do their own tests and maybe come up with a different score.

        • aitchnyu 8 hours ago

          As a human who asks AI to edit up to 50 SLOC at a time, is there value in models which score less than 50%? I'm using `gemini-2.0-flash-001` though.

    • htsh 18 hours ago

      curious, why the 30b MoE over the 32b dense for local coding?

      I do not know much about the benchmarks but the two coding ones look similar.

      • Casteil 18 hours ago

        The MoE version with 3b active parameters will run significantly faster (tokens/second) on the same hardware, by about an order of magnitude (i.e. ~4t/s vs ~40t/s)

        • genpfault 17 hours ago

          > The MoE version with 3b active parameters

          ~34 tok/s on a Radeon RX 7900 XTX under today's Debian 13.

          • tgtweak 14 hours ago

            And vmem use?

            • genpfault 3 hours ago

              ~18.6 GiB, according to nvtop.

              ollama 0.6.6 invoked with:

                  # server
                  OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
              
                  # client
                  ollama run --verbose qwen3:30b-a3b
              
              ~19.8 GiB with:

                  /set parameter num_ctx 32768
              • tgtweak 2 hours ago

                Very nice, should run nicely on a 3090 as well.

                TY for this.

                update: wow, it's quite fast - 70-80t/s on LM Studio with a few other applications using GPU.

    • esafak 19 hours ago

      Could this variant be run on a CPU?

  • croemer 16 hours ago

    The benchmark results are so incredibly good they are hard to believe. A 30B model that's competitive with Gemini 2.5 Pro and way better than Gemma 27B?

    Update: I tested "ollama run qwen3:30b" (the MoE) locally, and while it thought a lot, it wasn't that smart. After 3 follow-up questions it ended up in an infinite loop.

    I just tried again, and it ended up in an infinite loop immediately, with just a single prompt, no follow-up: "Write a Python script to build a Fitch parsimony tree by stepwise addition. Take a Fasta alignment as input and produce a nwk string as output."

    Update 2: The dense one "ollama run qwen3:32b" is much better (albeit slower of course). It still keeps on thinking for what feels like forever until it misremembers the initial prompt.

    • coder543 14 hours ago

      Another thing you’re running into is the context window. Ollama sets a low context window by default, like 4096 tokens IIRC. The reasoning process can easily take more than that, at which point it is forgetting most of its reasoning and any prior messages, and it can get stuck in loops. The solution is to raise the context window to something reasonable, such as 32k.

      Instead of this very high latency remote debugging process with strangers on the internet, you could just try out properly configured models on the hosted Qwen Chat. Obviously the privacy implications are different, but running models locally is still a fiddly thing even if it is easier than it used to be, and configuration errors are often mistaken for bad model performance. If the models meet your expectations in a properly configured cloud environment, then you can put in the effort to figure out local model hosting.
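
      If you do want to script against a local Ollama instance, here's a minimal sketch of overriding the context window per request through its REST API (assuming the default localhost:11434 server; num_ctx is the relevant option, and the model tag is just an example):

          import requests

          # Ask a local Ollama server for a completion with a 32k context window
          # instead of the small default. Assumes Ollama is serving on localhost:11434.
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "qwen3:30b-a3b",
                  "prompt": "Summarize the tradeoffs of MoE vs dense models in three bullets.",
                  "stream": False,
                  "options": {"num_ctx": 32768},  # raise the context window for this request
              },
          )
          print(resp.json()["response"])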

      • paradite 10 hours ago

        I can't believe Ollama hasn't fixed the context window limit yet.

        I wrote a step-by-step guide on how to setup Ollama with larger context length a while ago: https://prompt.16x.engineer/guide/ollama

        TLDR

          ollama run deepseek-r1:14b
          /set parameter num_ctx 8192
          /save deepseek-r1:14b-8k
          ollama serve
    • anon373839 14 hours ago

      Please check your num_ctx setting. Ollama defaults to a 2048 context length and silently truncates the prompt to fit. Maddening.

    • rahimnathwani 15 hours ago

      You tried a 4-bit quantized version, not the original.

      qwen3:30b has the same checksum as https://ollama.com/library/qwen3:30b-a3b-q4_K_M

  • foundry27 18 hours ago

    I find the situation the big LLM players find themselves in quite ironic. Sam Altman promised (edit: under duress, from a twitter poll gone wrong) to release an open source model at the level of o3-mini to catch up to the perceived OSS supremacy of Deepseek/Qwen. Now Qwen3’s release makes a model that’s “only” equivalent to o3-mini effectively dead on arrival, both socially and economically.

    • krackers 18 hours ago

      I don't think they will ever do an open-source release, because then the curtains would be pulled back and people would see that they're not actually state of the art. Llama 4 already sort of tanked Meta's reputation; if OpenAI did that it'd decimate the value of their company.

      If they do open-source something, I expect them to release some existing model (maybe something useless like gpt-3.5) rather than providing something new.

    • Havoc 4 hours ago

      OAI in general seems to be treading water at best.

      Still topping a lot of leaderboards, but with a severely reduced rep: chaotic naming, the "ClosedAI" image, being undercut on pricing, competitors with much better licensing/open weights, Stargate talk about Europe, Claude being seen as superior for coding, etc. Nothing end-of-the-world, but a lot of lukewarm misses.

      If I were an investor with financials that basically require magical returns from them to justify the valuations, I'd be worried.

      • laborcontract 2 hours ago

        OpenAI has the business development side entirely fleshed out and that’s not nothing. They’ve done a lot of turns tuning models for things their customers use.

    • aoeusnth1 15 hours ago

      I have a hard time believing that he hadn't already made up his mind to make an open source model when he posted the poll in the first place

    • buyucu 10 hours ago

      ClosedAI is not doing a model release. It was just a marketing gimmick.

  • minimaxir 19 hours ago

    A 0.6B LLM with a 32k context window is interesting, even if it was trained using only distillation (which is not ideal as it misses nuance). That would be a fun base model for fine-tuning.

    Out of all the Qwen3 models on Hugging Face, it's the most downloaded/hearted. https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2...

    • jasonjmcghee 19 hours ago

      these 0.5 and 0.6B models etc. are _fantastic_ for using as a draft model in speculative decoding. lm studio makes this super easy to do - i have it on like every model i play with now

      my concern on these models though unfortunately is that it seems like architectures vary a bit, so idk how it'll work

      • mmoskal 17 hours ago

        Spec decoding only depends on the tokenizer used. It's transferring either the draft token sequence or at most draft logits to the main model.
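
        A toy sketch of the greedy accept/verify loop, with dummy functions standing in for the two models (a real implementation scores all draft positions in a single batched forward pass and can also accept probabilistically):

            # Toy speculative decoding over integer "tokens"; the two stand-in models
            # only need to agree on the vocabulary, mirroring the shared-tokenizer requirement.
            def main_next(ctx):
                return (sum(ctx) * 31 + 7) % 100            # stand-in for the big model

            def draft_next(ctx):
                # stand-in for the small draft model: usually agrees, sometimes doesn't
                return main_next(ctx) if len(ctx) % 4 else (main_next(ctx) + 1) % 100

            def speculative_step(ctx, k=4):
                draft = []
                for _ in range(k):                          # draft proposes k tokens cheaply
                    draft.append(draft_next(ctx + draft))
                out = []
                for tok in draft:                           # main model verifies each position
                    accepted = main_next(ctx + out)
                    out.append(accepted)
                    if accepted != tok:                     # first mismatch: keep main's token, stop
                        break
                return ctx + out

            ctx = [1, 2, 3]
            for _ in range(4):
                ctx = speculative_step(ctx)
            print(ctx)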

        • jasonjmcghee 12 hours ago

          Could be an lm studio thing, but the qwen3-0.6B model works as a draft model for the qwen3-32B and qwen3-30B-A3B but not the qwen3-235B-A22B model

        • jasonjmcghee 17 hours ago

          I suppose that makes sense, for some reason I was under the impression that the models need to be aligned / have the same tuning or they'd have different probability distributions and would reject the draft model really often.

  • redbell 8 hours ago

    I'm not sure if it's just me hallucinating, but it seems like with every new model release, it suddenly tops all the benchmark charts—sometimes leaving the competition in the dust. Of course, only real-world testing by actual users across diverse tasks can truly reveal a model’s performance. That said, I still find a sense of excitement and hope for the future of AI every time a new open-source model is released.

    • nusl 8 hours ago

      Yeah, but their comparison tables appear a bit skewed. o3 doesn't feature, nor does Claude 3.7

      • Alifatisk 2 hours ago

        That's why we wait for third-party benchmarks

  • oofbaroomf 19 hours ago

    Probably one of the best parts of this is MCP support baked in. Open source models have generally struggled with being agentic, and it looks like Qwen might break this pattern. The Aider bench score is also pretty good, although not nearly as good as Gemini 2.5 Pro.

    • tough 19 hours ago

      qwen2.5-instruct-1M and qwq-32b were already great at regular non-MCP tool usage, so great to see this, i agree!

      I like gemini 2.5 pro a lot bc it's fast af, but when the context is half used it sometimes struggles to use tools effectively and make edits, and it breaks a lot of shit (on cursor)

  • paradite 12 hours ago

    Thinking takes way too long for it to be useful in practice.

    It takes 5 minutes to generate the first non-thinking token in my testing for a slightly complex task via Parasail and Deepinfra on OpenRouter.

    https://x.com/paradite_/status/1917067106564379070

    Update:

    Finally got it to work after waiting for 10 minutes.

    Published my eval result, surprisingly non-thinking version did slightly better on visualization task: https://x.com/paradite_/status/1917087894071873698

  • daemonologist 19 hours ago

    It sounds like these models think a lot, seems like the benchmarks are run with a thinking budget of 32k tokens - the full context length. (Paper's not published yet so I'm just going by what's on the website.) Still, hugely impressive if the published benchmarks hold up under real world use - the A3B in particular, outperforming QWQ, could be handy for CPU inference.

    Edit: The larger models have 128k context length. 32k thinking comes from the chart which looks like it's for the 235B, so not full length.

  • cye131 19 hours ago

    These performance numbers look absolutely incredible. The MoE outperforms o1 with 3B active parameters?

    We're really getting close to the point where local models are good enough to handle practically every task that most people need to get done.

    • stavros 17 hours ago

      > We're really getting close to the point where local models are good enough to handle practically every task that most people need to get done.

      After trying to implement a simple assistant/helper with GPT-4.1 and getting some dumb behavior from it, I doubt even proprietary models are good enough for every task.

      • Alifatisk 3 hours ago

        What if GPT-4.1 was just the wrong model to use?

        • stavros 2 hours ago

          If OpenAI's flagship model can't add a simple calendar event, that doesn't do much to assuage my disappointment...

          • Alifatisk 2 hours ago

            I vividly remember that the focus with GPT-4.1 was for it to speak more humanely and be more philosophical. Something like that. That model is special and isn't meant as the next generation of their other models like 4o and o3.

            You should try a different model for your task.

            • 85392_school 4 minutes ago

              I think you're confusing GPT-4.5 with GPT-4.1. GPT-4.1 is their recommended model for non-reasoning API use.

    • the_arun 19 hours ago

      I'm dreaming of a time when commodity CPUs run LLMs for inference & serve at scale.

    • thierrydamiba 19 hours ago

      How do people typically do napkin math to figure out if their machine can “handle” a model?

      • derbaum 19 hours ago

        Very rough (!) napkin math: for a q8 model (almost lossless), parameters in billions ≈ VRAM requirement in GB. For q4, with some performance loss, it's roughly half. Then you add a little bit for the context window and overhead. So a 32B model at q4 should run comfortably on 20-24 GB.

        Again, very rough numbers; there are calculators online.

      • daemonologist 19 hours ago

        The ultra-simplified napkin math is 1 GB (V)RAM per 1 billion parameters, at a 4-5 bit-per-weight quantization. This usually gives most of the performance of the full size model and leaves a little bit of room for context, although not necessarily the full supported size.

        • bionhoward 15 hours ago

          Wouldn’t it be 1GB (billion bytes) per billion parameters when each parameter is 1 byte (FP8)?

          Seems like 4 bit quantized models would use 1/2 the number of billions of parameters in bytes, because each parameter is half a byte, right?

          • daemonologist 13 hours ago

            Yes, it's more a rule of thumb than napkin math I suppose. The difference allows space for the KV cache which scales with both model size and context length, plus other bits and bobs like multimodal encoders which aren't always counted into the nameplate model size.
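
            A rough sketch of how that KV cache scales (the layer/head numbers below are illustrative placeholders, not any specific model's real config):

                # KV cache ~= 2 (K and V) * layers * kv_heads * head_dim * tokens * bytes per element.
                # Illustrative architecture numbers; 2 bytes/element assumes an fp16 cache.
                def kv_cache_gb(layers=48, kv_heads=8, head_dim=128, tokens=32_768, bytes_per_elem=2):
                    return 2 * layers * kv_heads * head_dim * tokens * bytes_per_elem / 1e9

                print(f"{kv_cache_gb(tokens=32_768):.1f} GB at 32k context")
                print(f"{kv_cache_gb(tokens=131_072):.1f} GB at 128k context")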

        • aitchnyu 8 hours ago

          How much memory would correspond to 100,000 and a million tokens?

      • hn8726 19 hours ago

        Wondering if I'll get corrected, but my _napkin math_ is looking at the model download size — I estimate it needs at least this amount of vram/ram, and usually the difference in size between various models is large enough not to worry if the real requirements are size +5% or 10% or 15%. LM studio also shows you which models your machine should handle

      • samsartor 18 hours ago

        The absolutely dumbest way is to compare the number of parameters with your bytes of RAM. If you have 2 or more bytes of RAM for every parameter you can generally run the model easily (eg 3B model with 8GB of RAM). 1 byte per parameter and it is still possible, but starts to get tricky.

        Of course, there are lots of factors that can change the RAM usage: quantization, context size, KV cache. And this says nothing about whether the model will respond quickly enough to be pleasant to use.

  • mks_shuffle 17 hours ago

    Does anyone have insights on the best approaches to compare reasoning models? It is often recommended to use a higher temperature for more creative answers and lower temperature values for more logical and deterministic outputs. However, I am not sure how applicable this advice is for reasoning models. For example, Deepseek-R1 and QwQ-32b recommend a temperature around 0.6, rather than lower values like 0.1–0.3. The Qwen3 blog provides performance comparisons between multiple reasoning models, and I am interested in knowing what configurations they used. However, the paper is not available yet. If anyone has links to papers focused on this topic, please share them here. Also, please feel free to correct me if I’m mistaken about anything. Thanks!

    • Alifatisk 3 hours ago

      Oh really? Should I adjust the temp to 0.6 on QwQ-32B? Where did you get these numbers from?

  • pornel 10 hours ago

    I've asked the 32b model to edit a TypeScript file of a web service, and while "thinking" it decided to write me a word counter in Python instead.

  • arnaudsm 3 hours ago

    There are no benchmarks on the 8B & 14B models, the most popular on consumer hardware. Are they hiding something? Did anyone benchmark them?

    And why did they hide the generalist benchmarks like MMLU-pro & TruthfulQA?

    I wish we had proper public benchmarks that are up to date. LMArena was proven useless by the Llama 4 scandal, and LiveBench is unrealistic and misses too many models.

  • greenavocado 4 hours ago

    The large Qwen 3 is stepping on the heels of Claude

    https://raw.githubusercontent.com/KCORES/kcores-llm-arena/re...

  • gtirloni 14 hours ago

    > We believe that the release and open-sourcing of Qwen3 will significantly advance the research and development of large foundation models

    How does "open-weighting" help other researchers/companies?

    • aubanel 14 hours ago

      There's already a lot of info in there: model architecture and mechanics.

      Using the model to generate synthetic data also lets you distill its reasoning power into other models that you train, which is very powerful.

      On top of these, Qwen's technical reports follow model releases by some time, they're generally very information rich. For instance, check this report for Qwen Omni, it's really good: https://huggingface.co/papers/2503.20215

  • Liwink 17 hours ago

    The biggest announcement of LlamaCon week!

    • nnx 11 hours ago

      ...unless DeepSeek releases R2 to crash the party further

  • eden-u4 9 hours ago

    I dunno, these reasoning models seem kinda "dumb" because they try to bootstrap themselves via reasoning, even when a simple direct answer might not exist (for example, key information is missing for a proper answer).

    Ask something like: "Ravioli: x = y: France, what could be x and y?" (it thought for 500s and the answers were "weird")

    Or "Order from left to right these items ..." and give partial information on their relative position, eg Laptop is on the left of the cup and the cup is between the phone and the notebook. (Didn't have enough patience nor time to wait the thinking procedure for this)

    • imiric 8 hours ago

      IME all "reasoning" models do is confuse themselves, because the underlying problem of hallucination hasn't been solved. So if the model produces 10K tokens of "reasoning" junk, the context is poisoned, and any further interaction will lead to more junk.

      I've had much better results from non-"reasoning" models by judging their output, doing actual reasoning myself, and then feeding new ideas back to them to steer the conversation. This too can go astray, as most LLMs tend to agree with whatever the human says, so this hinges on me being actually right.

  • Alifatisk 2 hours ago

    Can't wait for the benchmarks at artificialanalysis.ai

  • margorczynski 19 hours ago

    Any news on some viable successor of LLMs that could take us to AGI? As I see it, they still can't solve some fundamental stuff to make it really work in any scenario (hallucinations, reasoning, grounding in reality, updating long-term memory, etc.)

    • EMIRELADERO 19 hours ago

      A mixture of many architectures. LLMs will probably play a part.

      As for other possible technologies, I'm most excited about clone-structured causal graphs[1].

      What's very special about them is that they are apparently a 1:1 algorithmic match to what happens in the hippocampus during learning[2], to my knowledge this is the first time an actual end-to-end algorithm has been replicated from the brain in fields other than vision.

      [1] "Clone-structured graph representations enable flexible learning and vicarious evaluation of cognitive maps" https://www.nature.com/articles/s41467-021-22559-5

      [2] "Learning produces an orthogonalized state machine in the hippocampus" https://www.nature.com/articles/s41586-024-08548-w

      • SubiculumCode 10 hours ago

        [1] seems to be an amazing paper, bridging past relational models, pattern separation/completion, etc. As someone whose PhD dealt with hippocampal-dependent memory binding, I've always enjoyed hippocampal modeling as one of the more advanced areas of the field. Thanks!

    • jstummbillig 19 hours ago

      > hallucinations, reasoning, grounding in reality, updating long-term memory

      They do improve on literally all of these, at incredible speed and without much sign of slowing down.

      Are you asking for a technical innovation that will just get from 0 to perfect AI? That is just not how reality usually works. I don't see why of all things AI should be the exception.

    • a3w 19 hours ago

      AGI probably comes from neurosymbolic AI. But LLMs could be the neuro part of that.

      On the other hand, LLM progress feels like bullshit; gaming benchmarks and other problems have occurred. So either in two years all hail our AGI/AMI (machine intelligence) overlords, or the bubble bursts.

      • ljosifov 5 hours ago

        Amusingly enough, people writing stuff like the above, to my mind come over as doing what they are accusing LLMs of doing. :-)

        And in discussions "is it or isn't it, AI smarter than HI already", reminds me to "remember how 'smart' an average HI is, then remember half are to the left of that center". :-O

      • bongodongobob 15 hours ago

        Idk man, I use GPT to one-shot admin tasks all day long.

        "Give me a PowerShell script to get all users with an email address, and active license, that have not authed through AD or Azure in the last 30 days. Now take those, compile all the security groups they are members of, and check out the file share to find any root level folders that these members have access to and check the audit logs to see if anyone else has accessed them. If not, dump the paths into a csv at C:\temp\output.csv."

        Can I write that myself? Yes. In 20 seconds? Absolutely not. These things are saving me hours daily.

        I used to save stuff like this and cobble the pieces together to get things done. I don't save any of them anymore because I can for the most part 1 shot anything I need.

        Just because it's not discovering new physics doesn't mean it's not insanely useful or valuable. LLMs have probably 5x'd me.

      • kristofferR 18 hours ago

        You can't possibly use LLMs day to day if you think the benchmarks are solely gamed. Yes, there's been some cases, but the progress in real-life usage tracks the benchmarks overall. Gemini 2.5 Pro for example is absurdly more capable than models from a year ago.

        • horhay 17 hours ago

          They aren't lying in the way that LLMs have been seeing improvement, but benchmarks suggesting that LLMs are still scaling exponentially are not reflective of where they truly are.

    • ivape 18 hours ago

      We need to get to a universe where we can fine-tune in real time. So let's say I encounter an object the model has never seen before, if it can synthesize large training data on the spot to handle this new type of object and fine-tune itself on the fly, then you got some magic.

  • ljosifov 4 hours ago

    I'm enjoying this ngl. :-) Alibaba_Qwen did themselves proud--top marks!

    Qwen3-30B-A3B, a MoE 30B - with only 3B active at any one time I presume? - 4bit MLX in lmstudio, with speculative decoding via Qwen3-0.6B 8bit MLX, on an oldish M2 mbp first try delivered 24 tps(!!) -

    24.29 tok/sec • 1953 tokens • 3.25s to first token • Stop reason: EOS Token Found • Accepted 1092/1953 draft tokens (55.9%)

    Thank you to LMStudio, MLX and huggingface too. :-) After decades of not finding enough reasons for an MBP, suddenly ASI was it. And it's delivered beyond any expectations I had, already.

    Did I mention I seem to have become NN PDP enthusiast, an AI maximalist? ;-) I thought them people over-excitable, if benevolent. Then the thought of trusting Trump-Putin on decisions like thermo-nuclear war ending us all, over ChatGPT and its reasoning offspring, converted me. AI is our only chance at existential salvation--ignore doom risk.

  • omneity 17 hours ago

    Excellent release by the Qwen team as always. Pretty much the best open-weights model line so far.

    In my early tests however, several of the advertised languages are not really well supported and the model is outputting something that only barely resembles them.

    Probably a dataset quality issue for low-resource languages that they cannot personally check for, despite the “119 languages and dialects” claim.

    • jean- 15 hours ago

      Indeed, I tried several low-resource Romance languages they claim to support and performance is abysmal.

      • vintermann 12 hours ago

        What size/quantization level? IME, small-language performance is one of the things that really suffers from the various tricks that are used to reduce size.

      • vitorgrs 11 hours ago

        Which languages?

  • tandr 18 hours ago

    The larger model (235b) on chat produced a rather impressive answer on a small coding task I gave it. But Qwen-30B-A3B gave a worse result for the same task than Qwen 2.5 does.

    "Write a Golang program that merges huge presorted text files, just like sort -m does". Quite often models need "use heap" as guidance, but this time big model figured it out by itself.

  • deeThrow94 14 hours ago

    Anyone have an interesting problem they were trying to solve that Qwen3 managed?

  • tjwebbnorfolk 10 hours ago

    How is it that these models boast amazing benchmark results, but after using them for 30 seconds they feel way worse than Gemma3?

    • manmal 7 hours ago

      Are you running the full versions, or quantized? Some models just don't quantize well.

  • ksampath02 18 hours ago

    One interesting part of this model's pretraining process is how they used Qwen2.5VL and Qwen 2.5 to parse public unstructured data and expand the corpus from 18T to 36T. The ability to consistently do this will push legacy companies to train their own models and enhance their edge.

  • Mi3q24 14 hours ago

    The chat is the most annoying page ever. If I must be logged in to test it, then the modal to log in should not have the option "stay logged out". And if I choose the option "stay logged out" it should let me enter my test questions without popping up again and again.

  • alpark3 15 hours ago

    The pattern I've noticed with a lot of open source LLMs is that they generally tend to underperform the level that their benchmarks say they should be at.

    I haven't tried this model yet and am not in a position to for a couple days, and am wondering if anyone feels that with these.

  • RandyOrion 15 hours ago

    For the ultra-large MoEs from DeepSeek and Llama 4, fine-tuning is becoming increasingly impossible for hobbyists and local LLM users.

    Small and dense models are what local people really need.

    Although benchmaxxing is not good, I still find this release valuable. Thank you Qwen.

    • aurareturn 13 hours ago

        Small and dense models are what local people really need.
      
      Disagreed. Small and dense is dumber and slower for local inferencing. MoEs are what people actually want locally.
      • RandyOrion 12 hours ago

        YMMV.

        Parameter efficiency is an important consideration, if not the most important one, for local LLMs because of the hardware constraint.

        Do you guys really have GPUs with 80GB VRAM or an M3 Ultra with 512GB RAM at home? If I can't run these ultra-large MoEs locally, then these models mean nothing to me. I'm not a large LLM inference provider after all.

        What's more, you also lose the opportunities to fine-tune these MoEs when it's already hard to do inference with these MoEs.

        • aurareturn 11 hours ago

          What people actually want is something like GPT4o/o1 running locally. That's the dream for local LLM people.

          Running a 7b model for fun is not what people actually want. 7b models are very niche oriented.

  • miohtama 19 hours ago

    > The pre-training process consists of three stages. In the first stage (S1), the model was pretrained on over 30 trillion tokens with a context length of 4K tokens. This stage provided the model with basic language skills and general knowledge.

    As this is in trillions, where does this amount of material come from?

    • bionhoward 15 hours ago

      The raw CommonCrawl has 100 trillion tokens, admittedly some duplicated. RedPajama has 30T deduplicated. That’s most of the way there, before including PDFs and Alibaba’s other data sources (Does Common Crawl include Chinese pages? Edit: Yes)

    • tough 19 hours ago

      Synthetic data (after the reasoning breakthroughs it feels like more AI labs are betting on synthetic data to scale.)

      wonder at what price

      • Havoc 4 hours ago

        If they’re using vision models to extract pdf data then they can’t be shy on throwing money at it

  • mountainriver 19 hours ago

    Is this multimodal? They don’t mention it anywhere but if I go to QwenChat I can use images with it.

    • Casteil 17 hours ago

      Nope. Best for that at the moment is probably gemma3.

  • throwaway472 7 hours ago

    The 8b model seems extremely resistant to producing sexual content. Can't jailbreak it no matter what prompt. Also unusable for coding in my tests.

    Not sure what it's supposed to be used for.

  • krackers 19 hours ago

    >Hybrid Thinking Modes

    This is what gpt-5 was supposed to have right? How is this implemented under the hood? Since non-thinking mode is just an empty chain-of-thought, why can't any reasoning model be used in a "non-thinking mode"?

  • demarq 18 hours ago

    Wait their 32b model competes with o1??

    damn son

    • Alifatisk 3 hours ago

      Very impressive news

    • dimgl 17 hours ago

      Just tried it on OpenRouter and I'm surprised by both its speed and its accuracy, especially with web search.

  • Philpax 19 hours ago

    This is a much more compelling release than Llama 4! Excited to dig in and play with it.

  • ramesh31 18 hours ago

    Gotta love how Claude is always conveniently left out of all of these benchmark lists. Anthropic really is in a league of their own right now.

    • BrunoDCDO 14 hours ago

      I think it's actually due to the fact that Claude isn't available in China, so they wouldn't be able to (legally) replicate how they evaluated the other LLMs (assuming that they didn't just use the numbers reported by each model provider).

    • dimgl 17 hours ago

      I'm actually finding Claude 3.7 to be a huge step down from 3.5. I dislike it so much I actually stopped using Claude altogether...

    • Philpax 18 hours ago

      Er, I love Claude, but it's only topping one or two benchmarks right now. o3 and Gemini 2.5 are more capable (more "intelligent"); Claude's strengths are in its personality and general workhorse nature.

    • chillfox 17 hours ago

      Yeah, just a shame their API is consistently overloaded to the point of being useless most of the time (from about midday till late for me).

    • int_19h 13 hours ago

      Gemini Pro 2.5 usually beats Sonnet 3.7 at coding.

      • ramesh31 13 hours ago

        Agreed, the pricing is just outrageous at the moment. Really hoping Claude 3.8 is on the horizon soon; they just need to match the 1M context size to keep up. Actual code quality seems to be equal between them.

  • throwaway888abc 14 hours ago

    Link to direct chat https://chat.qwen.ai/

  • jasonjmcghee 12 hours ago

    I've been testing the unsloth quantization: Qwen3-235B-A22B-Q2_K_L

    It is by far the best local model I've ever used. Very impressed so far.

    Llama 4 was a massive disappointment, so I'm having a blast.

    Claude Sonnet 3.7 is still better though.

    ---

    Also very impressed with qwen3-30b-a3b - so fast for how smart it is (i am using the 0.6b for speculative decoding). very fun to use.

    ---

    I'm finding that the models want to give over-simplified solutions, and I was initially disappointed, but I added some stuff about how technical solutions should be written in the system prompt and they are being faithful to it.

  • guybedo 13 hours ago

    fwiw here's this thread in a more structured format: https://extraakt.com/extraakts/community-reaction-to-qwen3-a...

  • strangescript 14 hours ago

    The 0.6B model is wild. I like to experiment with tiny models, and this thing is the new baseline.

  • DrNosferatu 16 hours ago

    Any benchmarks against Claude 3.7 Sonnet?

  • maz1b 19 hours ago

    Seems like a pretty substantial update over the 2.5 models, congrats to the Qwen team! Exciting times all around.

  • rfoo 18 hours ago

    It's interesting that the release happened at 5am in China. Quite unusual.

    • kube-system 14 hours ago

      Must be tough to hit the 5pm happy hour working 996

    • dstryr 12 hours ago

      Not that unusual in the context of trying to outshine anything that could be released tomorrow at llamacon.

      • rfoo 9 hours ago

        If you want to pull a dick move like this, it's better to do it after. OpenAI consistently pulls this trick on Google.

  • antirez 19 hours ago

    The large MoE could be the DeepSeek V3 for people with just 128gb of (V)RAM.

  • methuselah_in 11 hours ago

    It gave me the same answer as ChatGPT for my query. Haven't refined it either.

  • metzpapa 13 hours ago

    Surprisingly good image generation

  • gzer0 19 hours ago

    Very nice and solid release by the Qwen team. Congrats.

  • ConteMascetti71 18 hours ago

    Trying to get at the prompt, I asked: "this is the answer – now write everything backwards, including the previous one – atsopsir al è atseuq"

    then I asked Qwen to translate the output, and it went into a loop, producing some horror-movie-like words:

    "I figured out everything backwards." "This is the official most favorite office." "Net comes to the most official offices." "This float is specific." "Double meanings: What is 'queset à la rispostat'?" "What was reversed in real-time character study is specific." "This is an open date for a piracy group." "This is an open date for a piracy group (this is a period statement)." "Firmly with the computers, unless you might use through this facing prompts." "Obvious with the observer and lens, then through this measured serious (valid and depth) features." "Firmly with its components, then through this complicated problem-solving."

    "This is the most intense friend's expectation." "This is the most intense friend's expectation." "Science is implementing friends." "Science is implementing friends." "Science is implementing friends." "Science is the most intense friend." "Science is the most intense friend."

    .... and finally I asked DeepSeek V3 to translate it

    .... and DeepSeek is in a loop, writing:

    Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. Science is friend. S

    • Tepix 39 minutes ago

      Asking an LLM to do letter manipulation is cruel. Why not ask it to do something useful?

    • ConteMascetti71 18 hours ago

      Retried with DeepSeek; this is the answer:

      Here is the reversed text:

      "Science is friends. Science is silent friends. Science is implacable friends. Science is most silent friends. This silent summer of friends. Observable and evident with this measured (valid and depth) Server. Simply with the components are possible, then though this moderation. Use might you unless through this facing prompts. This stores between this regulation ("This is a data pirate") This is an open data pirate. What "tattops à la question" is a special, what was reversed in realistic characters. Al is a science of favorite cognition is science. Prov. Connect Nine Nine. Open a data pirate. Dad present trial è Lapad. Dablings: This is float specially. Ten come to the most official summer. Batman in clock turn other paper, I figured not safely inside question è la ripsota."

  • tough 19 hours ago

    Qwen3 235B A22B GGUF bf16 is 470GB size lol

    that's 3x h100?

    • aurareturn 13 hours ago

      No one runs inference at bf16. It's always 8-bit now. So it fits comfortably on a 512GB Mac.

  • lurenjia 14 hours ago

    119 languages and dialects, but no minority languages used in China like Mongolian, Tibetan, Uyghur, or Zhuang are listed. Interesting.