71 comments

  • gardnr 4 hours ago

    This is a 30B parameter MoE with 3B active parameters and is the successor to their previous 7B omni model. [1]

    You can expect this model to have similar performance to the non-omni version. [2]

    There aren't many open-weights omni models so I consider this a big deal. I would use this model to replace the keyboard and monitor in an application while doing the heavy lifting with other tech behind the scenes. There is also a reasoning version, which might be a bit amusing in an interactive voice chat if it pronounces the thinking tokens while working through to a final answer.

    1. https://huggingface.co/Qwen/Qwen2.5-Omni-7B

    2. https://artificialanalysis.ai/models/qwen3-30b-a3b-instruct

    • red2awn 2 hours ago

      This is a stack of models (rough dataflow sketched below the list):

      - 650M Audio Encoder

      - 540M Vision Encoder

      - 30B-A3B LLM

      - 3B-A0.3B Audio LLM

      - 80M Transformer/200M ConvNet audio token to waveform
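
      Roughly how those pieces chain together, going by the paper's Thinker/Talker split. This is a dataflow sketch only; every function below is a placeholder stub, not the actual Qwen3-Omni API:

        # Dataflow sketch -- placeholder stubs sized per the list above,
        # not the real API.
        def audio_encoder(wav): ...        # ~650M params
        def vision_encoder(img): ...       # ~540M params
        def thinker(audio_feats, vision_feats, text): ...  # 30B-A3B MoE LLM -> reply + hidden states
        def talker(thinker_hidden): ...    # 3B-A0.3B audio LLM -> discrete codec tokens
        def code2wav(codec_tokens): ...    # ~80M transformer / ~200M ConvNet -> waveform

        def omni_turn(wav, img, text):
            hidden = thinker(audio_encoder(wav), vision_encoder(img), text)
            return code2wav(talker(hidden))  # speech reply out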

      This is a closed-weight update to their Qwen3-Omni model. They previously made an open-weight release, Qwen/Qwen3-Omni-30B-A3B-Instruct, and a closed version, Qwen3-Omni-Flash.

      You basically can't use this model right now, since none of the open-source inference frameworks have it fully implemented. It works in transformers, but it's extremely slow.
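
      For the earlier open-weight release, the transformers path looks roughly like this. A minimal text-only sketch following the model card's pattern; it assumes a recent transformers build with the Qwen3OmniMoe* classes, and the exact generate() kwargs/outputs may differ:

        # Sketch: one text-only turn through the open-weight Omni model.
        from transformers import (Qwen3OmniMoeForConditionalGeneration,
                                  Qwen3OmniMoeProcessor)

        model_id = "Qwen/Qwen3-Omni-30B-A3B-Instruct"  # open release, not -Flash
        model = Qwen3OmniMoeForConditionalGeneration.from_pretrained(
            model_id, dtype="auto", device_map="auto")
        processor = Qwen3OmniMoeProcessor.from_pretrained(model_id)

        messages = [{"role": "user", "content": [{"type": "text", "text": "Say hello."}]}]
        text = processor.apply_chat_template(messages, add_generation_prompt=True,
                                             tokenize=False)
        inputs = processor(text=text, return_tensors="pt", padding=True).to(model.device)

        # The Omni models return text ids plus (optionally) an audio waveform.
        text_ids, audio = model.generate(**inputs)
        print(processor.batch_decode(text_ids, skip_special_tokens=True)[0])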

    • gardnr 3 hours ago

      I can't find the weights for this new version anywhere. I checked ModelScope and Hugging Face. It looks like they may have extended the context window to 200K+ tokens, but I can't find the actual weights.

    • plipt an hour ago

      I don't think the Flash model discussed in the article is 30B.

      Their benchmark table shows it beating Qwen3-235B-A22B

      Does "Flash" in the name of a Qwen model indicate a model-as-a-service and not open weights?

      • red2awn an hour ago

        Flash is a closed-weight version of https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct (it is 30B, but with additional training on top of the open-weight release). They deploy the Flash version on Qwen's own chat.

        • plipt an hour ago

          Thanks

          Was it obvious to you from the article that it's closed-weight? Trying to understand why I was confused. I hadn't seen the "Flash" designation before.

          Also 30B models can beat a semi-recent 235B with just some additional training?

          • red2awn an hour ago

            They had a Flash variant released alongside the original open weight release. It is also mentioned in Section 5 of the paper: https://arxiv.org/pdf/2509.17765

            For the evals, it's probably just trained on a lot of benchmark-adjacent datasets compared to the 235B model. A similar thing happened with another model today: https://x.com/NousResearch/status/1998536543565127968 (a 30B model trained specifically to do well in maths gets near-SOTA scores).

    • olafura 3 hours ago
      • coder543 3 hours ago

        No... that website is not helpful. If you take it at face value, it is claiming that the previous Qwen3-Omni-Flash wasn't open either, but that seems wrong? It is very common for these blog posts to get published before the model weights are uploaded.

        • red2awn 2 hours ago

          The previous -Flash weights are closed source too. They do have weights for the original model, which is slightly behind in performance: https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct

          • coder543 an hour ago

            Based on things I had read over the past several months, Qwen3-Flash seemed to just be a weird marketing term for the Qwen3-Omni-30B-A3B series, not a different model. If they are not the same, then that is interesting/confusing.

            • red2awn an hour ago

              It is an in-house closed weight model for their own chat platform, mentioned in Section 5 of the original paper: https://arxiv.org/pdf/2509.17765

              I've seen it in their online materials too but can't seem to find it now.

    • tensegrist 3 hours ago

      > There is also a reasoning version, which might be a bit amusing in an interactive voice chat if it pronounces the thinking tokens while working through to a final answer.

      Last I checked (months ago), Claude used to do this.

    • andy_xor_andrew 3 hours ago

      > This is a 30B parameter MoE with 3B active parameters

      Where are you finding that info? Not saying you're wrong; just saying that I didn't see that specified anywhere in the linked page, or on their HF.

      • plipt 2 hours ago

        The link[1] at the top of their article to HuggingFace goes to some models named Qwen3-Omni-30B-A3B that were last updated in September. None of them have "Flash" in the name.

        The benchmark table shows this Flash model beating their Qwen3-235B-A22B. I don't see how that is possible if it is a 30B-A3B model.

        I don't see a mention of a parameter count anywhere in the article. Do you? This may not be an open weights model.

        This article feels a bit deceptive

        1: https://huggingface.co/collections/Qwen/qwen3-omni

  • sosodev 4 hours ago

    Does Qwen3-Omni support real-time conversation like GPT-4o? Looking at their documentation it doesn't seem like it does.

    Are there any open-weight models that do? Not talking about speech-to-text -> LLM -> text-to-speech, btw; I mean a real voice <-> language model.

    edit:

    It does support real-time conversation! Has anybody here gotten that to work on local hardware? I'm particularly curious if anybody has run it with a non-nvidia setup.

    • red2awn an hour ago

      None of the inference frameworks (vLLM/SGLang) supports the full model, let alone on non-NVIDIA hardware.

      • AndreSlavescu 17 minutes ago

        We actually deployed working speech-to-speech inference that builds on top of vLLM as the backbone. The main thing was supporting the "Talker" module, which is currently not supported on the qwen3-omni branch of vLLM.

        Check it out here: https://models.hathora.dev/model/qwen3-omni

      • sosodev 38 minutes ago

        That's unfortunate but not too surprising. This type of model is very new to the local hosting space.

    • dsrtslnd23 4 hours ago

      It seems to be able to do native speech-to-speech.

      • sosodev 4 hours ago

        It does for sure. I did some more digging and it does real-time too. That's fascinating.

    • ivape an hour ago

      That's exciting. I doubt there are any polished voice chat local apps yet that you can easily plug this into (I doubt the user experience is "there" yet). Even stuff like Silly Tavern is near unusable, lots of work to be done on the local front. Local voice models are what's going to enable that whole Minority Report workflow soon enough (especially if commands and intent are determined at the local level, and the meat of the prompt is handled by a larger remote model).

      This is the part of programming that I think is the new field. There will be tons of work for those who can build the new workflows, which will need to be primarily natural-language driven.

  • terhechte 3 hours ago

    Is there a way to run these Omni models on a MacBook, quantized via GGUF or MLX? I know I can run them in LM Studio or llama.cpp, but those don't have streaming microphone or streaming webcam support.

    Qwen usually provides example code in Python that requires CUDA and a non-quantized model. I wonder if there is, by now, a good open-source project to support this use case?

  • dvh 4 hours ago

    I asked: "How many resistors are used in fuzzhugger phantom octave guitar pedal?". It replied 29 resistors and provided a long list. The answer is 2 resistors: https://tagboardeffects.blogspot.com/2013/04/fuzzhugger-phan...

    • iFire 4 hours ago

      > How many resistors are used in fuzzhugger phantom octave guitar pedal?

      Weird, as someone not having a database of the web, I wouldn't be able to calculate either result.

      • dvh 4 hours ago

        "I don't know" would be perfectly reasonable answer

        • MaxikCZ 2 hours ago

          I feel like there's a time in the near future where LLMs will be too cautious to answer any questions they aren't sure about, and most of the human effort will go into pleading with the LLM to at least try to give an answer, which will almost always be correct anyway.

          • plufz 24 minutes ago

            That would be great if you could have a setting like temperature, 0.0-1.0 (from "only answer if you are 100% sure" to "guess as much as you like").
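
            Something like that could be passed through an OpenAI-compatible client today. The answer_threshold knob below is purely hypothetical; no current API has it:

              # Hypothetical knob -- no current API exposes answer_threshold.
              from openai import OpenAI

              client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
              resp = client.chat.completions.create(
                  model="qwen3-omni-flash",
                  messages=[{"role": "user",
                             "content": "How many resistors are in that pedal?"}],
                  extra_body={"answer_threshold": 1.0},  # 1.0: answer only if certain; 0.0: always guess
              )
              print(resp.choices[0].message.content)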

      • kaoD 4 hours ago

        > as someone not having a database of the web, I wouldn't be able to calculate either result

        And that's how I know you're not an LLM!

      • iFire 4 hours ago

        I tend to pick things where I think the answer is in the introductory material, like exams that test what was taught.

    • esafak 4 hours ago

      This is just trivia. I would not use it to test computers -- or humans.

      • littlestymaar 2 hours ago

        It's a good way to assess the model with respect to hallucinations, though.

        I don't think a model should know the answer, but it must be able to know that it doesn't know if you want to use it reliably.

        • esafak 2 hours ago

          No model is good at this yet, though I'd expect the flagships to get the original question right.

      • parineum 4 hours ago

        Everything is just trivia until you have a use for the answer.

        OP provided a web link with the answer; aren't these models supposed to be trained on all of that data?

        • esafak 4 hours ago

          There is nothing useful you can do with this information. You might as well memorize the phone book.

          The model has a certain capacity -- quite limited in this case -- so there is an opportunity cost in learning one thing over another. That's why it is important to train on quality data; things you can build on top of.

        • DennisP 3 hours ago

      Just because it's in the training data doesn't mean the model can remember it. The parameters total about 60 gigabytes (30B parameters at two bytes each in bf16); there's only so much trivia that can fit in there, so it has to do lossy compression.

    • brookst 4 hours ago

      Where did you try it? I don’t see this model listed in the linked Qwen chat.

    • strangattractor 2 hours ago

      Maybe it thinks some of those 29 are in series:)

  • vessenes an hour ago

    Interesting - when I asked the omni model at qwen.com what version it was, I got a testy "I don't have a version" and then was told my chat was blocked for inappropriate content. A second try asking for knowledge cutoff got me the more equivocal "2024, but I know stuff after that date, too".

    No idea how to check if this is actually deployed on qwen.com right now.

    • zamadatix an hour ago

      > No idea how to check if this is actually deployed on qwen.com right now.

      Assuming you mean qwen.ai, when you run a query it should take you to chat.qwen.ai with the list of models in the top left. None of the options appear to be the -Omni variant (at least when anonymously accessing it).

      • vessenes an hour ago

        Thanks - yes - I did. The blog post suggests clicking the 'voice' icon on the bottom right - that's what I did.

  • devinprater 2 hours ago

    Wow, just 30B? This could almost run on a good device with 64 GB of RAM. Once it gets to Ollama, I'll have to see just what I can get out of this.

    • plipt 2 hours ago

      I see that their Hugging Face link goes to some Qwen3-Omni-30B-A3B models that show a last-updated date of September.

      The benchmark table in their article shows Qwen3-Omni-Flash-2025-12-01 (and the previous Flash) beating Qwen3-235B-A22B. How is that possible if this is only a 30B-A3B model? It's also confusing that the comparison column starts out with one model but switches models as you go down the table.

      I don't see any Flash variant listed on their Hugging Face. Am I just missing it, or do these benchmarks specify a model only used for their API service, with no open weights to download?

    • apexalpha 2 hours ago

      I run these on a 48 GB Mac because of the unified memory.

  • banjoe 4 hours ago

    Wow, crushing 2.5 Flash on every benchmark is huge. Time to move all of my LLM workloads to a local GPU rig.

    • embedding-shape 4 hours ago

      Just remember to benchmark it yourself first with your private task collection, so you can actually measure the models against each other. Pretty much any public benchmark is unreliable at this moment, and making model choices based on others' benchmarks is bound to leave you disappointed.
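
      For example, a minimal private-eval sketch against an OpenAI-compatible endpoint; the tasks.jsonl file and the substring check are stand-ins for your own tasks and scoring:

        # Score a model on your own tasks.jsonl ({"prompt": ..., "expected": ...}
        # per line); swap the substring check for whatever scoring you trust.
        import json
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

        def score(model, path="tasks.jsonl"):
            hits = total = 0
            for line in open(path):
                task = json.loads(line)
                out = client.chat.completions.create(
                    model=model,
                    messages=[{"role": "user", "content": task["prompt"]}],
                ).choices[0].message.content
                hits += task["expected"].lower() in out.lower()
                total += 1
            return hits / total

        print(score("qwen3-omni-30b-a3b"))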

      • MaxikCZ 2 hours ago

        This. The latest benchmarks of DSv3.2spe hinted at it beating basically everything, yet in my testing even Sonnet is miles ahead in terms of both speed and accuracy.

    • red2awn 22 minutes ago

      Why would you use an Omni model for a text-only workload... There is Qwen3-30B-A3B for that.

  • sim04ful 3 hours ago

    The main issue I'm facing with realtime responses (speech output) is how to separate non-diegetic outputs (e.g. thinking, structured outputs) from outputs meant to be heard by the end user.

    I'm curious how anyone has solved this

    • artur44 2 hours ago

      A simple way is to split the model's output stream before TTS. Reasoning/structured tokens go into one bucket, actual user-facing text into another, and only the second bucket is synthesized. Most "thinking out loud" issues come from feeding the whole stream directly into audio.
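
      A minimal sketch of that split, assuming the reasoning is delimited by <think>...</think> tags (the delimiter and the two sinks are illustrative, not any particular model's format):

        # Route <think>...</think> spans away from TTS; everything else is
        # flushed to the TTS sink as soon as it can't be a partial tag.
        def route_stream(token_stream, send_to_tts, log_reasoning):
            in_think, buf = False, ""
            for tok in token_stream:
                buf += tok
                while True:
                    tag = "</think>" if in_think else "<think>"
                    if tag in buf:
                        before, buf = buf.split(tag, 1)
                        (log_reasoning if in_think else send_to_tts)(before)
                        in_think = not in_think
                    else:
                        keep = len(tag) - 1  # tail could still become a tag
                        if len(buf) > keep:
                            (log_reasoning if in_think else send_to_tts)(buf[:-keep])
                            buf = buf[-keep:]
                        break
            (log_reasoning if in_think else send_to_tts)(buf)  # flush the tail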

      • pugio an hour ago

        There is no TTS here. It's a native audio output model which outputs audio tokens directly. (At least, that's how the other real-time models work. Maybe I've misunderstood the Qwen-Omni architecture.)

        • artur44 33 minutes ago

          True, but even with native audio-token models you still need to split the model's output channels: reasoning/internal tokens shouldn't go into the audio stream; only user-facing content should be emitted as audio. The principle is the same whether the last step is TTS or audio-token generation.

  • binsquare 4 hours ago

    Does anyone else find that there's a hard-to-pin-down lifelessness in the speech of these voice models?

    Especially in the fruit-pricing portion of the video for this model. It sounds completely normal, but I can immediately tell it is AI. Maybe it's the intonation, or the overly stable rate of speech?

    • Lapel2742 4 hours ago

      IMHO it's not lifeless. It's just not overly emotional. I definitely prefer it that way. I do not want the AI to be excited. It feels so contrived.

      On the video itself: interesting, but "ideal" was pronounced wrong in German. For a promotional video, they should have checked that with native speakers. On the other hand, it's at least honest.

      • nunodonato 3 hours ago

        I hate with a passion the over-Americanized "accent" of the ChatGPT voices. Give me a bland one any day of the week.

    • vessenes an hour ago

    I'm not convinced it's end-to-end multimodal; if it isn't, there will be a separate speech-synthesis stage, and this will be some of the result. You could test by having it sing or do some accents, or have it talk back to you in an accent you give it.

    • sosodev 4 hours ago

    I think it's because they've crammed vision, audio, multiple voices, prosody control, multiple languages, etc. into just 30 billion parameters.

      I think ChatGPT has the most lifelike speech with their voice models. They seem to have invested heavily in that area while other labs focused elsewhere.

    • esafak 4 hours ago

      > Sounds completely normal but I can immediately tell it is ai.

      Maybe that's a good thing?

    • colechristensen 4 hours ago

      I'm perfectly ok with and would prefer an AI "accent".

  • aschobel 3 hours ago

    Looks to be API only. Bummer.

  • mettamage 4 hours ago

    I wonder if with that music analysis mode, you can also make your own synths

  • Aissen 3 hours ago

    Is this a new proprietary model?

  • rarisma 4 hours ago

    GPT-4o in the charts is crazy.

    • BoorishBears 4 hours ago

      Why? gpt-realtime is finalized gpt-4o. Gemini Live is still 2.5.

      Not their fault the frontier labs are letting their speech-to-speech offerings languish.

  • stevenhuang 3 hours ago