Wan – Open-source alternative to VEO 3

(github.com)

185 points | by modinfo 20 hours ago

29 comments

  • bsenftner 13 hours ago

    If you want to play with this, as in really play, with over a dozen model variants, acceleration LoRAs, and a vibrant community, ya gotta check out:

    https://github.com/deepbeepmeep/Wan2GP

    And the discord community: https://discord.gg/g7efUW9jGV

    "Wan2GP" is AI video and images "for the GPU poor", get all this operating with as little as 6GB VRAM, Nvidia only.

    • bobajeff 11 hours ago

      If having only 6GB VRAM is GPU poor then I must be GPU destitute.

      • hirako2000 10 hours ago

        It's hard to find an Nvidia consumer card with less than 12GB of VRAM these days.

        By GPU poor they didn't mean GPU-less or a GPU from the previous decade. The readme states that only Nvidia is supported.

        • hypercube33 8 hours ago

          I wish they'd state suggested or required hardware upfront.

          Also disappointing that I haven't seen anything target the new Ryzen AI chips, which can address 96GB and seem pretty capable. I'm not sure how much of an M4 Pro's memory can be utilized for this on the Apple side, but the typical machines seem to be 48 or 64GB these days. A lot more bang for your buck than an Nvidia card, on paper?

    • diggan 12 hours ago

      On the other side, are there any projects focusing on performance instead? I have the VRAM available to run Wan2.1, but it still takes minutes per frame. Basically something like what vLLM is for running local LLM weights, but for video/Wan?

      • bsenftner 11 hours ago

        This person has accelerator LoRAs that reduce the compute from 30+ denoising steps to 4 or 8 with minimal quality loss: https://huggingface.co/Kijai/WanVideo_comfy

        There are a lot of people focused on performance, via various methods, just as there are a lot of people focused on non-performance issues, like fine-tunes that add aspects the models lack: terminology linking professional media terms to the model, pop-culture terminology the model does not know, accuracy of body posture during fighting, dancing, gymnastics, and sports, and then less flashy but pragmatic actions like proper use of tableware, chopsticks, keyboards, and musical instruments - complex actions that stand out when done incorrectly or never shown. The model's knowledge is high but has limits, which people are filling in.

        • bsenftner 11 hours ago

          There is also a ton of Wan video activity in the ComfyUI community. For a while, starting about two weeks ago, ComfyUI shipped daily updates specific to Wan 2.2 video integration in the standard installation. ComfyUI is a significantly more complex application than Wan2GP, though.

  • CosmicShadow 6 hours ago

    Wan2.1 was great, but Wan2.2 is really awesome! Here's some samples I made locally with my 5090:

    - https://imgur.com/a/VeTn4Ej

    - https://imgur.com/a/CujxVX3

    Those were both Image to Video, and then I upscaled them to 4K. I made the images using Flux Dev Krea.

    Took about 3-4 minutes per video to generate and another 2-3 to upscale. Images took 20-40s to generate.

    • scroogey 5 hours ago

      What did you use to upscale them?

  • ahmedhawas123 7 hours ago

    Are there video generation benchmarks similar to how there are benchmarks for LLMs? Reason I ask is because with lots of these models you have to go through a long cycle to get them up and running before you see an output, and often they will break with basic tasks requiring physics, state, etc. Would love to see some comparison of models across basic things like that.

  • franky47 11 hours ago

    Quick, someone make a UI for this and call it Obi.

  • cuuupid 9 hours ago

    I’ve been using this via Replicate for a while and it’s honestly amazing while being way cheaper. China is definitely leading on open source

  • cubefox 14 hours ago

    Arguably the most interesting facts about the new Wan 2.2 model:

    - they are now using a 27B MoE architecture (with two 14B experts, for low level and high level detail), which were usually only used for autoregressive LLMs rather than diffusion models

    - the smaller 5B model supports up to 720p24 video and runs on 24 GB of VRAM, e.g. an RTX 4090, a consumer graphics card

    - if their benchmarks are reliable, the model performance is SOTA even compared to closed source models

    • liuliu 4 hours ago

      Some facts are wrong:

      - The 27B "MoE" is not the MoE commonly referred to in the LLM world. It is not MoE on the FFN layers; it simply means two different models are used for different denoising timestep ranges (exactly like SDXL-Base / SDXL-Refiner). Calling it MoE is not technically wrong, but claiming "which were usually only used for autoregressive LLMs rather than diffusion models" is just wrong (not to mention HiDream-I1 is a diffusion model that actually incorporates MoE layers, in the FFN).

      - The A14B models can run on 24GiB VRAM too, with CPU offloading and quantization.

      - Yes, it is SotA even including some closed source models.
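A minimal, pure-Python sketch of the routing liuliu describes (all names here are illustrative, not the actual Wan implementation): one "expert" denoises the high-noise timestep range and the other the low-noise range, selected once per sampling step rather than per token as in LLM-style MoE.

```python
# Timestep-range "expert" routing: a high-noise model handles early
# denoising steps and a low-noise model handles late ones (akin to
# SDXL-Base / SDXL-Refiner), unlike per-token FFN routing in LLM MoE.
# All names are illustrative stand-ins.

def make_expert(name):
    # Stand-in for a 14B denoiser; records which expert ran at which t.
    def denoise(latent, t):
        return latent + [(name, t)]
    return denoise

high_noise_expert = make_expert("high")  # timesteps near 1.0 (mostly noise)
low_noise_expert = make_expert("low")    # timesteps near 0.0 (fine detail)

def sample(num_steps, boundary=0.5):
    latent = []
    for i in range(num_steps):
        t = 1.0 - (i + 0.5) / num_steps  # timestep schedule: 1.0 -> 0.0
        expert = high_noise_expert if t >= boundary else low_noise_expert
        latent = expert(latent, t)       # only one expert active per step
    return latent

trace = sample(4)
print([name for name, _ in trace])       # ['high', 'high', 'low', 'low']
```

Because only one expert runs at each step, the per-step compute and resident working set match a single 14B model, even though 27B parameters exist in total.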

    • mandeepj 10 hours ago

      > - the smaller 5B model supports up to 720p24 video and runs on 24 GB of VRAM, e.g. an RTX 4090, a consumer graphics card

      Seems like you can run it on 2 GPUs, each having 12 GB VRAM. At least, a breakdown on their GitHub page implied so.
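Rough back-of-envelope arithmetic for the VRAM claims in this subthread, counting weights only at fp16 (2 bytes/parameter) and ignoring activations, the VAE, and the text encoder, so real usage is higher:

```python
# Parameter memory only; actual VRAM usage is higher (activations,
# VAE, text encoder, framework overhead).

def weight_gib(params_billions, bytes_per_param=2):
    """GiB occupied by the weights alone."""
    return params_billions * 1e9 * bytes_per_param / 2**30

print(round(weight_gib(5), 1))                     # 5B model:  9.3 GiB
print(round(weight_gib(14), 1))                    # 14B expert: 26.1 GiB
print(round(weight_gib(14, bytes_per_param=1), 1)) # int8: 13.0 GiB
```

This shows why the A14B models need CPU offloading or quantization to fit in 24 GiB (fp16 weights alone are ~26 GiB), and why the 5B model's ~9.3 GiB of weights could plausibly be split across two 12 GB cards.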

  • ProofHouse 15 hours ago

    How can they manage that but not the website?

  • ivape 6 hours ago

    Censored?

  • esseph 16 hours ago

    Ugh, I hate that they used this name

    • yorwba 16 hours ago

      You can call it Wanxiang (万相, ten thousand pictures) if you want. Similarly, Qwen is Qianwen (千问, one thousand questions).

      • CapsAdmin 15 hours ago

        Its original name was WanX, but the gen AI community found that to be too funny / unfortunate, so they changed it to just Wan.

        • bn-l 10 hours ago

          It’s probably a more appropriate name to be fair.

      • latentsea 16 hours ago

        They should just pretend it's an acronym. Wide Art Network.

      • qiine 13 hours ago

        ha TIL, very cool names!

    • diggan 10 hours ago

      Why "hate" this name more than any other name? At least justify your semi-spam.

      • esseph 6 hours ago

        • diggan 4 hours ago

          I'm familiar with that, but would people really confuse a video generation model for a type of computer networks?

          • esseph 2 hours ago

            That assumes you know what VEO 3 is by reading the title.

            But, I guess sometimes you use a plane to build a plane while the material is aligned to a particular plane.

    • ProofHouse 15 hours ago

      HATE