Wan 2.2 Has Landed
Big news for AI video creators: Wan 2.2 is here, and it’s a serious step forward.
Here’s what makes it worth your time:
  • Flexible Generation Modes – Text-to-Video, Image-to-Video, or a hybrid of the two. One model does it all.
  • Cinematic Control – Prompts can now tap into lighting, composition, color, and real cinematic camera language. Think more “film look” out of the box.
  • Smoother Motion – Trained on way more data than v2.1, it handles body movement, facial expressions, and complex choreography with a lot more fluidity.
  • Smarter Architecture – A Mixture-of-Experts setup means the model uses different “brains” for rough vs. refined detail, so you get higher capacity without crushing your GPU.
  • Runs on Consumer Hardware – The lighter TI2V-5B version can crank out 720p/24fps clips on cards like the RTX 4090.
  • Benchmark Winner – On WAN-Bench 2.0, it outperforms even some big closed-source players like Sora and Hailuo 02.
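To make the MoE point above concrete: Wan 2.2 reportedly splits denoising between a "high-noise" expert (early steps, rough layout) and a "low-noise" expert (late steps, fine detail), switching based on the timestep. Here's a toy sketch of that routing idea — the threshold value and names are purely illustrative, not the model's actual internals:

```python
def route_expert(timestep: int, total_steps: int = 1000,
                 boundary: float = 0.9) -> str:
    """Pick which expert denoises this step (toy illustration).

    Diffusion timesteps count down from total_steps (pure noise)
    toward 0 (clean video): early high-noise steps go to one expert,
    late refinement steps to the other.
    """
    if timestep >= boundary * total_steps:
        return "high_noise"  # rough structure / layout expert
    return "low_noise"       # refinement / detail expert
```

Because only one expert's weights are active at any given step, you get the capacity of two models while each denoising step costs roughly the same as running one — which is the "higher capacity without crushing your GPU" part.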
In short: better visuals, better motion, more filmmaker-friendly.
I’ll be testing it out in some workflows for Lumarka soon, but I’d love to hear what you’re seeing. If you’ve run prompts through Wan 2.2 already, what’s impressed you, or what’s still frustrating?
Drop your clips, tips, or thoughts below 👇
Lawrence Jordan