🎉 Speed up film directing with a cutting-edge AI video generator 🎉
Under the hood, Wan 2.2 Animate combines:
- A motion-retargeting module trained on 8.2M human/character video pairs
- Pose-to-pose mapping with texture and lighting preservation
- 3D-aware attention to handle occlusions in complex movements
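To build intuition for the pose-to-pose mapping listed above, here is a minimal, self-contained sketch of motion retargeting on a toy 2D skeleton: joint angles come from the source performer, bone lengths come from the target character. It is purely illustrative (the joint names and bone chain are invented for this example) and is not Wan Animate's actual module, which works on learned representations rather than explicit bone math.

```python
import numpy as np

# Toy 2D skeleton as a chain of (parent, child) bones; invented for this sketch.
BONES = [("hip", "knee"), ("knee", "ankle")]

def retarget_pose(src_pose, tgt_rest, root="hip"):
    """Map a source pose onto a target skeleton with different proportions.

    src_pose / tgt_rest are dicts: joint name -> np.array([x, y]).
    Joint directions are taken from the source pose, bone lengths from the
    target rest skeleton, so the motion fits the new character's body.
    """
    out = {root: tgt_rest[root].copy()}
    for parent, child in BONES:
        direction = src_pose[child] - src_pose[parent]
        direction = direction / np.linalg.norm(direction)  # source joint angle
        length = np.linalg.norm(tgt_rest[child] - tgt_rest[parent])  # target bone length
        out[child] = out[parent] + direction * length
    return out

# Example: a longer-legged target character performing the source's kick.
src_pose = {"hip": np.array([0.0, 0.0]), "knee": np.array([0.7, -0.7]), "ankle": np.array([1.4, -0.7])}
tgt_rest = {"hip": np.array([0.0, 0.0]), "knee": np.array([0.0, -1.5]), "ankle": np.array([0.0, -3.0])}
print(retarget_pose(src_pose, tgt_rest))
```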
What is Wan Animate
Wan Animate is a unified AI framework for character animation and replacement. Developed by Alibaba Tongyi Lab, it brings static characters to life with realistic expressions and movements.
- Character Animation: Animate any character from a single static image using reference video movements and expressions.
- Character Replacement: Replace characters in videos while preserving original lighting, background, and environmental context.
- Open Source Model: Access the complete Wan 2.2 Animate model with weights and source code available on GitHub.
Key Features of Wan 2.2 Animate
Advanced AI capabilities for realistic character animation and video enhancement.
Facial Expression Transfer
Capture and replicate detailed facial expressions with high fidelity using advanced neural networks.
Body Motion Replication
Spatially aligned skeleton system for accurate body movement and gesture replication.
Environmental Matching
Automatic relighting ensures the character's appearance matches the scene's lighting and color tone (a simplified color-matching sketch follows this feature list).
Dual Mode Support
Animation Mode for static character animation and Replacement Mode for character substitution in videos.
High-Quality Output
Generate 720P videos at 24fps with cinematic-level aesthetic controls and detailed motion.
Open Source Model
Complete model weights and source code available on Hugging Face and GitHub platforms.
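As promised under Environmental Matching: the model's relighting module is learned and internal, but the basic idea of matching a character to a scene's color tone can be pictured with a classic statistics-based color transfer (Reinhard-style mean/std matching in LAB space). The function below is a hand-rolled stand-in for intuition only, not Wan Animate's implementation; it assumes OpenCV and NumPy are installed.

```python
import cv2
import numpy as np

def match_color_tone(character_bgr, scene_bgr):
    """Shift a character image toward a scene's color statistics.

    Reinhard-style transfer: match per-channel mean/std in LAB space.
    A crude stand-in for learned relighting, shown for intuition only.
    """
    char = cv2.cvtColor(character_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    scene = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        c_mean, c_std = char[..., c].mean(), char[..., c].std() + 1e-6
        s_mean, s_std = scene[..., c].mean(), scene[..., c].std()
        char[..., c] = (char[..., c] - c_mean) / c_std * s_std + s_mean
    char = np.clip(char, 0, 255).astype(np.uint8)
    return cv2.cvtColor(char, cv2.COLOR_LAB2BGR)

# Example with synthetic stand-ins for a character crop and a scene frame:
rng = np.random.default_rng(0)
character = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
scene = rng.integers(0, 128, (64, 64, 3), dtype=np.uint8)  # darker, cooler scene
adjusted = match_color_tone(character, scene)
```

A learned relighting module goes much further (directional light, shadows, reflections), but this statistical shift captures why a matched character stops looking pasted in.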
How to Use Wan Animate
Animate your characters in four simple steps:
1. Upload a character image (PNG, JPG, or WEBP).
2. Upload a reference video (MP4, MOV, or AVI, max 120s).
3. Add any additional context or specific instructions for video generation.
4. Choose an output resolution and generate (higher resolution requires more credits).
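The workflow above can also be pictured as code. The snippet below is a hypothetical sketch only: the endpoint URL, field names, and response shape are invented for illustration, since no REST API is documented here; consult the official site or GitHub repository for the real interface.

```python
import requests

# Hypothetical endpoint and field names, for illustration only.
API_URL = "https://example.com/api/wan-animate/generate"  # placeholder

def animate_character(image_path, video_path, prompt="", resolution="720p"):
    """Run the four steps above as a single request (sketch, not a real API)."""
    with open(image_path, "rb") as img, open(video_path, "rb") as vid:
        response = requests.post(
            API_URL,
            files={"character_image": img, "reference_video": vid},  # steps 1-2
            data={"prompt": prompt, "resolution": resolution},       # steps 3-4
            timeout=600,
        )
    response.raise_for_status()
    return response.json()  # e.g. a job id or a result URL

result = animate_character("hero.png", "dance.mp4", prompt="smooth, cinematic motion")
print(result)
```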
Why Choose Wan Animate
Experience state-of-the-art AI character animation with unprecedented quality and realistic motion replication.
Frequently Asked Questions About Wan Animate
Have another question? Check our GitHub repository or ask the community.
What exactly is Wan Animate and how does it work?
Wan Animate is a unified AI framework developed by Alibaba Tongyi Lab for character animation and replacement. It uses advanced neural networks to capture facial expressions and body movements from reference videos and apply them to static character images, creating realistic animated videos.
What's the difference between Animation and Replacement mode?
Animation mode generates new animations for the character in the input image based on the input video, while Replacement mode swaps the character in the video with a new one. Both modes use the same underlying technology but are optimized for different use cases.
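In code terms, the two modes are typically a single switch over the same pipeline. The wrapper below is a hypothetical illustration; the pipeline callable and the replace_character parameter are invented names for this sketch, not the repository's actual interface.

```python
# Hypothetical wrapper, for illustration only; the real inference scripts on
# GitHub define their own flags. Mode names follow the FAQ answer above.

def run_wan_animate(pipeline, character_image, reference_video, mode):
    if mode == "animation":
        # Drive the still character image with the video's motion.
        return pipeline(image=character_image, video=reference_video,
                        replace_character=False)
    if mode == "replacement":
        # Keep the video's scene, swap in the new character.
        return pipeline(image=character_image, video=reference_video,
                        replace_character=True)
    raise ValueError(f"unknown mode: {mode}")
```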
What types of characters can I animate with Wan Animate?
Wan Animate supports a wide range of character types including real people in photos, anime characters, cartoon illustrations, and artistic portraits. The model is trained to work with diverse character styles and can adapt to different visual aesthetics.
How long does it typically take to generate animations with Wan Animate?
Processing time depends on video length and resolution, but typically ranges from a few minutes to an hour for standard content. The model is optimized for consumer GPUs and can generate 720P videos at 24fps efficiently.
What's included in the Wan Animate model release?
The release includes the complete Wan2.2-Animate-14B model weights, training code, inference scripts, documentation, and examples. Everything is open-source and available on Hugging Face and GitHub platforms.
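Since the weights live on Hugging Face, downloading them takes a few lines with the huggingface_hub client. The repo id below is inferred from the model name in this answer; verify the exact id on Hugging Face before running.

```python
from huggingface_hub import snapshot_download

# Repo id assumed from the Wan2.2-Animate-14B model name above; confirm the
# exact id on huggingface.co before downloading (the files total many GB).
local_dir = snapshot_download(repo_id="Wan-AI/Wan2.2-Animate-14B")
print(f"Model weights downloaded to: {local_dir}")
```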
Can I use Wan Animate for commercial projects?
Yes! Wan Animate is released as an open-source model, making it suitable for both research and commercial applications. Check the specific license terms on the GitHub repository for detailed usage rights and restrictions.