IBL News | New York
This week, stability.ai released Stable Video Diffusion, a foundation model for generative video built on the image model Stable Diffusion.
Adaptable to numerous video applications, the released model is intended for research only, not for real-world or commercial use at this stage.
Stable Video Diffusion was released in the form of two image-to-video models, capable of generating 14 and 25 frames at customizable frame rates between 3 and 30 frames per second.
“Stable Video Diffusion is a proud addition to our diverse range of open-source models. Spanning across modalities including image, language, audio, 3D, and code, our portfolio is a testament to Stability AI’s dedication to amplifying human intelligence,” said the company.
In addition, stability.ai opened a waitlist for access to an upcoming web experience featuring a Text-to-Video interface. This tool is meant to showcase the practical applications of Stable Video Diffusion in numerous sectors, including advertising, education, entertainment, and beyond.
Stability just released Stable Diffusion Video this week. It’s quite something.
The text-to-video model is fully open source and lets you generate 14 or 25 frames at 576 x 1024.
It can also do multi-view generation, 3D scene understanding, and camera control via LoRA.… pic.twitter.com/Ic1KARd40q
— Lior⚡ (@AlphaSignalAI) November 23, 2023