Stability.AI Introduces a Research Model for Generative Video

IBL News | New York

Stability.ai this week released Stable Video Diffusion, a foundation model for generative video based on the Stable Diffusion image model.

Although adaptable to numerous video applications, the model is intended for research only and not for real-world or commercial use at this stage.

Stable Video Diffusion was released as two image-to-video models, capable of generating 14 and 25 frames, respectively, at customizable frame rates between 3 and 30 frames per second.

“Stable Video Diffusion is a proud addition to our diverse range of open-source models. Spanning across modalities including image, language, audio, 3D, and code, our portfolio is a testament to Stability AI’s dedication to amplifying human intelligence,” said the company.

In addition, Stability.ai opened a waitlist for access to an upcoming web experience featuring a text-to-video interface. The tool is meant to showcase the practical applications of Stable Video Diffusion in numerous sectors, including advertising, education, and entertainment.