Stability AI, a leading open-source AI research company, today announced the release of Stable Video Diffusion, a generative video model. The release marks a significant step forward in AI-powered video creation, giving users the ability to generate realistic, creative video clips from still images and text prompts.
Stable Video Diffusion builds on the success of Stability AI's Stable Diffusion image model, which has been widely praised for producing high-quality images from text prompts. The video model extends that capability to motion, enabling users to generate short clips of up to 25 frames.
The model is currently available as a research preview: the code is published in Stability AI's GitHub repository and the weights on its Hugging Face page, so the model can be run locally. The company also plans to release a web-based Text-To-Video interface in the near future.
Beyond generating video from prompts, Stable Video Diffusion can be adapted to a variety of downstream tasks, such as multi-view synthesis from a single image. This versatility makes the model useful across a wide range of applications, including advertising, education, and entertainment.
Stability AI emphasizes that Stable Video Diffusion is still at an early stage of development and is not intended for real-world or commercial use at this time. The company is seeking feedback from users to refine the model ahead of a wider release.
About Stability AI
Stability AI is an open-source AI research company dedicated to amplifying human intelligence. It develops and releases cutting-edge AI models that are freely accessible to all, in the belief that AI can help solve some of the world's most pressing challenges and should be available to everyone who can benefit from it.