Generative AI has emerged as the hottest topic in technology, revolutionizing tasks that range from generating images based on text prompts to solving complex equations at lightning speed.
Among the notable players in this field is Runway, a specialized generative AI tool for content creation. The tool stands out by effortlessly producing audio, images, videos, and 3D structures from a simple prompt, and the best part is that it is free to get started with.
Runway’s new motion brush is wild.
Brought me right back to the platform.
This isn’t perfect…w/e.
Way too much fun.
Tools Used:
Images: Midjourney
Image Editing: Photoshop Beta
Animation + Editing: Runway Gen2
Music: Soundraw #midjourney #runwayml #aiart #AIArtCommuity pic.twitter.com/6Unn8xpFLT
— Rory Flynn (@Ror_Fly) November 23, 2023
Runway’s capabilities extend to converting any image, including those generated on models like Midjourney, into videos using tools like Runway Motion Brush.
The latest addition to its offerings is Runway Gen-2, a multimodal AI system that can generate novel videos from text, images, or existing video clips. Adding to the convenience, there is even an iOS app for Runway, allowing users to generate multimedia content on their smartphones.
With Runway Gen-2, users can create new videos with straightforward text prompts. Free account holders can generate four-second videos, downloadable and shareable on any platform, although they will bear a watermark.
Today we’re releasing new features and updates to provide more control, greater fidelity and even more expressiveness when using Runway.
We are excited to introduce Motion Brush, Gen-2 Style Presets, updated Camera Controls and more.
Thread below pic.twitter.com/VdF6MyCUSK
— Runway (@runwayml) November 20, 2023
Each second of video generation consumes five credits, and free users receive 500 credits. For those seeking enhanced tools and capabilities, a subscription plan is available at $12 per month, offering more customization options for the output.
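At those figures, the free allotment works out to 100 seconds of generated video (500 credits ÷ 5 credits per second), or roughly 25 of the four-second clips.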
Motion brush in runway…damn good.
Much better control.
This was very needed. #runwayml #midjourneyV52 #AIArtCommuity #aiart pic.twitter.com/r8ZhM0GnAd
— Rory Flynn (@Ror_Fly) November 24, 2023
In a parallel development, Stability AI has introduced Stable Video Diffusion, a cutting-edge AI research tool that transforms any static image into a short video. This open-weights preview of two AI models employs image-to-video techniques and can operate locally on machines with Nvidia GPUs.
Fully AI generated short film using @runwayml Gen-2. + @midjourney. pic.twitter.com/3nD3IqzHEi
— Reagan Maconi (@reaganmaconi) November 27, 2023
I made a short film about dogs. 🐶
“The Unspoken World of Woofs”
Made with @runwayml Text-to-Image and Gen-2. pic.twitter.com/oCqsKSdGep
— Bryan Fox (@bryanf0x) November 23, 2023
Stability AI gained attention last year with Stable Diffusion, an open-weights image synthesis model, inspiring a community of enthusiasts who built on the technology with their own adaptations.
Now, Stability aims to replicate this success in AI video synthesis. Stable Video Diffusion currently comprises two models, “SVD” and “SVD-XT,” which generate 14 and 25 frames per clip, respectively. The models can run at varying frame rates and output short MP4 video clips at 576×1024 resolution, typically lasting two to four seconds.
Stability underscores that the model is still in its early stages and intended for research purposes only.
While the company actively updates the models and seeks feedback on safety and quality, it emphasizes that the current stage is not suitable for real-world or commercial applications. The insights and feedback received will play a crucial role in refining the models for future releases.
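For readers who want to try the open-weights preview on their own Nvidia hardware, below is a minimal sketch using the Hugging Face diffusers library’s StableVideoDiffusionPipeline. The model ID matches the published SVD-XT release; the input filename and sampling parameters are illustrative assumptions, not part of Stability’s announcement.

```python
# Minimal sketch: image-to-video with Stable Video Diffusion (SVD-XT)
# via the Hugging Face diffusers library, run locally on an Nvidia GPU.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the 25-frame "SVD-XT" checkpoint in half precision.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
# Offload sub-models to CPU between steps to fit consumer GPU memory.
pipe.enable_model_cpu_offload()

# Any still image can serve as the conditioning frame; SVD expects 1024x576.
image = load_image("my_still_image.png")  # hypothetical input file
image = image.resize((1024, 576))

# Generate the frames and write a short MP4 clip (~3.5 seconds at 7 fps).
generator = torch.manual_seed(42)  # fixed seed for reproducibility
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```

Lowering decode_chunk_size trades generation speed for lower VRAM use, which matters because these checkpoints are heavy even at the 576×1024 output resolution.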
from Firstpost Tech Latest News https://ift.tt/x1eTSQ9