Runway Teases AI-Powered Text-To-Video Editing Using Written Prompts

from the taste-of-the-future dept.

An anonymous reader quotes a report from Ars Technica: In a tweet posted this morning, artificial intelligence company Runway teased a new feature of its AI-powered web-based video editor that can edit video from written descriptions, often called “prompts.” Runway’s “Text to Video” demonstration reel shows a text input box that allows editing commands such as “import city street” (suggesting the video clip already existed) or “make it look more cinematic” (applying an effect). It depicts someone typing “remove object” and selecting a streetlight with a drawing tool that then disappears (from our testing, Runway can already perform a similar effect using its “inpainting” tool, with mixed results). The promotional video also showcases what looks like still-image text-to-image generation similar to Stable Diffusion (note that the video does not depict any of these generated scenes in motion) and demonstrates text overlay, character masking (using its “Green Screen” feature, also already present in Runway), and more.

Video generation promises aside, what seems most novel about Runway’s Text to Video announcement is the text-based command interface. Whether video editors will want to work with natural language prompts in the future remains to be seen, but the demonstration shows that people in the video production industry are actively working toward a future in which synthesizing or editing video is as easy as writing a command. […] Runway is available as a web-based commercial product that runs in the Google Chrome browser for a monthly fee, which includes cloud storage for about $35 per year. But the Text to Video feature is in closed “Early Access” testing, and you can sign up for the waitlist on Runway’s website.
