Runway, the startup that co-created the popular Stable Diffusion AI image generator, has released an AI model that takes any text description – such as “turtles flying in the sky” – and generates three seconds of matching video footage.
Citing safety and business reasons, Runway is not releasing the model widely to start, nor will it be open-sourced like Stable Diffusion. The text-to-video model, dubbed Gen-2, will initially be available on Discord via a waitlist on the Runway website.
Using AI to generate videos from text inputs is not new. Meta Platforms and Google both released research papers on text-to-video AI models late last year. The difference is that Runway is making its text-to-video model available to the general public, said Cristobal Valenzuela, Runway's chief executive.
Runway hopes that creatives and filmmakers will use the product, Valenzuela said.
Last month, chipmaking giant Qualcomm demonstrated Stable Diffusion 1.5, the AI image generator, running on an Android handset without network access ahead of Mobile World Congress (MWC) 2023. According to Qualcomm, its deployment of the AI tool, which typically requires a lot of computing power, can generate images in a few seconds. The company did not reveal details of the smartphone hardware it used to run the optimised AI tool locally on the device.
The popular generative AI tool is known to consume a lot of computing power, which is why several services that rely on it run the model on large servers rather than on a user's smartphone or computer.
© Thomson Reuters 2023