Runway AI, an artificial intelligence (AI) firm focused on video generation models, announced a new feature on Tuesday. Dubbed Act-One, the capability is available within the company's latest Gen-3 Alpha video generation model and is said to accurately capture facial expressions from a source video and reproduce them on an AI-generated character. The feature addresses a significant pain point in AI video generation: converting real people into AI characters without losing realistic expressions.
In a blog post, the AI firm detailed the new video generation capability. Runway stated that the Act-One tool can create live-action and animated content using video and voice performances as inputs. The tool is aimed at offering expressive character performance in AI-generated videos.
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Learn more about Act-One below.
(1/7) pic.twitter.com/p1Q8lR8K7G
— Runway (@runwayml) October 22, 2024
AI-generated videos have significantly changed the video content creation process, as individuals can now generate specific videos using natural-language text prompts. However, certain limitations have prevented wider adoption of the technology. One is the lack of controls to change a character's expressions in a video or to improve their performance in terms of sentence delivery, gestures, and eye movement.
With Act-One, Runway is trying to bridge that gap. The tool, which only works with the Gen-3 Alpha model, simplifies the facial animation process, which is often complex and requires multi-step workflows. Today, animating such characters requires recording an individual from multiple angles, manually rigging the face, and capturing facial motion separately.
Runway claims Act-One replaces this workflow with a two-step process. Users record a video of themselves or an actor with a single camera, which can even be a smartphone, and select an AI character. The tool is then claimed to faithfully capture not only facial expressions but also finer details such as eye movements, micro-expressions, and the style of delivery.
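To make that two-step flow concrete, here is a minimal sketch of what a programmatic version might look like. Runway had not published a public API for Act-One at the time of the announcement, so the endpoint, parameter names, and response handling below are hypothetical placeholders, not the company's actual interface.

```python
# Hypothetical sketch of Act-One's two-step workflow. Runway has not
# published this API; the endpoint and field names are placeholders.
import requests

API_BASE = "https://api.example.com/v1"  # placeholder, not a real Runway endpoint
API_KEY = "YOUR_API_KEY"                 # placeholder credential


def generate_performance(driving_video_path: str, character_image_path: str) -> bytes:
    """Step 1: supply a single-camera driving video (a phone clip works).
    Step 2: pair it with an AI character image. The service would then
    transfer facial expressions, eye movement, and delivery style."""
    with open(driving_video_path, "rb") as video, open(character_image_path, "rb") as image:
        response = requests.post(
            f"{API_BASE}/act_one/generate",  # hypothetical route
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"driving_video": video, "character_image": image},
            timeout=300,
        )
    response.raise_for_status()
    return response.content  # the generated video bytes


if __name__ == "__main__":
    result = generate_performance("actor_take.mp4", "character.png")
    with open("output.mp4", "wb") as f:
        f.write(result)
```

In practice, the same two inputs drive the workflow through Runway's web interface: upload the driving clip, pick a character, and generate.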
Highlighting the scope of this feature, the company stated in the blog post, “The model preserves realistic facial expressions and accurately translates performances into characters with proportions different from the original source video. This versatility opens up new possibilities for inventive character design and animation.”
One of the models strengths is producing cinematic and realistic outputs across a robust number of camera angles and focal lengths. Allowing you generate emotional performances with previously impossible character depth opening new avenues for creative expression.
(4/7) pic.twitter.com/JG1Fvj8OUm
— Runway (@runwayml) October 22, 2024
Notably, Act-One can be used not only for animated characters but also for live-action characters in a cinematic sequence. Further, the tool can capture details even when the angle of the actor's face differs from the angle of the AI character's face.
The feature is currently being rolled out to all users gradually. However, since it only works with Gen-3 Alpha, users on the free tier will get a limited number of tokens to generate videos with the tool.