Runway, a US-based AI research company specialising in creative software, has launched Act-One, a tool for generating character animations based on simple video and voice inputs. According to Runway, Act-One is designed to streamline animation production, offering an alternative to the typically complex and resource-intensive pipelines used in facial animation.
Traditional animation workflows for realistic facial expressions require motion-capture equipment, multiple video references, and detailed face rigging—steps that can be costly and time-consuming. Act-One bypasses these requirements by letting users animate characters directly from a video and voice recording, making it feasible to produce animations with a simple camera setup, Runway says in an official blog post.
The tool supports a range of character styles, from realistic portrayals to stylised designs. Act-One translates facial expressions and subtle movements—such as micro-expressions and eye-line adjustments—from actors onto different character designs, even when a character's proportions differ from the source footage. According to the company, this capability opens up new options in character design without the need for motion capture.
Act-One also supports multi-character scenes, allowing a single actor to perform multiple roles. Runway adds that this feature, combined with the tool's high-fidelity output, may suit creators producing dialogue-focused videos without extensive production resources.
According to Runway, the company has incorporated content-moderation measures into Act-One, including safeguards to prevent the unauthorised generation of public figures and technical checks to verify that users hold the rights to any custom voices they create.