Meta announces new AI model that can generate video with sound


Meta CEO Mark Zuckerberg holds a smartphone as he makes a keynote speech at the Meta Connect annual event at the company’s headquarters in Menlo Park, California, U.S., September 25, 2024. (Manuel Orbegozo | Reuters)

Facebook owner Meta announced on Friday it had built a new AI model called Movie Gen that can create realistic-seeming video and audio clips in response to user prompts, claiming it can rival tools from leading media generation startups like OpenAI and ElevenLabs.

Samples of Movie Gen’s creations provided by Meta showed videos of animals swimming and surfing, as well as videos using people’s real photos to depict them performing actions like painting on a canvas.

Movie Gen can also generate background music and sound effects synced to the content of the videos, Meta said in a blog post, and can be used to edit existing videos.

In one such video, Meta had the tool insert pom-poms into the hands of a man running by himself in the desert, while in another it changed a parking lot where a man was skateboarding from dry ground into one covered by a splashing puddle.

Videos created by Movie Gen can be up to 16 seconds long, while audio can be up to 45 seconds long, Meta said. It shared data showing blind tests indicating that the model performs favorably compared with offerings from startups including Runway, OpenAI, ElevenLabs and Kling.

The announcement comes as Hollywood has been wrestling with how to harness generative AI video technology this year, after Microsoft-backed OpenAI in February first showed off how its product Sora could create feature film-like videos in response to text prompts.

Technologists in the entertainment industry are eager to use such tools to enhance and expedite filmmaking, while others worry about embracing systems that appear to have been trained on copyrighted works without permission.

Lawmakers also have highlighted concerns about how AI-generated fakes, or deepfakes, are being used in elections around the world, including in the U.S., Pakistan, India and Indonesia.

Meta spokespeople said the company was unlikely to release Movie Gen for open use by developers, as it has with its Llama series of large language models, saying it considers the risks individually for each model. They declined to comment on Meta’s assessment for Movie Gen specifically.

Instead, they said, Meta was working directly with the entertainment community and other content creators on uses of Movie Gen and would incorporate it into Meta’s own products sometime next year.

According to the blog post and a research paper about the tool released by Meta, the company used a mix of licensed and publicly available datasets to build Movie Gen.

OpenAI has been meeting with Hollywood executives and agents this year to discuss possible partnerships involving Sora, although no deals have been reported to have come out of those talks yet. Anxieties over the company’s approach increased in May when actor Scarlett Johansson accused the ChatGPT maker of imitating her voice without permission for its chatbot.

Lions Gate Entertainment, the company behind “The Hunger Games” and “Twilight,” announced in September that it was giving AI startup Runway access to its film and television library to train an AI model. In return, it said, the studio and its filmmakers can use the model to augment their work.
