Meta just announced its own media-focused AI model, called Movie Gen, that can be used to generate realistic video and audio clips.
The company shared multiple 10-second clips generated with Movie Gen, including a Moo Deng-esque baby hippo swimming around, to demonstrate its capabilities. While the tool is not yet available for use, this Movie Gen announcement comes shortly after its Meta Connect event, which showcased new and refreshed hardware and the latest version of its large language model, Llama 3.2.
Going beyond the generation of straightforward text-to-video clips, the Movie Gen model can make targeted edits to an existing clip, like inserting an object into someone’s hands or changing the appearance of a surface. In one of the example videos from Meta, a woman wearing a VR headset was altered to look like she was wearing steampunk binoculars.
Audio bites can be generated alongside the videos with Movie Gen. In the sample clips, an AI man stands near a waterfall with audible splashes and the sounds of a symphony; the engine of a sports car purrs and tires screech as it zips around the track, and a snake slides along the jungle floor, accompanied by suspenseful horns.
Meta shared some further details about Movie Gen in a research paper released Friday. Movie Gen Video consists of 30 billion parameters, while Movie Gen Audio consists of 13 billion parameters. (A model’s parameter count roughly corresponds to how capable it is; by contrast, the largest variant of Llama 3.1 has 405 billion parameters.) Movie Gen can generate high-definition videos up to 16 seconds long, and Meta claims that it outperforms competitive models in overall video quality.
Earlier this year, CEO Mark Zuckerberg demonstrated Meta AI’s Imagine Me feature, where users can upload a photo of themselves and role-play their face into multiple scenarios, by posting an AI image of himself drowning in gold chains on Threads. A video version of a similar feature is possible with the Movie Gen model; think of it as a kind of ElfYourself on performance enhancers.
What data has Movie Gen been trained on? The specifics aren’t clear in Meta’s announcement post: “We’ve trained these models on a combination of licensed and publicly available data sets.” The sources of training data and what’s fair to scrape from the web remain a contentious issue for generative AI tools, and it’s rarely ever public knowledge what text, video, or audio clips were used to create any of the major models.
It will be interesting to see how long it takes Meta to make Movie Gen widely available. The announcement blog only vaguely gestures at a “potential future release.” For comparison, OpenAI announced its AI video model, called Sora, earlier this year and has not yet made it available to the public or shared any upcoming release date (though WIRED did receive a few exclusive Sora clips from the company for an investigation into bias).
Considering Meta’s legacy as a social media company, it’s possible that tools powered by Movie Gen will start popping up, eventually, inside of Facebook, Instagram, and WhatsApp. In September, competitor Google shared plans to make aspects of its Veo video model available to creators inside its YouTube Shorts sometime next year.
While larger tech companies are still holding off on fully releasing video models to the public, you can experiment with AI video tools right now from smaller, upcoming startups like Runway and Pika. Give Pikaffects a whirl if you’ve ever wondered what it would be like to see yourself cartoonishly crushed with a hydraulic press or suddenly melt into a puddle.