Meta appears to be trying out something new with AI-driven video editing features that let advertisers animate static images and extend the borders of video content. Aimed at giving brands more creative control and flexibility when building ads for Instagram and Facebook, the tools offer new ways to engage users with dynamic visuals.
The new AI-powered tools can transform static images into dynamic videos and can also expand existing videos by generating additional pixels, enlarging the content without compromising quality.
These features are rolling out gradually, with wider availability planned for 2025. The tools aim to broaden creative options for advertisers on Facebook and Instagram.
Meta Launches Its Text-To-Video Model
Facebook owner Meta on Friday revealed a new AI model dubbed Movie Gen that can produce realistic-looking videos from text prompts alone. The announcement comes months after OpenAI took the AI industry by storm by unveiling its text-to-video generator, Sora.
Movie Gen can also generate background music and sound effects synchronized with the video content. The tool produces up to 16 seconds of video in different aspect ratios and up to 45 seconds of audio. The company also shared results from blind tests in which Movie Gen outperformed competitors in the segment, such as Runway Gen-3, OpenAI’s Sora, and Kling 1.5.
Movie Gen can also produce personalized videos: given an image or video of a person, it generates a clip featuring that person in ‘rich visual detail’ while preserving human identity and movement. Meta claims the tool can likewise edit existing videos by adding transitions or effects. In a video shared on the company’s blog, Movie Gen added clothes to animals, changed a video’s background, and inserted new elements.
Meta Won’t Say Whether Ray-Ban Smart Glasses Videos Are Private
Meta is reportedly staying mum on whether it collects video and image data from its Ray-Ban Meta smart glasses to train its large language models (LLMs). The company recently announced a real-time video feature for the device that lets users ask the AI questions and request suggestions based on their surroundings.
The feature gives Meta AI real-time video capability, enabling it to “look” at a user’s surroundings and process that visual information to answer queries; for example, a user can ask it to identify a famous landmark. All of these functionalities require the Ray-Ban Meta smart glasses to passively capture videos and images of the surroundings to understand the context.