Meta swings big on AI at Meta Connect 2024

SeniorTechInfo
[Image: Mark Zuckerberg on stage at Meta Connect 2024. Screenshot by David Gewirtz/ZDNET]

Mark Zuckerberg took center stage at Meta Connect 2024, showcasing advances in VR/AR and AI. The fusion of these technologies, especially within Meta’s glasses lineup as covered by ZDNET, is changing how we interact with the digital world.

Also: Everything announced at Meta Connect 2024: $299 Quest 3S, Orion AR glasses, and more

This article, however, focuses on the AI announcements that stole the show.

Multimodal large language model

Zuckerberg introduced Llama 3.2, which adds multimodal capabilities that let the model understand images.

Comparing Meta’s Llama 3.2 models with industry counterparts, Zuckerberg highlighted Meta’s approach of offering cutting-edge models with free, unlimited access, integrated across its range of products and apps.

Also: Meta inches toward open-source AI

Powered by Llama 3.2, Meta AI, Meta’s AI assistant, is on track to become the most widely used AI assistant globally, with nearly 500 million monthly active users, according to Zuckerberg.

[Image: the tie-dye shirt demo. Screenshot by David Gewirtz/ZDNET]

To showcase the model’s image understanding capabilities, Zuckerberg demonstrated how Meta AI could modify images on a mobile device in response to simple text commands, transforming a shirt into tie-dye or adding accessories like helmets.
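
The editing features run inside Meta’s own apps, but the image understanding comes from the open-weight Llama 3.2 vision models, which developers can prompt directly. Below is a minimal sketch of asking the 11B vision-instruct checkpoint about a photo, assuming the Hugging Face Transformers Mllama integration, an approved access token for the gated meta-llama repository, and a placeholder image file:

```python
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

# Assumption: transformers >= 4.45 with Mllama support and access to the
# gated meta-llama checkpoint on Hugging Face.
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image path, echoing the shirt from the Connect demo.
image = Image.open("shirt.jpg")

# Chat-style prompt pairing the image with a text question.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this shirt and suggest a tie-dye variation."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

# Generate a short answer grounded in the image.
output = model.generate(**inputs, max_new_tokens=80)
print(processor.decode(output[0], skip_special_tokens=True))
```

The same model handles plain text as well as image-plus-text input, so the prompt can be swapped for any question about the picture.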

Meta AI with voice

Meta’s AI assistant now supports voice interactions within Meta’s apps. This fits the broader shift toward voice-based AI interaction over traditional text chatbots.

[Image: John Cena, one of the celebrity voice options for Meta AI. Screenshot by David Gewirtz/ZDNET]

Zuckerberg emphasized the potential of voice interaction, predicting it will outpace text chatbots in the near future, though how easily users can access it will determine how quickly it catches on.

Also: AI voice generators: What they can do and how they work

Additionally, users can personalize the assistant with a selection of celebrity voices for natural voice conversations across Instagram, WhatsApp, and Facebook Messenger.

Meta AI Studio

Meta introduced new features in AI Studio, its chatbot-creation tool, which lets users build personalized chatbots that mirror their conversational style. AI Studio’s move toward more immersive, lifelike avatars edges into uncanny-valley deepfake territory.

[Image: the AI Studio avatar demo. Screenshot by David Gewirtz/ZDNET]

The new iteration of AI Studio offers a more natural and interactive approach, exemplified by a demo where Zuckerberg interacted with a chatbot resembling creator Don Allen Stevenson III, showcasing lifelike head motions and lip animations.

Also: How Apple, Google, and Microsoft can save us from AI deepfakes

Meta also showed a breakthrough in AI translation that automatically dubs Reels between English and Spanish, making multilingual content creation far easier.

[Image: the AI translation and dubbing demo. Screenshot by David Gewirtz/ZDNET]

Llama 3.2

Zuckerberg elaborated on the improvements in Llama 3.2, noting that its parameter count grew with the addition of multimodal capabilities. Meta also introduced smaller models designed to run on-device for custom app development.

Also: I’ve tested dozens of AI chatbots since ChatGPT’s stunning debut. Here’s my top pick

Both the multimodal and the smaller on-device models are open source, positioning Llama as a pivotal player in shaping the future of the AI industry.
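
To make the on-device angle concrete, here is a minimal sketch of prompting one of the lightweight Llama 3.2 text models through the Hugging Face Transformers pipeline; the 1B-instruct model ID and the prompt are illustrative, and the checkpoint is gated behind Meta’s license acceptance:

```python
import torch
from transformers import pipeline

# Assumption: a recent transformers release and an approved access token
# for the gated meta-llama repository.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # lightweight text-only variant
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style prompt; the pipeline applies the model's chat template.
messages = [
    {"role": "user", "content": "In one sentence, what is a multimodal language model?"}
]
result = generator(messages, max_new_tokens=60)
print(result[0]["generated_text"][-1]["content"])  # assistant reply
```

A model of this size can run comfortably on laptop-class hardware, which is the kind of custom, on-device app development Meta says these smaller models are meant for.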

Lastly, Meta unveiled a host of new AI features for its AI glasses, underscoring its commitment to pushing the boundaries of AI integration. For a detailed breakdown of these features, check out our dedicated article.




