Meta recently upgraded its Llama AI model from version 3.1 to 3.2, bringing a number of exciting new capabilities. Llama is now multimodal, meaning it can process images as well as text, making it more versatile than ever. So, what are the most striking features of this latest update?
Celebrity voices for Meta AI
One of the most exciting new features in Meta’s Llama 3.2 is the addition of celebrity voices to its AI. With this update, you can now use your voice to interact with Meta AI on platforms like WhatsApp, Messenger, Facebook, and Instagram. Even better, it will respond to you out loud, making the experience even more personal and engaging.
Whether you want answers, explanations, or just some entertainment from Meta AI, this feature makes the experience even more engaging. You can now choose to hear responses from celebrities like the witty Awkwafina, the legendary Dame Judi Dench, the energetic WWE star John Cena, the hilarious Keegan-Michael Key, and the charming Kristen Bell.
Visual feedback and image editing capabilities
Meta’s Llama 3.2 can now “see” and interpret images. We’re all used to AIs that are great at handling text – whether answering questions like a chatbot or summarizing long articles – but machine vision opens up a whole new dimension.
With Llama 3.2 in Meta AI, you can take a picture of a historical site during your travels, and the AI can provide detailed information about its history and significance. This is especially useful for history buffs and adventure travelers.
But the visual capabilities don’t stop there. The AI can also help you edit your photos by adding new backgrounds or details. So you can ask it to add a sunset to a photo taken at the beach or change the background entirely. This feature is similar to what you’d find in editing apps like Photoshop or Lightroom, but having it built directly into Meta’s platform makes it much easier to access.
Multiple versions of Llama 3.2
Llama 3.2 is launching with four different model sizes, each designed to address different needs and use cases.
First up are the 11B and 90B models (the “B” stands for billions of parameters). These are the multimodal heavyweights of the Llama 3.2 family, designed for complex tasks requiring more computational power. Imagine you’re overseeing a construction project and want to know the best way to allocate resources based on a dynamic schedule. Llama 3.2 can analyze the timeline, resources, and task dependencies to suggest the most efficient work plan.
Or let’s say you have an extensive database of customer feedback. Instead of manually sorting through comments, you can ask the model to identify patterns in customer satisfaction over time, and it will process the data to deliver a report instantly.
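These larger models aren’t limited to Meta’s own apps, either; the weights are openly available, so developers can run them directly. Below is a minimal sketch of image question-answering with the 11B vision model through the Hugging Face transformers library. It assumes you’ve accepted Meta’s license for the meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint and have a GPU with enough memory; landmark.jpg and the question are hypothetical stand-ins.

```python
# A minimal sketch of image question-answering with the open 11B vision
# checkpoint, via Hugging Face transformers (v4.45+).
# Assumes you've accepted Meta's license for the model repo and have a GPU
# with enough memory; "landmark.jpg" is a hypothetical local photo.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Build a chat-style prompt that pairs an image with a text question.
image = Image.open("landmark.jpg")
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "What landmark is this, and why is it significant?"},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=200)
print(processor.decode(output[0], skip_special_tokens=True))
```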
At the other end of the spectrum, we have the 1B and 3B models. These are great for lightweight tasks that prioritize speed and privacy; you might consider using them for everyday personal productivity on your phone.
For example, you might have a to-do list app that can automatically categorize your tasks, highlight the important ones, and even set reminders for deadlines.
The best part is that all of this happens locally on your device, so none of your sensitive information – like emails or calendar events – leaves your phone.
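To make that to-do example concrete, here’s a hedged sketch of local task categorization with the 1B model via the Hugging Face transformers pipeline. On a phone this would normally go through an on-device runtime, but the same prompt works on a laptop; the checkpoint name is the real Hugging Face repo, while the task list and prompt are invented for illustration.

```python
# A hedged sketch of local to-do categorization with the small 1B model.
# Requires accepting Meta's license for meta-llama/Llama-3.2-1B-Instruct;
# the to-do items below are made up for illustration.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",
    device_map="auto",
)

todos = [
    "File the quarterly report by Friday",
    "Buy groceries",
    "Call the plumber about the leak",
]
messages = [{
    "role": "user",
    "content": (
        "Sort these to-dos into Work, Home, and Errands, and flag any "
        "with deadlines:\n" + "\n".join(todos)
    ),
}]

# The pipeline accepts chat messages and returns the extended conversation;
# the assistant's reply is the last message.
result = generator(messages, max_new_tokens=150)
print(result[0]["generated_text"][-1]["content"])
```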
Meta’s new Llama 3.2 models are now more accessible than ever, available for download from llama.com (Meta’s official site) and Hugging Face. But what makes this release different is its integration into Meta’s ecosystem. With billions of people using Facebook, Instagram, WhatsApp, and Messenger every day, an upgraded Llama means many more users will soon experience Meta’s more sophisticated and engaging AI.
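And if you want to try the open weights yourself, fetching them can be a single call with the huggingface_hub library. This sketch assumes you have a Hugging Face account, have accepted Meta’s license on the model page, and are already authenticated.

```python
# A minimal sketch of fetching the Llama 3.2 1B weights from Hugging Face.
# Assumes a Hugging Face account, an accepted license for the repo, and a
# prior `huggingface-cli login` (or an HF_TOKEN in the environment).
from huggingface_hub import snapshot_download

local_dir = snapshot_download("meta-llama/Llama-3.2-1B-Instruct")
print(f"Model files downloaded to: {local_dir}")
```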