Learn languages naturally with fresh, real content!

Meta launches open-source multimodal AI model Llama 3.2 for image and text processing.

Meta has launched Llama 3.2, its first open-source multimodal AI model capable of processing both images and text. It includes vision models with 11 billion and 90 billion parameters, plus lightweight text models with 1 billion and 3 billion parameters designed to run on edge and mobile hardware. Llama 3.2 aims to enhance AI applications in areas like augmented reality and document analysis, offering competitive performance in image recognition tasks against rivals such as OpenAI and Anthropic.
