
Meta launches open-source multimodal AI model Llama 3.2 for image and text processing.
Meta has launched Llama 3.2, its first open-source multimodal AI model capable of processing images and text.
It includes vision models with 11 billion and 90 billion parameters, as well as lightweight text-only models with 1 billion and 3 billion parameters designed to run on a wide range of hardware.
Llama 3.2 aims to enhance AI applications in areas such as augmented reality and document analysis, and offers competitive performance on image-recognition tasks against rival models from OpenAI and Anthropic.