Meta launches open-source multimodal AI model Llama 3.2 for image and text processing.

Meta has launched Llama 3.2, its first open-source AI model family capable of processing both images and text. The release includes vision models with 11 billion and 90 billion parameters, plus lightweight text-only models with 1 billion and 3 billion parameters designed to run on a range of hardware, including mobile and edge devices. Llama 3.2 aims to power AI applications in areas such as augmented reality and document analysis, and Meta says its image-recognition performance is competitive with rival models from OpenAI and Anthropic.

September 25, 2024