Apple's ReALM language model improves context understanding and outperforms GPT-4 in specific benchmarks for voice assistants like Siri.
Apple's AI researchers have unveiled a new language model called ReALM (Reference Resolution As Language Modeling), which aims to make voice assistants like Siri smarter by improving context understanding and the handling of ambiguous references. The model is designed to run on-device and to consider both on-screen content and ongoing tasks, giving users more accurate responses. Apple claims that ReALM outperforms OpenAI's GPT-4 on specific reference-resolution benchmarks.
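The core idea of treating reference resolution as language modeling is to flatten on-screen entities into plain text so a language model can decide which one an ambiguous request ("call that number") refers to. The sketch below is purely illustrative, assuming a hypothetical `build_prompt` helper and entity format; it is not Apple's actual implementation or API.

```python
# Illustrative sketch only: render on-screen entities as numbered text lines
# so a language model could resolve an ambiguous spoken reference.
# The entity schema and helper name are assumptions, not Apple's API.

def build_prompt(screen_entities, user_request):
    """Flatten screen entities into a numbered textual list plus the request."""
    lines = [
        f"{i}. {entity['type']}: {entity['text']}"
        for i, entity in enumerate(screen_entities, start=1)
    ]
    return (
        "On-screen entities:\n"
        + "\n".join(lines)
        + f"\nUser: {user_request}\n"
        + "Which entity number does the user mean?"
    )

# Example: two entities visible on screen, one ambiguous voice request.
screen = [
    {"type": "phone_number", "text": "555-0123"},
    {"type": "address", "text": "1 Infinite Loop"},
]
print(build_prompt(screen, "call that number"))
```

In this framing, the downstream language model only ever sees text, which is what lets a relatively small on-device model handle what would otherwise require multimodal screen understanding.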
April 02, 2024