Coalition of AI experts launches "Humanity's Last Exam" to assess expert-level AI with 1,000 crowd-sourced questions.

A coalition of AI experts, including the Center for AI Safety and Scale AI, has launched "Humanity's Last Exam," a project to crowd-source challenging questions for AI systems, prompted by the impressive performance of OpenAI's o1 model. The exam aims to gauge when expert-level AI has been achieved and will comprise at least 1,000 crowd-sourced questions. Submissions are due by November 1; winning contributors will receive co-authorship and prizes of up to $5,000. Weapon-related topics are excluded for safety reasons.

September 16, 2024
