Zico Kolter, a 42-year-old computer science professor at Carnegie Mellon University, leads OpenAI’s Safety and Security Committee, a four-member panel with the power to halt AI releases deemed unsafe.
Appointed over a year ago, Kolter saw his role become a key requirement of OpenAI's 2025 transition into a for-profit public benefit corporation, as mandated by California and Delaware regulators.
The agreements ensure that safety decisions take precedence over financial goals, granting Kolter full rights to observe for-profit board meetings and access safety data.
The committee addresses risks including AI misuse in weapons development, cyberattacks, and mental health harm.
While Kolter declined to confirm if the panel has ever blocked a release, he emphasized the growing threat landscape of advanced AI.
His leadership reflects heightened scrutiny as OpenAI navigates its shift from nonprofit mission to commercial enterprise.