OpenAI launches GPT-5.3-Codex-Spark, a fast, lightweight coding model for Cerebras chips, offering real-time development speed and reduced reliance on Nvidia.
OpenAI has released GPT-5.3-Codex-Spark, a fast, lightweight coding model optimized for Cerebras Systems’ wafer-scale chips, delivering over 1,000 tokens per second and 25% faster output than prior models.
Designed for real-time development tasks like debugging and documentation, it uses half the tokens of earlier versions, scores 77.3% on Terminal-Bench 2.0, and runs with just 44 GB of memory.
Available now to ChatGPT Plus and Codex Pro users, it is OpenAI’s first major production model to run on non-Nvidia hardware, delivered as part of a $10 billion partnership with Cerebras.
While it sacrifices broad capabilities for speed, the model signals a strategic shift toward diversified AI hardware, reducing reliance on Nvidia and emphasizing low-latency inference for developers.
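For developers with access, the low-latency emphasis suggests a streaming workflow through the standard OpenAI Python SDK. The sketch below is only an illustration of that pattern: the model identifier "gpt-5.3-codex-spark" is an assumption based on the name reported above, not a confirmed API id, and availability through the API is not stated in this summary.

# Minimal sketch: streaming tokens from the model with the OpenAI Python SDK (v1.x).
# The model id below is assumed from the reported name, not confirmed by OpenAI.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-5.3-codex-spark",  # assumed id; check the model list available to your account
    messages=[
        {"role": "user", "content": "Explain this stack trace and suggest a fix: ..."},
    ],
    stream=True,  # stream tokens as they arrive to take advantage of low-latency inference
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)

Streaming matters here because a model tuned for real-time tasks like debugging delivers most of its value when output appears as it is generated rather than after the full completion finishes.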