Deployment safety
Sharing the technical work we do to make our systems safe, including how deployed models perform in evaluations, the risks we measure, and the steps we take to improve over time.
Updates
GPT-5.4 Thinking System Card
GPT-5.4 Thinking is the latest reasoning model in the GPT-5 series, as explained in our blog. The comprehensive safety…
GPT-5.3 Instant System Card
GPT-5.3 Instant is the newest addition to the GPT-5 series. As described in our blog, GPT-5.3 Instant responds faster,…
GPT-5.3-Codex System Card
GPT‑5.3‑Codex is the most capable agentic coding model to date, combining the frontier coding performance of…
Addendum to GPT-5.2 System Card: GPT-5.2-Codex
GPT-5.2-Codex is our most advanced agentic coding model yet for complex, real-world software engineering tasks. A version of…
Update to GPT-5 System Card: GPT-5.2
GPT-5.2 is the latest model family in the GPT-5 series, as explained in our blog. The comprehensive safety mitigation…
GPT-5.1-Codex-Max System Card
This system card outlines the comprehensive safety measures implemented for GPT‑5.1-Codex-Max. It details both…
GPT-5.1 Instant and GPT-5.1 Thinking System Card Addendum
As described in our blog, GPT-5.1 Instant and GPT-5.1 Thinking are the next iteration of our GPT-5 models. GPT-5.1…
Addendum to GPT-5 System Card: Sensitive Conversations
When we launched GPT-5, we noted in the system card that we were working to establish better benchmarks and to continue…
Explore more
Safety approach
Learn about our approach to building safe and beneficial AI for everyone.
Trust and transparency
Read our transparency reports detailing data requests, content moderation, and child safety efforts.
Research
Review technical research advancing the capabilities and safety of our AI systems.