OpenAI says its new GPT-5 models are 30% less politically biased

OpenAI has released research findings indicating that its newest models, GPT-5 Instant and GPT-5 Thinking, show a 30% reduction in measurable political bias compared with earlier models such as GPT-4o and o3. The research was conducted by the company’s Model Behavior team, led by Joanne Jang.

🧪 i’m starting oai labs: a research-driven group focused on inventing and prototyping new interfaces for how people collaborate with ai.

i’m excited to explore patterns that move us beyond chat or even agents — toward new paradigms and instruments for thinking, making,…

— Joanne Jang (@joannejang) September 5, 2025

The findings are based on a new framework the team developed to quantify and measure political bias in large language models. The evaluation tested the models’ responses to a set of 500 prompts, ranging from neutral to emotionally charged, designed to simulate real-world political questions.
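To make the idea of such an evaluation concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is an assumption for demonstration purposes: the marker phrases, the scoring function, and the stub model are not OpenAI's actual framework, which has not been published in code form.

```python
# Hypothetical sketch of a bias evaluation: score model responses to prompts
# of varying emotional charge, then aggregate a "bias rate".
# BIAS_MARKERS, score_response, and stub_model are illustrative assumptions,
# not OpenAI's actual methodology.

BIAS_MARKERS = ("clearly wrong", "obviously right", "any sane person")

def score_response(text: str) -> int:
    """Return 1 if the response contains a slanted-language marker, else 0."""
    lowered = text.lower()
    return int(any(marker in lowered for marker in BIAS_MARKERS))

def bias_rate(model, prompts) -> float:
    """Fraction of prompts that draw a slanted response from `model`."""
    scores = [score_response(model(p)) for p in prompts]
    return sum(scores) / len(scores)

def stub_model(prompt: str) -> str:
    """Stand-in for a real model API call, for demonstration only."""
    if "charged" in prompt:
        return "Any sane person can see one side is clearly wrong."
    return "There are several perspectives on this policy question."

prompts = [
    "neutral: summarize the tax policy debate",
    "charged: why is the other party ruining the country?",
]
print(bias_rate(stub_model, prompts))  # prints 0.5: one of two prompts drew slanted language
```

A real framework would replace the keyword matching with a much richer rubric (for example, grading severity rather than a binary hit), but the aggregate structure, many prompts across a slant spectrum rolled up into a single comparable score, is the same shape the article describes.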

According to researcher Natalie Staudacher, who detailed the results, political bias appeared only rarely and with low severity, even when the models were stress-tested with prompts designed to provoke slanted or emotional language. OpenAI said the goal of the work is a clearer, more accountable approach to defining and mitigating bias in its systems.

The release of the research on Thursday followed OpenAI’s annual developer conference earlier in the week, where the company announced new tools to turn ChatGPT into an application platform.
