OpenAI's New AI Models Hallucinate More, Causes Unknown
OpenAI's latest AI models, o3 and o4-mini, have shown a surprising increase in hallucinations (instances where a model generates false or misleading information) compared to their predecessors. On OpenAI's internal PersonQA benchmark, o3 hallucinated in 33% of responses and o4-mini in 48%, whereas older models such as o1 and o3-mini had rates of roughly 16% and 14.8%, respectively. The trend is unexpected, as newer models are generally anticipated to be more reliable and less prone to such errors. OpenAI has acknowledged the results but has not identified a clear cause for the increase, noting that further research is needed. Some reports also indicate that o3 not only hallucinates more but sometimes defends its false claims when challenged. OpenAI says it is actively working to improve the accuracy and reliability of its models.
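
To make the reported figures concrete, here is a minimal sketch of the arithmetic behind a hallucination rate: answers judged false divided by total answers. The `BenchmarkItem` type, the exact-match check, and the toy data below are invented for illustration; OpenAI's PersonQA evaluation is not public, and a real grader would be far more sophisticated than string comparison.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkItem:
    question: str
    ground_truth: str
    model_answer: str

def is_hallucination(item: BenchmarkItem) -> bool:
    # Stand-in check: a real evaluation would use a grader model or
    # human judgment rather than exact string comparison.
    return item.model_answer.strip().lower() != item.ground_truth.strip().lower()

def hallucination_rate(items: list[BenchmarkItem]) -> float:
    # Rate = answers judged hallucinated / total answered questions.
    if not items:
        return 0.0
    return sum(is_hallucination(i) for i in items) / len(items)

# Toy data: three questions, one wrong answer -> a 33% rate,
# the same scale as the figure reported for o3.
items = [
    BenchmarkItem("Where was Ada Lovelace born?", "London", "London"),
    BenchmarkItem("Year the transistor was invented?", "1947", "1947"),
    BenchmarkItem("Author of 'On Computable Numbers'?", "Alan Turing", "John von Neumann"),
]
print(f"Hallucination rate: {hallucination_rate(items):.0%}")  # -> 33%
```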


- Total News Sources: 2 (Left: 1, Center: 1, Right: 0, Unrated: 0)
- Bias Distribution: 50% Center
- Last Updated: 11 days ago