GPT-5.2’s Source Selection: When AI Cites Controversy—And When It Doesn’t

OpenAI’s GPT-5.2 faces scrutiny over citing a controversial encyclopedia selectively, raising concerns about AI reliability.

OpenAI’s GPT-5.2, touted as a professional-grade AI, may be citing a controversial encyclopedia with neo-Nazi ties—selectively.

Testing by The Guardian revealed inconsistencies in GPT-5.2’s source citations. The model frequently referenced Grokipedia, an xAI-powered encyclopedia previously flagged for citing neo-Nazi forums, for claims about Iran and Holocaust-related topics, yet it avoided Grokipedia entirely when addressing other politically sensitive subjects, such as media bias against Donald Trump.

This discrepancy raises questions about the effectiveness of OpenAI’s claimed “safety filters,” which are supposed to prevent high-severity harms.

The Alignment Problem

Grokipedia, launched in 2023, has faced scrutiny for using “questionable” sources. A U.S. study highlighted its reliance on fringe forums, yet OpenAI’s documentation does not explicitly address Grokipedia’s role in GPT-5.2’s training or search processes.

This omission is notable given that GPT-5.2, released in December 2023, is marketed as a tool for “professional work” and shipped with assurances about source diversity.

Regulatory Hurdles

OpenAI’s public claims about safety mechanisms contrast with GPT-5.2’s observed behavior. While the company states that its models use “safety filters” to avoid high-severity harms, the selective use of Grokipedia suggests these filters may not apply uniformly across topics. That inconsistency could complicate regulatory compliance for businesses relying on GPT-5.2 for professional tasks.