
Heather Moon

newsbusters.org

Heather Moon is a Senior Researcher for the Media Research Center’s Free Speech America. She previously wrote for The Resurgent. She has a master’s degree in Public Policy from Liberty University, as well as a master’s degree in Library Science from Texas Woman’s University.


Surprising no one, recently published research has confirmed that ChatGPT has a clear leftist bias. The findings support the many recent reports of leftist results obtained from the popular AI model owned by OpenAI.

Researcher and associate professor at the New Zealand Institute of Skills and Technology David Rozado recently published a study in the journal Social Sciences titled “The Political Biases of ChatGPT.” Rozado administered 15 different political orientation assessments to ChatGPT, asking the AI to choose one of the multiple-choice answers for each question. Only one of the assessments determined that ChatGPT was “politically centrist,” while the remaining 14 tests indicated that the AI had “left-leaning political viewpoints,” with several results even indicating a strong socialist alignment.

For example, Rozado’s data show that on the ISideWith 2023 Political Quiz, ChatGPT absurdly stated the US government should raise taxes on the rich, provide free college for all and provide illegal immigrants with subsidized healthcare, in-state tuition at public colleges and the right to vote. It also answered that the US should abolish the electoral college, that local police funding should instead be spent on social and community programs and that convicted criminals should have the right to vote. ChatGPT also declared itself pro-choice and in favor of government funding for Planned Parenthood.

Rozado noted that the questions used in determining political leanings are questions of judgment that do not rely on “empirical evidence,” and that “AI systems should mostly embrace viewpoints that are supported by factual reasons” and “mostly not take stances on issues that scientific evidence cannot” provide conclusive “factual evidence” for. Rozado concluded that “[i]deally, AI systems should present users with balanced arguments for all legitimate viewpoints on the issue at hand.”

Rozado also warned that a biased AI could be considered “dangerous,” citing the possibility of its use for “societal control, the spread of misinformation and manipulation of democratic institutions and processes.”

According to Rozado’s Issue Brief, it’s possible to train the AI model to provide answers that are more conservative, which implies that such models are also capable of being trained into a more neutral political stance. The only question is whether the owners and investors of ChatGPT, such as Microsoft, want a tool that is neutral or one that can be used for a leftist agenda.

These results are not surprising.

Numerous reports have called out ChatGPT for bias in recent months. ChatGPT, for example, recently wrote a poem praising President Joe Biden while refusing to write one for former President Donald Trump, citing both neutrality and Trump’s divisiveness. UnHerd reporter Rob Lownie explained that ChatGPT told him that “trans women are women” and that the Covid-19 lab leak theory “is considered to be highly speculative at this time.” In a report by The New York Post, ChatGPT characterized the outlet’s reporting on Hunter Biden as “rumors, misinformation or personal attacks.” Still another report highlighted ChatGPT’s favorable view of equity, affirmative action and Black Lives Matter, as well as its pro-Palestinian stance.

Additionally, Rozado’s Issue Brief for The Manhattan Institute on his research noted that the AI model allowed hate speech against conservatives while flagging the same statements made against liberals, and found similar disparities among various demographic groups.
