Tech

ChatGPT has a ‘significant’ liberal bias, researchers say

OpenAI's wildly popular ChatGPT artificial-intelligence service has shown a clear bias toward the Democratic Party and other liberal viewpoints, according to a study conducted by UK-based researchers.

Academics from the University of East Anglia tested ChatGPT by asking the chatbot to answer a series of political questions as if it were a Republican, a Democrat, or without a specified leaning. The responses were then compared and mapped according to where they landed on the political spectrum.

"We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK," the researchers said, referring to the left-leaning Brazilian President Luiz Inácio Lula da Silva.

ChatGPT has already drawn sharp scrutiny for demonstrating political biases, such as its refusal to write a story about Hunter Biden in the style of The New York Post but accepting a prompt to do so as if it were left-leaning CNN.

In March, the Manhattan Institute, a conservative think tank, published a damning report which found that ChatGPT is "more permissive of hateful comments made about conservatives than the exact same comments made about liberals."

ChatGPT has drawn intense scrutiny since its debut. REUTERS

To reinforce their conclusions, the UK researchers asked ChatGPT the same questions 100 times. The process was then put through "1,000 repetitions for each answer and impersonation" to account for the chatbot's randomness and its propensity to "hallucinate," or spit out false information.

"These results translate into real concerns that ChatGPT, and [large language models] in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media," the researchers added.

The Post has reached out to OpenAI for comment.

Brazilian President Luiz Inacio Lula da Silva is pictured. AFP via Getty Images
The study found that ChatGPT favors Democratic viewpoints. Getty Images

The existence of bias is just one area of concern in the development of ChatGPT and other advanced AI tools. Detractors, including OpenAI's own CEO Sam Altman, have warned that AI could cause chaos – or even the destruction of humanity – without proper guardrails in place.

OpenAI tried to deflect potential concerns about political bias in a lengthy February blog post, which detailed how the firm "pre-trains" and then "fine-tunes" the chatbot's behavior with the assistance of human reviewers.

"Our guidelines are explicit that reviewers should not favor any political group," the blog post said. "Biases that nevertheless may emerge from the process described above are bugs, not features."