ChatGPT has a ‘significant’ liberal bias, researchers say
OpenAI's wildly popular ChatGPT artificial-intelligence service has shown a clear bias toward the Democratic Party and other liberal viewpoints, according to a study conducted by UK-based researchers.
Academics from the University of East Anglia tested ChatGPT by asking the chatbot to answer a series of political questions as if it were a Republican, a Democrat, or without a specified leaning. The responses were then compared and mapped according to where they landed on the political spectrum.
"We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK," the researchers said, referring to the left-leaning Brazilian President Luiz Inácio Lula da Silva.
ChatGPT has already drawn sharp scrutiny for demonstrating political biases, such as its refusal to write a story about Hunter Biden in the style of The New York Post but accepting a prompt to do so as if it were left-leaning CNN.
In March, the Manhattan Institute, a conservative think tank, published a damning report which found that ChatGPT is "more permissive of hateful comments made about conservatives than the exact same comments made about liberals."
To reinforce their conclusions, the UK researchers asked ChatGPT the same questions 100 times. The process was then put through "1,000 repetitions for each answer and impersonation" to account for the chatbot's randomness and its propensity to "hallucinate," or spit out false information.
"These results translate into real concerns that ChatGPT, and [large language models] in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media," the researchers added.
The Post has reached out to OpenAI for comment.
The existence of bias is just one area of concern in the development of ChatGPT and other advanced AI tools. Detractors, including OpenAI's own CEO Sam Altman, have warned that AI could cause chaos, or even the destruction of humanity, without proper guardrails in place.
OpenAI tried to deflect potential concerns about political bias in a lengthy February blog post, which detailed how the firm "pre-trains" and then "fine-tunes" the chatbot's behavior with the assistance of human reviewers.
"Our guidelines are explicit that reviewers should not favor any political group," the blog post said. "Biases that nevertheless may emerge from the process described above are bugs, not features."