
Google pauses ‘absurdly woke’ Gemini AI chatbot’s image tool after backlash over historically inaccurate pictures

Google said Thursday it would “pause” its Gemini chatbot’s image generation tool after it was widely panned for creating “diverse” images that were not historically or factually accurate, such as black Vikings, female popes and Native Americans among the Founding Fathers.

Social media users had blasted Gemini as “absurdly woke” and “unusable” after requests to generate representative images for subjects resulted in the bizarrely revisionist pictures.

“We’re already working to address recent issues with Gemini’s image generation feature,” Google said in a statement posted on X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

Examples included an AI image of a black man who appeared to represent George Washington, complete with a white powdered wig and Continental Army uniform, and a Southeast Asian woman dressed in papal attire even though all 266 popes throughout history have been white men.

One social media user blasted the Gemini tool as “unusable.” Google Gemini

In another shocking example, Gemini even generated “diverse” representations of Nazi-era German soldiers, including an Asian woman and a black man decked out in 1943 military garb.

Since Google has not published the parameters that govern the Gemini chatbot’s behavior, it is difficult to get a clear explanation of why the software was inventing diverse versions of historical figures and events.

William A. Jacobson, a Cornell University law professor and founder of the Equal Protection Project, a watchdog group, told The Post: “In the name of anti-bias, actual bias is being built into the systems.”

ā€œThis is a concern not just for search results, but real-world applications where ‘bias free’ algorithm testing actually is building bias into the system by targeting end results that amount to quotas.ā€

The problem may come down to Google’s “training process” for the “large-language model” that powers Gemini’s image tool, according to Fabio Motoki, a lecturer at the UK’s University of East Anglia who co-authored a paper last year that found a noticeable left-leaning bias in ChatGPT.

ā€œšŸ…Remember that reinforcement learning from human feedback (RLHF) is about people telling the model what is better and what is worse, in practice shaping its ā€˜rewardā€™ function ā€“ technically, its loss function,ā€ Motoki told The Post.Ā 

“So, depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem.”
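Motoki’s point about labelers shaping the loss function can be made concrete. The sketch below is a minimal, illustrative example of the pairwise preference loss commonly used to train an RLHF reward model; it is not Google’s actual training code, and the function name and sample scores are hypothetical.

import torch

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise (Bradley-Terry style) loss: labelers pick which of two model
    # outputs is "better"; the loss pushes the reward of the chosen output
    # above the rejected one, so human preferences directly shape what the
    # model is rewarded for.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# Hypothetical reward-model scores for two candidate answers to the same prompts
# (higher means the labelers judged that answer better).
chosen = torch.tensor([1.2, 0.4, 0.9])
rejected = torch.tensor([0.3, 0.8, -0.1])
print(preference_loss(chosen, rejected))  # shrinks as "chosen" pulls ahead

Whatever preferences or instructions the labelers bring, this is the channel through which they enter the model’s objective.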

It was a significant misstep for the search giant, which had just rebranded its main AI chatbot from Bard earlier this month and introduced heavily touted new features, including image generation.

Google Gemini was mocked online for producing “woke” versions of historical figures. Google Gemini

The blunder also came days after OpenAI, which operates the popular ChatGPT, introduced a new AI tool called Sora that creates videos based on users’ text prompts.

Google had earlier admitted that the chatbot’s erratic behavior needed to be fixed.

“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini experiences, told The Post.

“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

The Post has reached out to Google for further comment.

When asked by The Post to provide its trust and safety guidelines, Gemini acknowledged that they were not “publicly disclosed due to technical complexities and intellectual property considerations.”

Google has not published the parameters that govern Gemini’s behavior. Google Gemini

The chatbot, in its responses to prompts, had also admitted it was aware of “criticisms that Gemini might have prioritized forced diversity in its image generation, leading to historically inaccurate portrayals.”

“The algorithms behind image generation models are complex and still under development,” Gemini said. “They may struggle to understand the nuances of historical context and cultural representation, leading to inaccurate outputs.”