Google pauses ‘absurdly woke’ Gemini AI chatbot’s image tool after backlash over historically inaccurate pictures
Google said Thursday it would "pause" its Gemini chatbot's image generation tool after it was widely panned for creating "diverse" images that were not historically or factually accurate — such as black Vikings, female popes and Native Americans among the Founding Fathers.
Social media users had blasted Gemini as "absurdly woke" and "unusable" after requests to generate representative images for subjects resulted in the bizarrely revisionist pictures.
"We're already working to address recent issues with Gemini's image generation feature," Google said in a statement posted on X. "While we do this, we're going to pause the image generation of people and will re-release an improved version soon."
Examples included an AI image of a black man who appeared to represent George Washington, complete with a white powdered wig and Continental Army uniform, and a Southeast Asian woman dressed in papal attire even though all 266 popes throughout history have been white men.
In another shocking example, Gemini even generated "diverse" representations of Nazi-era German soldiers, including an Asian woman and a black man decked out in 1943 military garb.
Since Google has not published the parameters that govern the Gemini chatbot's behavior, it is difficult to get a clear explanation of why the software was inventing diverse versions of historical figures and events.
William A. Jacobson, a Cornell University law professor and founder of the Equal Protection Project, a watchdog group, told The Post: "In the name of anti-bias, actual bias is being built into the systems."
"This is a concern not just for search results, but real-world applications where 'bias free' algorithm testing actually is building bias into the system by targeting end results that amount to quotas."
The problem may come down to Google's "training process" for the "large language model" that powers Gemini's image tool, according to Fabio Motoki, a lecturer at the UK's University of East Anglia who co-authored a paper last year that found a noticeable left-leaning bias in ChatGPT.
"Remember that reinforcement learning from human feedback (RLHF) is about people telling the model what is better and what is worse, in practice shaping its 'reward' function — technically, its loss function," Motoki told The Post.
"So, depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem."
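Motoki's point can be made concrete with a toy sketch. The code below is not Google's (Gemini's training pipeline is unpublished); it is a minimal, hypothetical illustration of the standard reward-modeling step in RLHF, where raters' pairwise preferences define the loss function a reward model is trained against. The two feature names and the 80% rater-preference rate are invented purely for illustration.

```python
# Toy illustration (not Google's code): how RLHF reward modeling turns
# rater preferences into a loss function. A reward model r(x) is fit so
# that preferred outputs score higher, using the standard Bradley-Terry
# pairwise loss: L = -log(sigmoid(r(chosen) - r(rejected))).
# If raters systematically favor one attribute, the learned reward
# inherits that preference.
import math
import random

random.seed(0)

def reward(weights, features):
    """Linear reward model: r(x) = w . x."""
    return sum(w * f for w, f in zip(weights, features))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Each output is described by two hypothetical features in [0, 1]:
# [historical_accuracy, demographic_diversity].
def sample_pair():
    return ([random.random(), random.random()],
            [random.random(), random.random()])

# Hypothetical rater policy: judges by diversity 80% of the time and by
# accuracy otherwise. This stands in for "which people Google is
# recruiting, or which instructions Google is giving them."
def rater_prefers_first(a, b):
    if random.random() < 0.8:
        return a[1] > b[1]   # judge by the diversity feature
    return a[0] > b[0]       # judge by the accuracy feature

weights = [0.0, 0.0]
lr = 0.1
for _ in range(5000):
    a, b = sample_pair()
    chosen, rejected = (a, b) if rater_prefers_first(a, b) else (b, a)
    # Gradient step on the Bradley-Terry loss with respect to the weights.
    p = sigmoid(reward(weights, chosen) - reward(weights, rejected))
    for i in range(2):
        weights[i] += lr * (1.0 - p) * (chosen[i] - rejected[i])

print(f"learned reward weights: accuracy={weights[0]:.2f}, "
      f"diversity={weights[1]:.2f}")
# Typical output: the diversity weight dwarfs the accuracy weight, so a
# model optimized against this reward will trade accuracy away.
```

In a real pipeline the reward model is a neural network scoring full model outputs rather than two hand-picked features, but the mechanics are the same: whatever the labelers systematically reward is what the loss function teaches the model to produce.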
It was a significant misstep for the search giant, which had just rebranded its main AI chatbot from Bard to Gemini earlier this month and introduced heavily touted new features — including image generation.
The blunder also came days after OpenAI, which operates the popular ChatGPT, introduced a new AI tool called Sora that creates videos based on users' text prompts.
Google had earlier admitted that the chatbot's erratic behavior needed to be fixed.
"We're working to improve these kinds of depictions immediately," Jack Krawczyk, Google's senior director of product management for Gemini experiences, told The Post.
"Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."
The Post has reached out to Google for further comment.
When asked by The Post to provide its trust and safety guidelines, Gemini acknowledged that they were not "publicly disclosed due to technical complexities and intellectual property considerations."
In its responses to prompts, the chatbot had also admitted it was aware of "criticisms that Gemini might have prioritized forced diversity in its image generation, leading to historically inaccurate portrayals."
"The algorithms behind image generation models are complex and still under development," Gemini said. "They may struggle to understand the nuances of historical context and cultural representation, leading to inaccurate outputs."