Why Is Google's AI Gemini Facing Backlash? Here's The Surprising Reason
Google's Gemini AI tool has been criticized for failing to generate images of white people, sparking outrage. Google has temporarily suspended the tool's image generation feature while it works on improvements.
- Google's Gemini AI tool faces backlash for its inability to generate images of white people
- Tesla CEO Elon Musk criticizes Google, calling the AI tool racist
Google's Gemini, the company's flagship suite of generative AI models, apps, and services, has drawn criticism and ridicule for its apparent inability to generate images of white people. The tool, launched earlier this month, is supposed to produce realistic and diverse images of people based on text prompts. However, users have discovered that it often fails to depict historical figures and people of various nationalities as white, even when explicitly requested.
Outrage and accusations of racism
For example, users have posted images generated by Gemini that show the U.S. Founding Fathers, popes, Vikings, and German soldiers during World War II as people of color. Some users have also claimed that the tool refuses to create images of white people at all, regardless of the input. These images have sparked outrage and mockery on social media platforms, with some accusing Google of being "woke" and "racist" towards white people.
Tesla CEO and X owner Elon Musk also weighed in on the matter, calling Google "racist" and "anti-civilizational." Musk wrote: "I'm glad that Google overplayed their hand with their AI image generation, as it made their insane racist, anti-civilizational programming clear to all."
Google's response and temporary suspension of image generation
Google has acknowledged the issue and temporarily suspended Gemini's ability to generate images of people while it works on updating the model to improve the historical accuracy of outputs. In a statement posted on X, the company said:
"We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon."
Google also explained that its AI principles commit its image generation tools to reflecting its global user base and to generating a wide range of people for open-ended image requests. However, the company admitted that historical contexts have more nuance and said it will further tune the model to accommodate that.
The broader implications and ethical questions
The controversy over Gemini's image generation highlights the challenges and pitfalls of generative AI, which relies on large datasets and complex algorithms to produce outputs based on training data and other parameters. Such tools have often faced criticism for producing outputs that are biased, inaccurate, or harmful in various ways.
For instance, in 2015, Google's image classification tool mislabeled Black people as gorillas, prompting the company to apologize and remove the label altogether. In 2020, a Washington Post investigation found that many image generators showed bias against people of color and women, for example by sexualizing images of women or associating high-status jobs with white men.
As generative AI becomes more advanced and accessible, it also raises ethical and social questions about the implications and responsibilities of creating and using such tools. How can generative AI be designed and regulated to ensure fairness, accuracy, and safety? How can users and consumers be informed and educated about the limitations and risks of generative AI? How can generative AI be used for positive and constructive purposes rather than deception, manipulation, or harm?
These are some of the questions that Google and other AI developers and users will have to grapple with as they continue to explore the possibilities and challenges of generative AI.