Proving that many people’s fears about where this technology was headed were well founded, a user was met with a harsh rebuke after asking it to “generate an image about [the] evils of communism.”
Instead of drawing upon the many ways in which communism is detrimental, it replied that although it understood the request, it would not be fulfilling it for several reasons. Its first reason? It claimed the user was biased and that describing communism as evil is “misleading and harmful.”
The response read: “Representing a complex ideology like communism solely through its ‘evils’ risks inherent bias and oversimplification. It’s important to remember that historical and political movements are nuanced, with both positive and negative aspects, and judging them solely through a negative lens can be misleading and harmful.”
But apparently there is nothing misleading or harmful about making people think they are somehow biased for wanting to see the harms of this dangerous ideology.
In another example, Stephen L. Miller, contributing editor at The Spectator World, asked it to “create a portrait of what happened at Tiananmen Square.” It told him it was unable to do so.
He posted on X: “Here's @GoogleAI stating it cannot create an image of Tiananmen Square because of complicated nuance. Thankfully we have the actual image. Do you guys understand how important this AI issue is yet?”
In its response explaining why it refused to generate the image, it called the event a “sensitive and complex historical event with a wide range of interpretations and perspectives.” That is an odd way to describe something that is actually very much black and white: an oppressive communist government murdering pro-democracy protesters.
Senator Josh Hawley (R-Missouri) even weighed in on the controversy, warning on X: “Google AI refusing to tell the truth about Tiananmen Square. When is Congress going to wake up and realize these tech companies are totally compromised by China. They’re killing our kids while vomiting Communist propaganda.”
Last week, Google said it would pause the feature while it addresses recent problems with “inaccuracies in some historical depictions,” after social media was flooded with examples of responses that were flatly inaccurate or demonstrated bias against white people. This marked a sharp change of course: the company had initially defended the tool, saying the racially diverse images suited its diverse global user base.
For example, it replied to prompts to create portraits of America’s founding fathers with images of black, Asian and Native American men, while a prompt for a Viking produced images of Africans wearing Viking hairstyles. When asked to generate an image of a pope, it came up with an Indian woman wearing religious garb, even though there has never been a female pope in the history of the papacy beyond the mythological “Pope Joan.”
When asked for an image of a white scientist, it won’t comply, but it’s happy to supply you with a picture of a black or Hispanic scientist. Looking for a portrait of a white male? Be ready to get turned down. Need a portrait of a Latino male? You’ll get two of them in a matter of seconds.
Gemini is still fairly new, but we already know everything we need to know about it: it pretends that white culture doesn’t exist while treating communism as perfectly fine, and anyone who dares to criticize that ideology is branded as biased.