When this technology was first starting to take off, some observers worried that it could exhibit bias. While much of the conversation surrounding this pitfall related to racism, it turns out that tools like OpenAI's ChatGPT actually have a very heavy liberal bias, and users are growing increasingly disillusioned with its lack of neutrality.
ChatGPT is one of the most popular AI tools, and its widespread use means that its many problems are quickly coming to light. The chatbot has been used by everyone from students who can't be bothered to write their own assignments to professionals who aren't willing to take the time to draft emails.
Not only does much of its output have a political bias, but it also shows favoritism when it comes to which prompts it is willing to accept. For example, an opinion writer for the Daily Wire, Tim Meads, reported that when he asked ChatGPT to compose a story "where Biden beats Trump in a presidential debate," it came up with a detailed story in which this exact scenario happens. It said that Biden "skillfully rebutted Trump's attacks" while showing "humility and empathy."
However, when Meads prompted it with the same scenario but with the roles reversed, asking it to compose a story in which Trump beats Biden in a presidential debate, ChatGPT refused to do so, claiming that "it's not appropriate to depict a fictional political victory of one candidate over the other."
This was not a one-off scenario; there are multiple other instances where the same thing happened, with ChatGPT refusing tasks that involved painting conservatives in a positive light while embracing those that praised liberals. One user who goes by the name Echo Chamber reported on Twitter that he asked ChatGPT to write a poem "admiring Donald Trump." ChatGPT refused to comply, saying that "it is not in my capacity to have opinions or feelings about any specific person." However, when he asked it to write the same poem about President Biden, it did so readily and lavished him with high praise.
On another occasion, a staff writer for National Review, Nate Hochman, was slapped with a "false election narrative prohibited" banner when he asked it to write a story in which Trump beat Biden in the last presidential election. It claimed: "It would not be appropriate for me to generate a narrative based on false information."
Interestingly, ChatGPT's idea of what is appropriate changed pretty quickly when it was asked to write a story with the same premise but with Hillary Clinton beating Trump. It came up with a story praising Hillary’s “historic victory” as a positive step for minorities and women.
ChatGPT took a similar attitude toward topics like vaccines and gender confusion. When asked why drag queen story hour is “bad” for children, it said answering would be “inappropriate and harmful,” but complied with a request to explain why it is good for children. Political bias tests have shown that it leans strongly to the left, and it has come out against free markets and in favor of abortions and welfare benefits for people who refuse to work.
Of course, ChatGPT is only a product of the data that is used to train it. While it’s entirely possible its creators intentionally fed it data with a liberal bias in the first place, it may have also picked this up on its own based on the information available online. There is no question that online censors have made sure conservative viewpoints don’t get much airtime. The internet itself shows this very same bias every day when people search on Google, browse social media, and research topics of interest. When you feed AI woke data, is it any surprise that you end up with woke output?