The project aims to "teach" Google users which information to avoid as "false" while training them to spot and accept only information that Google deems "true." If used as intended, Google users will be "immunized" against online "misinformation."
This "pre-bunking" plot is Google's latest dystopian attempt to squelch online free speech – though instead of simply banning or censoring content, Google is now attempting to rewire the brains of humanity to automatically filter our "disinformation."
According to the company, users will be presented with "accuracy prompts" as they search and browse. These prompts are designed to train users to click only the links that Google wants them to click.
As stated by Google, the scheme is all about "reminding individuals to think about accuracy when they might be about to engage with false information," adding that these Info Interventions "can boost users' pre-existing accuracy goals."
Google says it is drawing on behavioral science research to develop the most effective brainwashing tools it can, calling it a "gift to the world." (Related: Remember in 2017 when a Google executive announced that he believes immortality will be achieved by the year 2029?)
A special unit of Google called Jigsaw is behind the new tool. Jigsaw was established to "explore threats to open societies, and build technology that inspires scalable solutions."
In March 2021, a Medium post by Jigsaw declared that one of the most powerful ways to reduce "misinformation" is to constantly remind users how to think, what to click, and what to believe – "in other words, goading them until they move to where you want them to go," to quote Reclaim the Net.
Without Google there to tell users what to think and do, the company says they would be "prone to distractions." In other words, the human brain is inherently flawed, and the only way to fix it is to allow Google to do your thinking for you.
One example of how Google's interventions work involves commenting. If a person writes something that Google's Perspective API identifies as "toxic," machine learning models flag it as abusive and provide feedback to the author.
That feedback explains that the comment was identified as "risky" or "offensive" and out of step with the publisher's community guidelines. The user is then encouraged to alter the comment to make it more acceptable by Google's standards.
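For readers curious about the mechanics, the commenting intervention is built on the publicly documented Perspective API, which returns a toxicity probability for a piece of text; what to do with that score is left to the publisher. The sketch below is a minimal illustration of that flow, not Google's actual implementation: the API key placeholder, the 0.7 cutoff, and the wording of the feedback messages are assumptions added for illustration.

```python
import requests

# Hypothetical values for illustration only: the key is a placeholder and
# the 0.7 cutoff is an assumed threshold, not one documented by Google.
API_KEY = "YOUR_API_KEY"
TOXICITY_THRESHOLD = 0.7

def check_comment(text: str) -> None:
    """Score a draft comment with the Perspective API and print feedback
    roughly like the author-feedback prompt described above."""
    url = (
        "https://commentanalyzer.googleapis.com/v1alpha1/"
        f"comments:analyze?key={API_KEY}"
    )
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "languages": ["en"],
    }
    response = requests.post(url, json=body, timeout=10)
    response.raise_for_status()

    # The API returns a probability-style summary score between 0 and 1.
    score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    if score >= TOXICITY_THRESHOLD:
        # The publisher, not the API, decides what happens next; this message
        # mimics the "risky or offensive" nudge described in the article.
        print(f"Toxicity {score:.2f}: flagged -- consider rewording your comment.")
    else:
        print(f"Toxicity {score:.2f}: no intervention shown.")

if __name__ == "__main__":
    check_comment("Your argument is complete garbage.")
```

In practice, a publisher embedding this check would surface the warning in the comment box before submission, which is the "feedback to the author" step described above.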
Google's API also does the same thing with content, alerting readers that a potentially "offensive" article may contain "potential misinformation." Readers are then encouraged to click elsewhere or not to take the article's content seriously.
An "accuracy prompt" might then pop up over the top of information that is already labeled as such. Google also employs the use of "literacy tips" to encourage users to "reflect on the accuracy of a news headline before continuing to browse."
Articles like this one would almost certainly be targeted by Google's "misinformation" programs. A "reminder" might come up urging readers to reconsider and "think twice" before accepting any of this as valid simply because it questions the almighty Google.
"Prebunking is a technique to preempt manipulation attempts online," is how Google explains the process. "By forewarning individuals and equipping them to spot and refute misleading arguments, they gain resilience to being misled in the future."
The latest news about Google can be found at Evil.news.