Dangerous new identity precedents being set by AI; soon we won’t know the difference between who’s real and who’s fake


The advent of machine learning and artificial intelligence has made it possible to manipulate or fabricate realistic photos and videos of just about anyone, making it harder to discern truth from lies, according to a recently published article.

The article cited several available apps, such as FaceApp, Lyrebird and Face2Face, that demonstrate these capabilities. The Russian-developed FaceApp, for instance, can automatically alter a user’s face to make it look younger or older, or even swap genders. The app can also apply beautifying effects such as smoothing out wrinkles, but one of its filters drew criticism for lightening users’ skin, and the developer has since apologized. Following the controversy, FaceApp founder Yaroslav Goncharov announced that the filter would remain in the app but would be renamed “spark” to avoid any negative connotation.

Another featured app, Face2Face, has made it more difficult to verify a video’s authenticity. A team of researchers at Stanford University in California incorporated a depth-sensing camera into the system so that video footage could be manipulated: the app maps the user’s facial expressions onto the face of the person in the target footage. The resulting videos have an uncanny degree of realism.

The article also cited Lyrebird as another app that is changing how humans differentiate between real and fake media. Launched just last week by researchers from the University of Montreal, the technology is designed to impersonate another person’s voice. The company has posted several demonstration clips of the app impersonating key political figures, such as President Donald Trump, Hillary Clinton and Barack Obama.

Lyrebird’s developers are quick to acknowledge that the app’s ability to mimic voices with near-perfect accuracy may raise concerns in the future.

“Voice recordings are currently considered as strong pieces of evidence in our societies and in particular in jurisdictions of many countries. Our technology questions the validity of such evidence as it allows to easily manipulate audio recordings. This could potentially have dangerous consequences,” the creators wrote in MIT Technology Review.

Convolutional networks spearhead today’s AI innovation

According to the article, both Lyrebird and FaceApp rely on deep generative convolutional networks to produce these effects. Such networks allow algorithms not only to classify things but also to generate plausible data of their own. The technique uses large, or “deep”, neural networks: in a typical setting, the networks are fed training data and their parameters are adjusted until they fit it, and they can then be tuned to generate new data resembling the material they were trained on.
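To make the idea concrete, below is a minimal sketch of a DCGAN-style generator in PyTorch, one common form of deep generative convolutional network. It is illustrative only: the class and parameter names are invented for this example, and it is not the proprietary architecture used by FaceApp or Lyrebird.

# Minimal sketch of a generative convolutional network (DCGAN-style generator).
# Illustrative only -- NOT the model used by FaceApp or Lyrebird.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a random latent vector to a 64x64 RGB image."""
    def __init__(self, latent_dim: int = 100, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> (feat*8) x 4 x 4
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8),
            nn.ReLU(True),
            # -> (feat*4) x 8 x 8
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(True),
            # -> (feat*2) x 16 x 16
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(True),
            # -> feat x 32 x 32
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat),
            nn.ReLU(True),
            # -> 3 x 64 x 64, pixel values in [-1, 1]
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Sampling a random latent vector yields a brand-new image. Before training
# the output is noise; after adversarial training on real photos, the same
# call produces a face that no camera ever captured.
generator = Generator()
noise = torch.randn(1, 100, 1, 1)   # random latent vector
fake_image = generator(noise)       # shape: (1, 3, 64, 64)

The key point for the article’s argument is the last two lines: once trained, generating a convincing fake costs nothing more than drawing a new random vector.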

Experts have noted that current technology makes it possible to generate images from scratch that bear an eerie resemblance to reality, and that similar techniques could make it easier to manipulate videos as well.

“At some point it’s likely that generating whole videos with neural nets will become possible. It’s more challenging because there is a lot of variability in the high dimensional space representing videos, and current models for it are still not perfect,” said Lyrebird co-founder Alexandre de Brébisson.

Given today’s available technology, the article stressed the importance of distinguishing real videos and audio from fake ones. Justus Thies, a Face2Face researcher and doctoral student at Friedrich Alexander University in Germany, said he has begun a project designed to detect video manipulation, and noted that its intermediate results are promising.

Read more articles on how fast technology is growing at Computing.news.

Sources include:

TechnologyReview.com

TheVerge.com
