Industry experts express concern over the lack of ethics rules in place to protect privacy and free will as brain enhancement technology grows
08/20/2018 / By Zoey Sky

While technological advances in artificial intelligence aim to augment or even restore human capabilities, experts worry that these advances could one day cause more harm than good, making a “call for ethics” necessary to protect our privacy.

Now that artificial intelligence and brain-computer interfaces are merging, we might not have to wait long before the combined technology can “restore sight to the blind, allow the paralyzed to move robotic limbs, and cure any number of brain and nervous system disorders.”

However, a team of researchers spearheaded by Columbia University neuroscientist Rafael Yuste and University of Washington bioethicist Sara Goering, which includes University of Michigan biomedical engineering and rehabilitation scientist Jane Huggins, Ph.D., cautions that without regulation, innovation can still have negative implications for mankind.

In an essay published in Nature, more than two dozen physicians, ethicists, neuroscientists, and computer scientists stressed the need for “ethical guidelines” to regulate the evolving use of computer hardware and software designed to enhance or restore human capabilities.

Yuste, director of Columbia’s Neurotechnology Center and a member of the Data Science Institute, explains that the group simply wants to ensure that this “exciting” technology, which could “revolutionize our lives,” is used only for the betterment of mankind.

Huggins, director of the U-M Direct Brain Interface Laboratory, agrees. She said, “This technology has great potential to help people with disabilities express themselves and participate in society, but it also has potential for misuse and unintended consequences. We want to maximize the benefit and promote responsible use.”

Science fiction has often speculated about the possibilities, but the fusion of computers with the human mind “to augment or restore brain function” is quickly becoming a reality. The group of experts estimates that the for-profit brain-implant industry, led by Bryan Johnson’s startup Kernel and Elon Musk’s Neuralink, is already worth $100 million, and that since 2013 the U.S. government has spent another $500 million on the Obama-era BRAIN Initiative alone.

These investments could soon yield positive results, but the authors are wary of four main threats: “The loss of individual privacy, identity and autonomy, and the potential for social inequalities to widen, as corporations, governments, and hackers gain added power to exploit and manipulate people.” (Related: Artificial Intelligence ‘more dangerous than nukes,’ warns technology pioneer Elon Musk.)

Here are their suggestions to protect against these threats:

  • To protect privacy, the authors urge that individuals be required to opt in, much as organ donors do, before sharing brain data from their devices. They add that the sale and commercial use of personal data must be strictly monitored.
  • To protect autonomy and identity, the authors suggest that an international convention be created to determine which actions would be prohibited. Such a convention could also educate people about the possible effects of this technology on “mood, personality, and sense of self.”
  • To address the potential for a brain-enhancement arms race that would pit “people with super-human intelligence and endurance against everyone else,” the authors call for the creation of culture-specific commissions to establish norms and regulations. They also advise that military use of brain technologies be kept under control in the same way that chemical and biological weapons are under the Geneva Protocol.

DeepMind’s ethics group to tackle AI problems

Earlier this year, Google’s London-based AI research subsidiary DeepMind launched a new unit to deal with ethical and societal questions concerning artificial intelligence. The company said the new research unit was formed “to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all.”

The unit will be advised by external experts from academia and the charitable sector, including Columbia development professor Jeffrey Sachs, University of Oxford AI professor Nick Bostrom, and climate change campaigner Christiana Figueres.

You can read more articles about how to use technology wisely at FutureScienceNews.com.

Sources include:

UOfMHealth.org

TheGuardian.com
