By Dario Gil
Senior Vice President & Director
IBM Research
Any views expressed herein are not necessarily the views of CIPL nor Hunton Andrews Kurth LLP
The next 10 years will bring about all manner of revolutionary data-driven technologies that pose both tremendous benefits and alarming privacy risks. Of these, neurotechnology, or neurotech, will likely be one of the most disruptive.
Neurotech is our, frankly, mind-blowing attempt to connect human brains to machines. Although brain-computer interfaces (BCIs) are the heart of neurotech, it is more broadly defined as technology able to collect, interpret, infer or modify information generated by any part of the nervous system. Why? To develop therapies for mental illnesses and neurological diseases. Beyond health care, it could soon be used in education, gaming, entertainment, transportation and so much more.
But there are pitfalls: no widely accepted regulations or guardrails yet govern neurotech's development or deployment. We need principles and policies around neurotech, technology safeguards, and national and international regulations.
Neurotech is far from just conceptual: such technology has already improved the quality of life and abilities of people with illnesses or impairments ranging from epilepsy to Parkinson's disease to chronic pain. One day, we might implant such neurotech devices in paralyzed people, allowing them to control phones, computers and prosthetic limbs with their thoughts alone. In 2017, Rodrigo Hübner Mendes, a paraplegic, used neurotech to drive a racecar with his mind. Recently, an invasive neurotech device accurately decoded imagined handwriting movements in real time, at a speed that matched typical typing. Researchers have also shown how invasive neurotech allows users with missing or damaged limbs to feel touch, heat and cold through their prostheses.
Emerging applications of neurotech show even more promise. Neurotechnology can not only sense, or read, neurodata but also modulate it, both invasively and noninvasively. This research is still in its early stages, but it is advancing rapidly. One astounding example is the work of Rafael Yuste, a neurobiologist at Columbia University. His team recorded the neuron activity of a mouse performing an action, such as licking, for a reward. Later, the researchers reactivated those same neurons and got the mouse to perform the same action, even though the rodent did not intend to do it at that moment. It is easy to imagine how this technology could lead to breakthrough treatments for people with physical disabilities, for example.
Neurotech is still extremely immature. As it becomes more commonplace, we must consider the risks it might present, the ethics around it, and what regulation would be appropriate. Such risks are indeed vast, in some cases challenging the very autonomy of our own actions and the privacy of our thoughts. What if someone were to face employment discrimination because the algorithms that power a neurotech application used for hiring misinterpreted their neurodata? What if someone's most sensitive and private thoughts were shared without their knowledge or consent? Of particular concern is the fact that most of the neurodata generated by the nervous system is unconscious, meaning users could unknowingly or unintentionally share sensitive neurodata. The presumption of privacy within one's own mind may simply no longer be a certainty.
While it is too early to know how to answer the questions neurotech poses about privacy and ethics, we need to ensure that researchers, corporations, policymakers, and consumers alike study and monitor this technology carefully. Developers of neurotech in particular must reaffirm their commitment to responsible innovation and help to develop and enforce guardrails so that these technologies lead to beneficial long-term outcomes for the economy and society alike.