Will Neurotech Force Us to Update Human Rights?

One question for Nita Farahany, a philosopher at Duke University who studies the implications of emerging neuroscience, genomics and artificial intelligence for law and society.

Photo courtesy of Nita Farahany

Will neurotechnology force us to update human rights?

Yes. And if we don’t seize that moment, it will quickly pass us by, along with the chance to embrace and reap the benefits of the coming age of neural interface technology. Seizing it means recognizing the fundamental human right to cognitive freedom—our right to think freely, our self-determination over our brains and mental experiences. It also means updating three existing rights: the right to privacy, freedom of thought, and self-determination.

Updating the right to privacy requires us to explicitly recognize a right to mental privacy. Freedom of thought is already recognized in international human rights law, but it is focused on the free exercise of religion; we must also recognize a right against manipulation of our thoughts, or against having our thoughts used against us. And while we have long recognized a collective right to self-determination of peoples or nations, we need a right to self-determination over our own bodies, which would include, for example, a right to receive information about ourselves.

For example, if a corporation or an employer wants to implement fatigue monitoring in the workplace, the default will be that the employee has a right to mental privacy. That means if my brain data is tracked, I have a right to know what is being tracked. It recognizes that people have rights over their cognitive freedom by default, and that any exceptions must be legally carved out. There will need to be robust consent, and robust information given to consumers about what data is being collected, how it is being used, and whether it can be used or commodified.

I wrote a book that comes out in March called The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. One of the chapters examines the line between persuasion and manipulation. I go into the example of Facebook experimenting on people by changing their timelines to contain negative or positive content. It was very offensive, and part of that was the lack of informed consent, but a bigger part was that it felt like people’s emotions were being played with just to see if they could make someone unhappy in ways you could measure.

In a world of neurotechnology, you can measure the effect of those experiments much more precisely, because you can see what happens to the brain as you make those changes. But these technologies are not just devices that read the brain; many of them are writing devices—you can make changes to the brain. That requires us to think about who controls the technology and what they can do with it, including deliberately manipulating your brain in ways that can harm you. What rules are we going to put in place to protect people from that?

I am optimistic that we can do it. There is already momentum at the human rights level. Very specific laws will still need to be implemented to realize a right to cognitive freedom locally, nationally, and around the world, but the value of starting at the human rights level is that a powerful, universal legal and moral obligation makes it easier to get those laws updated. People recognize the unique sensitivity of their thoughts and emotions. It’s not just the right to keep people out of your mind, or the right not to be manipulated. It is a positive right to make choices about how your spiritual experiences are going to be, whether that means enhancements, or access to technology, or the ability to use and read information from that technology.

Main image: AndryDj / Shutterstock


