What are the human rights implications of neurotechnology? To answer this question, it is useful to start with the aftermath of the second world war. If one thinks of the suffering and death resulting from human rights abuses, conventional warfare and a terrifying new technology – the nuclear bomb – it is not hard to understand why there was a feeling after the war that something needed to be done. Nor is it surprising that the international community was motivated to come together to create a framework aimed at protecting human rights and preventing further wars that might lead to even worse nuclear conflict.

Dr Allan McCay

One upshot of the international consensus that was present then, but sadly seems harder to find today, was the world’s most influential human rights document: the Universal Declaration of Human Rights (UDHR). But when it was drafted in the optimistic years following the war, the drafting committee did not envisage technologies that could read from the brain and grant access to some of our hitherto private thoughts. Nor did they consider algorithmic technologies that might electrically stimulate our brains to affect how we feel and behave and perhaps even what we perceive.

Similarly, much earlier, as national and state legal and political systems developed, and as the ethical frameworks we live by evolved, those who shaped them did not envisage technologies that would decode our thoughts, or even act, perhaps by way of electrical stimulation, to influence them.

But such technologies are now in the pipeline and some are already available. These technologies are known as neurotechnologies. Sometimes they are also referred to as brain-computer interfaces.

Given the development of neurotechnology some important questions now arise: what (if any) ethical considerations actually guide the development of neurotechnologies? What considerations ought to guide them? What will the development of neurotechnologies mean for our legal and political systems and, more generally, the way we live? Is the human rights framework that emerged after the second world war still fit for purpose?

To illustrate what neurotechnology can achieve, I will start, in stark contrast to the nuclear bomb, by considering a wonderful application of it – one that has already changed some people’s lives and seems likely to positively affect the lives of many others.

Some people have conditions such as locked-in syndrome, which leave them with no voluntary muscle control: they cannot talk, pick up a glass or even blink. A small number of those people now have a computer interface implanted in their brains. The device monitors neural activity, and the person interacts with it by thought alone to perform tasks such as controlling a robotic arm or composing text with a cursor, without needing to move a muscle.

No longer science fiction, brain-computer interfaces have become a reality for some neurotech pioneers. Those people are now affecting the world in ways that no human or other organism has ever done.

The wondrous achievement of restoring people’s ability to communicate and regain autonomy should not be underestimated. Other devices currently available monitor the brains of people with epilepsy, identify the neural precursors of a fit, and then stimulate the brain to avert it.

If neurotechnology has the capacity to treat locked-in syndrome, epilepsy and depression, and perhaps one day anxiety, schizophrenia and dementia, it should not be surprising that it has significant commercial backing.

But some of those developing the technology are not just interested in therapy. Some companies have created devices that read from the brain so that people can control video games just by thinking. Maybe one day neurotechnology will facilitate a connection with the metaverse, taking us a bit closer to the world of the Matrix.

The military has long been interested in neurotechnologies. After the Soviet Union shocked the US military by beating it to the launch of a satellite (Sputnik) during the cold war, the US decided it would never again be blindsided by a foreign power’s technological progress. It now has the Defense Advanced Research Projects Agency (DARPA), a body that tries to ensure the US stays ahead of its competitors in military technology, and DARPA has live neurotech programs (indeed, it has had them for quite some time).

Perhaps a future battlefield will see soldiers who can control drone swarms by way of brain-computer interfaces and who navigate the battlefield more effectively than current military personnel as a result of neurotechnology that enables them to pay attention for longer. One day, opposing combatants might even try to hack each other’s neurotechnological devices or even brains.

Workplaces might increasingly take on a neurotechnological dimension. Some companies already have staff who wear attention-monitoring headsets. These devices are not implanted in the brain but sit outside the skull, monitoring attention so that the wearer does not crash a heavy goods vehicle or cause some other workplace accident.

Neurotechnology might have an impact in many domains. I lecture in criminal law and can imagine possible uses of neurotechnology in criminal justice. We already electronically monitor offenders’ geographical location. Why not go a step further and monitor their brains? And if they are starting to get angry why not just use neurotechnology to electrically stimulate their brains to calm them down? Even better, just automate the electrical stimulation process so it happens whenever the device notices some concerning change in their emotional state.

That seems troubling and raises human rights concerns. I started by considering the aftermath of the second world war, and it is probably fair to say that such issues were not at the forefront of the minds of Eleanor Roosevelt and the others who drafted the UDHR – the salient and awful human rights abuses of that era were carried out with more rudimentary technologies.

But should we worry about this now? It is worth noting who is backing neurotechnological development. Elon Musk co-founded the company Neuralink, Meta (Facebook) has a neural interface program, and Silicon Valley billionaire and Trump backer Peter Thiel has invested in Blackrock Neurotech. As mentioned, DARPA has neurotech programs. Given the commercial and military aims of these people and organisations, some regulatory oversight might be a good thing.

But how might some of this technological development affect you or me?

Before getting to concerns about neurotech, it is worth noting that if we are wealthy enough, or our government decides to pay for it, neurotechnology might vastly improve the quality of our lives. This may happen if we are already afflicted, or one day become afflicted, by a psychiatric or neurological condition. That is worth remembering when we think about what ought to be done in this area of technological development: we do not want to make laws that will cost us important therapeutic possibilities.

But if we start using the technology to play games, enter the metaverse, fight military opponents, cure afflictions, or just to do our work, we should remember that some company and/or state is gathering data about our neural activity.

As noted by writers such as Shoshana Zuboff and David Lyon, our consumer behaviour, workplace activity and social engagement through online platforms are already being monitored by companies. Are we now going to give them direct access to our brains?

Knowledge is power, and once companies (and/or states) know more about us, perhaps by adding neural data to their existing repositories of data, we will become more easily manipulated. This might involve our workplace activity, purchasing decisions or political behaviour.

If the neurotechnology we use also stimulates our brain as well as monitoring it, the manipulation capacity of our algorithmic masters will only increase.

But perhaps we might decide just to opt out and put up with whatever condition ails us. Or, if we do not require therapeutic neurotech, we might resign ourselves to being outperformed by workplace competitors who have been neurotechnologically augmented and can pay attention to work for longer than we can. Of course, there could be a price to be paid for opting out of neurotech. I eventually gave in to the social media companies, signed up and got with the program, thereby becoming a regular user.

It seems there are choices to be made at both a personal level and the level of society. When we make these choices, we should be mindful of the tremendous potential upside of neurotechnologies. We need only think of those people with locked-in syndrome who have gained autonomy or the people with drug-resistant epilepsy who can now manage their condition more effectively.

But challenges are coming and we need to think about what to do. Do we need a new right to mental privacy or a right to mental integrity? It is interesting to note that partly in response to developments in neurotechnology, in late 2021 the Chileans altered their constitution to refer to these concepts (this change remains in place despite the failure of a subsequent attempt at more substantial constitutional reform). Do we need something like the Neuroprotection Bill that is currently going through the Chilean legislative process? Or is the bill making its way through the Argentinian legislature preferable?

I have joined an international group called the Minding Rights Network and meet regularly with colleagues in Europe, and North and South America to consider these issues. In the US and internationally, neuroscientist Professor Rafael Yuste of Columbia University has been a significant and influential figure engaged in human rights advocacy through the Neurorights Foundation.  

Lawyers are starting to take an interest in neurotechnology, and in August the Law Society published my report Neurotechnology, law and the legal profession. The report aims to stimulate discussion within the profession, younger members of which might one day have to decide whether or not to get with the neurotech program to improve their capacity to concentrate, outcompete their colleagues and make partner. They might even have their brains monitored by their firm, counting the billable units of attention that clients, unwilling to pay for inattentive lawyers, are prepared to pay for.

There are issues to be thought through concerning neurotechnology, and although those who drafted the UDHR did not have neurotech in mind, the UN’s Human Rights Council Advisory Committee has just recommended that the council consider the human rights implications of these technological developments, and that thought be given to the recognition of neurorights. The UN is, of course, now aware of technologies that its post-war predecessors could not have considered because they were not even on the horizon in 1948.

We need to think about what to do and, as has been seen, there are signs that this thinking is starting to take place. But we must not spend too long thinking – we also need to do something. The Chilean legislature has been at the forefront of addressing the issues relating to neurotechnology, and it is worth remembering the words of senator Girardi, one of the main proponents of legal reforms: ‘[W]e didn’t regulate the big social media and internet platforms in time, and it cost us’. If we are too slow to regulate neurotechnology, it might cost more.


Dr Allan McCay is deputy director of the Sydney Institute of Criminology and an Academic Fellow at the University of Sydney’s Law School