Press Office

Comment: 'Machines Like Me' and the thorny issue of robot rights

Published on: 17 April 2019

Writing for The Conversation, Joshua Jowitt comments on the issue of robot rights.

Joshua Jowitt, Newcastle University

Ian McEwan’s latest book, Machines Like Me: A Novel, offers an alternative history: Britain has lost the Falklands War, Margaret Thatcher is waging an election campaign against Tony Benn, and Alan Turing has survived homophobic persecution to achieve breakthroughs in artificial intelligence.

The novel paints a picture of 1980s London that is at once familiar and yet very different – and in doing so it raises some pressing questions. Central to the plot are the world’s first synthetic humans, put on sale for the public to buy. With this device, McEwan asks what it means to be human – if these machines are just like me, does that mean they have rights, like me?

It’s tempting to dismiss this as a ridiculous notion. When the question comes up with friends in the pub (usually after a few drinks), a common response is that we have human rights because we’re human. Robots aren’t human, so they can’t have the same rights as us. But if you think about this, it’s a circular argument. The same logic was used against women’s suffrage – they can’t have the vote, because they’re women. Slaves can’t have freedom, because they’re slaves. Machines can’t have rights, because they’re machines.

Being human

But before this can be dismissed as whimsical science fiction, we need to think more about why humans have rights and what it means to be human in the first place. Some might highlight the importance of our births – the fact that we are naturally procreated, whereas machines are made by humans. But if this is true, where does this leave the eight million people who have been born as a result of IVF treatment?

You could highlight our organic nature to sidestep this problem – we are biological beings, whereas machines are made of component parts. But this would mean that people with prosthetic limbs, or commonplace hip and knee replacements, are “less human” – which is clearly not the case. Scientists at my own university have 3D printed the first artificial cornea, and this week Israeli scientists 3D printed an entire human heart. Nobody is suggesting that patients receiving these artificial organs are less human – even though they are no longer 100% organic.

Consciousness may also be a place to look, since humans are able to act on reasons beyond natural impulse or programming. But we are not alone in this ability – other animals can also engage in sophisticated planning and tool use. And this argument would mean that babies and late-stage dementia patients are in effect “less human” because they lack this capacity – which is clearly not the case.

Ultimately, all of these lines of argument have problems that only lead to deeper levels of abstraction. Maybe then what’s required is the ability to be open to a change in how we see the world and ourselves.

Conflict and consciousness

Although the level of machine consciousness portrayed by McEwan is, for the time being, still fiction, many believe it will be a reality by the end of the century. And as technology develops and machines become more like us, they may also need to be recognised as having rights like us.

Alan Gewirth was a professor of philosophy at the University of Chicago. He claimed that the reason humans have rights is that we are prospective agents, able to choose what to do beyond natural impulse or reflex. So if this autonomous agency is the foundation of our rights, and robots are also autonomous agents, consistency requires us to recognise that they too have the same basic rights to freedom and well-being that we claim for ourselves.

This is not to say that robot rights could never be overridden – all rights conflicts lead to the rights of one party being prioritised over the other. It merely requires us to see robots as equal parties in any rights dispute. Mistreating a robot agent would not be the same as mistreating a printer, for example; it would be closer to mistreating another human.

Granting legal rights to robots clearly remains a complicated subject, but experience from other fields shows that the problem is a practical one and can be overcome. Legal systems have recognised that things as diverse as idols, orangutans and even rivers can have rights – so why not robots? It’s clear, then, that like McEwan the law should start thinking about these questions now instead of playing catch-up once the robots have arrived.

Joshua Jowitt, Teaching Fellow in Law, Newcastle University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
