Today marked a significant event in the development of robot law. RT reported that:

A European Parliament committee has voted in favor of a draft report that proposes granting legal status to robots, categorizing them as “electronic persons”.

Unsurprisingly, it was the featured story on the Drudge Report, even with all of the domestic political stories going on. Drudge is probably the fiercest A.I. watchdog site on the internet, with daily posts on A.I. developments.

There are several important aspects to today's development.

First, we see Europe taking a proactive stance to preempt a runaway A.I. scenario in which killer robots take over the world. Elon Musk, Stephen Hawking, Bill Gates, and others have said that this is one of the gravest potential threats to humanity.

The legislators are trying to create policies, such as mandatory kill switches, that can stop robots if they run amok.

They are also concerned about some of the psychological and “human” elements of robots, and are proposing legislation that would prohibit manufacturing robots designed to appear emotionally dependent. Such designs could lead us to genuinely believe that robots are human and actually require our empathy.

As I point out in my book Ai Vey: Jewish Thoughts on Thinking Machines, this type of legislation would be impossible to enforce. It is not the exterior design that evokes empathy. The most iconic robot in all of film, R2-D2, is a rolling trashcan that beeps and whistles, and yet it seems to us to have a life of its own.

It is our job to remind ourselves that robots are not real people, and that, as MEP Mady Delvaux rightly said in an interview, “robots can show empathy, but cannot feel empathy.”

There is, however, another underlying issue that I think is important to watch closely.

The draft report, approved by the European Parliament Committee on Legal Affairs by 17 votes to two, with two abstentions, proposes that “The most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause.”

I assume that this type of “personhood” is meant to be something akin to the personhood that we attribute to other nonhuman entities like corporations.

I have two issues with this. The first is that nobody has ever confused a corporation with a real person; however, the more we treat a robot like a person, even if only legally, the closer we come to believing that robots actually are people.

The second, and much larger, issue is that I suspect conferring liability on a robot is a way to shield the manufacturer from liability if the robot malfunctions and injures someone. Even the smartest robot is not a person; behind every artificial intelligence there is still a real person. The corporations that manufacture robots should not be able to avoid liability by claiming that a robot has a mind of its own and that the manufacturer is therefore no longer responsible for the robot's actions once it leaves the factory. (Disclaimer: I am only a first-year law student, so please take what I say with a grain of salt. But it doesn't take a law professor to be suspicious of rules that transfer liability away from manufacturers.)

In any event, I am happy that legislatures are discussing these questions and that news sites are bringing them to our attention.

If you want to read more about this topic, don't forget to purchase my book Ai Vey: Jewish Thoughts on Thinking Machines, published by The Aspen Center for Social Values.