Monday, February 20, 2012

The Next Time He Runs for President, Mitt Romney Will Say Robots Are People Too


A few months ago, I discussed the idea of the rights and responsibilities of robots – John Chipman Gray’s proposal that any entity with legal rights and duties is a “person” under the law. That proposal dates all the way back to 1909. More recently, presidential candidate Mitt Romney referenced the same idea by saying “Corporations are people too.” Somehow, he sucked so much charisma and charm out of the bland, precise words of an early 20th-century legal scholar that he made them appear to have had charisma and charm in the first place. Not surprisingly, he has taken some heat for this.

But the next time he runs for President, he might have to include robots and artificial intelligence in his tactless confirmation of every criticism lobbed his way.

Increasing thought is being devoted to intelligent machines. A recently published book by Samir Chopra and Laurence White, A Legal Theory for Autonomous Artificial Agents, considers the legal status of such machines. An online symposium discussing the book just concluded at Concurring Opinions. Although there were many well-thought-out and interesting posts (and if you’re interested in what machines are going to do and should do in the future, I recommend you read them), I want to draw attention to two in particular.

The first is from one of the authors, Samir Chopra. He notes that permitting autonomous artificial agents (“AAAs”) to function as “people” for the purpose of agency through common law “would require some creative stretching and interpretation of the common-law of doctrine of agency [sic].” He is being generous. Using common law to grant AAAs any legal rights and responsibilities would be disastrous. The resulting court opinions would come one on top of the other (as more AAAs cause more litigation), piecemeal, and with conflicting results, eventually necessitating legislative action to clear up the wreckage. Let’s just cut to the chase: Before any court opinions muddy the water, it would be much better for aspects of legal personhood to be assigned by legislation, at either the state or federal level. If done by Congress, I picture a system much like the Telecommunications Act of 1996, which sought to “let anyone enter any communications business -- to let any communications business compete in any market against any other,” while also minimizing the clash between telecommunications facilities and local laws and regulations. If done by state legislatures, I picture a system like the various corporation acts among the states. I hope to address these competing systems further in a future post.

The second is from Lawrence Solum (whose Legal Personhood for Artificial Intelligences I discussed briefly in the entry above), who compares legal personhood for AAAs to legal personhood for…zombies.

I’m not prejudiced – if a zombie is just as qualified for a job as a living person, I think that zombie has just as much right to the job as the individual who is not in a state of decay. Similarly, if my daughter wanted to marry a zombie, I’d like to think I wouldn’t stand in the way of true love (assuming the zombie is not going to eat her brains). But Solum is equating zombies with AI: “Just as we can conceive of a possible world inhabited by both humans and Zombies, we can imagine a future in which artificial agents (or robots or androids) have all the capacities we associate with human persons… [But] we don’t have the emotional responses and cultural sensibilities that would develop in such a world.” Google engineers aren’t working on zombie cars; zombie sportswriters aren’t drafting AP reports for every college basketball game this season. Zombies are fiction. Cool, but fiction. AI is real. Don’t dodge the conversation about what to do about it by saying we’re not emotionally ready for it.

Having said that, I hope zombies are on the horizon. I would love it if Romney ran for President in 2016 (third time’s the charm, Mitt!) and said “Zombies are people too.”

Monday, January 30, 2012

RoboCop – Buying a Better Police Force…


… or at least one that is more easily programmed. Numerous companies – and I’m not making these names up – like Super Droid Robots, Police One, and Inspector Bots have begun offering various models of “tactical law enforcement robots.” And some cities have begun buying models or are actively considering incorporating robots into their police forces.

If you read these articles, you’ll note that the robots discussed are not exactly the work of Omni Consumer Products. They’re closer to Radio Shack toys with first-strike capabilities. But in the same way that military drones are advancing in automation and ability, the technology behind tactical law enforcement robots is developing as well.

Right now, for the most part, human beings are directly controlling these police robots. But what happens when police robots take the next leap to drone-like levels of autonomy? Drones are subject to the bounds of international law and the law of war (i.e., largely unadjudicable law), but police robots are subject to the Constitution. If a citizen accuses an autonomous police robot of violating the Constitution, it’s unclear who will be held responsible for that potential infringement.

And unconstitutional action becomes harder to prove as reliance on autonomous robots increases. For example: A sergeant sends a police robot out to patrol a municipal park every day for 1,000 days, almost three years, looking for pickpockets. There are no incidents. On the 1,001st day, the police robot arrests someone sitting alone on a bench, reading a book. No human police officer would find reasonable suspicion there. In the process of the arrest, the robot finds illegal drugs in the arrestee’s backpack. Had a human being made the arrest, the drugs would be considered tainted fruit (to use a favorite law school phrase), inadmissible because the arrest and search were unreasonable.

A smart prosecutor won’t give up that easily when the arresting officer is an AI robot. Here’s how the argument could go: That robot has been making that patrol for 1,000 days with no alteration to its program and no other reported incidents. If it makes an arrest, it has reasonable suspicion. Just because human beings can’t pick up on the signs right away doesn’t mean that reasonable suspicion doesn’t exist, given the robot’s greater data-processing capabilities.
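To see why that argument is so hard to rebut, here is a minimal sketch (in Python) of how such a robot might decide to flag someone. Everything in it is invented for illustration – the features, the numbers, and the threshold are hypothetical, not a description of any real policing system:

    from statistics import mean, stdev

    # Hypothetical patrol logs: a few numeric features per person observed
    # (say, minutes stationary, distance from the park exit, bag size).
    # Imagine 1,000 days of rows like these.
    baseline = [
        [12.0, 30.0, 1.0],
        [8.0, 25.0, 2.0],
        [15.0, 40.0, 1.5],
        [10.0, 35.0, 1.2],
    ]

    def anomaly_score(person):
        # Sum of per-feature z-scores: how far this person deviates from
        # everything the robot has seen before. The output is one opaque
        # number, not a set of articulable facts.
        score = 0.0
        for i, value in enumerate(person):
            column = [row[i] for row in baseline]
            mu, sigma = mean(column), stdev(column)
            if sigma > 0:
                score += abs(value - mu) / sigma
        return score

    # The reader on the bench: stationary a long time, near the exit,
    # carrying a large bag.
    reader = [95.0, 5.0, 3.5]

    THRESHOLD = 10.0  # tuned on past data, for reasons no witness can recite
    if anomaly_score(reader) > THRESHOLD:
        print("FLAG for arrest:", round(anomaly_score(reader), 1))

The point of the sketch is that the flag may well reflect a genuine statistical regularity across 1,000 days of patrols, but the only thing the robot can “testify” to is a number crossing a threshold – not the specific, articulable facts a human officer would have to recite on the stand.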

The danger in this scenario is that no human being knows what the robot treats as reasonable suspicion. And the more data points in the robot’s experience – 1,000 days, 10,000 days, 100,000 days, and so on – the easier it is to believe that it is operating with a superior body of evidence and making educated decisions about reasonable suspicion. Unconstitutionality becomes ever harder to prove.