Monday, January 30, 2012

RoboCop – Buying a Better Police Force…


… or at least one that is more easily programmed. Numerous companies with names like (and I'm not making these up) Super Droid Robots, Police One and Inspector Bots have begun offering various models of “tactical law enforcement robots.” And some cities have already bought models or are actively considering incorporating robots into their police forces.

If you read these articles, you’ll note that the robots discussed are not exactly the work of Omni Consumer Products. They’re closer to Radio Shack toys with first-strike capabilities. But in the same way that military drones are advancing in automation and ability, the technology behind tactical law enforcement robots is developing as well.

Right now, for the most part, there are human beings directly controlling these police robots. But what happens when police robots take the next leap to drone-like levels of autonomy? Drones are subject to the bounds of international law and the law of war (i.e., unadjudicable law), but police robots are subject to the Constitution. If a citizen accuses an autonomous police robot of violating the Constitution, it’s unclear who will be held responsible for that potential infringement.

And it gets harder to prove unconstitutional action as reliance on autonomous robots increases. For example: A sergeant sends a police robot out to patrol a municipal park every day for 1,000 days, almost three years, looking for pickpockets. There are no incidents. On the 1,001st day, the police robot arrests someone who is sitting alone on a bench reading a book. No human police officer would find reasonable suspicion in that behavior. In the process of the arrest, the robot finds illegal drugs in the backpack of the arrestee. Had a human being made the arrest, the drugs would be considered tainted fruit (to use a favorite law school phrase), inadmissible because the arrest and search were unreasonable.

A smart prosecutor won’t give up that easily when an AI robot makes the arrest. Here’s how the argument could go: That robot has been making that patrol for 1,000 days with no alteration to its program and no other reported incidents. If it makes an arrest, it has reasonable suspicion. Just because human beings can’t pick up on the signs right away doesn’t mean that reasonable suspicion doesn’t exist, given the robot’s greater data-processing capabilities.

The danger in this scenario is that no one knows what actually constituted reasonable suspicion. And the more data points in the robot’s experience – 1,000 days, 10,000 days, 100,000 days, etc. – the easier it is to believe that it is operating with a superior body of evidence, making educated decisions about reasonable suspicion. Unconstitutionality becomes ever harder to prove.