Sunday, November 17, 2013

Not Even Close to "Almost Human" – Artificial Intelligence in Law Enforcement

As FOX does us the service tonight of introducing the entirely original idea of police robots in a dystopian future in RoboCop Minority Report Almost Human, it’s worth pausing to consider some of the real artificial intelligence and autonomous technology (let’s collectively call these advances AI, for convenience) that will actually come to law enforcement in the near future. This is a topic I discuss at great length in my forthcoming book, Robots Are People Too. These robots and computer programs won’t remind anyone of buddy cop movies, but they will force us to ask serious questions about how the 4th Amendment should limit their use.

For the non-lawyers out there, the 4th Amendment states:

“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

Sounds simple enough. But since the middle of the 20th century, courts, defendants, and prosecutors have debated few Constitutional provisions more. The Amendment was originally intended to protect each individual’s personal effects and private papers from government seizure. But the technological advances of the last half century have forced courts and law enforcement to expand the areas protected, mostly due to two issues.

The first is the development of new programs and machines that expand the spaces people use to store information – such as cell phones and flash drives – as well as the spaces that gather information whether we want them to or not – like your web browser history. The second is the development of new devices that police can use to collect information about crimes and criminals, such as thermal imaging devices and GPS trackers.
 
AI will introduce new devices and programs that continue these trends. Autonomous cars like the Google Car will collect, store, and transmit data about everywhere we go. As AI “assistants” like Siri become more common and more functional, they too will collect, store, and transmit data about their usage. And the drone technology the military uses now – which increasingly relies on autonomous functions like navigation and reconnaissance – is already making limited appearances in the law enforcement community, helping with duties like search and rescue missions, where a flying drone is cheaper and easier to deploy than a helicopter.

Our laws are all but non-existent when it comes to AI, which makes sense: until recently, AI appeared exclusively in fiction like Almost Human. Underlying most of our laws is the basic assumption that only human beings make decisions. Our laws aren’t designed to govern scenarios where machines and programs make decisions – how fast to drive, how to navigate in the air, etc. States and the federal government have started to admit this. David Strickland, the Administrator of the National Highway Traffic Safety Administration, said last year that “Most of NHTSA’s safety standards assume the need for a human driver to operate required safety equipment. A vehicle that drives itself challenges this basic assumption.” When the California and Florida legislatures passed legislation governing autonomous cars last year, the bills acknowledged that one of the reasons for the new laws was that each state “does not prohibit or specifically regulate the testing or operation of autonomous technology in motor vehicles” or “the operation of autonomous vehicles.”

States are also concerned about the use of drones by law enforcement agencies. Through the summer of 2013, more than 40 states had considered bills that would limit drone usage by the police. For the most part, the legislatures of these states are not concerned specifically about AI drones. But you can bet that when police departments begin to remove the “man in the loop” – that is, make the drones completely autonomous, which will be an option in the not-too-distant future – there will be another round of panicked calls from constituents and more legislation.

And let’s not forget the Supreme Court, which historically has taken an almost childlike joy in settling 4th Amendment issues raised by technological development. While the Court has a somewhat well-earned reputation for settling important issues by narrow 5-4, strictly partisan decisions, AI use by law enforcement under the 4th Amendment may break that pattern. Last year, in United States v. Jones, the Court UNANIMOUSLY decided that the police needed a warrant before attaching a GPS tracker to a suspect’s car to track his movements. Although the majority opinion by Justice Scalia relied, strangely, on 19th century trespass case law (few children take as much childlike joy in anything as Justice Scalia takes in using 19th century law to govern 21st century technology), the concurring opinions by Justices Sotomayor and Alito discussed GPS tracking in a way that will be relevant to AI.

Both Justices worry that GPS technology erases an important practical limitation that protects suspects’ 4th Amendment rights: it is expensive to assign an officer to track a suspect for long periods of time. With a GPS tracker, it’s cheap and easy. That means law enforcement agencies must be particularly diligent to conform to constitutional limits when using that technology.

Justice Sotomayor added another thought that is particularly relevant to the use of AI drones by law enforcement. She wondered about “the existence of a reasonable societal expectation of privacy in the sum of one’s public movements” and asked “whether people reasonably expect that their movements will be recorded and aggregated in a manner that enables the Government to ascertain, more or less at will, their political and religious beliefs, sexual habits, and so on.” Put another way, does the Constitution prevent law enforcement from turning on an AI drone and assigning it to follow and record anyone’s every movement in public spaces without a warrant?
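To make Justice Sotomayor’s worry concrete, here is a minimal sketch in Python of how little effort “aggregation” takes. Everything in it is hypothetical (invented pings, invented place names), but it shows how raw location data collapses into exactly the kind of profile she describes:

```python
from collections import Counter

# Hypothetical (timestamp, place) pings a tracker or drone might log.
pings = [
    ("2013-11-03 09:05", "First Baptist Church"),
    ("2013-11-05 18:30", "County Democratic HQ"),
    ("2013-11-06 07:45", "Oncology Clinic"),
    ("2013-11-10 09:02", "First Baptist Church"),
    ("2013-11-12 18:31", "County Democratic HQ"),
]

def profile(pings):
    """Collapse raw public movements into a habit profile: place -> visits."""
    return Counter(place for _, place in pings)

for place, visits in profile(pings).most_common():
    print(f"{place}: {visits} visit(s)")
# A dozen lines turn "public movements" into religious, political,
# and medical inferences, with no officer ever leaving the station.
```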

Tonight, Almost Human will ask us to believe that police robots look like real people, talk like real people, and act like real people. The show has to ask because its premise is fiction: those robots don’t exist. But AI machines and programs are coming to law enforcement soon, and they won’t need to ask for our belief; they’ll be real with or without it. The important issue isn’t our belief, though; it’s our Constitutional rights. This technology will make it easier for police to track and monitor anyone and everyone. I have no doubt that nearly every police department that uses AI will do so with the intention of monitoring potentially dangerous people, making our neighborhoods safer, and improving lives. But their jobs will be easier if there are legal standards giving police guidelines for what can be done constitutionally. To create those standards, we need to seriously consider what we think the 4th Amendment can and should say about using AI to protect the public safety.

Monday, February 20, 2012

The Next Time He Runs for President, Mitt Romney Will Say Robots Are People Too

A few months ago, I discussed the idea of the rights and responsibilities of robots – John Chipman Gray’s proposal, dating all the way back to 1909, that any entity with legal rights and duties is a “person” under the law. More recently, presidential candidate Mitt Romney referenced the same idea by saying “Corporations are people too.” Somehow, he delivered the bland, precise words of an early 20th century legal scholar with so little charisma and charm that he made the original seem charming by comparison. Not surprisingly, he has taken some heat for this.

But the next time he runs for President, he might have to include robots and artificial intelligence in his tactless confirmation of every criticism lobbed his way.

There is increasing scholarly thought devoted to AI machines. A recently published book from Samir Chopra and Laurence White, A Legal Theory for Autonomous Artificial Agents, considers the legal status of such machines. An online symposium discussing the book just concluded at Concurring Opinions. Although there were many well-thought-out and interesting posts (if you’re interested in what machines are going to do and should do in the future, I recommend you read them), I want to draw attention to two in particular.

The first is from one of the authors, Samir Chopra. He notes that permitting autonomous artificial agents (“AAAs”) to function as “people” for the purpose of agency through common law “would require some creative stretching and interpretation of the common-law of doctrine of agency [sic].” He is being generous. Using common law to grant AAAs any legal rights and responsibilities would be disastrous. The resulting court opinions would come one on top of the other (as more AAAs caused more litigation), piecemeal and with conflicting results, eventually necessitating legislative action to clear up the wreckage. Let’s just cut to the chase: before any court opinions muddy the water, it would be much better for aspects of legal personhood to be assigned by legislation, either at the state or federal level. If done by Congress, I picture a system much like the Telecommunications Act of 1996, which sought to “let anyone enter any communications business -- to let any communications business compete in any market against any other,” while also minimizing the clash between telecommunications facilities and local laws and regulations. If done by state legislatures, I picture a system like the various corporation acts among the states. I hope to address these competing systems further in a future post.

The second is from Lawrence Solum (whose Legal Personhood for Artificial Intelligences I discussed briefly in the entry above), who compares legal personhood for AAAs to legal personhood for…zombies.

I’m not prejudiced – if a zombie is just as qualified for a job as a living person, I think that zombie has just as much of a right to the job as the individual who is not in a state of decay. Similarly, if my daughter wanted to marry a zombie, I’d like to think I wouldn’t stand in the way of true love (assuming the zombie was not going to eat her brains). But Solum is equating zombies with AI: “Just as we can conceive of a possible world inhabited by both humans and Zombies, we can imagine a future in which artificial agents (or robots or androids) have all the capacities we associate with human persons… [But] we don’t have the emotional responses and cultural sensibilities that would develop in such a world.” Google engineers aren’t working on zombie cars; zombie sportswriters aren’t drafting AP reports for every college basketball game this season. Zombies are fiction. Cool, but fiction. AI is real. We shouldn’t dodge the conversation about what to do with it by saying we’re not emotionally ready for it.

Having said that, I hope zombies are on the horizon. I would love it if Romney runs for President in 2016 (third time’s the charm, Mitt!) and says “Zombies are people too.”

Monday, January 30, 2012

RoboCop – Buying a Better Police Force…

… or at least one that is more easily programmed. Numerous companies with names like (and I’m not making these up) Super Droid Robots, Police One, and Inspector Bots have begun offering various models of “tactical law enforcement robots.” And some cities have begun buying these models or are actively considering incorporating robots into their police forces.

If you read these articles, you’ll note that the robots discussed are not exactly the work of Omni Consumer Products. They’re closer to Radio Shack toys with first-strike capabilities. But in the same way that military drones are advancing in automation and ability, the technology behind tactical law enforcement robots is developing as well.

Right now, for the most part, human beings directly control these police robots. But what happens when police robots take the next leap to drone-like levels of autonomy? Drones are subject to the bounds of international law and the law of war (i.e., effectively unadjudicable law), but police robots are subject to the Constitution. If a citizen accuses an autonomous police robot of violating the Constitution, it’s unclear who will be held responsible for that potential infringement.

And it gets harder to prove unconstitutional action as reliance on autonomous robots increases. For example: a sergeant sends a police robot out to patrol a municipal park every day for 1,000 days, almost three years, looking for pickpockets. There are no incidents. On the 1,001st day, the robot arrests someone sitting alone on a bench reading a book. No human police officer would find reasonable suspicion there. In the process of the arrest, the robot finds illegal drugs in the arrestee’s backpack. Had a human being made the arrest, the drugs would be considered tainted fruit (to use a favorite law school phrase), inadmissible because the arrest and search were unreasonable.

A smart prosecutor won’t give up that easily when an AI robot makes the arrest. Here’s how the argument could go: the robot has been making that patrol for 1,000 days with no alteration to its program and no other reported incidents. If it makes an arrest, it has reasonable suspicion. Just because human beings can’t pick up on the signs right away doesn’t mean that reasonable suspicion doesn’t exist, given the robot’s greater data processing capabilities.

The danger in this scenario is that no one knows what constitutes reasonable suspicion. And the more data points in the robot’s experience – 1,000 days, 10,000 days, 100,000 days – the easier it is to believe that it is operating with a superior body of evidence for making educated decisions about reasonable suspicion. Unconstitutionality becomes ever harder to prove.
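To see why that argument is so hard to rebut, consider a toy version of the robot’s logic in Python. This is entirely my own hypothetical, not how any real police system works: the robot flags anyone who deviates far enough from a statistical baseline built up over those 1,000 patrols.

```python
import random
import statistics

random.seed(42)  # make the simulated patrol log reproducible

# 1,000 days of a hypothetical logged feature (say, minutes a visitor
# lingers near other people's bags). Simulated values, not real data.
historical = [random.gauss(2.0, 0.5) for _ in range(1000)]

MEAN = statistics.mean(historical)
STDEV = statistics.stdev(historical)

def reasonable_suspicion(observation: float, threshold: float = 4.0) -> bool:
    """Flag anyone more than `threshold` standard deviations above the
    patrol baseline. The robot 'knows' why it acted; a court sees only
    the arrest."""
    z_score = (observation - MEAN) / STDEV
    return z_score > threshold

print(reasonable_suspicion(2.2))   # ordinary park visitor -> False
print(reasonable_suspicion(15.0))  # flagged -> True, but the z-score
                                   # never articulates WHICH facts count
```

The defense can’t cross-examine a z-score. The robot can report that an observation crossed a threshold, but it can’t articulate the specific facts courts traditionally require to establish reasonable suspicion, and the bigger the baseline grows, the more authoritative that silent threshold looks.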

Wednesday, December 28, 2011

Links for Monday 12-28-11

Drone-Ethics Briefing: What a Leading Robot Expert Told the CIA (Atlantic Monthly) - Great discussion of the ethics and legality of using robots and AI in national defense. The most interesting thought in the piece: Does the prohibition of torture under international law change if a soldier can resist pain through microrobotics?

Can Loving a Robot Lead to Divorce? (HuffingtonPost) - Get ready for the federal Defense of Marriage Act of 2025, making it illegal to marry a sexbot. Is it more morally acceptable to marry a robot than a sexbot?

Siri – The Horror Movie – Hard to say who exactly is liable here. I’d have to revisit the famous torts case Freddy v. Jason.

Wednesday, December 14, 2011

Law of Robots Links for Wednesday 12-14-11

How Technology Will Test the Constitution (www.nextgov.com) - Amidst a discussion of technology’s effect on law in the year 2030, Notre Dame Professor O. Carter Snead notes that we are combining with our phones and portable devices to become “cyborgs.” He worries that the part of us encapsulated in our phones does not currently have constitutional rights. Personally, my Blackberry is too contentious already. The last thing it needs is the Second Amendment.

Drones deployed in US for domestic law enforcement (The Robot State) - In some ways, Minority Report was a freakishly prescient movie.

Majel may be Google’s answer to Siri (www.washingtonpost.com) - Here’s a topic that’s been on my to-do list since Siri was released: If a musician asks Siri a question and then samples Siri’s response in a hit single, who gets the sample’s royalty and who authorizes the sample’s use?

Sunday, December 11, 2011

Law of Robots Links for Monday 12-12-11

Conference on Robots and the Law - Legal practitioners and scholars decide to embrace their Sci Fi side by riffing on Asimov, calling their symposium "We Robot."

Developing Robots to Be Soldiers - My guess is that I will discuss the perils of Battlestar Galactica multiple times in this blog.

"She Feels as Real as My Real Girlfriend" - Which begs the question - Is your "real" girlfriend also imaginary? Will we eventually have public policy debates regarding whether marriage includes the union between a man and his non-living partner? Oh wait...

Sunday, November 13, 2011

A Tour of Adept Mobile Robots and Shifting Liability for Artificial Intelligence


Recently, a friend from high school, Luke Broyer, gave me a tour of Adept MobileRobots in Amherst, New Hampshire. Mobile specializes in automated guided vehicles (AGVs) that perform a variety of functions in hospitals, factories, and consumer-interaction venues. To get an idea of the range of machines they build, check out the Seekur and the PeopleBot (which sounds like a Transformers derivative, but is actually a machine you might converse with soon). Mobile’s robots can do heavy lifting in a warehouse, do hard labor outside, take your order for a burger, and serve as a museum tour guide, all without direct control from a human being. If you haven’t interacted with one already, it’s only a matter of time before you encounter a Mobile product or something similar from another company.

As Luke showed me around the Mobile facilities, we started talking about liability issues associated with automated robots. Luke works on code for Mobile’s R&D and hasn’t read any of the indemnification or risk-assignment language in Mobile’s contracts, but I think it’s safe to assume that Mobile would like to pass along as much liability as possible for damage and injuries associated with the AGVs it sells, as would any other company in Mobile’s position. Most liability issues can be settled contractually, and from what I understand about Mobile’s customer base, their contracts are likely concluded after fairly balanced negotiations.

However, I think that eventually: a) AI-driven robots will be ubiquitous in our lives; and b) Mobile or other manufacturers will become dominant producers of these machines. In an earlier post, I described a grocery store where customers give a robot their shopping lists and the robot retrieves all the items. Mobile already produces robots that make this interaction possible: the PeopleBot can roam the store interacting with customers, while the Seekur or similar machines retrieve the groceries. Once these machines, or others like them, are cost-efficient for stores, we’ll see them a lot more. And we might see them soon.
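A minimal sketch of that division of labor might look like the code below. The class and method names are mine, invented purely for illustration; nothing here reflects Mobile’s actual software interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Fetcher:
    """Stand-in for a Seekur-style retrieval robot."""
    basket: list = field(default_factory=list)

    def fetch(self, item: str) -> None:
        # Navigation and grasping omitted; assume the item is on the shelf.
        self.basket.append(item)

@dataclass
class Greeter:
    """Stand-in for a PeopleBot-style customer-facing robot."""
    fetcher: Fetcher

    def take_order(self, shopping_list: list) -> list:
        """Accept a customer's list and hand each item off to the fetcher."""
        for item in shopping_list:
            self.fetcher.fetch(item)
        return self.fetcher.basket

greeter = Greeter(Fetcher())
print(greeter.take_order(["milk", "bread", "almond butter"]))
```

Notice that every liability question in the rest of this post lives inside that one fetch() call: substitute peanut butter for almond butter there, and the contractual questions begin.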

One of the reasons AI-driven machines will become cost-efficient is that one or more manufacturers, maybe Mobile, will develop a production system that permits them to sell smart AI at a price that makes sense for large chains and smaller stores alike. Such a manufacturer will be in a strong position to dominate the market for AI-driven machines. I’m not concerned in this post with the contractual relationships between the AI manufacturer and the chain stores. Rather, I am more interested in the contractual relationships between the manufacturer and smaller stores.

Large chains will have the economic clout (buying hundreds or thousands of AI-driven robots) to negotiate their contracts with the manufacturer. The owners of individual stores will not be able to negotiate in the same way. Those owners will have to rely on a contract of adhesion presented by the manufacturer. A contract of adhesion (basically, a standard form contract) will surely shift most if not all of the liability to the individual store owners.

Did the robot place peanut butter instead of almond butter into the basket of someone who is allergic to peanuts? Did the robot drive into a customer? Did the robot drop a heavy container on one of the human employees? By contract, all of that can be the responsibility of the store owner.

Contracts of adhesion are not invalid per se, but courts will look at them skeptically if they don’t pass the “sniff test.” The sniff test is exactly what it sounds like: if the contract looks crappy, it doesn’t pass. This creates another layer of uncertainty for the injured party, the store owner, and the manufacturer. It is a particularly thorny issue for the owners of individual stores, as they will have to assess the liability risk to their establishments with little historical data.

Mobile has a fantastically interesting line of AI-driven robots. The Seekur in particular is a wicked cool machine: the size of a dining room table, able to turn on a dime, and capable of extensive indoor and outdoor autonomous work. I hope Mobile is one of the manufacturers that gains a dominant market share; they’ve laid some strong groundwork in this field. I also hope that their contracts adequately address liability for their customers and their customers’ customers.