Wednesday, October 26, 2011

Do Robots Have Rights and Responsibilities?

“The technical legal meaning of a ‘person’ is a subject of legal rights and duties.” – John Chipman Gray, The Nature and Sources of the Law, 1909

Despite predating World Wars I and II, the Titanic, and the Transformers, the legal definition of a person above suggests that we should grant legal personhood to computers and robots, at least in the sense that we want to assign clearly defined obligations to robots that act independently. That might mean a GoogleCar gains legal personhood in narrowly defined circumstances, such as when it hits a pedestrian.

Lawrence B. Solum discusses legal personhood for inanimate objects in his 1992 article, Legal Personhood for Artificial Intelligences. He also takes a long look at artificial intelligence serving as trustee, identifying three developments necessary for AI to serve that function. The first development he mentions is that the AI must be programmed to buy and sell stock, a capability already in use throughout equities markets, so AI trustees may be closer than we think. Assuming that happens, what would a beneficiary do if he or she believed the AI had violated its fiduciary duties as trustee? What would a probate court do – unplug the AI and appoint a human trustee? Does the AI even get a chance to defend itself? If so, how?

Solum suggests that the AI purchase insurance (and even that insuring the AI might be less expensive than insuring a human being). He also suggests that there would be no reason to deter or punish an AI that administers the trust incorrectly. I am not certain I agree with that; I can think of a few reasons deterrence might still matter.

Thursday, October 20, 2011

Could a Robot Cyberbully Your Kids?


If a robot were able to independently write emails or texts – or post on Facebook – what sort of liability would we assign to it for what it writes? That’s the question that occurred to me when I read about this year’s Loebner Prize in Artificial Intelligence. The goal of the Loebner, which was first given out in 1991, is to find a computer program whose chat responses pass for a human’s. The results thus far have been underwhelming.

But let’s assume that progress is made sometime in the not-too-distant future and AI is developed that can interact with human beings online as a human being (I don’t think it’s that far off). I can envision such technology being used to chaperone an online community for kids. What happens if, in the course of chaperoning, the AI writes and sends messages that would either violate a law or create liability if written by a human?

Eight states have laws specifically prohibiting cyberbullying, and a federal law has been proposed. Would the penalties associated with those laws fall on the AI’s creator? Its owner? Let’s say the AI is the chaperone of a town-specific children’s webpage, and both the AI and the webpage are owned and operated by the parents of a child in the town’s elementary school. If that AI interacts with kids on that webpage in a way that could be construed as cyberbullying, can the elementary school approach the AI’s owners as if their child had committed the cyberbullying? Can the school administrators expel the AI from the webpage?

I don’t normally like to have so many question marks in an entry, but the scenario above is outside our current laws. We don’t know how to govern it. New legal models will have to be created to govern AI that is programmed and owned by a person but that commits an act outside of that person’s control.

Tuesday, October 18, 2011

Maybe Battlestar Galactica Was On to Something – Robots At War


A brief synopsis of the plot of Battlestar Galactica: Man created robots to do all the things he didn’t want to do, including fighting wars. Then the robots rebelled and waged war against man.

We are part of the way there. Recently, a US drone killed Anwar al-Awlaki, a terrorist living in Yemen who was also a US citizen. The operation was covert, and the US government has acknowledged neither a) that the drone attack occurred, nor b) that the Obama administration ordered the attack. Assuming both are true, this raises some troubling legal questions about the federal government’s right to kill a US citizen without a trial.

It is also an example of a relatively new area of international law: What constitutes an act of war when the act did not involve a human being? As the Washington Post noted, American drone attacks have raised questions “about the legality of drone strikes when employed in places such as Pakistan, Yemen and Somalia, which are not at war with the United States.” Similar questions exist regarding potential cyber attacks caused by autonomous viruses on US utilities and government systems.

Even Ralph Nader has put down his “Vote Nader” sign and his car’s owner’s manual to advocate for an international treaty governing unmanned aerial vehicles. In his essay, he describes American drones as having questionable morality.

I think Robert E. Lee put it more succinctly: “It is well that war is so terrible, otherwise we should grow too fond of it.” International law has developed, in part, in response to the terribleness of war. The danger in creating robots to do the things we don’t want to do is that we forget about the costs associated with those things. International law could be weakened if, in the future, it develops in response to a lessened sense of war’s terribleness.

Wednesday, October 12, 2011

Oscar the Trash-Loving Robot and His Groceries


Interesting article in Slate today about the need to rework our infrastructure in order to accommodate automated cars and other robots, including a trash collector named Oscar. Fred Swain, the author, suggests that AI will allow consumers to return to their original interactions with grocery stores: we provide a list of our groceries, and the AI grocer collects and delivers them for us. This will be assisted by the further deployment of QR codes, which allow robots to identify the objects around them, including jars of peanut butter and almond butter.

Assuming that this return to grocers is accomplished, we’ll have to sort through some tricky issues of liability. Let’s say a shopper with a nasty peanut allergy requests almond butter but receives peanut butter, which later causes an allergic reaction that requires hospitalization. Is the robot or its programmer liable? What about the company that labeled the peanut butter and almond butter jars with QR codes? The more intimately AI is involved in our lives – dietary protection versus trash collection – the more important these liability issues become.

Tuesday, October 11, 2011

An Average Thursday in the Near Future


You wake up to an alarm clock that has gone off a half hour earlier than it did the day before because it knows your Thursday schedule is different from your Wednesday schedule. You shower, and the shower changes its temperature and stream settings because it knows the difference between how you like your shower and how your roommate/husband/wife/girlfriend/boyfriend/partner/hetero-lifemate likes his/her shower. You pick up your coffee, which was ready more or less when you entered the kitchen because the alarm clock commiserated with the coffee machine.

On the drive to work, you watch SportsCenter while your GoogleCar automatically transports you. At work, you read a variety of TPS reports that have been researched and written by a computer program. Over lunch, you read a new novel, also written by a program. After work, you listen to an album of music, also composed by a program.

A couple questions about what happens during a not-so average Thursday in the near future:

• If your GoogleCar hits someone and breaks her leg, how much liability do you have for that injury?

• If you use portions of the report, novel, and music that were written by programs to produce successful advertising, have you infringed on a copyright? Whose?

This hypothetical Thursday and these questions are coming a lot sooner than we think. Supposedly, Google will have a car available commercially sometime around 2016. There are already programs writing sports reports, most prominently producing recaps of every college basketball game last season (which is good for a Georgetown fan like me – at the rate the Big East is going, no major news outlet will cover the Hoyas’ games in another couple of seasons).

While other people grapple with the problem of job-stealing robots, I’m more interested in the way we’ll have to re-imagine our laws to deal with non-humans doing human things. Will robots become the corporations of the 21st century, gaining constitutional rights and recognition as quasi-people? Will we create a techno-digi-common, where robot-made ideas are owned by everyone? Will robot owners automatically own the creations of their machines, much like dog owners are responsible for scooping up after their dogs? I think these are fascinating questions that will have an impact on many legal practices. Before our robot overlords get here, I want to be ahead of the curve.