Wednesday, December 28, 2011

Links for Wednesday 12-28-11

Drone-Ethics Briefing: What a Leading Robot Expert Told the CIA (Atlantic Monthly) – Great discussion of the ethics and legality of using robots and AI in national defense. The most interesting thought in the piece: Does the prohibition of torture under international law change if a soldier can resist pain through microrobotics?

Can Loving a Robot Lead to Divorce? (HuffingtonPost) – Get ready for the federal Defense of Marriage Act of 2025, making it illegal to marry a sexbot. Is it more morally acceptable to marry a robot than a sexbot?


Siri – The Horror Movie – Hard to say who exactly is liable here. I’d have to revisit the famous torts case Freddy v. Jason.

Wednesday, December 14, 2011

Law of Robots Links for Wednesday 12-14-11

How Technology Will Test the Constitution (www.nextgov.com) - Amidst a discussion of technology’s effect on law in the year 2030, Notre Dame Professor O. Carter Snead notes that we are combining with our phones and portable devices to become “cyborgs.” He worries that the part of us encapsulated in our phones does not currently have constitutional rights. Personally, my BlackBerry is too contentious already. The last thing it needs is the Second Amendment.

Drones deployed in US for domestic law enforcement (The Robot State) - In some ways, Minority Report was a freakishly prescient movie.

Majel may be Google’s answer to Siri? (www.washingtonpost.com) - Here's a topic that's been on my to-do list since Siri was released: If a musician asks Siri a question and then samples Siri's response in a hit single, who collects the royalties on the sample, and who authorizes its use?

Sunday, December 11, 2011

Law of Robots Links for Monday 12-12-11

Conference on Robots and the Law - Legal practitioners and scholars decide to embrace their Sci Fi side by riffing on Asimov, calling their symposium "We Robot."

Developing Robots to Be Soldiers - My guess is that I will discuss the perils of Battlestar Galactica multiple times in this blog.

"She Feels as Real as My Real Girlfriend" - Which begs the question - Is your "real" girlfriend also imaginary? Will we eventually have public policy debates regarding whether marriage includes the union between a man and his non-living partner? Oh wait...

Sunday, November 13, 2011

A Tour of Adept Mobile Robots and Shifting Liability for Artificial Intelligence


Recently, a friend from high school, Luke Broyer, gave me a tour of Adept Mobile Robots in Amherst, New Hampshire. Mobile specializes in automated guided vehicles (AGVs) that perform a variety of functions in hospitals, factories, and consumer-interaction venues. To get an idea of the range of machines they build, check out the Seekur and the PeopleBot (which sounds like a Transformers derivative, but is actually a machine you might converse with soon). Mobile’s robots can do heavy lifting in a warehouse, hard labor outside, take your order for a burger, or serve as a museum tour guide, all without direct control from a human being. If you haven’t interacted with one already, it’s only a matter of time before you encounter a Mobile product or something similar from another company.

As Luke showed me around the Mobile facilities, we started talking about liability issues associated with automated robots. Luke works on code for Mobile’s R&D and hasn’t read any of the indemnification or risk-assignment language in Mobile’s contracts, but I think it’s safe to assume that Mobile, like any other company in its position, would like to pass along as much liability as possible for damage and injuries associated with the AGVs it sells. Most liability issues can be settled contractually, and from what I understand about Mobile’s customer base, their contracts are likely concluded after fairly balanced negotiations.

However, I think that eventually: a) AI-driven robots will be ubiquitous in our lives; and b) Mobile or other manufacturers will become dominant producers of these machines. In an earlier post, I described a grocery store where customers give a robot their shopping lists and the robot retrieves all the items. Mobile already produces robots that make this interaction possible. The PeopleBot can roam the store interacting with customers, while the Seekur or similar machines can retrieve the groceries. Once these machines, or others like them, are cost-efficient for stores, we’ll see them a lot more. And we might see them soon.

One of the reasons AI-driven machines will become cost-efficient is that one manufacturer (or more), maybe Mobile, will develop a production system that permits it to sell smart AI at a price that makes sense for large chains and smaller stores alike. That manufacturer will be in a strong position to dominate the market for AI-driven machines. I’m not concerned in this post with the contractual relationships between the AI manufacturer and the chain stores. Rather, I am more interested in the contractual relationships between the manufacturer and smaller stores.

Large chains will have the economic clout (buying hundreds or thousands of AI-driven robots) to negotiate their contracts with the manufacturer. The owners of individual stores will not be able to negotiate in the same way. Those owners will have to rely on a contract of adhesion presented by the manufacturer. A contract of adhesion (basically, a standard form contract) will surely shift most if not all of the liability to the individual store owners.

Did the robot place peanut butter instead of almond butter into the basket of someone who is allergic to peanuts? Did the robot drive into a customer? Did the robot drop a heavy container on one of the human employees? By contract, all of that can be the responsibility of the store owner.

A contract of adhesion is not invalid per se, but courts will look at one skeptically if it doesn’t pass the “sniff test.” The sniff test is exactly what it sounds like: If the contract looks crappy, it doesn’t pass the sniff test. This creates another layer of uncertainty for the injured party, the store owner, and the manufacturer. It is a particularly thorny issue for the owners of individual stores, who will have to assess the liability risk to their establishments with little historical data.

Mobile has a fantastically interesting line of AI-driven robots. The Seekur in particular is a wicked cool machine: the size of a dining room table, able to turn on a dime, and capable of extensive indoor and outdoor autonomous work. I hope Mobile is one of the manufacturers that gains a dominant market share; they’ve laid some strong groundwork in this field. I also hope that their contracts adequately address liability for their customers and their customers’ customers.

Wednesday, November 9, 2011

Even in Satire of Artificial Intelligence, Legal Issues Come Up Early and Often

http://www.irrmag.com/features/moviereviews/761-siri-becomes-self-aware-at-555am-est.html

Wednesday, October 26, 2011

Do Robots Have Rights and Responsibilities?

“The technical legal meaning of a ‘person’ is a subject of legal rights and duties.” – John Chipman Gray, The Nature and Sources of the Law, 1909

Despite predating World Wars I and II, the Titanic, and the Transformers, the legal definition of a person above suggests that we should grant legal personhood to computers and robots, at least in the sense that we want to assign clearly defined obligations to robots that act independently. That might mean that GoogleCar gains legal personhood in narrowly defined circumstances, such as when it hits a pedestrian.

Lawrence B. Solum discusses inanimate objects having legal personhood in his 1992 article, Legal Personhood for Artificial Intelligences. He also takes a long look at artificial intelligence serving as trustee, identifying three developments necessary for AI to serve that function. The first development he mentions is that the AI must be programmed to buy and sell stock, an idea already in use throughout equities markets, so AI trustees may be closer than we think. Assuming that happens, what would a beneficiary do if he or she believed the AI had violated its fiduciary duties as trustee? What would a probate court do – unplug the AI and appoint a human trustee? Does the AI even get a chance to defend itself? If so, how?
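
To make Solum’s first development concrete, here is a minimal sketch of the kind of trading rule an AI trustee might be programmed with – a hypothetical rebalancing function of my own invention, not anything from Solum’s article or a real trading system:

```python
# Hypothetical sketch: an AI trustee "programmed to buy and sell stock" by
# mechanically rebalancing a trust toward a target stock allocation.
# All names and numbers are illustrative, not a real trading API.

def rebalance_order(portfolio_value, stock_value,
                    target_stock_pct=0.60, band=0.05):
    """Return an (action, dollars) order when the stock allocation
    drifts outside the target band; otherwise hold."""
    current_pct = stock_value / portfolio_value
    if current_pct > target_stock_pct + band:
        return ("sell", (current_pct - target_stock_pct) * portfolio_value)
    if current_pct < target_stock_pct - band:
        return ("buy", (target_stock_pct - current_pct) * portfolio_value)
    return ("hold", 0.0)

# A $1,000,000 trust holding $700,000 in stock is overweight:
print(rebalance_order(1_000_000, 700_000))  # ('sell', ~100000.0)
```

The interesting legal question is what happens when a rule like this one misfires – when the beneficiary believes a mechanically “correct” trade still violated the trustee’s fiduciary duties.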

Solum suggests that the AI purchase insurance (and even that insuring the AI might be less expensive than insuring a human being). Solum also suggests that there would be no reason to deter or punish AI if it administers the trust incorrectly. I am not certain I agree with that. I can think of a few reasons to deter AI a little bit.

Thursday, October 20, 2011

Could a Robot Cyberbully Your Kids?


If a robot were able to independently write emails or texts – or post on Facebook – what sort of liability would we assign to it for what it writes? That’s the question that occurred to me when I read about this year’s Loebner Prize in Artificial Intelligence. The goal of the Loebner, which was first given out in 1991, is to find a computer program whose chat responses pass for a human’s. The results thus far have been underwhelming.

But let’s assume that progress is made sometime in the not-too-distant future and AI is developed that can interact with human beings online as a human being (I don’t think it’s that far off). I can envision such technology being used to chaperone an online community for kids. What happens if, in the course of chaperoning, the AI writes and sends messages that would either violate a law or create liability if written by a human?

Eight states have laws specifically prohibiting cyberbullying, and a federal law has been proposed. Would the penalties associated with those laws be assigned to the AI’s creator? Its owner? Let’s say the AI is the chaperone of a town-specific children’s webpage, and both the AI and the webpage are owned and operated by the parents of a child in the town’s elementary school. If that AI interacts with kids on that webpage in a way that could be construed as cyberbullying, can the elementary school approach the AI’s owners as if their child had committed the cyberbullying? Can the school administrators expel the AI from the webpage?

I don’t normally like to have so many question marks in an entry, but the scenario above is outside our current laws. We don’t know how to govern it. New legal models will have to be created to govern AI that is programmed and owned by a person but that commits an act outside of that person’s control.

Tuesday, October 18, 2011

Maybe Battlestar Galactica Was On to Something – Robots At War


A brief synopsis of the plot of Battlestar Galactica: Man created robots to do all the things he didn’t want to do, including fighting wars. Then the robots rebelled and waged war against man.

We are part of the way there. Recently, a US drone killed Anwar al-Awlaki, a terrorist living in Yemen who was also a US citizen. The operation was covert, and the US government has not acknowledged a) that the drone attack occurred, or b) that the Obama administration ordered the attack. Assuming that both are true, this introduces some troubling legal questions about the federal government’s right to kill a US citizen without a trial.

It is also an example of a relatively new area of international law: What constitutes an act of war when the act did not involve a human being? As the Washington Post noted, American drone attacks have raised questions “about the legality of drone strikes when employed in places such as Pakistan, Yemen and Somalia, which are not at war with the United States.” Similar questions exist regarding potential cyber attacks caused by autonomous viruses on US utilities and government systems.

Even Ralph Nader has put down his “Vote Nader” sign and his car’s owner’s manual to advocate for an international treaty governing unmanned aerial vehicles. In his essay, he describes American drones as having questionable morality.

I think Robert E. Lee put it more succinctly: “It is well that war is so terrible, otherwise we should grow too fond of it.” International law has developed, in part, in response to the terribleness of war. The danger with creating robots to do the things we don’t want to do is that we forget about the costs associated with them. International law could be weakened if, in the future, it develops in response to a lessened sense of terrible.

Wednesday, October 12, 2011

Oscar the Trash-Loving Robot and His Groceries


Interesting article in Slate today about the need to rework our infrastructure in order to accommodate automated cars and other robots, including a trash collector named Oscar. Fred Swain, the author, suggests that AI will allow consumers to return to their original interactions with grocery stores: We provide a list of our groceries, and the AI grocer will collect and deliver them for us. This will be assisted by the further deployment of QR codes, which permit automated robots to process data from their surroundings, including jars of peanut butter and almond butter.

Assuming that this return to grocers is accomplished, we’ll have to sort through some tricky issues of liability. Let’s say a shopper with a nasty peanut allergy requests almond butter but receives peanut butter, which later causes an allergic reaction that requires hospitalization. Is the robot or its programmer liable? What about the company that labeled the peanut butter and almond butter jars with QR codes? The more intimately AI is involved in our lives – dietary protection v. trash collection – the more important these liability issues become.
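
Here is a minimal sketch of the kind of check that could sit between the QR scan and the shopping basket – the payload format and the allergy list are hypothetical assumptions of mine, not anything from the Slate article:

```python
# Hypothetical sketch: assume the robot's scanner has already decoded a
# jar's QR code into a "key=value;key=value" payload string. The schema
# is invented for illustration; a real deployment would use whatever
# format the labeling company defines – which is exactly where the
# liability question lands.

def parse_qr_payload(payload: str) -> dict:
    """Split a 'key=value;key=value' payload into a dict."""
    return dict(field.split("=", 1) for field in payload.split(";"))

def safe_for_shopper(payload: str, allergies: set) -> bool:
    """Refuse to basket any item whose declared allergens hit the list."""
    item = parse_qr_payload(payload)
    allergens = {a.strip() for a in item.get("contains", "").split(",")}
    return not (allergens & allergies)

shopper_allergies = {"peanuts"}
print(safe_for_shopper("sku=4411;name=Peanut Butter;contains=peanuts",
                       shopper_allergies))   # False – do not basket
print(safe_for_shopper("sku=4412;name=Almond Butter;contains=tree nuts",
                       shopper_allergies))   # True
```

If the hospitalized shopper’s jar passed a check like this one, the defect may lie in the QR data rather than the robot’s code – which is why the labeling company belongs on the list of potential defendants.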

Tuesday, October 11, 2011

An Average Thursday in the Near Future


You wake up to an alarm clock that has gone off a half hour earlier than it did the day before because it knows your Thursday schedule is different from your Wednesday schedule. You shower, and the shower changes its temperature and stream settings because it knows the difference between how you like your shower and how your roommate/husband/wife/girlfriend/boyfriend/partner/hetero-lifemate likes his/her shower. You pick up your coffee, which was ready more or less when you entered the kitchen because the alarm clock commiserated with the coffee machine.

On the drive to work, you watch SportsCenter while your GoogleCar automatically transports you. At work, you read a variety of TPS reports that have been researched and written by a computer program. Over lunch, you read a new novel, also written by a program. After work, you listen to an album of music that was also written by a program.

A couple questions about what happens during a not-so average Thursday in the near future:

• If your GoogleCar hits someone and breaks her leg, how much liability do you have for that injury?

• If you use portions of the report, novel, and music that were written by programs to produce successful advertising, have you infringed on a copyright? Whose?

This hypothetical Thursday and these questions are coming up a lot sooner than we think. Supposedly, Google will have a car available commercially sometime around 2016. There are already programs writing sports reports, most prominently generating recaps of every college basketball game last season (which is good for a Georgetown fan like me – at the rate the Big East is going, no major news outlet will cover the Hoyas’ games in another couple seasons).

While other people grapple with the problem of job-stealing robots, I’m more interested in the way we’ll have to re-imagine our laws to deal with non-humans doing human things. Will robots become the corporations of the 21st century, gaining constitutional rights and recognition as quasi-people? Will we create a techno-digi-common, where robot-made ideas are owned by everyone? Will robot owners automatically own the creations of their machines, much like dog owners are responsible for scooping up after their dogs? I think these are fascinating questions that will have an impact on many legal practices. Before our robot overlords get here, I want to be ahead of the curve.