Wednesday, October 26, 2011

Do Robots Have Rights and Responsibilities?

“The technical legal meaning of a ‘person’ is a subject of legal rights and duties.” – John Chipman Gray, The Nature and Sources of the Law, 1909

Despite predating World Wars I and II, the Titanic, and the Transformers, the legal definition of person above suggests that we should grant legal personhood to computers and robots, at least in the sense that we want to assign clearly defined obligations to robots that act independently. That might mean that GoogleCar gains legal personhood in narrowly defined circumstances, like hitting a pedestrian.

Lawrence B. Solum discusses legal personhood for inanimate objects in his 1992 article, Legal Personhood for Artificial Intelligences. He also takes a long look at an artificial intelligence serving as trustee, identifying three developments necessary for an AI to serve that function. The first development he mentions is that the AI must be programmed to buy and sell stock, a capability already in use throughout equities markets, so AI trustees may be closer than we think. Assuming that happens, what would a beneficiary do if he or she believed the AI had violated its fiduciary duties as trustee? What would a probate court do – unplug the AI and appoint a human trustee? Does the AI even get a chance to defend itself? If so, how?
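
Solum's first condition is easy to picture in code. Here is a minimal sketch, in Python, of a rule-based program that "buys and sells stock" by rebalancing a trust back toward a target allocation; the function, tickers, and threshold are hypothetical illustrations, not anything from Solum's article or any real trading system.

# Toy illustration of an automated trustee's trading rule: rebalance the
# trust's holdings toward a target allocation whenever they drift too far.
# All names and numbers are hypothetical.
def rebalance(holdings, prices, target_weights, tolerance=0.05):
    """Return (ticker, 'BUY'/'SELL', dollar_amount) orders that move the
    portfolio back toward its target allocation."""
    total = sum(holdings[t] * prices[t] for t in holdings)
    orders = []
    for ticker, target in target_weights.items():
        current = holdings.get(ticker, 0) * prices[ticker] / total
        drift = current - target
        if abs(drift) > tolerance:  # only trade when drift exceeds tolerance
            action = "SELL" if drift > 0 else "BUY"
            orders.append((ticker, action, round(abs(drift) * total, 2)))
    return orders

# Example: a trust drifted away from a 60/40 stock/bond split.
holdings = {"STOCK_FUND": 700, "BOND_FUND": 300}
prices = {"STOCK_FUND": 1.0, "BOND_FUND": 1.0}
print(rebalance(holdings, prices, {"STOCK_FUND": 0.6, "BOND_FUND": 0.4}))
# [('STOCK_FUND', 'SELL', 100.0), ('BOND_FUND', 'BUY', 100.0)]

The interesting legal questions start where a sketch like this ends: a real trustee must exercise judgment, and it is the possibility of bad judgment that raises the fiduciary-duty problems above.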

Solum suggests that the AI purchase insurance (and even that insuring an AI might be less expensive than insuring a human being). He also suggests that there would be no reason to deter or punish an AI that administers the trust incorrectly. I am not certain I agree with that; I can think of a few reasons we might want to deter an AI, at least a little.
