Thursday, October 20, 2011

Could a Robot Cyberbully Your Kids?


If a robot were able to independently write emails or texts – or post on Facebook – what sort of liability would we assign to it for what it writes? That’s the question that occurred to me when I read about this year’s Loebner Prize in Artificial Intelligence. The goal of the Loebner, first awarded in 1991, is to find a computer program whose chat responses pass for a human’s. The results so far have been underwhelming.

But let’s assume that progress comes sometime in the not-too-distant future and an AI is developed that can pass online as a human being (I don’t think that’s far off). I can envision such technology being used to chaperone an online community for kids. What happens if, in the course of chaperoning, the AI writes and sends messages that would either violate a law or create liability if written by a human?
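To make concrete why the owner wouldn’t be the author of any particular message, here is a minimal, purely hypothetical sketch in Python of such a chaperone. The bot, its templates, and its behavior are invented for illustration and stand in for whatever far more capable AI the scenario assumes.

    import random

    # Hypothetical sketch only: a toy "chaperone" whose replies are assembled
    # at runtime. The owner wrote the templates, but no individual message is
    # reviewed by a human before it is sent.

    TEMPLATES = [
        "Hey {name}, let's keep things friendly here.",
        "{name}, that message was removed. Try rephrasing it.",
        "Careful, {name} -- one more warning and you're muted.",
    ]

    def chaperone_reply(sender_name: str) -> str:
        """Address a randomly chosen canned warning to the sender."""
        return random.choice(TEMPLATES).format(name=sender_name)

    if __name__ == "__main__":
        print(chaperone_reply("Alex"))

Even in this trivial version, the specific words a child sees are selected at runtime rather than typed by the owner, and that gap only grows once the program moves from canned templates to generated language.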

Eight states have laws specifically prohibiting cyberbullying, and a federal law has been proposed. Would the penalties associated with those laws fall on the AI’s creator? Its owner? Let’s say the AI chaperones a town-specific children’s webpage, and both the AI and the webpage are owned and operated by the parents of a child in the town’s elementary school. If that AI interacts with kids on that webpage in a way that could be construed as cyberbullying, can the elementary school approach the AI’s owners as if their own child had committed the bullying? Can the school administrators expel the AI from the webpage?

I don’t normally like to have so many question marks in an entry, but the scenario above falls outside our current laws. We don’t know how to govern it. New legal models will have to be created for AI that is programmed and owned by a person but that commits acts outside that person’s control.
