
Why This Robot Ethicist Trusts Technology More Than Humans
MIT’s Kate Darling, who writes the rules of human-robot interaction, says an AI-enabled apocalypse should be the least of our concerns.

As a law student in Switzerland, Kate Darling pursued her interest in robots purely as a hobby. She had purchased a PLEO robot dinosaur designed to respond emotionally to human contact and act independently. “It really struck me that I responded to the cues the robot was giving me, even though I knew exactly how the toy worked,” Darling says. “I knew where all the motors were and how it worked, and why it would cry when you held it up by the tail, but I was just so compelled to comfort it and make it stop crying.”
Years later, while on assignment at the Massachusetts Institute of Technology, she began talking to roboticists who were developing artificial intelligence and realized they weren’t approaching development from the policy and social-science perspective that came so naturally to her. She discovered a small community of like-minded people concerned with the ethical implications of social robot development and began taking psychology classes at Harvard so she could start leading experiments to study human-robot interaction.
Darling is now a researcher at the MIT Media Lab, where she spends most of her time anticipating the tough questions lawmakers and engineers will need to answer as AI becomes a real part of everyday life. She spoke with Magenta about the right time to talk ethics, how the government is keeping up, and her biggest fear when it comes to AI.

When during tech development should ethics be discussed?
I used to think that the best thing to do was let people create the technology, and then we can figure out how to regulate it and integrate it into society in a responsible way. For example, if we had thought too much about the Internet in advance and put regulation in place, I don’t think we would have the Internet that we have today. We would be missing out on some really positive aspects. So sometimes it’s good that technology just leaps ahead.
But, through watching people develop technology, I’ve also come to the realization that standards get set early on. If robots are put in your home to help you with things, you might think, “What data does the robot collect? Is that data secure? What can it access? Where does it store it?”
Thinking about how to build standards into technology early on sets good precedents for later. Otherwise, you can wind up with technology that is not optimal and very difficult to change. I’d rather err on the side of thinking a little bit too much about stuff before we use it.
How can we bridge the gap between engineers and social scientists?
I come from the world of academia, and academia is siloed in its disciplines. You have the engineers working in one building and the social scientists over in another building, and they never talk to each other about anything. Instead of putting the entire burden on engineers to think about what they’re building, or the entire burden on social scientists to understand all the workings of the technology and its effects, I think we need to have more interdisciplinary conversations.
Who do you think is doing this well now?
The White House, before the regime change, put out two really good and sensible reports on some of the effects of artificial intelligence. They included policy recommendations and what we need to be mindful of. I thought that was really good. I know that the European Union is currently working on a similar report, trying to deal with some of the policy issues we will be facing in robotics in the near future. We could always be doing a better job, but we’re not doing that bad of a job, government-wise.
As for corporations, there’s so much pressure to create ethics boards in the face of artificial intelligence. Most of these people are freaked out about the prospect of an artificial superintelligence coming and killing us all.
What are you most worried about?
I’m much more worried about the intended, or even the unintended, effects of the people who are behind the development and distribution of the technology than I am about the technology itself having an agenda. We’re starting to integrate AI systems in ways that can impact people’s lives.
AI systems learn using large sets of data, and we know from a lot of data scientists that this can lead to very biased outcomes that disadvantage certain people. For example, people are already playing around with predictive policing for law enforcement, trying to predict what kinds of people might re-offend after they get out of prison. These systems are flawed, and even to the extent they’re working the way they’re supposed to, they are still based on historical data that can reinforce biases and make them self-perpetuating moving forward.
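To make that feedback loop concrete, here is a minimal, hypothetical sketch, not an example Darling gives, of how a model trained on skewed historical records can keep its skew alive: the places it flags get more attention, more attention produces more records, and the next round of predictions learns from those records. The neighborhoods, rates, and allocation rule are all invented for illustration.

```python
import random

random.seed(0)

# Two hypothetical neighborhoods with the SAME true underlying offense rate.
true_rate = {"A": 0.10, "B": 0.10}

# Historical records are skewed: A was patrolled more in the past,
# so more of its offenses were ever written down.
recorded = {"A": 200, "B": 100}

for year in range(5):
    total = sum(recorded.values())
    # "Model": allocate patrol attention in proportion to past recorded incidents.
    patrols = {n: recorded[n] / total for n in recorded}

    for n in recorded:
        # Offenses occur at the same true rate in both neighborhoods...
        offenses = sum(random.random() < true_rate[n] for _ in range(1000))
        # ...but only the ones that get attention become new records.
        recorded[n] += int(offenses * patrols[n])

    print(year, {n: round(share, 2) for n, share in patrols.items()})

# The 2:1 skew in attention persists year after year even though the true
# rates are identical -- the historical bias feeds the model, and the
# model's output feeds the history.
```

Nothing in this loop corrects the imbalance, because the only data the model ever sees is the data its own allocations produced.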
Some of the things people are developing right now have the potential to change the way that nature works. If they get into the hands of someone who is irresponsible with them, those technologies could destroy the world.
How do you think we can prevent these potentially damaging effects of AI?
I’d prefer we talk about how to integrate things in a responsible way. Ideally, the conversation about the Internet would have been, “Look, we need to put certain regulations in place, but we also want to recognize the positive effects that having this network of technology could have, and we want to make that possible.”
I think more conversation is better, and as early as possible, particularly as we get more into technology that can really fundamentally change nature and society as we know it.