You’ve probably seen the latest Boston Dynamics video to take the internet by storm. It shows one of their recent quadruped creations, the SpotMini, opening a door while being accosted by a company employee.

[Video: Boston Dynamics’ SpotMini]

Boston Dynamics’ videos are notorious for eliciting both excitement and fear across the internet. Nicholas King’s recently released parody of the Planet Earth documentary shows herds of SpotMinis taking over the earth. And the latest season of Netflix’s Black Mirror features a murderous, highly autonomous SpotMini look-alike. So should we be concerned about killer robots taking over?

My take: I don’t expect the robot uprising anytime soon, but there’s plenty for us to think about here—as a society, a culture, and humanity.

We fear autonomous killer robots because (a) they’re autonomous, (b) they’re driven, for some reason, to kill, and (c) they’re armed. Looking at each of these in turn lets us isolate the true areas of concern and understand how we can work to head off our fears.

Autonomy

Autonomy is a layered, perhaps even nuanced concept. At the base level, autonomy can refer simply to the ability to get from A to B without human intervention.

When I see videos like the SpotMini demonstration, I immediately think back to my interview with Aaron Ames, professor of Mechanical & Civil Engineering at Caltech, which focused on intelligent robots. Our discussion inevitably turned to the then-latest Boston Dynamics video which, at the time, featured the Atlas robot performing a backflip. Ames’ take on the level of autonomy involved essentially boiled down to: not so much.

“It’s a preplanned behavior, so this robot has no knowledge of its environment, in the sense that it’s not observing where those blocks are and in real time adjusting its behavior and learning how to do this behavior,” said Ames. Rather, “they put those obstacles in the memory of the computer, they preplan those behaviors, [and] they do a bunch of experiments until they get the right behavior.”

In other words, there’s still lots of work to do before we see autonomously walking robots able to deftly navigate the real world. According to Ames, not only are we not there yet, but researchers don’t even agree on the right basic approach to get us there. Some, like Pieter Abbeel, advocate an approach based on end-to-end deep learning, while Ames suggests a more integrative approach.
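To make Ames’ point concrete, here’s a toy sketch of the difference between replaying a preplanned behavior (open-loop control) and reacting to live observations (closed-loop control). Everything in it, from the one-dimensional corridor to the gate to the function names, is invented for illustration and has nothing to do with Boston Dynamics’ actual software.

```python
# Toy 1-D corridor: a robot starts at position 0 and must reach position 10.
# A gate sits at position 5 and opens at a time the robot doesn't control.
# Purely illustrative; no relation to any real robot stack.

def preplanned_policy(plan):
    """Open-loop: ignore observations and replay a fixed action
    sequence tuned to the conditions seen during rehearsal."""
    actions = iter(plan)
    return lambda obs: next(actions, 0)

def reactive_policy(obs):
    """Closed-loop: choose each action from what the robot senses now."""
    pos, gate_closed = obs
    if gate_closed and pos + 1 == 5:
        return 0  # the gate is shut right in front of us: wait
    return 1      # otherwise keep walking toward the goal

def simulate(policy, gate_opens_at, steps=20):
    pos = 0
    for t in range(steps):
        gate_closed = t < gate_opens_at
        action = policy((pos, gate_closed))
        if gate_closed and pos < 5 and pos + action >= 5:
            return f"crashed into the gate at t={t}"
        pos += action
        if pos >= 10:
            return f"reached the goal at t={t}"
    return "timed out"

# The fixed plan below was 'rehearsed' with the gate opening at t=3,
# but in the live run it opens at t=8.
print(simulate(preplanned_policy([1] * 10), gate_opens_at=8))  # crashes
print(simulate(reactive_policy, gate_opens_at=8))              # waits, then succeeds
```

The preplanned run fails the moment the world deviates from rehearsal, while the reactive run senses the change and waits it out. Real legged robots are vastly more complicated, but the distinction Ames is drawing is essentially this one.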

So, for the time being, you can probably evade and outrun these robots, especially on varied terrain. But they’re getting there.

Agency

Still, autonomy in the sense of locomotion doesn’t quite get at what’s scary about the “autonomous killer robots” scenario. That’s really about agency: the idea that the robot can have a beef with a human in the first place.

I can think of a few scenarios in which a robot would have it in for a human:

The robot is acting of its own volition and has decided that a particular offending individual needs to go.

  • This implies some degree of general intelligence, goal-directedness, and intrinsic motivation. We’re very far from achieving this type of artificial intelligence, and don’t even really know how to define it. My interview with Greg Brockman explores this topic in detail. It seems quite premature to worry about this.

The robot kills a person as an unintended consequence of some human instruction that it’s following.

  • This strikes me as more worthy of concern, especially given the general difficulty we humans have with anticipating unintended consequences. It’s also what makes AI safety research so important. Check out my interview with Greg’s colleague Dario Amodei to hear about OpenAI’s work in this space.

The robot kills a person as an intended consequence of some human tasking.

  • This is really the most likely near-term scenario, and thus the one we should be most concerned about. Essentially, the robot is a weapon, and its autonomy acts both as a potential multiplier on the amount of damage it can do and, critically, as a way to decouple any human from the ultimate taking of life.

I think it stands to reason that, for the foreseeable future, humans, as opposed to robots, are the greater concern.

Armed

Accidents aside, what makes a killer robot a killer robot comes down to the fact that it’s armed in the first place.

This seems like an inevitability that we’re quickly racing toward. Military robots already exist, and more are being researched and developed. According to Statista, global spending on military robotics was $6.9 billion in 2015 and is expected to grow to $15 billion by 2025.

If we agree that the real risk of autonomous killer robots lies in their use as weapons, and not in their becoming self-aware, it seems natural that our best defense is to stop arming them.

It turns out that roboticists, ethicists, and AI researchers are already calling for a ban on weaponized autonomous robots. A number of organizations have formed around or taken up this cause, including Human Rights Watch, the International Committee for Robot Arms Control, Article 36, and the Future of Life Institute, backed by Stephen Hawking, Elon Musk, and others.

So, back to our original question: Should we fear autonomous killer robots? To be honest, probably not. If you’re reading this newsletter, the chances that you’ll perish at the hands of an autonomous killer robot are really, really, really small. But that doesn’t mean we shouldn’t be thinking about them and the many moral issues they raise.

What do you think? I’m curious about your thoughts on the topic. Reply and let me know your take.

Sign up for our Newsletter to receive this weekly in your inbox.