Last year the New York City Police Department (NYPD) began leasing a caninelike robot—a Spot model from Boston Dynamics that the department nicknamed Digidog. Officers deployed the robot in just a few cases, including a hostage situation in the Bronx and an incident at a public housing building in Manhattan. As word spread (along with photographs and videos), a backlash from the public—and eventually elected officials—quickly gained momentum. Some objected to the robot’s expense. Others worried that its use threatened civil liberties. Many simply found it creepy.
“Fear is a common response to new technology, which is overcome when people begin to understand it better and how they can benefit from it,” said a Boston Dynamics spokesperson in a statement to Scientific American. “We find that once people interact with Spot, any fear turns into fascination and eventually appreciation for what it can accomplish.”
But well before New Yorkers’ fears could dissipate, the NYPD abruptly terminated its lease and quit using the robot last month. Other U.S. police departments have been testing their own Spot models, however. “Spot has been particularly resourceful in tackling dull, dirty and dangerous tasks,” the Boston Dynamics spokesperson said. “Public safety initiatives, including police departments, often face dangerous work, such as inspecting a bomb, rummaging through remnants of an explosion or fire, or even deescalating a potentially dangerous situation.” The spokesperson also said that the NYPD’s Spot does not use artificial intelligence and that it is remotely controlled by a human operator. Further, the spokesperson added, the robot “is not designed or intended to replace a police officer” but rather to “reduce human risks both to police officers and civilians and increase safety in hazardous environments.”
Other complex social and historical factors were also in play in the case of Digidog, however. “This is just not a very good time for [the NYPD] to have tried this,” says David J. Gunkel, a professor of communication at Northern Illinois University. He notes the department made the move “at a time that we are, as a public, beginning to question what police are doing, how they’re being funded and what those monies are being used for.” (The NYPD’s Office of the Deputy Commissioner, Public Information, did not respond to requests for comment.)
Timing was not the only thing working against Digidog: many humans have a profound, deep-seated negative response to robots, especially some very specific kinds. Scientific American spoke with Gunkel about why people accept some machines while rejecting others—and whether the public can ever fully accept the idea of robotic cops. [An edited transcript of the interview follows.]
What influences how we humans feel about robots? People love the cuddly robotic seal PARO, for example, while having a strong negative reaction to Digidog.
There’s a combination of factors that come into play: the design of the robot, the contexts in which it’s deployed and user contributions. The PARO robot is designed to engage humans in more social activities. Boston Dynamics robots are not made to look that way. They don’t have a face. They’re not furry and cuddly. So design can have an effect on how people respond.
But, also, the context of use is really important. The same Boston Dynamics robots that you saw causing trouble with the New York [City] Police Department, just [a few] years earlier, got a great deal of sympathy from humans. Boston Dynamics engineers were shown kicking the robot. People saw these videos online, and there was this outpouring of emotion for “poor Spot.” That robot, because of the context in which it was used, elicited an emotional response that was very different from the response elicited by the police’s Digidog robot.
And then, finally, there’s what users do with these things. You can design the best robot in the world, but if users don’t use it in the way that you’ve anticipated, that robot could become something very different.
Is there something about robots in particular that makes humans nervous?
The really important thing about robots is: they move. And movement is something that elicits, in us human beings, a lot of expectations about what the object is. Already back in the 1940s, [psychologists] Fritz Heider and Marianne Simmel did some studies with very simple animated characters on a piece of film. When they showed this to human test subjects, human beings imparted personality to [a] triangle versus [a] square. And the difference wasn’t that the shapes actually had personalities. The difference was the way they moved. Because movement conveys a whole lot of information about social positioning and expectations, the movement of the robots in physical space really carries a great deal of significance for us.
Returning to the public backlash against the NYPD, why did people feel so strongly about this specific robot?
It’s again a number of factors. One is the very design. The robot, if you’ve seen pictures of it, is a rather imposing presence. It’s a little smaller than the robots you see in science fiction. But the way it navigates through space gives it this very imposing profile that can be seen as creepy by a lot of human observers.
There’s also the context of use. The NYPD used this robot, very famously now, [at] a public housing project. That use of the robot in that place, I think, was a really poor choice on the part of the NYPD—because already you’re talking about police officers entering a public housing facility, now with this massive technological object, and that [exacerbates the] very big power imbalance that’s already there.
Third, there’s just timing. This is all taking place in the wake of increased public scrutiny of policing and police practices—especially the militarization of the police—and [how] the police have responded to minority populations in ways that are very different from the way that they have responded to populations of white individuals.
Some people used science fiction to critique Digidog, referencing an episode of the television show Black Mirror in which robotic dogs hunted humans. How do stories shape our reaction to technology?
The science fiction question is really crucial. We get the word robot from the Czech word robota, which comes to us in a stage play from 1920 by Karel Čapek. So our very idea of “robot” is absolutely connected to, and you can’t really separate it from, science fiction—because that’s where it began.
Also, what the public knows about robots is already predicted in science fiction because we see it in science fiction before we actually see it in social reality. This is called “science-fiction prototyping.” Roboticists get some mileage out of it because they can often use science fiction as a way to explain what it is they’re building and why. But they also fight [this prototyping] because the science-fiction stories create expectations that inform how people will respond to these things before they ever become social reality. So it’s a double-edged sword: it offers opportunities for explanation, but it also inhibits fully understanding what the realities are.
Could the public eventually accept the use of robots in policing?
I think this is an evolving scenario. And the decision-making, on the part of police departments, about how these things are integrated or not is going to be crucial. I think you would have seen a very different response had the Digidog robot been used to rescue someone from a fire, as opposed to being brought to a housing project in support of police action. I think you would have seen a very different outcome if it had been used as a bomb-disposal-unit robot. So I think a lot is going to depend not only on the design of the robot but also on the timing of use, the context of use and the positioning of this device with regards to how police interact with their communities—and who they serve.