It’s not the robot dogs we should fear but the all-too-human police deploying them

I’m not afraid of robots. I’m afraid of the police forces so eager to deploy them. I’m afraid of the assumptions that will be baked into the algorithms controlling those robots, and of the way bias will slide from human law enforcers to their automated assistants.

When I read about Massachusetts State Police testing Boston Dynamics’ four-legged robot Spot, my mind instantly jumped to the aggressive AI-powered canines of Black Mirror’s fourth-season episode ‘Metalhead’. The artificial antagonists in Charlie Brooker’s story were themselves inspired by another Boston Dynamics robot, BigDog. In ‘Metalhead’, a woman is pursued by the robot dogs through the ruins of society, cursed by embedded trackers never to be free of their attentions.

Granted, it’s a huge leap from a limited trial of a robot dog by one police department to a vision of dystopian horror, but I have good reasons for making that connection. Police forces across the world, and particularly in the United States, have become increasingly militarised in their use of technology. In one memorable case from 2016, the Dallas Police Department turned a defensive bomb disposal robot into a weapon, using it to place C4 explosives and kill a sniper.

In the case of Spot, Boston Dynamics leased the robots to the Massachusetts State Police with a specific restriction: they were “not used to physically harm or intimidate people.” But while physical harm is easy to measure, intimidation is far harder to assess. Video of Spot robots opening doors and entering buildings for the police has been shared online. How can either Boston Dynamics or the police department know that no one was intimidated by an encounter with the faceless robot dogs?

As the American Civil Liberties Union, whose freedom of information requests uncovered the police’s Spot pilot project, pointed out, dogs were used to intimidate protesters during the Civil Rights Movement. The introduction of robot dogs, with the ability to track, recognise and monitor individuals, could be an even more frightening extension of that policing tactic.

Researchers at the Royal United Services Institute reported earlier this year that machine learning algorithms already used by police for predictive crime mapping and generating individual risk assessments replicated existing racial bias. They concluded that “the effects of a biased sample could be amplified by algorithmic predictions via a feedback loop, whereby future policing is predicted, not future crime.”

Now imagine those algorithms applied to the behaviour of policing robots, and consider the outcomes if those robots were, as seems likely, given a degree of autonomy over time. Yes, police forces currently use these robots only for reconnaissance or in hazardous situations, but time and again we’ve seen mission creep when new technology enters policing. Just look at how police have justified ever more applications of surveillance drones and facial recognition.

My fear is not of the imminent arrival of a malevolent artificial intelligence like Terminator’s Skynet, but of a new metallic long arm of the law that replicates racist policing tactics with the added advantage of applying them from a distance. If robot dogs are allowed to creep slowly into the police’s arsenal, it will be minority communities that are targeted and intimidated by them first and most often. It’s important to ask questions now, before robotically assisted policing becomes the norm.
