Axon’s Taser Drone Plans Prompt AI Ethics Board Resignations
Axon’s board of directors voted unanimously to approve the plan, a decision that prompted the majority of the company’s AI ethics board to resign. While the ethics board said it supported the idea of using drones equipped with Tasers and cameras to “disrupt” shooters before they could harm students, its members also warned of the potential for abuse.
Axon later withdrew the proposal after an outcry from the public and from members of the AI ethics board. The company had proposed arming drones with Tasers so that officers could remotely incapacitate an active shooter; the board said it did not believe the technology could be safely deployed in schools.
Axon’s plan to test its Taser drones in schools also drew criticism from outside experts. Some said the proposal distracted from meaningful solutions to gun violence; others called it simply a “very, very bad idea.”
“We need to focus on what works,” said Superintendent Michael Hinojosa at the time. “And if we’re going to address violence in our schools, we all know there are much better ways.” His district had previously raised concerns that weaponized drones could increase the use of force by police officers, particularly in communities of color. An evaluation of a pilot program involving drone technology was expected to be released later this year.
The real problem with the Taser drone is not that it doesn’t work; police already have plenty of tools for stopping someone from running away. The problem is that officers must be trained to operate the drone correctly, and then trained again on when and how to deploy it. That takes time and money, and a department prepared to spend both might as well equip itself with the best tool available.
In the past few years, many tech companies have formed AI ethics boards, groups meant to provide guidance on the ethical issues their AI products raise. But these boards often double as PR exercises, signaling how concerned a company is about the potential risks of AI. And when the company convening the ethics board is the same company developing the technology, the results are rarely helpful.

Google, for example, announced an external AI ethics council in 2019, months after publishing its AI Principles, which included commitments to be “socially beneficial” and “accountable to people.” The council was dissolved barely a week later amid protests over its membership. “It’s a bit like having a committee to decide whether you should go to jail,” says Cortnie Abercrombie, founder of AI Truth, a nonprofit that researches best practices for corporate AI ethics. “You’re going to get a lot of people who say, ‘Well, I don’t agree with what you’re doing, but I’m willing to give you advice because I want to keep my job.’”
Axon had previously listened to its board members’ concerns, notably on facial recognition, according to several former employees. After the drone announcement, however, the company stopped taking the board’s advice. “They’re just going to go ahead and do what they want,” says Wael AbdAlmageed, one of the board members who resigned. “It’s really disappointing.”