Another Police Use of AI


The police technology company Axon has gotten a lot of attention for its “Draft One” product, which uses body camera recordings to generate a first draft of a police report for officers after an incident. But the company has another AI product called “Policy Chat” that I haven’t seen much discussion of.

The product uses large language models (LLMs), combined with an AI technique called Retrieval-Augmented Generation, or RAG, to provide officers in the field with answers about their department’s official procedures and policies. Examples given by Axon of questions it might answer include “What is the protocol for handling domestic violence calls?” and “What is the department’s off-duty employment policy?” In theory a department could have a phone bank of lawyers on call for officers to ring up with questions. This product can best be thought of as an attempt to provide a scaled-up, automated form of that service.
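For readers unfamiliar with the technique, here is a minimal sketch of the general RAG pattern: retrieve the policy passages most relevant to a question, then have an LLM answer from them. Everything in it (the function names, the sample policy snippets) is a hypothetical illustration, not Axon’s actual implementation, which is not public.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# All names here (embed, generate, POLICY_CHUNKS) are hypothetical
# stand-ins, not Axon's implementation, which is not public.
import math
from typing import Callable

POLICY_CHUNKS = [
    "Domestic violence calls: officers shall separate the parties and ...",
    "Off-duty employment: officers must obtain written approval from ...",
    "Pursuit policy: supervisors shall terminate a pursuit when ...",
]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def answer(question: str,
           embed: Callable[[str], list[float]],   # text -> embedding vector
           generate: Callable[[str], str],        # prompt -> LLM completion
           top_k: int = 2) -> str:
    """Retrieve the policy chunks most similar to the question, then ask
    an LLM to answer using only those chunks as context."""
    q_vec = embed(question)
    ranked = sorted(POLICY_CHUNKS,
                    key=lambda chunk: cosine(q_vec, embed(chunk)),
                    reverse=True)
    context = "\n\n".join(ranked[:top_k])
    prompt = ("Answer the officer's question using only these policy "
              f"excerpts:\n\n{context}\n\nQuestion: {question}")
    # Retrieval narrows what the model sees, but the model can still
    # misread or hallucinate when it writes the final answer.
    return generate(prompt)
```

The appeal is that the model is, in theory, only working from the department’s own text. But as the last comment notes, retrieval narrows the inputs; it does not guarantee the answer is faithful to them.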

In February I attended a workshop on AI and law enforcement, and a command officer in the room expressed a wish to use LLMs to help her search and analyze her department’s apparently voluminous policy library. The problem was that her department’s policies were not open to the public, so she could not upload them to the likes of Anthropic, OpenAI, or Google. Perhaps Axon heard similar laments, and is trying to respond to that demand.

One question is why any police department would have policies that are not open to the public it serves. But aside from that, the product raises other questions.

Corporate access to police material
Accepting for the sake of argument that there are some internal documents it’s legitimate to keep from the public, one question is whether Axon has access to internal police documents that the rest of the world does not. In theory an organization could keep its documents private and carry out its RAG processing purely locally; there’s a lot of demand for such processing and it has become increasingly doable. But in Axon’s case, use of Policy Chat requires uploading department documents to the company. Axon’s access to non-public documents would raise a lot of issues, such as whether the company might leverage that access to gain advantages over its competitors, vendors, employees, a unionization effort, or others. Why should Axon be treated differently from Anthropic, OpenAI, or Google? Do contracts between departments and Axon have provisions to address those issues? (Such questions, of course, pertain to many police AI products — more on that in an upcoming post.)
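To make the “purely locally” point concrete, here is a rough sketch of how the retrieval half of such a system could run entirely on a department’s own hardware, with documents never uploaded to a vendor. The library and model named here are illustrative open-source choices on my part, not anything Axon offers.

```python
# Sketch: the retrieval step of RAG running entirely on local hardware,
# so policy documents never leave the department's own systems.
# The library and model are illustrative choices, not Axon's product.
import numpy as np
from sentence_transformers import SentenceTransformer  # local model, no API calls

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open model, runs on CPU

policy_chunks = [
    "Domestic violence calls: officers shall separate the parties and ...",
    "Off-duty employment: officers must obtain written approval from ...",
]
chunk_vecs = model.encode(policy_chunks, normalize_embeddings=True)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the most relevant policy chunks without any network upload."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec            # cosine similarity (vectors normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return [policy_chunks[i] for i in best]

# A locally hosted LLM (served with something like llama.cpp or vLLM) could
# then draft the answer from the retrieved chunks, keeping the whole
# pipeline inside the department's network.
```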

Effect on officer training
We also have to wonder how this product will affect officers. One former police command staff officer I spoke with expressed two concerns. First, that “the officers are not going to do their training; they’ll get to a mindset of, ‘I can just ask the AI in the field.’” And second, that the AI will distract officers when they’re in the field. “We don’t want them half-listening to their policy chat while they’re trying to talk to a victim,” he said, and “the last thing I want an officer worried about when they’re going into a potentially hot call is what the policy is.” Basically, he said, it’s a tool that isn’t going to be used — and if it is used, then “you’re failing in the training department, big time.”

It is true that police policy manuals can be voluminous — from hundreds of pages to over two thousand pages for the NYPD. But, the former officer told me of his department, “if you boil down the operational ones, what patrol needs is like 35 policies. Investigations is like 20.” Officers shouldn’t need an AI to know those policies inside and out.

The inevitable errors
Another obvious question is how, in the cases where this kind of product is used, police departments and vendors like Axon plan to deal with the inevitable hallucinations, errors, and biases that all of today’s language models produce. If an AI error leads an officer to violate a department policy — perhaps a significant one — who is responsible? Axon boasts that its product can “provide instant answers, confident decision” — but at the same time warns officers to “always verify with the official policy source.” Axon is talking out of both sides of its mouth here — it wants to brag about how much time the AI will save, but also to throw all responsibility for the shortcut onto the shoulders of individual officers. That seems like a trap. (More on that in a followup post.)

At the end of the day I don’t think this product raises the same level of concern as AI-generated police reports, but it is worth noting as another example of the premature insertion of AI into our criminal justice system — a use that bears close scrutiny. Like so many companies selling to law enforcement and other markets, Axon is eager to incorporate and pitch the AI shortcuts that technology can provide. But there are times when one should not take shortcuts.


