
How To Deploy AI To Improve Policing in New York

Alex Chohlas-Wood

February 19, 2026

Powerful new technology has the potential to improve the City’s justice system, but to get it right, the NYPD has to address serious risks.

The New York City Police Department has long stood at the forefront of developments in policing and public safety. In the early 1900s, for example, it led the country in the collection and use of fingerprints in criminal investigations, when, for a time, its fingerprint archive was larger than the federal government’s. In the decades that followed, the Department experimented with tools such as digital mapping and CompStat, pioneering an empirical approach to crime-fighting and resource allocation. Many argue that these innovations have improved life for New Yorkers; they have certainly influenced the practice of policing across the country.

Yet here we are in the heyday of AI, with improvements accelerating every month and new tools transforming software engineering, health care, education and beyond. In this new world, it’s unclear whether the Department can still claim a reputation as a national leader in policing innovation. As of March 2025, the Department’s official stance was that it had not permanently adopted any new AI systems since rolling out an algorithm, Patternizr, that I helped develop as the Department’s director of analytics 10 years ago.

A natural instinct for the Department might be to break out of this innovation funk by seeking out AI tools that accelerate crime-fighting objectives. But we’re fortunate that crime rates for many offenses are plunging, both here in New York and across the country. At the same time, a concerted push to test crime-fighting AI carries serious risks — best represented by Immigration and Customs Enforcement’s (ICE’s) recent unencumbered embrace of technologies like phone geofencing and phone-hacking spyware, which has Americans on edge about law enforcement use of AI-driven surveillance. More broadly, a majority of Americans are wary about the potential societal risks of AI.

At the same time, while crime is falling, many of the City’s key reform efforts have stalled. Despite widespread public support for diverting some 911 calls to nonpolice responders, New York City’s call-diversion program — aka B-HEARD — saw a substantial portion of eligible calls fail to receive program services. The federal monitor established by Floyd v. City of New York in the wake of the Department’s disastrous stop-and-frisk policies has been in place for 13 years, with recent reports from the monitor noting the NYPD’s continued noncompliance with important court orders. And the City looks certain to miss the legally mandated deadline to close Rikers next year, with horrid conditions on the island continuing to decimate the credibility of our justice system.

This constellation of factors presents an unusual opportunity for Mayor Zohran Mamdani and Police Commissioner Jessica Tisch: accelerate reform efforts through the adoption of AI in contexts where this technology promises to improve outcomes. This focus would help the NYPD build the policymaking muscle it needs to govern the responsible use of AI, preparing it for potential deployments of ethical crime-fighting tech in the future. And in an era of extreme budgetary pressure, these technologies present a rare chance for the mayor and the Department to advance key justice priorities while keeping costs down. Of course, many of the reasons reform efforts have stalled — inadequate staffing, tricky political negotiations — are ones technology cannot fix. But two examples highlight where AI has the potential to make a real difference: first, the AI-supported expansion of 911 diversion; second, AI-assisted review of police conduct in body camera footage.

As my colleagues at the Policing Project noted in a couple of recent Vital City articles, New York City has struggled to divert nonviolent 911 calls, like those involving a mental health crisis, away from police response. The City’s B-HEARD pilot, launched in 2021, has diverted thousands of such calls, with many of those served connected to community-based care. But those thousands of calls represent a mere 0.01% of the city’s total call volume. This places New York City behind national leaders like Minneapolis, which has diverted 9% of its calls and recently set an ambitious target of 20%.

The largest constraint on B-HEARD is staffing, where technology is unlikely to help much. But if the City is able to hire more alternative responders, the natural next hurdle will be devising validated criteria that identify more nonviolent calls suitable for alternative response, and then training the City’s call takers and dispatchers to act on these new criteria.

AI could accelerate this process in a number of ways. First, AI could review historical calls for missed opportunities for diversion, providing empirical evidence for new call-diversion criteria. Once these criteria are drafted, AI could simulate and stress-test their use: first on historical calls, and then in parallel to actual call taking, to evaluate whether they would work in practice. Once the criteria are finalized, AI could accelerate the training of call takers and dispatchers by simulating callers in a zero-risk practice environment, helping staff get repeated exposure to tricky situations so they’d be ready to act in high-pressure, real-world scenarios. Finally, once new diversion criteria are in place, AI could scale up quality assurance monitoring, finding opportunities to improve staff performance far more cheaply than hiring enough employees to do that review by hand at the same scale.
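To make the first of these steps concrete, here is a minimal sketch in Python of screening historical call transcripts against draft diversion criteria. Everything in it is an assumption for illustration: the keyword lists stand in for a properly validated classifier, the sample call records are invented rather than drawn from NYPD data, and a production system would rely on clinician-designed criteria, a tested model and real dispatch records.

```python
# Simplified screen of historical 911 call transcripts against draft
# diversion criteria. Keyword matching is a stand-in for a validated
# classifier; the criteria and sample calls are illustrative, not NYPD data.
from dataclasses import dataclass


@dataclass
class Call:
    call_id: str
    transcript: str
    dispatched_to_police: bool


# Hypothetical draft criteria: flag calls that mention a mental health
# crisis and contain no indication of weapons or violence.
CRISIS_TERMS = {"mental health", "crisis", "suicidal", "off his meds", "off her meds"}
RISK_TERMS = {"gun", "knife", "weapon", "violent", "attacked", "threatening"}


def divertible(call: Call) -> bool:
    """Does this call meet the draft criteria for an alternative response?"""
    text = call.transcript.lower()
    has_crisis = any(term in text for term in CRISIS_TERMS)
    has_risk = any(term in text for term in RISK_TERMS)
    return has_crisis and not has_risk


def missed_diversions(calls: list[Call]) -> list[Call]:
    """Historical calls that met the draft criteria but still got a police response."""
    return [c for c in calls if c.dispatched_to_police and divertible(c)]


if __name__ == "__main__":
    sample = [
        Call("001", "My brother is having a mental health crisis and needs someone to talk to.", True),
        Call("002", "Someone is threatening people with a knife outside the deli.", True),
        Call("003", "My neighbor seems suicidal and is asking for help.", True),
    ]
    flagged = missed_diversions(sample)
    print(f"{len(flagged)} of {len(sample)} historical calls look potentially divertible:")
    for c in flagged:
        print(f"  call {c.call_id}: {c.transcript[:60]}")
```

Even this toy version makes the workflow visible: run draft criteria over history, count the calls that plausibly could have been diverted, and hand those examples to the people writing the real criteria.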

A different context where AI could advance reform efforts is in tapping the Department’s enormous body-worn camera footage archive. In theory, this repository is a tremendous public asset, providing an opportunity to detect and correct problematic or risky officer behavior before tragic incidents occur, as well as to identify models of excellent policing worth holding up as exemplary. This vision for body cameras is what drove support for their rollout across the country; indeed, one of the key conditions of the Floyd ruling in 2013 was that the NYPD pilot the use of body-worn cameras.

But the NYPD has neither the personnel capacity nor the budget to review most body camera footage using traditional approaches. As of 2019, about two years’ worth of video was recorded and stored by the Department every week, or roughly 17,500 hours of footage, versus the 40 or so hours a single full-time analyst could watch over the same period. It’s impractical for the Department to hire humans to review footage at this scale; thorough review would require a staff of hundreds of new analysts watching footage full-time.

A well-trained AI tool could review all footage and isolate short clips of officer behavior — both good and bad — for supervisor review. Such a tool would likely cost far less than hiring hundreds of analysts. A handful of private-sector vendors, including Truleo and Polis, have built technology that can support this goal. Recognizing this promise, the NYPD announced a pilot with Truleo in the fall of 2023. But since that announcement, there has been neither public communication about the pilot’s results nor any indication of whether the Department intends to proceed with a permanent contract. This silence is a lost opportunity to build credibility around the deployment of AI in service of reform at the NYPD.
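As a rough illustration of how such a tool might work, the sketch below assumes the footage has already been transcribed with timestamps (a capability common to commercial speech-to-text services) and scans the transcript for cue phrases, emitting padded clip boundaries for a supervisor to review. The cue phrases, padding and labels are all hypothetical; they are not how Truleo, Polis or the NYPD actually operate.

```python
# Flag candidate clips for supervisor review from a timestamped transcript
# of body-worn camera audio. The phrase lists below are illustrative; a real
# system would use a model trained and validated for this purpose.
from dataclasses import dataclass


@dataclass
class Segment:
    start: float  # seconds from start of video
    end: float
    text: str


# Hypothetical cue phrases. "Positive" cues suggest de-escalation worth
# highlighting; "negative" cues suggest language a supervisor should review.
POSITIVE_CUES = ("i'm here to help", "take your time", "let's talk this through")
NEGATIVE_CUES = ("shut up", "i don't care", "stop resisting")


def flag_clips(segments: list[Segment], padding: float = 15.0):
    """Return (label, clip_start, clip_end, text) tuples for flagged segments,
    padded so the supervisor sees surrounding context."""
    clips = []
    for seg in segments:
        text = seg.text.lower()
        if any(cue in text for cue in NEGATIVE_CUES):
            label = "review: possible unprofessional language"
        elif any(cue in text for cue in POSITIVE_CUES):
            label = "highlight: possible de-escalation"
        else:
            continue
        clips.append((label, max(0.0, seg.start - padding), seg.end + padding, seg.text))
    return clips


if __name__ == "__main__":
    transcript = [
        Segment(12.0, 16.5, "Sir, I'm here to help, take your time."),
        Segment(40.2, 42.0, "Step back onto the sidewalk please."),
        Segment(95.8, 98.1, "Shut up and put your hands behind your back."),
    ]
    for label, start, end, text in flag_clips(transcript):
        print(f"[{start:6.1f}s - {end:6.1f}s] {label}: {text}")
```

The design point worth noticing is that the output is a short, labeled clip routed to a human supervisor, not an automated disciplinary decision.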

It’s worth noting that even reform-oriented applications of AI present tricky policymaking challenges for the Department and the City. The Department will need to allay concerns that algorithms used to review body-worn camera (BWC) footage and 911 calls are serving as covert surveillance tools, or that sensitive BWC footage or 911 call audio is being used to train commercial AI models. The Department will need to convince its staff that these algorithms won’t automatically impose discipline but will instead flag short clips of footage or audio for review, deferring any decision-making to human supervisors. Beyond surveillance risks, those who have dabbled in generative AI may have been alarmed by its new modes of failure: hallucinations (incorrect or fabricated but convincing claims), sycophancy (AI saying whatever it thinks you want to hear) and deepfakes (synthetic images or video indistinguishable from authentic media). And automated systems can perpetuate biases found in their training data.

There are plenty of good ideas on how to reduce these risks, both through careful design of the technology itself and through the policies that govern its use. But convincing a skeptical and nervous public — and the Department’s uniformed force — that appropriate safeguards are in place is a bigger challenge. And past scandals suggest the NYPD hasn’t been thinking carefully enough about these risks. In 2020, for example, reporters at BuzzFeed found that over 40 members of the Department had signed up for Clearview AI using free trial accounts. These members ran over 11,000 searches on Clearview’s database of people’s faces, which were scraped from millions of social media posts without user consent. The NYPD scrambled to release clearer guidance on the proper use of facial recognition a few weeks later, but the reputational damage was done.

In response to public pressure — and likely a desire to avoid new scandals like officers’ unsanctioned use of Clearview AI — the NYPD has adopted a crude approach to internal use of AI. For a time, the NYPD had a blanket policy of blocking all access to chat-based interfaces like ChatGPT and to URLs ending in “.ai” (such as clearview.ai), fearing inappropriate use and data leakage. But this approach is shortsighted: staff can simply turn to their personal phones to reach the same AI platforms. We need the Department to proactively recognize and reduce these risks through more mature policies and training. In the meantime, a locked-down agency will struggle to accomplish any goal — including those in service of justice reform. The NYPD will function fine if these reforms continue to stall. It’s our most vulnerable residents who will pay the price for the City’s lack of innovation.

════════════════════════════════════════

Outside of New York City, a stampede of private-sector vendors is aggressively pursuing contracts to deploy AI in law enforcement agencies across the country. Anyone who has been to the International Association of Chiefs of Police conference in recent years has seen its exhibition hall filled with vendors touting apparently revolutionary AI-powered crime-fighting products. And these vendors are finding customers across the country: American law enforcement agencies are using AI to transcribe jail calls for detectives, drive unmanned cruisers and flag officers with past patterns of misconduct. Even the historically lethargic MTA is one step ahead of the NYPD in testing AI to deter fare evasion and detect erratic behavior.

Several recent events have underscored concerns about the private sector’s full-court embrace of AI for public safety. Amazon’s Super Bowl commercial for its Ring camera network reminded Americans that if the tech company can automatically track a lost dog through a neighborhood, it can probably automatically track people, too (emails leaked yesterday suggest this is indeed a goal). The public’s uneasy reaction to the ad led Amazon to scrap a collaboration with another surveillance provider, Flock Safety, which has its own track record of questionable policies. And in the last few days, the FBI announced it was able to recover footage from a Google Nest camera many days after Nancy Guthrie’s disappearance — an apparent contradiction of the company’s policy for accounts like Ms. Guthrie’s, under which such footage is supposed to be accessible for only a few hours.

Adding to these concerns, we don’t know whether most marketed AI tools work as claimed or whether they’d make any difference in real-world practice. The most discussed generative AI product in policing is Axon’s “Draft One,” which uses transcriptions from Axon’s body cameras to write a draft crime narrative for officers. Axon claims that Draft One cuts report-writing time by 50% — a mouthwatering prospect, given the administrative burdens associated with law enforcement reporting duties — but independent researchers ran a peer-reviewed randomized experiment and found that using Draft One didn’t reduce report-writing times compared to status quo practice. That’s not to mention the pitfalls that may arise in court when officers are cross-examined about a narrative they didn’t write themselves, one that may even contain words they don’t know.

Surprising results from experiments like this one underscore why it’s crucial to pilot and evaluate promising new technologies before they are fully adopted. Any such effort at the NYPD — including those in service of reform goals — must be credibly and rigorously evaluated before new technologies are rolled out permanently. These evaluations are essential because they will help policymakers and the public understand if AI lives up to its promise and whether there’s any evidence for its misuse or abuse in practice. And technology is not the only unproven piece of this new puzzle. We also lack compelling evidence on which policies will be effective at preventing misuse and abuse of AI. Careful iteration and learning via testing and evaluation is the only real way to make progress on these open and important questions.
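For a sense of what such an evaluation involves, here is a purely illustrative sketch with synthetic numbers (not the Draft One study’s data or methods): randomly assign officers to the new tool or to status quo practice, then compare the outcome the vendor claims to improve.

```python
# Generic sketch of evaluating a randomized technology pilot: compare an
# outcome (here, minutes spent writing a report) between officers randomly
# assigned to use the new tool and those working as usual. All numbers are
# synthetic; this mirrors the logic of a randomized evaluation, not any
# specific study's analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic outcome data: report-writing minutes per report.
control = rng.normal(loc=23.0, scale=8.0, size=400)    # status quo practice
treatment = rng.normal(loc=22.5, scale=8.0, size=400)  # officers using the new tool

diff = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Mean difference: {diff:+.1f} minutes per report (t = {t_stat:.2f}, p = {p_value:.3f})")
if p_value >= 0.05:
    print("No statistically detectable time savings in this (synthetic) pilot.")
else:
    print("Detectable difference; examine its size and direction before scaling up.")
```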

In theory, it’s possible that such evaluations are already happening inside the NYPD, but it’s hard to know for sure — with the lack of transparency around its Truleo pilot serving as a prime example. The Department did recently pilot another AI technology, disclosed under the transparency requirements of the City’s POST Act: Evolv scanners to detect firearms on people entering subway stations. But messaging around this pilot’s intent left a lot of room for improvement. In advance of the pilot, then-Mayor Eric Adams called it a “Sputnik moment” for the City and said, “I think it’s good technology.” These enthusiastic comments suggested that the City was conducting a pro forma pilot of a technology it had already determined to be useful. Those preliminary conclusions were dubious. After the monthlong experiment, the Department had recovered zero guns but logged 118 false hits — almost 10 times as many as the 12 knives it recovered over the same span. Although the former mayor had a penchant for gaffes, framing this pilot as an objective test would have saved the City a lot of embarrassment when it didn’t end as hoped.

It’s also no secret that the City and the NYPD have struggled to define what counts as AI. Although the City passed a law in 2022 requiring City agencies to disclose their use of “algorithmic” tools on an annual basis, the mandate doesn’t map neatly onto what most people would consider AI. The NYPD’s use of facial recognition and ShotSpotter — both of which rely on algorithms — is included in the City’s annual AI inventory, but the Department’s Truleo pilot and its numerous license plate readers are not. Traditional algorithmic technologies like fingerprint matching and DNA analysis don’t make the list either. They’re older methods, to be sure, but they rely on algorithms all the same.

Whether a tool qualifies as new “AI” technology is a distraction from the deeper question at stake: whether government can wield new police powers to keep the public safe while respecting civil rights and minimizing the undue costs it imposes on residents. Put this way, current struggles with AI are just the newest instance of the fundamental challenge of the justice system, which has always been a delicate balancing act between the state’s extreme power and safeguards against the irresponsible, overbearing or unequal use of that power. We’ve been struggling with this balance for a long time — four of the 10 amendments in the Bill of Rights guard against government overreach in criminal procedure.

But in the past, it’s taken us a long time to strike an acceptable balance. With fingerprints, for example, it took decades of litigation and mistakes before we devised mature policy on when fingerprints could be taken, how matches would be documented and challenged, and who could access this sensitive information. Today, AI is evolving so fast that we don’t have decades to spare. We need a competent public sector that can draw on historical parallels — learning from both the effective and the ineffective ways we’ve curbed overreach in the past — to address the new threats posed by AI, all while finding opportunities for its responsible use. In New York City, an earnest effort to use technology to build state capacity around stalled reform efforts is a good place to start.

To kick-start this process, the Department can draw on our city’s deep bench of public-interest talent. Many of my colleagues who run applied research labs at the city’s universities would be eager to help the NYPD. Reasonable voices in the city’s civic tech and advocacy community could assist the Department in thinking through the risks inherent in new AI technologies. Resources outside the city can help too, including other agencies pioneering responsible adoption and national organizations like the Council on Criminal Justice, where a new Task Force on AI (on which I serve) is drafting guidelines on the safe and effective use of AI technologies in justice settings. Historically, the NYPD has been reluctant to ask for help — at last year’s “Pilot Pitchfest,” for example, the Department was absent from a crowd of over 20 City agencies pitching research and technical projects to more than 1,000 volunteers. Reconsidering this reluctance would help the Department rekindle its capacity for responsible innovation.

In tandem, the Department needs to begin the slow process of rebuilding the internal capacity it had 10 years ago, when multiple civilian experts worked across the agency alongside exceptional uniformed staff. AI’s disruption has created a moment when capable technical talent is easier to hire: Massive recent layoffs have made it possible to recruit experienced technologists who are tired of supporting the tech sector’s march toward enshittification and who want to make a positive difference in the world instead. Beyond hiring new civilian staff, the Department needs to better support the talent that matures within its own ranks. An unfortunate “brain drain” since the COVID-19 pandemic has pushed out many talented uniformed members who could be helping devise a responsible, informed path forward on the Department’s use of AI.

Confusing, inconsistent and spotty information about the NYPD’s use of AI has only heightened public fears that new technologies are being used irresponsibly. The mayor has a talent for wonky, policy-dense communication; the NYPD should consider adopting his approach to help the public learn about responsible innovation at the Department. In particular, the Department should show — not tell — the public how it is making responsible use of these new technologies. And to do so, it can spotlight the Department’s many diverse, talented and passionate civil servants who work hard to do the right thing and keep our city safe.

Implementing these changes would improve the NYPD’s ability to adopt AI responsibly, promising concrete benefits to the city’s most vulnerable residents while reducing the risk of improper use. But the benefits wouldn’t stop there. The NYPD occupies a special position — by virtue of both its internationally respected brand and its sheer size (more sworn officers than most American states have) — from which it can lead the nation in the safe development and testing of these new technologies. With an effective commissioner and an idealistic yet pragmatic mayor in office, this is an unusual moment when the right leaders are in place to make this vision a reality.

This kind of leadership is sorely needed. The AI genie is out of the bottle, and private corporations are leading the development of AI in justice settings. While the private sector is a crucial engine for innovation, our institutions have ceded too much leadership to technology companies, which aren’t incentivized to serve the public interest. Instead, we need a competent public sector, accountable to the public, that can identify the narrow path forward between ineffective, sclerotic government and limitless technology that — if unchecked — will impinge on our civil liberties. It’s a narrow path. But it’s one we can and must do our best to travel down.