The MTA has begun exploring whether artificial intelligence can be harnessed inside the transit system to detect weapons, monitor unattended objects and even anticipate subway stampedes.
An unspecified number of tech vendors and systems integrators responded by the Dec. 30 deadline to a request for information issued by the transportation authority early last month, officials said.
“There’s interest across the board,” Michael Kemper, the MTA’s chief security officer, told THE CITY. “It’s not only coming from the MTA, but from the business world, the AI business world, in working with us.”
The request spells out early steps in the MTA’s shift toward potentially using AI to perform complex public-safety work, such as analyzing real-time video feeds from subways and buses and predicting potentially unsafe behavior via cameras in the transit system.
“Not only is this the norm, it’s the expectation: AI is here, AI is the future,” Kemper said. “For us not to explore it, research it and examine it would be malpractice on our side.”
But technology watchdogs warn that the AI boom comes with privacy risks and monitoring capabilities that could extend beyond what the MTA says it needs from video analytics.
Jerome Greco, supervising attorney of The Legal Aid Society’s Digital Forensics Unit, said the technology’s ability to potentially scope out “unusual” or “unsafe” behavior in a transit setting comes with many potential problems, including “very detrimental” interactions with police.
“These uses of AI aren’t like Netflix telling you what movie you should watch next,” Greco said. “The consequences of it being wrong could be quite significant, and I think that’s something the MTA shouldn’t be so cavalier about.”
William Owen, communications director for the Surveillance Technology Oversight Project, likened the effort by transit officials to the weapons-detector pilot program that then-Mayor Eric Adams and the NYPD implemented in the subway in 2024. During a monthlong test with more than 3,000 searches at 20 stations, the AI-powered scanners turned up 12 knives, no guns and more than 100 false positives.
“It really turned out to be just a metal detector that found a lot of umbrellas and other objects instead of actual weapons,” Owen said.
Kemper said the MTA understands the issues raised over the use of AI video analytics in the transit system, which he described as a “tool” to enhance human decision-making.
“People have concerns and questions about it, and it’s our job to be transparent and answer those questions,” he said. “But we need to move forward and explore these technologies to keep our riders safe.”

Notably, the request makes no mention of controversial facial recognition technology, which the NYPD used in an April 2024 incident that critics urged investigators to review. A 2021 Amnesty International investigation found that the NYPD was able to run images from more than 15,000 cameras in Brooklyn, The Bronx and Manhattan through facial recognition software.
The MTA says its AI inquiry is centered on tapping existing technology for the sake of public safety.
The response from tech vendors to the MTA’s request is the latest move by the nation’s largest mass-transit system to adapt the burgeoning technology for security purposes. There are more than 15,000 cameras throughout the transit system and aboard the more than 6,000 cars in the subway fleet.
Artificial intelligence is already being put to the test elsewhere in the city’s transportation network.
Last year, the authority retrofitted the axles of some cars along the A line with Google Pixel smartphones that pair with advanced artificial intelligence capabilities to detect and analyze potential track defects. The MTA is also testing new AI-enabled fare gates at select stations.
The security-centric initiative is geared toward using the existing network of cameras, whose streaming video feeds were not working at a Sunset Park station during an April 2022 subway shooting.
A December 2022 report on the outage from the MTA Inspector General found that the video stream at the Brooklyn station and two other stops had gone down four days before the shooting.
In its request for information, the MTA acknowledged some of the issues associated with the use of its eyes in the transit system.
“With more than 15,000 cameras deployed across approximately 472 subway stations, current monitoring practices remain manual, reactive and resource intensive,” it notes.
The document adds that the MTA is aiming for that monitoring setup to evolve into a “proactive, intelligence-driven ecosystem, capable of behavior flagging, risk assessment and incident response.”
While the initiative would be grounded in advanced video analytics and AI technologies, insights from certified subject-matter experts in behavioral science and psychology who have “a deep understanding of human behavior in transit environments” would guide the effort, according to the MTA.
There is no timetable for the project, whose next step will involve reviewing submissions from interested parties to determine what may be possible for eventually deploying it within an around-the-clock transit system that moves close to 4 million subway riders daily.
The MTA’s chief security officer said its potential value to riders is “immense.”
“We’re looking to move forward as soon as we find something that we’re comfortable with,” Kemper said.
Greco, of Legal Aid, countered that the MTA must proceed with caution when it comes to predictive technology on “unusual” or “unsafe” behavior in the subway system.
“How’s that going to work, and who gets to make that decision, and what are the consequences of that decision?” he said. “If it determines that there’s an unsafe behavior coming, based on who knows what it’ll use to determine that, what happens next?
“Are we essentially now policing people for being strange?”

