Chicago’s City Council delivered a decisive rebuke to law enforcement surveillance on Tuesday, voting 37-11 to ban the use of facial recognition technology by city agencies — a move that makes it one of the largest U.S. cities to take such action. The ordinance, which takes effect in 90 days, prohibits police, public transit operators, and municipal departments from deploying or purchasing systems that identify individuals based on biometric data. The decision comes after months of public outcry, a damning city auditor’s report, and testimony from civil rights groups who warned the technology disproportionately misidentifies Black and brown residents.
Why Chicago Moved Fast
It wasn’t the first time Chicago debated facial recognition. Back in 2020, the city paused its pilot program after a study by the University of Chicago found error rates as high as 34% for women of color — more than triple the rate for white men. But the real turning point came last October, when the Chicago Police Department admitted it had used a third-party facial recognition tool — without council approval — to scan surveillance footage from a protest outside City Hall. That incident, coupled with the release of internal emails showing officers labeling peaceful demonstrators as "likely gang members" based on algorithmic matches, ignited public fury.
"We’re not against technology," said Councilmember Maria H. Ramirez, who sponsored the ordinance. "We’re against technology that locks people into a criminal profile before they’ve done anything. This isn’t science fiction. It’s happening right now — and it’s targeting our neighbors."
The Human Cost of False Matches
The numbers tell a stark story. According to the ACLU of Illinois, at least 17 wrongful arrests nationwide since 2020 have been linked to facial recognition errors — and three of them occurred in Illinois. In one 2022 case, a 38-year-old Black man in Detroit was held for six hours after a system flagged him as a suspect in a robbery he didn’t commit. At the time of the crime, he was at home watching TV. The system had matched his face to a grainy security-camera image of someone with a similar hat and build.
In Chicago, the City Auditor’s Office found that between 2021 and 2023, 68% of facial recognition matches used by police led to no further investigation, meaning most matches never produced a usable lead. And in cases where arrests did follow, Black residents were 2.3 times more likely to be targeted than white residents, despite making up just 29% of the city’s population.
Law Enforcement’s Side
Not everyone cheered. Superintendent David M. Johnson of the Chicago Police Department warned the ban could hinder investigations into violent crimes. "We’ve used this tool to locate missing children, identify suspects in armed robberies, and track down fugitives who’ve fled across state lines," he said in a statement. "Now we’re asking officers to go back to old-school methods — canvassing neighborhoods, reviewing hours of footage by hand. That takes time. And in some cases, time is the one thing victims don’t have."
But critics point out the department’s own data shows facial recognition contributed to fewer than 12 arrests in the past three years — and only two of those led to convictions. Meanwhile, the city spent over $2.3 million on software licenses, training, and server upgrades — money that could have gone to community-based violence prevention programs.
What This Means for Other Cities
Chicago’s vote is a ripple that could become a wave. San Francisco, Boston, and Portland already banned the tech. Now, cities like Philadelphia, Atlanta, and Minneapolis are reviewing similar ordinances. The Electronic Frontier Foundation says 14 state legislatures are considering facial recognition restrictions this year. "Chicago didn’t just ban a tool," said Dr. Lena Patel, a privacy researcher at Northwestern University. "They drew a line: You don’t get to watch people without consent — especially when the system gets it wrong more often than not."
The ban doesn’t apply to federal agencies operating in Chicago — so FBI or DHS agents could still use the tech. But local police? No more. And private companies — like retail chains or apartment complexes — are still free to use it. That’s the loophole. But the city’s ordinance requires any business using facial recognition on public property to post clear signage. No hidden cameras. No silent scanning.
What’s Next?
Over the next 90 days, the city will launch a public education campaign and train officers on new investigative protocols. The Office of the Inspector General will monitor compliance. And if any agency tries to sneak in a system under the radar? Fines up to $5,000 per violation — and the department head could face disciplinary action.
Meanwhile, tech firms like Clearview AI and NEC are quietly lobbying state lawmakers to preempt local bans. But in Chicago, the message is clear: trust has been broken. And rebuilding it won’t come from algorithms.
Behind the Ban: A Timeline
- 2020: Chicago Police Department halts facial recognition pilot after academic study reveals racial bias.
- 2021: City Council holds first public hearing on surveillance tech; no action taken.
- October 2023: CPD admits using unapproved facial recognition software during protest surveillance.
- March 2024: City Auditor’s report confirms disproportionate targeting of Black residents.
- May 2024: Over 12,000 residents sign petition demanding a ban.
- June 11, 2024: City Council votes 37-11 to ban facial recognition by city agencies.
Frequently Asked Questions
How does this ban affect everyday Chicagoans?
For most residents, it means less risk of being wrongly flagged by surveillance systems while walking through neighborhoods, attending protests, or riding the CTA. The ban removes the threat of automated misidentification — especially for Black and Latino communities who’ve been disproportionately targeted. It doesn’t stop cameras, but it stops machines from assigning guilt based on skin tone or facial structure.
What led to the Chicago City Council’s sudden decision?
The tipping point was the revelation that the Chicago Police Department had secretly used facial recognition during a protest — without council approval or public knowledge. Combined with the City Auditor’s report showing racial bias and the ACLU’s documentation of wrongful arrests, public pressure became impossible to ignore. Councilmembers received over 15,000 emails in two weeks demanding action.
Are there any exceptions to the ban?
Yes. The ban applies only to city agencies — so federal agents, private businesses, and schools can still use the technology. However, any business using facial recognition on public property must post visible signage. There’s also an exception for identifying missing persons or victims of human trafficking — but only if approved by the city’s Civilian Office of Police Accountability.
What are experts saying about the effectiveness of this ban?
Privacy advocates call it a "landmark step," but warn it’s incomplete. Researchers at the University of Chicago note that without federal regulation, companies can still sell the tech to private entities — and law enforcement might still access data through backdoor partnerships. The real test will be enforcement: Will the city audit compliance? Will fines be levied? So far, the ordinance includes strong penalties — but only if someone reports violations.
How does this compare to other cities’ actions?
Chicago joins a small but growing list: San Francisco was first in 2019, followed by Boston and Portland. But unlike those cities, Chicago’s ban includes public transit and municipal buildings — not just police. It’s also the first in a major Midwestern city with a population over 2 million. The scale makes it a potential model for Detroit, Cleveland, and St. Louis.
Will this ban reduce crime?
There’s no evidence facial recognition significantly reduced violent crime in Chicago. The CPD’s own records show it contributed to fewer than 12 arrests and just two convictions over three years. Meanwhile, community programs like Cure Violence have reduced shootings by up to 40% in targeted neighborhoods. The city now plans to redirect the $2.3 million previously spent on surveillance tech into violence prevention, mental health response teams, and youth outreach.