AI is watching us. We need city officials watching AI.

Artificial intelligence in Philly government is a black box: There is no public process for communities to review proposed tech and determine if it’s truly safe for us — let alone beneficial.

Though marketed as tools to make life easier, artificial intelligence is overwhelmingly used on us, not by us, write Devren Washington and Clarence Okoh.
DAVID MCNEW / MCT

By now, you’ve noticed: Artificial intelligence is everywhere. Doorbell cameras film us walking through our neighborhood, Gmail insists on auto-generating email replies, and we’re told to accept a face scan to board an airplane. Health insurance CEOs use algorithms to auto-deny our healthcare coverage, and when you apply for a job, chances are AI screens your résumé before a person ever sees it.

This AI explosion isn’t accidental; it’s manufactured. Big Tech CEOs spend billions on lobbyists, marketers, and advertisers to pitch AI products to companies and, increasingly, to governments that, in turn, spend millions buying and cramming AI into their day-to-day workings.

Despite the rapid integration of AI into public life and government operations, there’s very little information about what it costs, how it’s used, and who truly benefits, breeding reasonable skepticism. While city officials champion AI as a cutting-edge solution to enduring challenges, residents are suspicious of how it might be used against them — and they’re right.

As technology becomes more deeply embedded in our lives and government, Philadelphia leaders must ensure new technologies serve the needs of Philly’s people, not just the profits of companies.

Though marketed as tools to make life easier, AI is overwhelmingly used on us, not by us.

In Norristown, U.S. Immigration and Customs Enforcement used facial recognition to target people for deportation, weaponizing Palantir’s surveillance against Black and brown families. AI is often used secretly, targeting the most vulnerable, and putting children, especially, at risk. In Pasco County, Fla., officials used sensitive education records and racist algorithms to try to predict which students might be arrested and funneled into the criminal legal system. Philadelphia tracks thousands of children with GPS monitors, sharing their location with police without a warrant, notice, or consent — violating children’s privacy under the guise of oversight.

Because it’s built by humans and trained on biased data, AI often reinforces racism instead of removing it, harming Black and brown families the most.

A courtroom algorithm used nationwide falsely flagged Black people as likely future criminals nearly twice as often as white people. In Pittsburgh, an algorithm disproportionately flagged Black parents for child removal, with social workers overriding its risk scores a third of the time. These decisions have real, lasting impacts on people’s lives and futures.

Every single AI tool is powered by resource-guzzling data centers, which are frequently foisted on poor communities of color that suffer higher electric rates and poisoned water tables, while politicians gift massive tax breaks to the billion-dollar corporations building them and profiting from them.

Gov. Josh Shapiro has already hopped on this ugly bandwagon, handing Jeff Bezos and Amazon a sweetheart data center deal that robs taxpayers of $43 million. In each instance, the benefits of AI accrue for tech companies — not us.

So if AI is watching and extracting from us, who is watching AI? The answer in Philadelphia is no one.

AI in Philly government is a black box: There is no public process for communities to review proposed tech and determine if it’s truly safe for us (let alone beneficial), no citywide policy to disclose how AI and surveillance are used or trained, and no public oversight over its high costs and harmful impacts.

One thing we do know is that companies are aggressively marketing AI to City Hall. The company that invented and sells Tasers spent $75,000 lobbying the city, and is now pushing software that would use AI to generate police reports, while Microsoft salespeople rub elbows and pitch their vision.

The good news is that every black box contains a trove of information, and Councilmember Rue Landau is leading the way to unlock it. On Wednesday, she is hosting the first-ever public hearing on Philly’s use of AI and surveillance, putting the uses, costs, and impacts of AI on the public record. This kind of leadership should be an example for other elected officials in Philly and nationwide.

While bringing some overdue transparency is an essential first step, there’s far more to do. We also need bright-line rules that ban the most abusive uses of AI and prevent tech companies from writing their own rule book.

Philly is already on the way: Councilmember Nicolas O’Rourke’s ban on algorithmic rent setting protects Philadelphians from predators in a housing crisis, while the PHLConnectED program that got thousands of children online shows that the best uses of technology serve the public good first.

The AI boom presents an enormous responsibility and opportunity for Philadelphia to lead on an equitable, people-first approach to tech policy. Let’s not waste it.

Devren Washington is the organizing director with People’s Tech Project and founder of Philly Tech Justice. Clarence Okoh is the senior attorney for civil rights and technology at TechTonic Justice.