WASHINGTON, DC, IS the home base of the most powerful government on earth. It’s also home to 690,000 people—and 29 obscure algorithms that shape their lives. City agencies use automation to screen housing applicants, predict criminal recidivism, identify food assistance fraud, determine whether a high schooler is likely to drop out, inform sentencing decisions for young people, and more.
That snapshot of semiautomated urban life comes from a new report from the Electronic Privacy Information Center (EPIC). The nonprofit spent 14 months investigating the city’s use of algorithms and found them in use across 20 agencies, with more than a third deployed in policing or criminal justice. For many systems, city agencies would not provide full details of how their technology worked or was used. The project team concluded that the city is likely using still more algorithms that the team was unable to uncover.
The findings are notable beyond DC because they add to the evidence that many cities have quietly put bureaucratic algorithms to work across their departments, where they can contribute to decisions that affect citizens’ lives.
Government agencies often turn to automation in hopes of adding efficiency or objectivity to bureaucratic processes, but it’s often difficult for citizens to know they are at work, and some systems have been found to discriminate and lead to decisions that ruin human lives. In Michigan, an unemployment-fraud detection algorithm with a 93 percent error rate caused 40,000 false fraud allegations. A 2020 analysis by Stanford University and New York University found that nearly half of federal agencies are using some form of automated decisionmaking systems.
EPIC dug deep into one city’s use of algorithms to give a sense of the many ways they can influence citizens’ lives and encourage people in other places to undertake similar exercises. Ben Winters, who leads the nonprofit’s work on AI and human rights, says Washington was chosen in part because roughly half the city’s residents identify as Black.
“More often than not, automated decisionmaking systems have disproportionate impacts on Black communities,” Winters says. The project found evidence that automated traffic-enforcement cameras are disproportionately placed in neighborhoods with more Black residents.
Cities with significant Black populations have recently played a central role in campaigns against municipal algorithms, particularly in policing. Detroit became an epicenter of debates about face recognition following the false arrests of Robert Williams and Michael Oliver in 2019 after algorithms misidentified them. In 2015, the deployment of face recognition in Baltimore after the death of Freddie Gray in police custody led to some of the first congressional investigations of law enforcement use of the technology.
EPIC hunted for algorithms by looking for public disclosures by city agencies and by filing public records requests for contracts, data-sharing agreements, privacy impact assessments, and other documents. Six of 12 city agencies responded, sharing materials such as a $295,000 contract with Pondera Systems, owned by Thomson Reuters, whose fraud detection software, FraudCaster, is used to screen food-assistance applicants. Earlier this year, California officials found that more than half of 1.1 million claims by state residents that Pondera’s software flagged as suspicious were in fact legitimate.
But, in general, agencies were unwilling to share information about their systems, citing trade secrecy and confidentiality. That made it nearly impossible to identify every algorithm used in DC. Earlier this year, a Yale Law School project made a similar attempt to count algorithms used by state agencies in Connecticut but was also hampered by claims of trade secrecy.
EPIC says governments can help citizens understand their use of algorithms by requiring disclosure anytime a system makes an important decision about a person’s life. And some elected officials have favored the idea of requiring public registries of automated decisionmaking systems used by governments. Last month, lawmakers in Pennsylvania, where a screening algorithm had accused low-income parents of neglect, proposed an algorithm registry law.
But Winters and others warn against thinking that algorithm registries automatically lead to accountability. New York City appointed an “algorithm management and policy officer” in 2020, a new position intended to advise city agencies on how to use algorithms and to inform the public about how the city uses automated decisionmaking.
The officer’s initial report said that city agencies use 16 systems with a potentially substantial impact on people’s rights, with only three used by the NYPD. But a separate disclosure by the NYPD under a city law regulating surveillance showed that the department uses additional forms of automation for tasks like reading license plates and analyzing social media activity.
Roughly two years ago the cities of Amsterdam and Helsinki announced plans to make comprehensive lists of their own municipal algorithms, as well as the data sets used to train them and the city employees responsible. The idea was to help citizens seek redress from a human if they felt a system had problems.
But to date, Helsinki’s AI register largely serves as marketing for a set of city services chatbots. The Amsterdam Algorithm Register currently lists only systems for detecting illegal vacation rentals, automated parking control, and an algorithm used for reporting issues to the city. Together the two cities list a total of 10 automated decisionmaking systems, despite the fact that a document released by Amsterdam and Helsinki officials says they jointly had more than 30 AI projects underway in late 2020.
Researchers from the University of Oxford, the Alan Turing Institute in London, and Cardiff University said in a paper last year that Amsterdam’s AI registry omits some of the most concerning or problematic tools encountered by the city’s residents; algorithms there can also decide where kids go to school or where to send police. Calling the list “ethics theater,” the authors concluded that the registry project appeared intentionally focused on a limited, innocuous set of algorithms.
Winters says algorithm registries can work, if rules or laws are in place requiring government departments to take them seriously. “It’s a great format,” he says of Amsterdam’s approach. “But it’s extremely incomplete.”