The Algorithmic Auditing Trap
‘Bias audits’ for discriminatory tools are a promising idea, but current approaches leave much to be desired
This op-ed was written by Mona Sloane, an adjunct professor in the Department of Technology, Culture and Society at NYU Tandon, a senior research scientist at the NYU Center for Responsible A.I., and a fellow at the NYU Institute for Public Knowledge. Her work focuses on design and inequality in the context of algorithms and artificial intelligence.
Sloane's article explores the ramifications of algorithmic decision-making systems, and of so-called algorithmic audits (tools ostensibly designed to ensure the fairness of automated decisions), on everything from employment screening to healthcare delivery.
"This technology has disproportionate impacts on racial minorities, the economically disadvantaged, womxn, and people with disabilities, with applications ranging from health care to welfare, hiring, and education," writes Sloane. "Here, algorithms often serve as statistical tools that analyze data about an individual to infer the likelihood of a future event—for example, the risk of becoming severely sick and needing medical care. This risk is quantified as a 'risk score,' a method that can also be found in the lending and insurance industries and serves as a basis for making a decision in the present, such as how resources are distributed and to whom."
She explains that algorithmic audits, while proliferating quickly, may be no more trustworthy than the systems they are designed to check.
"Now, a potentially impactful approach is materializing on the horizon: algorithmic auditing, a fast-developing field in both research and application, birthing a new crop of startups offering different forms of 'algorithmic audits' that promise to check algorithmic models for bias or legal compliance."
But are these audits fundamentally any different from the systems they are designed to scrutinize?
"... We are facing an underappreciated concern," she writes. "To date, there is no clear definition of “algorithmic audit.” Audits, which on their face sound rigorous, can end up as toothless reputation polishers, or even worse: They can legitimize technologies that shouldn’t even exist because they are based on dangerous pseudoscience."