Mona Sloane
Research Assistant Professor
Senior Research Scientist, NYU Center for Responsible AI
Mona Sloane is a sociologist working on inequality in the context of AI design and policy. She frequently publishes and speaks about AI, ethics, equitability, and policy in a global context. Mona is a Fellow with NYU’s Institute for Public Knowledge (IPK), where she convenes the Co-Opting AI series and co-curates The Shift series. She is also an Adjunct Professor in the Department of Technology, Culture and Society at NYU’s Tandon School of Engineering, a Senior Research Scientist at the NYU Center for Responsible AI, and part of the inaugural cohort of the Future Imagination Collaboratory (FIC) Fellows at NYU’s Tisch School of the Arts. She is affiliated with The GovLab in New York and works with Public Books as the editor of the Technology section.

Her most recent project is Terra Incognita: Mapping NYC’s New Digital Public Spaces in the COVID-19 Outbreak, which she leads as principal investigator. She also serves as principal investigator of the Procurement Roundtables project, a collaboration with Dr. Rumman Chowdhury (Director of Machine Learning Ethics, Transparency & Accountability at Twitter, Founder of Parity) and John C. Havens (IEEE Standards Association) that is focused on innovating AI procurement to center equity and justice. With Emmy Award-winning journalist and NYU journalism professor Hilke Schellmann, she works on hiring algorithms, auditing, and new tools for investigative journalism and research on AI. With Dr. Matt Statler (NYU Stern), she leads the PIT-UN Career Fair project, which brings together students and organizations building up the public interest technology space. Mona is also affiliated with the Tübingen AI Center in Germany, where she leads a three-year federally funded research project on the operationalization of ethics in German AI startups. She holds a PhD from the London School of Economics and Political Science and has completed fellowships at the University of California, Berkeley, and the University of Cape Town.
Research News
Better transparency: Introducing contextual transparency for automated decision systems
LinkedIn Recruiter — a search tool used by professional job recruiters to find candidates for open positions — would function better if recruiters knew exactly how LinkedIn generates its search query responses, something a framework called “contextual transparency” would make possible.
That is what a team of researchers led by NYU Tandon’s Mona Sloane, a Senior Research Scientist at the NYU Center for Responsible AI and a Research Assistant Professor in the Technology, Culture and Society Department, advances in a provocative new study published in Nature Machine Intelligence.
The study is a collaboration with Julia Stoyanovich, Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, and Director of the Center for Responsible AI at New York University, as well as Ian René Solano-Kamaiko, Ph.D. student at Cornell Tech; Aritra Dasgupta, Assistant Professor of Data Science at New Jersey Institute of Technology; and Jun Yuan, Ph.D. candidate at New Jersey Institute of Technology.
It introduces the concept of contextual transparency, essentially a “nutritional label” that would accompany results delivered by any Automated Decision System (ADS), a computer system or machine that uses algorithms, data, and rules to make decisions without human intervention. The label would lay bare the explicit and hidden criteria — the ingredients and the recipe — within the algorithms or other technological processes the ADS uses in specific situations.
LinkedIn Recruiter is a real-world ADS example — it “decides” which candidates best fit the criteria the recruiter sets — but different professions use ADS tools in different ways. The researchers therefore propose a flexible model for building contextual transparency, so that the nutritional label is highly specific to its context. To do this, they recommend three “contextual transparency principles” (CTPs), each grounded in an approach from a different academic discipline (a rough code sketch follows the list):
- CTP 1: Social Science for Stakeholder Specificity: This aims to identify the professionals who rely on a particular ADS, how exactly they use it, and what they need to know about the system to do their jobs better. This can be accomplished through surveys or interviews.
- CTP 2: Engineering for ADS Specificity: This aims to understand the technical context of the ADS used by the relevant stakeholders. Different types of ADS operate with different assumptions, mechanisms, and technical constraints. This principle requires an understanding of both the input (the data used in decision-making) and the output (how the decision is delivered back).
- CTP 3: Design for Transparency and Outcome Specificity: This aims to understand the link between process transparency and the specific outcomes the ADS would ideally deliver. In recruiting, for example, the outcome could be a more diverse pool of candidates, facilitated by an explainable ranking model.
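As a rough illustration of how the three principles translate into the ingredients of a label, here is a minimal Python sketch. It reflects our own reading, not code from the study, and all class and field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class StakeholderContext:   # CTP 1: who uses the ADS, how, and what they need to know
    profession: str                                     # e.g. "recruiter"
    usage: str                                          # how the ADS fits into daily work
    information_needs: list[str] = field(default_factory=list)

@dataclass
class ADSContext:           # CTP 2: the technical context of the ADS
    inputs: list[str] = field(default_factory=list)     # data used in decision-making
    outputs: list[str] = field(default_factory=list)    # how decisions are delivered back
    constraints: list[str] = field(default_factory=list)

@dataclass
class OutcomeContext:       # CTP 3: transparency tied to desired outcomes
    desired_outcome: str                                # e.g. "a more diverse candidate pool"
    transparency_features: list[str] = field(default_factory=list)

@dataclass
class LabelSpec:            # everything a contextual "nutritional label" would draw on
    stakeholder: StakeholderContext
    ads: ADSContext
    outcome: OutcomeContext
```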
Researchers looked at how contextual transparency would work with LinkedIn Recruiter, where recruiters write Boolean queries (combining keywords with AND, OR, and NOT) to receive ranked results. They found that recruiters do not blindly trust ADS-derived rankings: they typically double-check ranking outputs for accuracy, often going back and tweaking keywords. Recruiters told the researchers that the lack of ADS transparency hampers efforts to recruit for diversity.
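To make the query format concrete, here is a toy Boolean filter over candidate keyword sets. LinkedIn’s actual retrieval and ranking pipeline is far more complex and not public, so this is purely a hedged sketch:

```python
# Toy Boolean matching over candidate keyword sets (hypothetical example).
def matches(profile: set[str], required: set[str],
            any_of: set[str], excluded: set[str]) -> bool:
    return (required <= profile                          # all AND terms present
            and (not any_of or bool(any_of & profile))   # at least one OR term present
            and not (excluded & profile))                # no NOT terms present

# The query: "python" AND ("nyc" OR "remote") NOT "contractor"
print(matches({"python", "nyc", "senior"},
              required={"python"},
              any_of={"nyc", "remote"},
              excluded={"contractor"}))  # True
```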
To address the transparency needs of recruiters, the researchers suggest that the nutritional label of contextual transparency include passive and active factors. Passive factors comprise information relevant to the general functioning of the ADS and to the professional practice of recruiting, while active factors comprise information specific to the Boolean search string, which therefore changes from query to query.
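One way to picture that split, under our own assumptions rather than the study’s implementation, is a label with a fixed passive section and a per-query active section. All field names and values below are hypothetical:

```python
# Hypothetical contents of a contextual transparency label for one search.
label = {
    "passive": {  # stable across all searches and recruiters
        "system": "ranked candidate search",
        "ranking_inputs": ["profile keywords", "activity signals"],
        "known_limitations": ["sensitive to exact keyword choice"],
    },
    "active": {   # recomputed for this specific Boolean search string
        "query": '"python" AND ("nyc" OR "remote") NOT "contractor"',
        "terms_driving_ranking": ["python", "nyc"],
        "results_excluded_by_not_terms": 412,  # made-up illustrative count
    },
}
```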
The nutritional label would be inserted into the typical workflow of LinkedIn Recruiter users, giving them information that allows them both to assess how well the ranked results satisfy the intent of their original search and to refine the Boolean search string to generate better results.
To evaluate whether this transparency intervention achieves the change that can reasonably be expected of it, the researchers suggest stakeholder interviews about changes in the use and perception of the ADS, alongside participant diaries documenting professional practice and, where possible, A/B testing.
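For the A/B-testing leg, one plausible analysis, sketched here under our own assumptions with made-up counts, is a two-proportion z-test comparing recruiters who see the label against those who do not:

```python
from statistics import NormalDist

# Hypothetical counts: searches that met a diversity target, with the
# label (group a) and without it (group b). All numbers are made up.
def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)        # pooled proportion
    se = (p * (1 - p) * (1 / n_a + 1 / n_b)) ** 0.5  # standard error
    z = (p_a - p_b) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))     # two-sided p-value

z, p = two_proportion_z(success_a=62, n_a=100, success_b=48, n_b=100)
print(f"z = {z:.2f}, p = {p:.3f}")
```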
Contextual transparency is an approach that can help satisfy the AI transparency requirements mandated in new and forthcoming AI regulation in the US and Europe, such as New York City’s Local Law 144 of 2021 or the EU AI Act.