Now is the time for a transatlantic dialogue on the risks of AI
Mona Sloane, of the Center for Responsible AI (R/AI) at NYU Tandon and an adjunct professor in the Department of Technology, Culture and Society, writes that "Artificial intelligence is no longer the world's darling; no longer the '15 trillion-dollar baby.'" Noting that AI applications can cause harm and pose risks to communities and citizens, she points out that lawmakers are now under pressure to come up with new regulatory guardrails:
"While the US government is deliberating on how to regulate big tech, all eyes are on the unbeaten valedictorian of technology regulation: the European Commission," she writes, pointing to the Commission's release, last week, of a wide-ranging proposal that would govern the design, development, and deployment of AI systems. "The proposal is the result of a tortuous path that involved the work of a high-level expert group (full disclosure: one of us was a member), a white paper, and a comprehensive impact assessment."
She notes that the proposed regulation identifies prohibited uses of AI (for example, using AI to manipulate human behavior to circumvent users' free will, or allowing "social scoring" by governments), and specifies criteria for identifying "high-risk" AI systems, which fall under eight areas: biometric identification; management of critical infrastructure; education; employment; access to essential services (private and public, including public benefits); law enforcement; migration and border control; and administration of justice and democratic processes.