A tug-of-war over biased AI

Illustration: Eniola Odetunde/Axios

The idea that AI can replicate or amplify human prejudice, once argued mostly at the field's fringes, has been thoroughly absorbed into its mainstream: Every major tech company now makes the necessary noise about "AI ethics."

Yes, but: A critical split divides AI reformers. On one side are the bias-fixers, who believe the systems can be purged of prejudice with a bit more math. (Big Tech is largely in this camp.) On the other side are the bias-blockers, who argue that AI has no place at all in some high-stakes decisions.

Why it matters: This debate will define the future of the controversial AI systems that help determine people's fates through hiring, underwriting, policing and bail-setting.

What's happening: Despite the rise of the bias-blockers in 2019, the bias-fixers remain the orthodoxy.

  • A recent New York Times op-ed laid out the prevailing argument in its headline "Biased algorithms are easier to fix than biased people."
  • "Discrimination by algorithm can be more readily discovered and more easily fixed," writes UChicago professor Sendhil Mullainathan in the piece. Yann LeCun, Facebook's chief AI scientist, tweeted approvingly: "Bias in data can be fixed."
  • But the op-ed was met with plenty of resistance.

The other side: At the top academic conference for AI this week, Abeba Birhane of University College Dublin presented the opposing view.

  • Birhane's key point: "This tool that I'm developing, is it even necessary in the first place?"
  • She gave classic examples of potentially dangerous algorithms, like one that claimed to determine a person's sexuality from a photo of their face, and another that tried to guess a person's ethnicity.
  • "[Bias] is not a problem we can solve with maths because the very idea of bias really needs much broader thinking," Birhane tells Axios.

The big picture: In a recent essay, Frank Pasquale, a UMD law professor who studies AI, calls this a new wave of algorithmic accountability that looks beyond technical fixes toward fundamental questions about economic and social inequality.

  • "There's definitely still resistance around it," says Rachel Thomas, a University of San Francisco professor. "A lot of people are getting the message about bias but are not yet thinking about justice."
  • "This is uncomfortable for people who come up through computer science in academia, who spend most of their lives in the abstract world," says Emily M. Bender, a University of Washington professor. Bender argued in an essay last week that some technical research just shouldn't be done.

The bottom line: Technology can help root out some biases in AI systems. But this rising movement is pushing experts to look past the math to consider how their inventions will be used beyond the lab.

  • "AI researchers need to start from the beginning of the study to look at where algorithms are being applied on the ground," says Kate Crawford, co-founder of NYU's AI Now Institute.
  • "Rather than thinking about them as abstract technical problems, we have to see them as deep social interventions."

The impact: Despite a flood of money and politics propelling AI forward, some researchers, companies and voters hit pause this year.

  • Most visibly, campaigns to ban facial recognition technology succeeded in San Francisco, Oakland and Somerville, Mass. This week, nearby Brookline banned it, too.
  • One potential outcome: freezes or restrictions on other controversial uses of AI. This scenario scares tech companies, which would rather send in plumbers to repair buggy systems than rip out the pipes entirely.

But the question at the core of the debate is whether a fairness fix even exists.

The swelling backlash says it doesn't — especially when companies and researchers ask machines to do the impossible, like guess someone's emotions by analyzing facial expressions, or predict future crime based on skewed data.

  • "It's anti-scientific to imagine that an algorithm can solve a problem that humans can't," says Cathy O'Neil, an auditor of AI systems.
  • These applications are "AI snake oil," argues Princeton professor Arvind Narayanan in a presentation that went viral on nerd Twitter recently.
  • The main offenders are AI systems meant to predict social outcomes, like job performance or recidivism. "These problems are hard because we can’t predict the future," Narayanan writes. "That should be common sense. But we seem to have decided to suspend common sense when AI is involved."

The blowback's spark was a 2017 research project from MIT's Joy Buolamwini, who found that major facial recognition systems misidentified female and darker-skinned faces at far higher rates than white male ones.
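The heart of that kind of audit is simple: report a model's error rate per demographic group instead of one aggregate number, which can hide a wide disparity. Below is a minimal sketch of the idea; the group names and numbers are invented purely for illustration and do not come from Buolamwini's study.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Disaggregated error rates.

    records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its error rate.
    """
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented toy data: overall accuracy is 81%, which sounds fine,
# but one group bears almost all of the errors.
records = (
    [("lighter_male", "m", "m")] * 48
    + [("lighter_male", "f", "m")] * 2
    + [("darker_female", "f", "f")] * 33
    + [("darker_female", "m", "f")] * 17
)

rates = error_rates_by_group(records)
# rates["lighter_male"] is 0.04; rates["darker_female"] is 0.34
```

The point of the sketch is the framing, not the arithmetic: a single accuracy score would have passed this hypothetical system, while the disaggregated view shows the disparity immediately.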

What's next: Companies are tightening access to their AI algorithms, invoking intellectual property protections to avoid sharing details about how their systems arrive at critical decisions.

  • "The real problem is we citizens have no power to even examine or scrutinize these algorithms," says O'Neil. "They're being used by private actors for commercial gain."

Additional Stories

Researchers develop an AI program with manners

Illustration: Sarah Grillo/Axios

A team of scientists has developed a technique that automatically makes written sentences more polite.

Why it matters: As the authors themselves note in the paper, it is "imperative to use the appropriate level of politeness for smooth communication in conversations." And what better judge of the appropriate level of politeness than an unfeeling machine-learning algorithm?

Researchers say AI tools could make justice systems more just

Illustration: Eniola Odetunde/Axios

Researchers are calling for open and free access to U.S. court records and building an AI tool to analyze them.

Why it matters: Court records are publicly available but expensive to access and difficult to navigate. Freeing up that data — and using machine learning tools to make sense of it — would help make the justice system more just.

How the coronavirus pandemic boosted alternative meat

Illustration: Sarah Grillo/Axios

Thanks in part to pandemic-driven disruptions of conventional meat processing, sales and interest in plant-based alternatives are taking off, changing the future of food.

Why it matters: Meat-processing plants have proven especially vulnerable to coronavirus outbreaks, and meat consumption adds to climate change. Better-tasting alternatives could shrink that environmental footprint while solidifying the supply chain for protein.

A new AI tool to fight the coronavirus

Illustration: Sarah Grillo/Axios

A coalition of AI groups is forming to produce a comprehensive data source on the coronavirus pandemic for policymakers and health care leaders.

Why it matters: A torrent of data about COVID-19 is being produced, but unless it can be organized in an accessible format, it will do little good. The new initiative aims to use machine learning and human expertise to produce meaningful insights for an unprecedented situation.

The fragmentation of global trade

Illustration: Aïda Amer/Axios

The pandemic will accentuate the deepening uncertainty over the future of global trade, according to a new report.

Why it matters: Trade is the lifeblood of globalization, and it's helped lift hundreds of millions of people out of poverty. But populism, a growing rift between China and the U.S., and the wild card of COVID-19 could cause global trade to fracture into regional variations.

Fighting the coronavirus infodemic

Illustration: Sarah Grillo/Axios

An "infodemic" of misinformation and disinformation has helped cripple the response to the novel coronavirus.

Why it matters: High-powered social media accelerates the spread of lies and political polarization that motivates people to believe them. Unless the public health sphere can effectively counter misinformation, not even an effective vaccine may be enough to end the pandemic.

Scientists develop one-drop test for water contamination

Northwestern's ROSALIND water testing platform. Photo courtesy of Northwestern University

A new platform uses synthetic biology to quickly identify contaminants in a single drop of water.

Why it matters: Water pollution is a major health risk, especially for poor and minority communities. Technology that can cheaply screen water supplies for contaminants like lead could help anyone easily determine if their water is safe.

Michigan the latest state to order more coronavirus restrictions

Michigan Gov. Gretchen Whitmer at the Detroit-Hamtramck assembly plant in Detroit in January. Photo: Jeff Kowalsky/AFP via Getty Images

Michigan Gov. Gretchen Whitmer (D) signed an executive order closing indoor service at bars in south and central parts of the state "to protect the progress Michigan has made against COVID-19," she said in a statement Wednesday.

Why it matters: Michigan is the latest state to readjust or pause its reopening plans as COVID-19 cases soar across the U.S. Daily coronavirus case counts nationwide surpassed 50,000 for the first time on Wednesday.

© Copyright Axios 2020