Americans ought to be excited about technology that can spot wildfires, help doctors detect an irregular heartbeat and maybe jolt the economy out of its slow-growth rut.
Instead, they’re scared.
The technology in question is artificial intelligence, or AI, which involves machines learning to do tasks that previously required human brainpower. Already commonplace in applications such as voice recognition and online ad targeting, AI holds immense promise in medicine, public safety and many other fields.
The public, however, is skeptical. In a Pew Research Center survey last year, 77 percent of respondents were worried about a future in which machines could do many human jobs. Just 33 percent were enthusiastic.
Concerns range from widespread job losses to eroded privacy to biases hidden in computer code. These are important issues, but Joshua New, a senior policy analyst at the Center for Data Innovation, says we shouldn’t let them impede technological progress.
“There definitely is a public trust crisis,” New told journalists last week at the National Press Foundation in Washington. He called for development of a national strategy that would include more investment in basic AI research and retraining help for people who lose their jobs to automation.
What we shouldn’t consider, New argued, is limiting AI to protect jobs. “That is about as aggressively Luddite an industrial policy as you can get,” he said.
New also rejects calls for so-called algorithmic transparency, which would require AI programmers to disclose their source code. It “makes absolutely no sense from a technological perspective,” he said, and would discourage firms from investing in code that competitors could simply copy.
Algorithmic accountability has been suggested as an alternative regulatory framework, and in fact is part of a new data-bias law in New York City.
Ryan Hagemann, senior policy director at the libertarian-leaning Niskanen Center, says the accountability approach asks whether an AI-based decision might harm someone and how that harm can be addressed.
What kind of harm? Recent experience provides a couple of examples.
A 2016 ProPublica investigation showed that software called COMPAS, which courts and parole boards were using to predict whether a defendant was likely to commit another crime, was biased against African-Americans. And, just last month, Reuters reported that Amazon had scrapped an automated hiring tool that was biased against women.
In both cases, the problem lay in how humans trained the software. Because the tech industry is male-dominated, Amazon’s system saw far more male résumés than female ones and learned that men were preferable.
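The mechanism is easy to reproduce in miniature. The sketch below is purely illustrative (the résumé snippets and the simple word-count scoring rule are invented for this example, not Amazon’s actual system): a scorer built from past hiring decisions that skewed male ends up penalizing any word, such as “women’s,” that shows up mainly on rejected résumés.

```python
from collections import Counter

# Toy historical data: past hiring outcomes that skewed male.
# (Invented examples for illustration -- not real résumés or Amazon's data.)
hired = ["captain chess club", "executed projects", "captain debate team"]
rejected = ["women's chess club captain", "women's soccer team"]

def word_weights(hired, rejected):
    """Weight each word by how much more often it appears on hired résumés."""
    h = Counter(w for doc in hired for w in doc.split())
    r = Counter(w for doc in rejected for w in doc.split())
    return {w: h[w] - r[w] for w in set(h) | set(r)}

weights = word_weights(hired, rejected)

def score(resume):
    """Sum the learned weights of a résumé's words; unseen words score zero."""
    return sum(weights.get(w, 0) for w in resume.split())

# "women's" never appears in the hired set, so it drags the score down --
# the model has learned a gender proxy, not job qualifications.
print(score("captain chess club"))
print(score("women's chess club captain"))
```

Nothing in the code mentions gender explicitly; the bias arrives entirely through the skewed training examples, which is why fixing such systems means fixing the data, not just the algorithm.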
Despite such prominent slip-ups, it’s important to remain a technology optimist and focus on real problems, not imagined ones. Take the often-predicted jobs crisis, for example. Robert Atkinson, president of the Information Technology and Innovation Foundation, notes that technology-driven job churn has been unusually low in the past decade, a symptom of slow productivity growth.
That same slow productivity growth has kept incomes stagnant. Breakthroughs in artificial intelligence, Atkinson argues, may be what the U.S. needs to achieve faster economic growth.
Meanwhile, those of us attending the National Press Foundation program caught glimpses of how the technology may be used in the near future.
NASA has trained computers to spot wildfires in satellite images before they’d be seen by a human. At UnitedHealth Group’s Optum unit, AI can help physicians diagnose atrial fibrillation, diabetes and opiate dependency.
This isn’t dystopian technology that’s destined to destroy jobs and subjugate humans to machines. It’s a life-improving, even life-saving, advance, and with the proper policy choices, the U.S. can lead the world in its development.