It was hard to tell whether hope or fear was the predominant sentiment about the future of artificial intelligence, according to a panel discussing the state of the field at the World Economic Forum in Davos, Switzerland, Wednesday.
A.I. systems are rapidly becoming more capable, agreed the panel, which included Ya-Qin Zhang, president of Chinese search engine company Baidu Inc., and Matthew Grob, the chief technology officer at Qualcomm Inc. These systems can learn from analyzing large data sets, and they can increasingly discern human emotions by monitoring facial expressions and natural language.
A.I. researchers Andrew Moore, the dean of the School of Computer Science at Carnegie Mellon University, and Stuart Russell, a professor of computer science at the University of California, Berkeley, were also on the panel. They concurred that, as a result, A.I. is likely to vastly improve human lives in the coming decade.
But the researchers and executives also voiced concern about possible downsides, ranging from economic displacement to computers escaping human control, with potentially dire consequences.
What’s next for artificial intelligence?
Using A.I. to improve search engine results has the potential to transform search from a $1 trillion industry today to a $10 trillion industry, Russell said. Zhang said Baidu is already beginning to apply artificial intelligence to insurance and loan underwriting, where he sees real possibilities for such systems to better assess risk. “In insurance and consumer loans, A.I. and machine learning can help you identify all the patterns to help you reduce risk,” Zhang said.
Zhang also said he worries that as machines get smarter, people are in some ways becoming less smart. We already remember less, because we rely on search engines and information stored on our mobile devices; soon we might forget how to drive, thanks to autonomous driving systems. That is fine, Zhang said, so long as it makes us more efficient by freeing our brains for more meaningful tasks. But he worried that we might squander this new mindspace, and he fretted about what would happen if some of these systems one day failed. Would people still be able to function?
At the same time, a growing number of professions are likely to be increasingly squeezed by A.I., including many white-collar jobs once thought immune to automation, such as law and even medicine, Moore said. He predicted there would be far fewer lawyers and doctors in the future, while there might be more jobs for teachers of young children or nurses, who could use artificial intelligence to aid their work while not being displaced by software.
Russell said that machines with general intelligence might not be that far off, and that the world ought to devote serious thought to how to govern such machines. Elon Musk and theoretical physicist Stephen Hawking have supported the idea in the past, as co-signatories, alongside Russell, of an open letter entitled "Research Priorities for Robust and Beneficial Artificial Intelligence."
He said one could not predict the speed at which A.I. will develop. “You can’t use Moore’s Law to predict how quickly this will happen,” he said, adding that it might take just a few breakthroughs to create general intelligence — and that breakthroughs were by their very nature unpredictable. “The possible risks from building systems more intelligent than us are not immediate but the need to think about how to keep such systems under control and make sure the decisions they make are beneficial to us, that needs to start happening now,” Russell said.