The original text of this article is on the website of the Centre for Public Impact.
In 1964, a group of Nobel-prize-winning scientists, economists, and civil rights activists wrote an open letter to then-President Lyndon Johnson to warn of the impending “cybernation revolution”. They heralded an era of “almost unlimited productive capacity” brought about by “the combination of the computer and the autonomous self-regulating machine”.
They were wrong, at least in the short term. Their warning was fuelled by the excitement of the first period of rapid growth in artificial intelligence (AI) – at that time researchers were developing the world’s first machine learning algorithm and had created programs that could solve the analogy problems on standard IQ tests.
Now, more than 50 years later, we are again faced with the prospect of a productivity revolution driven by machine learning. Unlike earlier predictions, which rested on the theoretical potential of machine learning, today’s claims rest on observation: technologists have been applying AI techniques to practical problems of economic significance with impressive results. Self-driving cars, automatic fraud detection, and AI trading algorithms all either exist as prototypes or are already in use. History teaches caution in predictions about the future, but at this point only the careless would fail to at least plan for radical changes driven by AI.
Moreover, technological progress can happen surprisingly quickly. Just a year ago, many university lecturers were still using the difficult East Asian board game Go as a default example of a problem AI was not expected to solve any time soon. Then, in March 2016, the London AI firm Google DeepMind defeated the world champion Go player 4-1.
Ripple effect – some good, some bad
The implications of AI are likely to be broadly positive for the economy. AI offers ways to boost productivity radically – to accomplish, without any human intervention at all, tasks that people must work on today. That will free those people to do other work – potentially more interesting or fulfilling – or to take more leisure.
Nevertheless, a technological revolution driven by AI poses significant risks that society must manage carefully. We should expect economic and social transitions, as well as broader risks if AI research progresses faster than expected.
There is considerable debate about whether modern automation will primarily complement human labour – making people more productive – or substitute for it – competing with workers and driving down the price of human labour. If AI can substitute for human labour, prospects could be grim for workers who own no shares in the AI systems that replace them. It is important to ensure that the new economy distributes the returns from this innovation fairly.
In either case, there is the potential for significant job losses, which will be unevenly distributed and could devastate some communities. Job losses are far less damaging if people can easily find new work, but that is especially hard when a local economy depends heavily on a single employer, perhaps a call centre or an administration centre.
Some civil servants have pointed to administrative centres that could probably be almost completely automated; but because entire local economies depend on employment from those centres, with little else for miles around, closing them could mean abandoning hundreds of families. Resolving this difficulty – and unleashing public sector productivity – requires heavy investment in retraining and in infrastructure that lets people commute easily to jobs elsewhere.
AI researchers are trying to create software that can solve very general problems intelligently – and possibly more intelligently than any human. As leading AI researcher Professor Stuart Russell puts it, we ought to at least consider the possibility that they might succeed in creating a powerful general intelligence.
Powerful AI systems might exceed human capacities in many or even most domains. This is especially likely if artificial intelligence proves effective at aiding research into more advanced artificial intelligence. If these systems become extremely powerful, it becomes very important to ensure they behave in ways that benefit humanity. This is more difficult than it might seem: although we can set an AI system’s goals, it will often pursue them in slightly unpredictable ways.
That is fine when it means an AI making a chess move that no human expert would expect. But as the stakes rise, so does the risk that we have mis-specified the goals the AI is to achieve. As Professor Nick Bostrom discusses in his book Superintelligence, it is startlingly easy to believe you were very clear about what you wanted your AI to do, yet to get something disastrously wrong. If systems become extremely and generally powerful, this technical challenge of setting the goals for AI systems could become existentially important for humanity.