Since my recent post on the rapid replacement of human jobs by robots (Humans Need Not Apply), there have been further developments.
Eminent minds as diverse as Stephen Hawking and Elon Musk have raised concerns about the threat posed by Artificial Intelligence (AI).
The argument is that robots (and/or the software that drives machines of various kinds) will become so smart that they begin to learn independently of immediate human control. In this scenario AI ‘evolves’ far faster than humans can, and we are supplanted as a species by a superior capability.
Ray Kurzweil has been considering these issues for decades and believes that machines will match human capability by 2029. He is very confident about that date and has a strong track record of being right. He places ‘The Singularity’ - the moment when humans and machines effectively merge to become…something else - around 2045.
The issue of whether (or when) AI becomes a significant existential threat to mankind isn’t, in my view, a technical one. Rather, it forces us to consider questions of governance, ethics and regulation, and it presents a huge challenge to our collective moral compass. As always, technological capability can be harnessed for good or evil - fire warms us, but it can also burn down our houses. It depends on how we choose to deploy it. As Ray Kurzweil puts it:
“Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions.”
It is, therefore, the quality and effectiveness of our social progress that will determine whether technology poses more threat than opportunity to our collective wellbeing. Crucially, it is our mechanisms for controlling the development of technology that will make the difference.
Making good decisions about the balance between collective control (regulation via governments and institutions) and individual freedom and enterprise remains a deeply contested political and economic arena. We are already seeing big ethical dilemmas around technologies that might end ageing or give couples ‘designer babies’, while the ongoing debate about GM food continues to polarise opinion.
The reality, of course, is that organisations will be called upon to decide how to deploy emerging technologies well ahead of any global regulatory frameworks or legislation.
Will we continue to follow a strategy that replaces costly human beings in the workplace with technology as soon as that technology becomes available? Standard economic logic suggests we will. Yet while this strategy might make sense for an individual company, it could have disastrous consequences for the wider economy on which all companies depend.
We could see unemployment rates rocket and average incomes fall, significantly shrinking the market for the very goods and services that robots and associated technologies produce. The professional and managerial classes will be hit just as hard because, for the first time, robots will replace highly skilled workers as well as those in manual roles.
In Kurzweil’s Singularity world all of these problems (and most others, such as climate change and resource scarcity) are immediately and permanently solved by a new hybrid human-machine species whose intelligence, individually and collectively, evolves rapidly and far beyond the scale of our understanding. Ray isn’t worried about the future.
But I am. I am worried that we will fail to evolve our economics (as well as our social and political systems) fast enough to usher in a new technologically enabled super-future. I am worried that we will end up in regulatory arguments about specific technologies and miss the collective impact of the big picture.
When I voice my concerns in this area, one common reaction is that technology is a long, long way from evolving to the point where such concerns are relevant. “We have plenty of time,” say the critics.
It is then that I turn to the story of Lake Michigan. Imagine adding water to the lake at a rate that doubles every eighteen months, starting with a few drops: for decades the lake looks essentially empty, and then, in the final handful of doublings, it fills almost all at once. Exponential change is barely visible right up until the moment it is overwhelming.
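To see why “plenty of time” is a dangerous read of an exponential curve, here is a minimal sketch of that doubling arithmetic in Python. The lake volume (roughly 4,900 cubic kilometres) is a real-world approximation, but the one-millilitre starting amount and the 18-month doubling period are illustrative assumptions, not figures from the original story:

```python
# A rough sketch of the Lake Michigan doubling analogy.
# The volume (~4,900 km^3, i.e. 4.9e18 ml) is approximate; the
# starting millilitre and 18-month doubling period are assumptions.

LAKE_ML = 4.9e18      # approximate volume of Lake Michigan in millilitres
amount = 1.0          # start with about one millilitre in "year zero"
year = 0.0

while amount < LAKE_ML:
    pct = 100 * amount / LAKE_ML
    if pct >= 0.01:   # below this threshold the lake looks completely empty
        print(f"year {year:5.1f}: {pct:6.2f}% full")
    amount *= 2       # one doubling every 18 months
    year += 1.5

print(f"year {year:5.1f}: full, after roughly {int(year / 1.5)} doublings")
```

Run as written, the lake registers as essentially empty for the first eight decades, is still only about 3% full six doublings from the end, and then overflows at around year 94. That is the shape of the curve the “plenty of time” critics are standing on.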
We are on the threshold of some amazing technological transformations at a time when we also face some hugely complex global problems. Are we capable of using the former to solve the latter? It seems to me that our best chance of success lies in our capacity to connect a sense of what the future holds with a clear sense of our collective purpose and intention. Maybe then we’ll avoid being outsmarted.
I’d love to hear your thoughts on these issues.