Immortality or Extinction: A Review of the Discourse on Artificial Intelligence

Since the coining of the term “artificial intelligence” more than six decades ago, public interest in AI has waxed and waned. The community has endured several “AI winters,” during which optimism plummeted alongside funding. In recent years, progress has once again accelerated due to a confluence of factors that have made deep learning (a subset of machine learning that leverages the availability of big data to train deep neural networks) possible. But what do advances in deep learning mean for artificial general intelligence? And what will the creation of artificial intelligence mean for us mere mortals? This article aims to review contemporary predictions as well as the increasingly heated debate on the future of artificial intelligence.

Historical predictions of when artificial general intelligence (as opposed to “weak” or “narrow” AI, which focuses on specific tasks such as image or speech recognition) will be achieved have generally proven too sanguine. According to most experts, we have been a couple of decades away for quite a few decades now. Alan Turing, regarded by many as the father of computer science, predicted in 1950 that by the year 2000, computers would be able to fool human judges into believing they were human at least 30% of the time. But despite a few attempts by chatbots in recent years, computer programs have yet to credibly pass the Turing test. Nor was Turing’s prediction the most optimistic of its kind: a 1972 poll of 67 AI and computer science experts found that a third of the respondents expected human-level intelligence within 20 years, another third within 50 years, and the remaining third in more than 50 years.


While past progress has failed to live up to expectations, recent developments are cause for increased optimism. A case in point is the breakthrough program AlphaGo, developed by DeepMind, a London-based firm acquired by Google in 2014. Due to the complexity of Go (an ancient Chinese game of strategy played on a 19×19 grid), experts had predicted as late as 2015 that computer programs would not be capable of beating human players for at least another decade. Yet later that same year, the AlphaGo program won a series of five games against Europe’s reigning champion, Fan Hui, and has since gone on to win matches against some of the top-ranked human players in the world. This was made possible by a combination of deep learning (training on a database of 30 million moves played by expert humans in past games) and reinforcement learning, through which the program continued to improve by playing against itself.
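For readers curious what learning through self-play looks like in miniature, the sketch below is a purely illustrative toy, not DeepMind’s method: it applies simple tabular learning to the game of Nim (players alternately remove one to three sticks from a pile, and whoever takes the last stick wins) rather than Go, and the game choice, parameters, and function names are all assumptions made for this example. After many games against itself, the program should approximately discover the classic winning strategy of never leaving its opponent a multiple of four sticks.

# Minimal self-play reinforcement-learning sketch (illustrative only, not DeepMind's code):
# tabular Monte Carlo learning on the toy game of Nim.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)          # a player may take 1-3 sticks per turn
START_PILE = 11              # the player who takes the last stick wins
ALPHA, GAMMA, EPSILON = 0.5, 1.0, 0.1

Q = defaultdict(float)       # Q[(pile, action)] -> estimated value for the player to move

def legal_actions(pile):
    return [a for a in ACTIONS if a <= pile]

def choose(pile, explore=True):
    acts = legal_actions(pile)
    if explore and random.random() < EPSILON:
        return random.choice(acts)                 # occasional exploration
    return max(acts, key=lambda a: Q[(pile, a)])   # otherwise act greedily

def self_play_episode():
    """Play one game against itself, then update Q from the final outcome."""
    history = []                                   # (pile, action) pairs, one per move
    pile = START_PILE
    while pile > 0:
        action = choose(pile)
        history.append((pile, action))
        pile -= action
    reward = 1.0                                   # the player who made the last move won
    for pile, action in reversed(history):         # propagate the result back through the game
        Q[(pile, action)] += ALPHA * (reward - Q[(pile, action)])
        reward = -GAMMA * reward                   # flip perspective each ply (zero-sum game)

for _ in range(50_000):
    self_play_episode()

# The learned greedy policy should roughly match the known strategy:
# from a winning position, take (pile % 4) sticks to leave a multiple of four.
print({pile: choose(pile, explore=False) for pile in range(1, START_PILE + 1)})

AlphaGo’s self-play stage follows the same broad pattern of feeding each game’s outcome back into the moves that produced it, but with deep neural networks and tree search in place of a simple lookup table.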

Advancements in deep learning, resulting in part from increases in computing power and access to larger datasets, are valid reasons to expect that artificial general intelligence will become a reality in the not-too-distant future. According to data compiled by a researcher at the Machine Intelligence Research Institute, recent expert predictions, despite considerable variation, show evidence of clustering around the years 2040 to 2050. If these predictions are accurate, artificial general intelligence could be a reality within many of our lifetimes.


[Figure: recent expert predictions of when human-level AI will be achieved, clustering around 2040 to 2050. Source: https://aiimpacts.org/update-on-all-the-ai-predictions/]

Yet while much effort and expense has gone into the pursuit of AI, much less has been devoted to considering what comes after. A number of prominent figures have warned against underestimating the risks of artificial intelligence. Elon Musk, for one, has been outspoken on the topic, and recently posted a tweet cautioning the public to be more concerned about AI safety. In 2015, Musk co-founded OpenAI, a non-profit research company working to ensure the development of “safe” artificial intelligence. Stephen Hawking has also issued warnings about the risks of artificial intelligence, noting in an interview that “the development of full artificial intelligence could spell the end of the human race.”

But perhaps the most influential voice of caution belongs to Nick Bostrom, an Oxford philosopher and founder of the Future of Humanity Institute. His views are laid out in Superintelligence: Paths, Dangers, Strategies, published in 2014, in which he examines the probability of an intelligence explosion—that is, the development of intelligent machines which could then design even more intelligent machines in a never-ending loop—and argues that such an explosion could pose an existential threat to the human race. As Bostrom puts it, “before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”

On the other side of the fence are those who regard Bostrom and his ilk as “professional scare monger[s].” Some believe that the prospect of superintelligence is both too remote and too improbable to warrant consideration. As Andrew Ng, chief scientist at the Chinese search giant Baidu, describes it: “The reason I say that I don’t worry about AI turning evil is the same reason I don’t worry about overpopulation on Mars.” Others, like Google’s Eric Schmidt, are confident that human oversight will effectively curb any threats posed by superintelligence. Still others, most prominently Ray Kurzweil, are supremely optimistic about superintelligence, trusting that it will lead to advances in nanotechnology and biotechnology that will eliminate poverty and disease.

Regardless of which side you land on, the fact that superintelligence (if and when it is achieved) will have huge implications cannot be disputed. At the extremes, we will either be immortal or extinct. More probably, superintelligence could lead to pivotal advances in health and engineering, but even these have the potential to cause immense social upheaval by replacing a large proportion of human workers. Moreover, the question of whether the prospective health and economic benefits will trickle down to all members of society remains an open one. In light of the rapid advances towards AI in recent years, ensuring that AI results in a net benefit to humanity is of greater concern than ever before, and will be possible only through the careful use of foresight and long-term planning. But while research into AI safety and policy is growing, it remains fairly neglected—in 2017, only US$9 million was spent on AI safety globally, compared to almost 1,000 times as much on the development of AI.

Perhaps one of the reasons for this relative neglect is the ambiguity underlying any conception of AI. How do we craft policy for an outcome that is essentially unknowable? The U.S. (under the Obama administration) has argued that “the best way to build capacity for addressing the longer-term speculative risks is to attack the less extreme risks already seen today, such as current security, privacy, and safety risks, while investing in research on longer-term capabilities and how their challenges might be managed.” A 2016 report by the National Science and Technology Council’s Committee on Technology concluded that “long-term concerns about super-intelligent General AI should have little impact on current policy,” while a separate report proposed seven priorities for federally-funded AI research, including understanding and addressing the ethical, legal, and social implications of AI as well as ensuring the safety and security of AI systems.

While more research is certainly needed to craft specific policies addressing the risks of superintelligence, it seems clear that any strategy proposing to grapple with the advent of artificial general intelligence will need to address at least the following: methods of regulation that minimise the potential risks of AI without unduly hindering progress; social safety nets and development of the labour force in response to the increasing automation of existing jobs; measures to ensure that any gains from AI are shared broadly and do not contribute to drastic increases in inequality; and avenues for international cooperation and collaboration to avert a technological arms race.

At the risk of invoking a cliché, artificial intelligence is one arena in which it certainly seems better to be safe than sorry. Bostrom begins Superintelligence with a parable of sparrows who resolve to adopt a baby owl without first working out how it might be tamed or domesticated. He leaves the story unfinished, but it takes little imagination to see that it does not end well. Avoiding the fate of the unfortunate sparrows will require a concerted effort on the part of us humans to collectively invest the time and resources necessary to ensure the safe development of artificial general intelligence. As Kurt Andersen writes, “on this brink of a new technological epoch I say: Whoa—both Whoa, it looks really awesome, and Whoa, let’s be very careful.”


Samantha Fu is a Master of Public Administration candidate at the London School of Economics. Prior to pursuing graduate studies, she worked on the analytics team for the 2016 Clinton campaign and as an economic consultant in New York. She earned a Bachelor’s degree from McGill University in Montreal.