
Is The Fear Of Intelligent Machines Justified?

Professional Go player Lee Sedol, of South Korea, was defeated in the Google DeepMind Challenge Match against Google's artificial intelligence program, AlphaGo, in Seoul on March 15.
Lee Jin-man

It's in the news everywhere, with near-apocalyptic hubris: Google's DeepMind machine beat the world champion of the game Go with a score of 4-1.

Or, according to Britain's Independent newspaper: "Google's Go-playing computer has definitely beaten the best human in the world, finishing a pioneering match at 4-1." The "best human in the world," South Korean professional Go player Lee Sedol, is actually ranked No. 5 in the world. Clearly a stellar Go player — but not the world's best.

Mind you, there are several kinds of rankings for Go, and they don't always agree. In fact, there is much confusion once you start looking. The ranking that puts Sedol at No. 5 is based on the WHR (Whole-History Rating) algorithm. But these are details. The fact is that a machine beat a master Go player 4-1 in a much-publicized event. As AI (artificial intelligence) expert Gary Marcus wrote in a recent essay, "DeepMind made major progress, but the Go journey is still not over...The real question is whether the technology developed there can be taken out of the game world and into the real world."

In other words, can machine game-playing prowess be applied to real-world challenges?

In their Jan. 28 Nature article about AlphaGo, Google scientists state in the abstract that the machine "defeated the human European Go champion by 5 games to 0...a feat previously thought to be at least a decade away." That professional Go player was Fan Hui, three-time European champion, currently ranked No. 507 according to this ranking. Going from beating Hui to beating Sedol is indeed very impressive.

Google's DeepMind program combines two machine-learning components: "value networks," which evaluate board positions, and "policy networks," which select moves. (Both are deep neural networks, programs that emulate simplified neuronal activity and can learn patterns and behaviors from data.) Oversimplifying, one piece of the program analyzes possible moves while the other chooses the optimal move for a given situation, based on a statistical analysis of the best possibilities. The closing lines in the Nature article are important: "AlphaGo has finally reached a professional level in Go, providing hope that human-level performance can now be achieved in other seemingly intractable artificial intelligence domains."
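The division of labor described above can be sketched in a few lines of toy code. Everything here is a hypothetical stand-in, not AlphaGo's actual implementation: the real system uses deep convolutional networks trained on expert games plus Monte Carlo tree search, while this sketch uses a made-up game where a "position" is a tuple of numbers and the network functions are trivial heuristics. The point is only the structure: a policy component proposes and weights moves, a value component scores the resulting positions, and the two are combined to pick a move.

```python
# Toy sketch of the policy/value split (assumed example, not AlphaGo's code).

def legal_moves(position):
    # Toy game: a position is a tuple of ints; a "move" appends 1, 2, or 3.
    return [1, 2, 3]

def policy_network(position):
    # Stand-in "policy network": a prior probability for each legal move.
    # Here the prior is uniform; a trained network would weight moves unevenly.
    moves = legal_moves(position)
    return {m: 1.0 / len(moves) for m in moves}

def value_network(position):
    # Stand-in "value network": a single number estimating how good a
    # position is for the player to move. Here, just the sum of the tuple.
    return sum(position)

def choose_move(position):
    # Combine the two components: score each candidate move by its prior
    # probability times the estimated value of the position it leads to,
    # then pick the highest-scoring move.
    priors = policy_network(position)
    def score(move):
        return priors[move] * value_network(position + (move,))
    return max(priors, key=score)
```

With a uniform prior, `choose_move` simply picks the move leading to the highest-valued position; a non-uniform policy would bias the choice toward moves the policy considers promising, which is how the real system prunes Go's enormous search space.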

To be useful in the real world, where rules are often not rigid and surprise events continually throw wrenches into efforts to rationalize behavior (be it human, political or economic), intelligent programs need a sort of plasticity and adaptability not easily transferable from a more focused game-playing platform. Although Google's DeepMind success moves progress in AI to a whole new level, the jump from game-playing to intelligence that mirrors anything close to human intelligence functioning in a complex world is still a huge one. To many, that's a very good thing.

Oxford University philosopher Nick Bostrom has been cautioning us about the dangers of a super-intelligence out in the world. And billionaire Elon Musk, physicists Stephen Hawking and Martin Rees, Bostrom himself and, more interestingly, Demis Hassabis, Shane Legg and Mustafa Suleyman, the co-founders of DeepMind, have signed an open letter in which they "recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do."

It remains to be seen if Musk's idea of empowering as many people as possible to have access to AI will work as a sort of deterrence policy against AI domination (somewhat like the nuclear deterrence policy against global destruction) — or if, given that intelligent machines could, in principle at least, network to become a more unified autonomous entity, the nightmare is inescapable. AI is no nuclear bomb. Fortunately, even with the amazing steps of DeepMind in the game of Go, we can still sleep in peace for the foreseeable future while we find safeguards that will protect us from our own inventions.

Marcelo Gleiser is a theoretical physicist and cosmologist, and professor of natural philosophy, physics and astronomy at Dartmouth College. He is the co-founder of 13.7, a prolific author of papers and essays, and an active promoter of science to the general public. His latest book is The Island of Knowledge: The Limits of Science and the Search for Meaning. You can keep up with Marcelo on Facebook and Twitter: @mgleiser.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

