Google’s AlphaGo and the Imperfect Human: Can a Machine (AI) Learn Humility?

AlphaGo beats Korean Go champion 4-1. If Google’s artificial intelligence wins, does that mean humans lose?

Should we be worried about AlphaGo?

In mid-March, Google’s Go-playing whiz kid, AlphaGo, beat longtime Go champion Lee Sedol in a five-game showdown in downtown Seoul, in which Sedol managed to beat AlphaGo only once. Lee Sedol was quoted as saying, “I am shocked.” He has since asked for a rematch.

The Korea Times calls AlphaGo “Al sabum,” which translates as “Master Al.” Demis Hassabis, CEO and co-founder of DeepMind, the artificial intelligence lab that created AlphaGo, had this to say about the match:

“While the match has been widely billed as ‘man vs. machine,’ AlphaGo is really a human achievement. Lee Sedol and the AlphaGo team both pushed each other toward new ideas, opportunities and solutions—and in the long run that’s something we all stand to benefit from.”

Eric Schmidt, chairman of Alphabet and global ambassador of all things Google, says, “the winner here, no matter what happens, is humanity.”


Machines have been beating humans at their own games for a long time now, starting with chess in the 1980s. In 1997, the chess supercomputer Deep Blue drew worldwide attention when it evaluated up to 200 million positions per second and beat world chess champion Garry Kasparov.

And although machines have since beaten humans at chess, checkers, Othello, Scrabble, and Jeopardy!, no machine had been able to beat a Go champion…until now.

“Go has always been the holy grail of AI research,” says Hassabis. Winning at this intuitive game was long considered impossible for computers, and this historic match confirms that what many have hoped for, and what many have feared, is indeed possible after all.

Go is a game played with black and white stones on a grid, and it has been played since the days of ancient China. A player typically has a choice of about 200 moves, compared with roughly 20 in chess. According to DeepMind’s team, there are more possible board positions in Go than atoms in the universe.
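
To get a rough feel for why that matters, you can compound the branching factors over the length of a game. The snippet below is only a back-of-the-envelope illustration: the branching factors (20 and 200) come from the comparison above, while the assumed game lengths (about 80 moves for chess, about 150 for Go) are ballpark figures chosen for the sake of the calculation.

```python
# Back-of-the-envelope game-tree estimate: (branching factor) ** (game length).
# Branching factors are the article's figures; game lengths are assumptions.

CHESS_BRANCHING, CHESS_MOVES = 20, 80
GO_BRANCHING, GO_MOVES = 200, 150

chess_tree = CHESS_BRANCHING ** CHESS_MOVES   # roughly 10^104
go_tree = GO_BRANCHING ** GO_MOVES            # roughly 10^345

print(f"chess game tree: about 10^{len(str(chess_tree)) - 1}")
print(f"go game tree:    about 10^{len(str(go_tree)) - 1}")

# The observable universe is usually estimated to hold around 10^80 atoms,
# which is why brute-force search alone was never going to master Go.
```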

When a human plays against another human, it’s possible to read the opponent’s body language: the breathing, the twitching, the eye movements. You can also feel their energy.

Computers have no emotions, no energy, no body movements.

But they are beginning to develop intuition.

Hassabis and his team built AlphaGo on deep neural networks combined with reinforcement learning: the system learns as it goes, playing game after game against itself, and that self-play is how it reaches a much higher level.
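
A minimal sketch of what “learning by playing against itself” means is below. This is a toy illustration, not DeepMind’s system: AlphaGo pairs deep policy and value networks with Monte Carlo tree search, whereas this sketch uses a simple lookup table and the pen-and-paper game of Nim. The core loop is the point: the program’s only teacher is the record of its own games.

```python
import random
from collections import defaultdict

# Toy self-play reinforcement learning on Nim: players alternate taking
# 1-3 sticks from a pile, and whoever takes the last stick wins.

values = defaultdict(float)        # estimated value of each (sticks_left, action)
EPSILON, LEARNING_RATE = 0.1, 0.2

def choose_action(sticks):
    actions = [a for a in (1, 2, 3) if a <= sticks]
    if random.random() < EPSILON:                  # occasionally explore
        return random.choice(actions)
    return max(actions, key=lambda a: values[(sticks, a)])   # otherwise exploit

def self_play_game(start_sticks=21):
    """Play one game against itself; return each side's moves and the winner."""
    history = {0: [], 1: []}
    sticks, player, winner = start_sticks, 0, None
    while sticks > 0:
        action = choose_action(sticks)
        history[player].append((sticks, action))
        sticks -= action
        if sticks == 0:
            winner = player                        # took the last stick
        player = 1 - player
    return history, winner

for _ in range(50_000):
    history, winner = self_play_game()
    for player, moves in history.items():
        reward = 1.0 if player == winner else -1.0
        for state_action in moves:                 # nudge values toward the outcome
            values[state_action] += LEARNING_RATE * (reward - values[state_action])

# After enough self-play the agent tends to favor moves that leave the
# opponent a multiple of four sticks, the known winning strategy,
# without anyone ever telling it the rule.
best = max((1, 2, 3), key=lambda a: values[(6, a)])
print(f"With 6 sticks left, the trained agent takes {best}.")
```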

This doesn’t mean that machines will be replacing people anytime soon. There are some challenges ahead when it comes to man-made intuition.

As Professor Dae Ryun Chang acknowledges:

“…what defines us may be humility in our imperfections and our resilience — qualities that still need development if AI is truly going to be a threat to managers.”

But according to Hassabis, this winning match is just the tip of the iceberg.
