
I love playing board games.

That’s why I was so shocked by a piece of news that came out of London on January 27 this year: AlphaGo, a program created by Google subsidiary DeepMind, defeated the European Go champion, five games to nothing.

Maybe you think that’s no big deal. After all, it’s almost 20 years since IBM’s Deep Blue beat Kasparov at chess in 1997. But chess and Go are two very different games.

Chess is about logic; Go involves imagination and intuition.

Go also has quite a different order of complexity. In chess, the number of possible moves at any given turn is around 20; in Go, that number is more like 200.

Demis Hassabis, the co-founder and CEO of DeepMind, likes to point out that in Go “the possible number of configurations of the board is greater than the number of atoms in the universe.”
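A rough calculation shows why. The number of positions a full search would have to consider is roughly the number of moves available each turn, raised to the power of the length of the game. Taking the figures above, and commonly quoted typical game lengths of about 80 moves for chess and 150 for Go (rough assumptions, purely for illustration), chess comes out at roughly 20^80, or about 10^104 positions, while Go comes out at roughly 200^150, or about 10^345. The observable universe, for comparison, is usually estimated to contain around 10^80 atoms.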

Because of this complexity gap, AI has to tackle chess and Go in different ways. With chess, the so-called “brute force” approach is good enough: the program can grind its way through a huge number of candidate lines in a deep search tree. But because of the extraordinarily large number of options available in Go, this approach simply won’t work. The AI has to simplify the process.

In the case of AlphaGo, the program uses a “policy network” and a “value network” to cut the search tree down to a manageable size. As a first step, the policy network suggests a small number of promising moves, based on the professional game records the program has absorbed via deep learning. The program then searches a few dozen moves ahead along those candidate lines, and the value network evaluates the resulting positions, estimating the chance of winning from each one, so the best option can be chosen without playing every game out to the end.
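For readers who like to see ideas in code, here is a minimal sketch, in Python, of that two-step pruning idea. Every name in it (choose_move, policy_net, value_net, legal_moves, apply_move) is a hypothetical placeholder for illustration; this is not DeepMind’s implementation.

```python
# Illustrative sketch only, not AlphaGo's actual code. It shows the basic
# division of labour: a policy network narrows the moves worth considering,
# and a value network scores the positions those moves lead to. All of the
# callables passed in are hypothetical stand-ins supplied by the caller.

def choose_move(board, legal_moves, apply_move, policy_net, value_net, top_k=5):
    """Return the candidate move whose resulting position scores highest."""
    # 1. Breadth reduction: the policy network assigns each legal move a prior
    #    probability (learned from records of professional games), and only
    #    the top_k most promising candidates are kept.
    priors = policy_net(board)  # assumed to return a dict {move: probability}
    candidates = sorted(legal_moves(board),
                        key=lambda move: priors.get(move, 0.0),
                        reverse=True)[:top_k]

    # 2. Depth reduction: rather than playing every line out to the end of the
    #    game, the value network estimates the probability of winning from the
    #    position each candidate produces; the best-scoring move is chosen.
    scored = [(value_net(apply_move(board, move)), move) for move in candidates]
    return max(scored, key=lambda pair: pair[0])[1]
```

The real program embeds these two networks in a Monte Carlo tree search and looks many moves ahead rather than just one, but the division of labour is the same: the policy network keeps the tree narrow, and the value network keeps it shallow.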

This process of simplification is very human-like. In fact, the program behaves just like a professional player (admittedly a professional player who never gets tired or makes mistakes!).

In March this year, DeepMind’s AlphaGo is going up against Lee Se-Dol, the world champion. Personally, I hope that Lee Se-Dol will win, at least on this occasion. Nonetheless, AI is progressing so fast that eventually it will crush all its human opponents, world champions or not.

So what? Why should anyone outside the relatively narrow confines of the Go world even care?

Because a program that can play Go is also capable of managing a company…your company, perhaps.

Consider the similarities between Go and management. In Go, the aim is to win territory (i.e., win market share) from your opponent (i.e., a business competitor) until you control over 50% of the board (i.e., dominate your sector). AlphaGo follows the same thought process as a CEO.

CEOs build a “policy framework” based on management theories and case studies. Then, when presented with a specific problem, they use a “value framework” to weigh up the merits and demerits of several possible scenarios before picking the most viable one.

Management and Go, however, differ in three major respects.

1. In Go, you have just one opponent. In business, you have to compete against multiple opponents at the same time.

2. You play Go with inert stones. In business, you are dealing with sentient people who have different abilities and personalities.

3. In Go, external conditions are irrelevant. A storm outside the window has zero impact on the game going on indoors. In business, external shocks—financial crises, currency moves, recessions, whatever—can affect your business profoundly.

If—and it’s a big if—AI can evolve to incorporate the above three factors into its scenario planning, then it will be perfectly capable of performing all the duties of a human CEO. At the very least, I expect to see AI taking its place at the boardroom table in an advisory capacity in a few years’ time. After all, look at what happened in the world of games. After the chess and shogi champions were defeated by AI, they adopted AI as a partner to learn from and practice with. In the future, human CEOs will have to adopt AI as a partner, learning from it and consulting with it on decision making and strategy.
