What’s all this fuss about the AI AlphaGo’s recent victory against the masters?
While it seemed as if AI had hit a dead end as much as a decade ago, AlphaGo's recent victory has changed the picture. If you're like many of us sci-fi enthusiasts and have always wanted an AI best friend, that win has brought us much closer than you may have thought possible.
AI is Finally Moving Forward
We're not surprised if you haven't been following recent developments in AI all that closely because, for the most part, it has seemed like nothing exciting has happened in quite a long time. Sci-fi dreams of computer-powered best friends aside, AI for the general public has come to mean reasonably responsive, well-programmed computer assistance rather than independently thinking machines. Concepts like 'smart' chatbots somehow pull us further from the Star Trek or Heinleinian dream of fully sentient, intuitive computers, while many products and services that claim to integrate AI seem to be little more than a fast way to analyze large amounts of data. In fact, the last time most of us heard something hopeful about AI was when Deep Blue beat the world chess champion. But whatever came of that AI? It certainly hasn't used its incredible logical power to take over the world or start making friends, so why should we even care?
Not All AIs are Equal
The answer lies in the fact that there are many forms of artificial intelligence, and most of them are limited to the tasks they were made to perform. That's what makes AlphaGo so special: while it was designed, named, and trained to play Go against the masters, its potential reaches well beyond the realm of board games, unlike most of its AI contemporaries.
While practical applications for purpose-built AI are growing, the tradition of honing your AI programming skills on classic strategy games has existed since the 1950s, when a computer was first programmed to play, and win, a game of tic-tac-toe. Since then, a large variety of games and custom-built AIs have been tested against each other, to the great entertainment of experts in the field and curious nerds like us who care about that sort of thing. The real difference is not what they're programmed for but how they are programmed in the first place, and this is also what most profoundly distinguishes AlphaGo from its older-generation relative, the chess champion Deep Blue.
Chess is a Closed Game
You may not know this, but there is a standard way to program an AI to play a board game, known as a search tree, in which the computer analyzes all the pieces and spaces in a game and determines which move on its turn is most likely to result in victory. For games with a limited total number of moves and responses, you don't even have to spend much time programming good judgment; all you need is a complete understanding of the game. With that in mind, consider how long people have been playing, analyzing, and writing down their analysis of chess.
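To make the search-tree idea concrete, here is a minimal sketch in Python for tic-tac-toe, the 1950s milestone mentioned earlier. This is an illustration of the general technique, not code from Deep Blue or AlphaGo: the program recursively tries every legal move for both players and scores the final outcome.

```python
# Minimal game-tree search (minimax) for tic-tac-toe.
# Boards are 9-character strings; 'X' tries to maximize the score,
# 'O' tries to minimize it: +1 = X wins, 0 = draw, -1 = O wins.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Search the whole game tree below `board` and return its value for X."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full: draw
    scores = [minimax(board[:i] + player + board[i + 1:],
                      'O' if player == 'X' else 'X') for i in moves]
    return max(scores) if player == 'X' else min(scores)

print(minimax(' ' * 9, 'X'))  # -> 0: perfect play from both sides is a draw
```

Tic-tac-toe is small enough that this brute-force search visits every reachable position; the point of the rest of this section is that chess strains this approach and Go breaks it outright.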
The limited, highly specialized pieces and the 8x8 board have been studied in depth for centuries, and here's the key insight from all that study: the number of possible piece arrangements on the board is finite, each arrangement allows a finite number of moves, and each of those moves can be judged a good or bad idea. In principle, you could contain every chess position and its best move in a single enormous database. In practice the full game is still far too large to store, but large pieces of it literally have been: opening books record the best-studied lines, and endgame tablebases hold the perfect move for every position with only a handful of pieces left. In other words, machines were always destined to master chess, because so much of what there is to know about it can simply be stored and referenced at will.
So how did Deep Blue win back in the 90s? You can breathe easy knowing that the famous AI did not rely on a lookup database; instead it used a massively parallel system running a complex tree search. At each point in the game it would analyze the board, evaluating millions of candidate positions per second to find the moves most likely to bring it closer to a win. Defeating the world chess champion was a huge victory for Deep Blue in more ways than delivering checkmate: it showed that a machine's board assessment could be faster and sharper than a human strategy expert's. But it was not what most of us sci-fi enthusiasts would think of as the beginning of independent computer thought. The only thing Deep Blue could do was play chess, and because its programming was built for chess alone, it never needed to get smarter.
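Deep Blue's style of search can be caricatured in a few lines: instead of searching all the way to the end of the game, cut the search off at a fixed depth and fall back on a hand-tuned evaluation function. The toy "game" below is invented purely for illustration (moves just add 1, 2, or 3 to a counter, and the evaluation arbitrarily favors multiples of 4); nothing here is real chess logic.

```python
def legal_moves(n):
    # Hypothetical move generator for a made-up counting game.
    return [n + 1, n + 2, n + 3]

def evaluate(n):
    # Stand-in for the hand-tuned evaluation function that chess masters
    # helped shape in a real engine: here, multiples of 4 are "good".
    return 1 if n % 4 == 0 else -1

def search(n, depth, maximizing):
    """Depth-limited minimax: look `depth` moves ahead, then trust evaluate()."""
    if depth == 0:
        return evaluate(n)
    children = [search(m, depth - 1, not maximizing) for m in legal_moves(n)]
    return max(children) if maximizing else min(children)

print(search(3, 1, True))  # -> 1: from 3, the maximizer can reach 4
print(search(0, 2, True))  # -> -1: here the opponent can always dodge a multiple of 4
```

Deep Blue's real hardware examined on the order of a hundred million chess positions per second, but the shape of the algorithm is the same: go as deep as time allows, then let the evaluation function judge whatever lies at the frontier.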
Go is Not a Closed Game
People have been playing and analyzing Go for thousands of years, and even with computer analysis in hand, nobody can exhaust it the way chess has been charted. Go is technically a finite game too, but the scale is absurd: an effectively unlimited supply of stones for each player, a 19x19 board with 361 points, and on the order of 10^170 legal positions, vastly more than the number of atoms in the observable universe. There are simply too many possibilities, board arrangements, and good or bad placement choices for any reasonable purpose-built program to handle. You can write a program that plays Go, but until AlphaGo, computer opponents only ever reached an intermediate level of capability, and trying to fill a database with all the possible board arrangements and moves might well set your servers on fire.
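A quick back-of-envelope calculation shows the gap in scale. Using the commonly cited ballpark figures of roughly 35 legal moves over roughly 80 plies for a chess game, and roughly 250 moves over roughly 150 plies for Go (rough estimates, not exact counts), the two game trees compare like this:

```python
def order_of_magnitude(n):
    """Exponent e such that n is roughly 10**e."""
    return len(str(n)) - 1

chess_tree = 35 ** 80    # typical branching factor ** typical game length
go_tree = 250 ** 150

print(order_of_magnitude(chess_tree))  # -> 123, i.e. ~10^123 possible chess games
print(order_of_magnitude(go_tree))     # -> 359, i.e. ~10^359 possible Go games
```

Both numbers are beyond any conceivable storage, which is why real engines prune and approximate; but the Go tree is larger by more than two hundred orders of magnitude, which is why pruning alone was never going to be enough.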
AlphaGo Learns to Play
It is for this reason that many people, Go masters included, were certain a computer could never beat the game's human champions, and it's exactly why DeepMind decided to try. So why has AlphaGo succeeded where other AIs were judged to not even have a chance? DeepMind tried something new in the world of games versus AIs: machine learning and neural networks instead of custom-built search trees alone. AlphaGo doesn't just judge the board; it learns from its mistakes. Like a Go expert who has been playing since early childhood, AlphaGo was run through game after game against itself, learning from every one of them how to play better and improve its strategy, and it never gets bored, frustrated, or tired during practice.
AlphaGo Teaches the Masters
Two years ago, DeepMind felt that AlphaGo was ready to face expert human opponents and invited the European Go champion, Mr. Fan Hui, to a closed-door five-game test. To their surprise and delight, it won every single game, becoming the first computer program to defeat a professional Go player. They then set it against the legendary winner of 18 world titles, Mr. Lee Sedol, in Seoul, a match AlphaGo won 4–1, earning it a 9-dan professional ranking, the highest certification available. If that wasn't awesome enough, during these games AlphaGo dazzled the audience and its opponents with creative winning moves, one of which effectively overturned hundreds of years of accumulated Go wisdom.
Deep Blue Was Columbus Discovering America And AlphaGo Is The Moon Landing
Any computer scientist or programmer will admit that Deep Blue achieved something incredible when it beat Kasparov. But the amazing feat was in Deep Blue's computational power. It did not learn to play chess; it was programmed to search through millions of possible positions every second and pick the best move it could find. Once Deep Blue had won and proven its strength, it was packed away, and it has not been seen since. Everyone knew its only purpose was to play chess, and its programming could not be applied to much of anything else. AlphaGo, on the other hand, took that idea of computational power and added something like human reasoning or intuition, a combination that makes it applicable to countless purposes.
Computer Scientists Versus Chess Masters
Another unique aspect of how AlphaGo was created versus how Deep Blue was created is whose expertise the computer scientists relied on. With Deep Blue, they leaned heavily on chess experts, professionals, and masters to encode as much chess knowledge into the program as possible. And the thing is, even after Deep Blue had strutted its stuff, it did not change much for the world of chess; chess players did not really learn anything from it. With AlphaGo, however, the computer scientists simply used lots and lots of games from a myriad of players at every level of Go knowledge and experience. And unlike when Deep Blue was unveiled, when AlphaGo was first shown to the world, Go players paid attention. They saw that AlphaGo was playing in innovative ways, and it has since taught them to think and play more creatively.
AlphaGo’s Intuitive Factor
It is easy to say that AlphaGo has intuition where Deep Blue had none; it is much harder to explain where that intuition comes from. To put it simply, AlphaGo built on Deep Blue's search-and-optimize idea. The DeepMind team first trained AlphaGo on some 150,000 games played by strong human players, teaching it to predict which move was most probable in any given position. To take AlphaGo to the next level, though, DeepMind used neural networks and machine learning, so that through self-play and play against humans it could slowly make millions of tiny adjustments to itself, arriving at something as close to intuition as a machine has ever had.
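Those "millions of tiny adjustments" can be caricatured in miniature. The toy policy below is an invented example, nowhere near AlphaGo's deep convolutional networks over full 19x19 positions: it chooses between just two moves, and after each simulated game it nudges the weight of the move it played up or down depending on whether it won.

```python
import math
import random

random.seed(42)  # fixed seed so the toy run is reproducible

weights = [0.0, 0.0]  # one learnable weight per candidate move

def move_probabilities():
    """Softmax: convert the weights into a probability for each move."""
    exps = [math.exp(w) for w in weights]
    total = sum(exps)
    return [e / total for e in exps]

def play_and_learn(rounds=5000, lr=0.01):
    for _ in range(rounds):
        probs = move_probabilities()
        move = 0 if random.random() < probs[0] else 1
        # Pretend move 0 wins 70% of its games and move 1 only 30%.
        won = random.random() < (0.7 if move == 0 else 0.3)
        # The tiny adjustment: reinforce the move if it won, weaken it if not.
        weights[move] += lr if won else -lr

play_and_learn()
```

After a few thousand simulated games the policy strongly prefers move 0, even though nobody ever told it which move was better; it discovered that from wins and losses alone, which is the essence of what AlphaGo's self-play did at an incomparably larger scale.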
And it is this intuitive faculty of recognizing good patterns and learning from them that will have a much deeper impact on artificial intelligence. In the world of art, this type of AI can expose a neural network to a specific artistic style, show it an image, and have the network render that image in the style it was shown. In the world of language, the same neural networks are being used to recognize natural speech. In the world of games, they are employed to improve video game experiences. And the list of future possibilities for neural networks, machine learning, and artificial intelligence to bring something like intuition to computers grows by the day: think healthcare, smartphone assistants, and robotics. In fact, the UK's National Health Service has already signed a deal with DeepMind.
It Was Not Supposed To Be This Easy
Go is a game that has been around for some 3,000 years and is widely regarded as the most challenging strategy game in existence. Individuals, especially in countries like South Korea and China, are sent to private schools specifically to learn to play the game at an expert level, and it takes years of playing for several hours every day to master it. In other words, even though Go has simple rules, it is not a simple game to excel at. Because of that complexity, and because of how long it had taken computer scientists to create a machine that could win at chess, experts estimated that a machine able to play Go at a professional level was still about a decade away.
Surprise! DeepMind managed to create a machine that could master the game without hand-crafted strategies and without being coached by a professional Go player. After an initial diet of human games, AlphaGo mainly played against itself and learned from that self-play. At its core, it learned the way a human learns: by looking at the board, evaluating the options, making moves, and learning from mistakes. It just did all of that much faster than any human can.
This is extremely exciting because, at its core, it means computer scientists have had the tools they needed for years. Neural networks have been known and discussed since the middle of the last century; all it really took was getting creative with them and applying them in new ways. AlphaGo beating the world's best Go player suggests that AI's potential reaches far beyond board games: given the right training, it can learn tasks we long assumed were uniquely human, and often accomplish them in a fraction of the time humans need.
You're probably wondering what all this means. The good news is that we're much closer to the dream of an AI best friend than most of us would have dared imagine a few years ago. Let it sink in for a moment: AlphaGo learned the most complex, intuition- and creativity-driven logic game known to humankind, and it didn't do so through a finite database or search trees alone. It learned from practice and experience, just like we do, and its ability to produce amazing new solutions to ancient puzzles suggests a realm of digital creativity never before fathomed.
AlphaGo is not like the game-playing AIs that came before it. It is the future of intelligent, intuitive machines, one that DeepMind plans to turn toward much more than board games. From practical applications to that friend you've been hoping for, AlphaGo is sure to be the first of a new generation of self-learning, intuitive AIs that go above and beyond the limited calculating capacities of their older siblings and contemporaries. If you love AI like I do, keep your eyes open for new practical applications of very real artificial intelligence popping up in places you may not have imagined. The AI winter is over.