A simple question asked by Alan Turing opened doors to endless possibilities: “Can machines think?” Fast forward 70 years, and I am asking a machine the size of a hockey puck how to make a French omelette. Artificial Intelligence (AI) has become the practical answer to Turing’s question. If machines can think, they can do much of what humans can, and that idea was vividly confirmed when computers started defeating humans at their own games. So let’s discover more about two of the most mind-blowing AI algorithms and take a look at their stories.
Back in 1997, a computer by the name of Deep Blue made history. The IBM machine became the first ever to beat a reigning world chess champion, Garry Kasparov, in a match under standard tournament conditions. Deep Blue relied on brute-force search, an approach that went on to become a global phenomenon.
The algorithm itself is straightforward and works exactly as the name suggests: it uses sheer computing power to examine every possible scenario and then picks the one with the best outcome. In chess, the numbers involved are staggering; the full game tree is estimated to contain on the order of 10 to the power 120 possible games. The first version of Deep Blue lost its 1996 match against Kasparov, but the 1997 version was much more powerful and faster. It could evaluate around 200 million chess positions every second! And the results showed.
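As a toy illustration (not Deep Blue’s actual chess code), the “explore every possible scenario and pick the best outcome” idea can be sketched on a much simpler game: a take-away game where players alternately remove 1, 2 or 3 stones and whoever takes the last stone wins. The brute-force search recursively tries every continuation of the game:

```python
from functools import lru_cache

# Toy take-away game: players alternate removing 1, 2 or 3 stones;
# whoever takes the last stone wins.  Brute force explores every
# possible continuation of the game.

@lru_cache(maxsize=None)
def can_win(stones):
    # The player to move can force a win if SOME move leaves the
    # opponent in a position from which they cannot win.
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    # Try every legal move and pick one with the best outcome.
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # every move loses; play anything
```

Run on a pile of 6 stones, the search discovers that taking 2 (leaving a multiple of 4 for the opponent) forces a win. For chess the same idea applies, but the tree is astronomically larger, which is why Deep Blue needed specialised hardware.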
Today, brute-force search is used in myriad other fields, from sports analytics and equity markets to basic computing tasks like sorting and searching. But the algorithm has its downside: the number of scenarios explodes as problems grow, so it cannot handle truly massive search spaces.
Artificial neural networks
A similar situation presented itself in 2016, when AlphaGo defeated Lee Sedol at a game of Go, one of the most ancient Chinese board games. AlphaGo is a computer program made by DeepMind, a subsidiary of Google. The board on which the game is played looks comparable to a chessboard. But don’t let that fool you; the game is even trickier. A Go board is a 19x19 grid, and the number of possible board positions is on the order of 10 to the power 170, far more than the roughly 10 to the power 80 atoms in the observable universe.
So DeepMind turned to a different kind of algorithm: artificial neural networks (ANNs). Does the name sound familiar? Yes, “neural networks” as in the animal nervous system. Here, artificial neurons, loosely modelled on biological ones, are connected into a network. AlphaGo’s training worked in phases. First, the computer was given a large collection of recorded human matches to study. Second, it played an enormous number of games against itself; here, the algorithm learns through its own mistakes, a technique called reinforcement learning. Finally, the program played against human professionals, and for machines like this the learning never really ends: they can keep improving.
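To make the idea of an artificial neuron concrete, here is a minimal sketch (written purely for illustration, not DeepMind’s code) of a single neuron: a weighted sum of inputs squashed through an activation function, with the weights nudged by gradient descent until the neuron learns the logical AND function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the truth table of logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

w = rng.normal(size=2)   # "synaptic" weights
b = 0.0                  # bias

def sigmoid(z):
    # Activation function: squashes the weighted sum into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    out = sigmoid(X @ w + b)              # forward pass
    delta = (out - y) * out * (1 - out)   # error times sigmoid gradient
    w -= 5.0 * (X.T @ delta) / len(X)     # gradient-descent updates
    b -= 5.0 * delta.mean()

predictions = np.round(sigmoid(X @ w + b))
```

After training, the neuron’s rounded outputs match the AND truth table. Real networks like AlphaGo’s stack millions of such units into many layers, but the core mechanism of weighted sums, activations and gradient updates is the same.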
ANNs are general-purpose software: we can use them to make real-time, dynamic decisions in various walks of life.
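The self-play idea described above, a program learning through its own mistakes, can be sketched with simple tabular Q-learning on the toy 1-2-3 take-away game (a deliberate simplification for illustration; AlphaGo actually combines deep neural networks with Monte Carlo tree search):

```python
import random
from collections import defaultdict

random.seed(0)
Q = defaultdict(float)          # Q[(stones, move)] -> learned value

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones, epsilon):
    # Mostly play the best-known move, occasionally explore.
    moves = legal_moves(stones)
    if random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])

for _ in range(20_000):                  # self-play episodes
    stones, history = 10, []
    while stones > 0:
        move = choose(stones, epsilon=0.2)
        history.append((stones, move))
        stones -= move
    # Whoever took the last stone wins (+1); the opponent's moves
    # get -1.  Propagate the result back, alternating the sign.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += 0.1 * (reward - Q[(state, move)])
        reward = -reward
```

After thousands of games against itself, the table reflects real strategy: taking the last three stones from a pile of 3 is rated close to +1, while every move from a pile of 4 is rated negative, i.e. the program has discovered on its own that 4 stones is a losing position.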
The rate at which AI is reaching into every aspect of our lives is astonishing. But how far can it be integrated into our lives?
About the Author: Ishwari Garge is a Second Year Computer Engineering student at RAIT.