3 times AI beat human champions at their own game


For years Hollywood has entertained audiences with fantastical tales of man vs machine, playing off the most apocalyptic stereotypes associated with Artificial Intelligence. In reality, these contests have historically taken the form of chess matches and quiz shows.

Here are three of the most famous examples of AI beating humans at their own game.

Kasparov vs Deep Blue

In perhaps the most famous example of an AI facing a human champion, Kasparov vs Deep Blue was the culmination of decades of research.

Because it rewards problem-solving and strategic thinking, chess was long viewed by computer scientists as the litmus test of AI, a test IBM set about passing in 1985 with the development of Deep Blue.

That same year, Garry Kasparov became the youngest ever chess world champion. A legend of the game, Kasparov successfully defended his title five times and held the title of Grandmaster, the highest rank a player can attain.

Who better to challenge?


Initially, things didn’t look too hopeful for IBM, with Kasparov easily defeating an early iteration of the project (Deep Thought) in 1989. The scientists went back to the drawing board, and the result was Deep Blue: its software, written in C, could evaluate 100 million potential positions per second.

In an interview with Scientific American, Murray Campbell of IBM described how the program worked:

"We had software that ran on the supercomputer to carry out part of a chess computation and then hand off the more complex parts of a move to the accelerator, which would then calculate [possible moves and outcomes]. The supercomputer would take those values and eventually decide what route to take."
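The division of labour Campbell describes fed into a classic brute-force game-tree search. The sketch below is a toy illustration, not code from Deep Blue itself: it shows minimax search with alpha-beta pruning, the standard technique for discarding branches that cannot affect the final decision. The tree, scores, and function names are all invented for the example.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Return the minimax value of `node`, pruning branches that
    cannot influence the final decision."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)  # static evaluation at the leaves
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:  # opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, children, value))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Toy game tree: leaf scores stand in for a chess engine's
# evaluation of each resulting position.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda n: tree.get(n, []), lambda n: scores[n])
```

Deep Blue's real advantage was running a search like this on custom hardware at enormous depth and speed; the pruning logic, though, is the same in spirit.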

In 1996, Deep Blue became the first machine to win a game against a reigning world champion but went on to lose the series 4-2 overall. As was the case after the defeat of Deep Thought, a new version, dubbed Deeper Blue, was created with double the computational power of its predecessor.

This time Deeper Blue was able to do what many thought was decades away from fruition, defeating Kasparov by a single point in an event the Wall Street Journal described as “one giant leap backward for mankind.”

In the initial aftermath, Kasparov played down the intelligence of Deep Blue, stating it was no more intelligent than an alarm clock: "Losing to a $10m (£7.6m) alarm clock did not make me feel any better."

Watson plays Jeopardy

IBM’s Watson is at the forefront of cognitive computing and can process the equivalent of 1 million books' worth of information per second. Leveraging Machine Learning and Natural Language Processing, it can interpret data and infer meaning based on context and semantics, understanding idioms in the same way a human can.

To introduce the public to the computational power of Watson, IBM approached the producers of the hit game show Jeopardy about facing off against two of the show’s best-known champions: Ken Jennings and Brad Rutter. Jennings holds the record for the longest winning streak in the show's history.

Jeopardy is a quiz show where contestants must phrase their answers in the form of questions, based on clues covering a variety of topics. Unlike chess, the game requires linguistic reasoning and cannot be beaten with Deep Blue’s brute-force style of processing power, making it the perfect test for Watson.

IBM was initially concerned that showrunners would look to exploit Watson's cognitive limitations and turn the match into a Turing Test. To alleviate these doubts, a third party was enlisted to randomly select previously unused questions, and a two-game series was set for 2011.

Watson took an early lead in the first game, reeling off a series of quick-fire answers in a robotic and expressionless tone, buzzing in far quicker than its visibly frustrated human opponents.

Despite the early success, Watson did display some cognitive limitations. In one example, it repeated an incorrect answer its opponent had given just seconds before, and it later named Toronto as the answer in a “US Cities” category.

David Ferrucci of IBM Research later explained how Watson was able to make such seemingly fundamental mistakes:

“The category names on Jeopardy are tricky and Watson was trained to downgrade their significance. The way the language was parsed provided an advantage for the humans. "What US city" wasn't in the question. If it had been, Watson would have given US cities much more weight as it searched for the answer.”
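Ferrucci's point, that evidence from the category name was deliberately under-weighted relative to evidence from the clue itself, can be illustrated with a toy scoring rule. All names and numbers below are invented for illustration and are not Watson's actual model:

```python
def score(candidate, clue_evidence, category_evidence, category_weight):
    """Blend evidence from the clue with (weighted) evidence
    from the category name."""
    return clue_evidence[candidate] + category_weight * category_evidence[candidate]

# Made-up confidences: the clue text slightly favours Toronto,
# while the "US Cities" category strongly favours Chicago.
clue_evidence = {"Toronto": 0.70, "Chicago": 0.60}
category_evidence = {"Toronto": 0.0, "Chicago": 1.0}

# With the category downgraded, the non-US city wins the ranking...
low = max(clue_evidence,
          key=lambda c: score(c, clue_evidence, category_evidence, 0.05))

# ...but with the category weighted normally, the US city wins.
high = max(clue_evidence,
           key=lambda c: score(c, clue_evidence, category_evidence, 0.50))
```

Under the low weight the top candidate is Toronto; under the higher weight it flips to Chicago, which is the kind of effect Ferrucci describes.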

Despite this setback, Watson was able to take a commanding lead in game two, winning with a score of $77,147, more than $50,000 ahead of Jennings in second place.

The victory was a great PR stunt for IBM and introduced the general public to a new type of AI, one that can understand (and respond to) humans.

AlphaGo vs Lee Sedol

Go is a Chinese strategy board game whose origins stretch back thousands of years. Played with 181 black and 180 white stones, it has more possible board configurations than there are atoms in the observable universe!

Due to the sheer number of potential outcomes and the complexity of the game, Go has been described as the “Holy Grail” of AI research by Demis Hassabis of DeepMind.

AlphaGo (developed by DeepMind) combines an advanced tree search with deep neural networks. The neural network started with no knowledge of Go and employed a form of reinforcement learning, improving as it played. In the three days after its conception, AlphaGo played five million games against itself, gradually improving after each game.
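The self-play loop described above can be shown in miniature. The sketch below uses tabular Q-learning on the simple game of Nim rather than AlphaGo's deep networks and tree search, but it demonstrates the same principle: a player that starts with no knowledge and improves purely by playing games against itself.

```python
import random

random.seed(0)
Q = {}  # (stones_remaining, action) -> estimated value for the player to move

def choose(stones, eps):
    """Epsilon-greedy move selection: take 1 or 2 stones."""
    actions = [a for a in (1, 2) if a <= stones]
    if random.random() < eps:
        return random.choice(actions)  # occasionally explore
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))

# Self-play: one value table plays both sides of a 5-stone game of
# Nim (take 1 or 2 stones per turn; taking the last stone wins).
for episode in range(5000):
    stones, history = 5, []
    while stones > 0:
        a = choose(stones, eps=0.2)
        history.append((stones, a))
        stones -= a
    # The player who took the last stone won; credit alternates
    # sign as we walk backwards through the move history.
    reward = 1.0
    for state, action in reversed(history):
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + 0.1 * (reward - old)
        reward = -reward
```

After training, the table comes to prefer the perfect-play move from five stones (take two, leaving the opponent a losing position of three), discovered without ever being told what good play looks like.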

In 2015, to coincide with the release of a scientific paper on deep neural networks, AlphaGo was pitted against European Go champion Fan Hui, winning the match 5-0. Building on this high-profile win, a five-game series was arranged against Lee Sedol, one of the most decorated Go players in history.

Despite trailing for large portions of the first game, AlphaGo took the opener after seizing the lead in the final 20 minutes, forcing Sedol to resign.

Game Two resulted in a more emphatic victory for the program. In the now-famous “Move 37”, AlphaGo flummoxed Sedol with an unusual move that the commentary team initially thought was a mistake. In reality, the play was an example of the power of DeepMind’s neural network: AlphaGo had calculated that a human would have played the move only one time in ten thousand.

This trend continued in Game Three before Sedol was finally able to hit back in the fourth, inflicting AlphaGo’s only recorded defeat. The series was rounded out in the fifth and final game with AlphaGo winning 4-1 overall.

In the aftermath, both Sedol and the commentary team praised AlphaGo’s genius, noting moves that required genuine creativity.

Lead researcher David Silver offered some insight into how AlphaGo was able to craft moves seemingly of its own accord:

"AlphaGo learned to discover new strategies for itself, by playing millions of games between its neural networks, against themselves, and gradually improving."

From the brute-force processing power of Deep Blue to the seemingly genuine creativity of AlphaGo, AI has come a long way since the defeat of Garry Kasparov in 1997.

While the most common uses of AI in real-world applications range from Chatbots to fraud detection programs, don’t be too surprised if you see an AI face off against a human in a more complex game in the near future!

If you want to pursue a career in AI and work on the next breakthrough technology like DeepMind's AlphaGo, upload your resume here and arrange a chat with one of our expert consultants.
