My Master’s thesis explored the links between intuition and intelligence. I found that measures of intuition were closely related to measures of intelligence: people who tended to rely on quick, unconscious decision making also tended to be more intelligent. When poking at the implications, I wrote:
The intuition-intelligence link also holds promise in the advancement of artificial intelligence (AI). Herbert Simon (see Frantz, 2003) has used AI as a framework for understanding intuition. However, this also works in the other direction. A greater understanding of intuition’s role in human intelligence can be translated to improved artificial intelligence. For example, Deep Blue, a chess-playing AI, was said to be incapable of intuition, while the computer’s opponent in a famous 1997 chess match, Garry Kasparov, was known for his intuitive style of play (IBM, 1997). While it is difficult to describe a computer’s decision-making process as conscious or unconscious, the AI’s method does resemble analytic thought rather than intuitive thought as defined here. Deep Blue searched through all possible chess moves, step-by-step, in order to determine the best one. Kasparov, in contrast, had already intuitively narrowed down the choices to a small number of moves before consciously determining the most intelligent one. Considering that, according to IBM, Deep Blue could consider 200 000 000 chess positions per second, and Kasparov could only consider 3 positions per second, Kasparov’s unconscious intuitive processing must have been quite extensive in order to even compete with Deep Blue. Deep Blue’s lack of intuition did not seem to be an obstacle in that match (the AI won), but perhaps an approximation of human intuition would lead to even greater, more efficient intelligence in machines.
That was back in 2007, just a decade after Deep Blue beat Kasparov at chess. Here we are another decade later, and Google’s AlphaGo has beaten a champion at the more complex game of Go.
I’m no expert on machine learning, but my understanding is that AlphaGo does not play in the same way as Deep Blue, which brute-forced its way through 200 000 000 positions per second. That’s the equivalent of conscious deliberation: considering every possibility, then choosing the best one. Intuition, however, relies on non-conscious calculations. Most possibilities have already been ruled out by the time an intuitive decision enters consciousness, which is why intuition can seem like magic to the conscious minds experiencing it.
Intuition seems closer to how AlphaGo works. By studying millions of human Go moves, then playing against itself to become better than human (creepy), it learns patterns. When playing a game, instead of flipping through every possible move, it has already narrowed down the possibilities based on its vast, learning-fueled “unconscious” mind. AI has been improved by making it more human.
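To make the contrast concrete, here’s a minimal toy sketch in Python. It is not AlphaGo’s or Deep Blue’s actual code; the `score` and `policy` functions are made-up placeholders standing in for position evaluation and a learned policy network. The point is only the division of labour: one approach evaluates every move, the other lets a learned “intuition” narrow the candidates before any deliberate evaluation happens.

```python
# Toy contrast: "deliberate" exhaustive choice vs. "intuitive" policy-guided choice.
# All functions here are placeholders for illustration, not real engine code.

import random

MOVES = range(361)  # hypothetical move space (e.g., a 19x19 Go board)

def score(move):
    """Stand-in for carefully evaluating the position after a move."""
    return random.random()

def policy(move):
    """Stand-in for a learned policy's prior over moves.
    In AlphaGo, this kind of prior comes from training on human games and self-play."""
    return random.random()

def deliberate_choice(moves):
    """Deep Blue style: evaluate every legal move, then pick the best."""
    return max(moves, key=score)

def intuitive_choice(moves, top_k=5):
    """AlphaGo style: let the learned policy narrow the field first,
    then spend deliberate evaluation only on that short list."""
    candidates = sorted(moves, key=policy, reverse=True)[:top_k]
    return max(candidates, key=score)

print(deliberate_choice(MOVES))  # considers all 361 moves
print(intuitive_choice(MOVES))   # looks hard at only the top 5
```

The real system combines its learned networks with a tree search rather than a single pass like this, but the narrowing-before-deliberating structure is the part that resembles intuition.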
Which is to say: hah! I was right! I called this ten years ago! Pay me a million dollars, Google.
P.S. This Wired article also rather beautifully expresses the match in terms of human/machine symmetry.