DeepMind’s AlphaGo Kicks Butt. So What…now?
You may have heard by now that Google’s “AlphaGo” program (an artificially intelligent program specifically designed to play the Chinese game of Go) kicked the ever-loving crud out of any and all opponents it faced. I know nothing about the game itself other than it is “simple to play, difficult to master.”
According to the articles I’ve read, AlphaGo played the game in ways humans never had before. Because of this, some people are questioning whether humans were ever actually “good” at the game in the first place. Other folks wonder what this news means for humans and AI in general. Still others are already hunkering down in their hidey-holes, waiting for the end of the world to come, led by the AI-fueled monsters we humans are creating.
From my perspective, this whole “AlphaGo” scenario is important, but nothing to get all wigged out about. As one person observed, “Humans programmed it.” That’s the crux. *WE* did this (“WE” meaning the collective human race). Did we make something that is “smarter” than we are? In a way, yes. Perhaps more importantly, though, we created something that showed us a different way from what we had been doing for centuries. I believe THAT is what AI will do for us in many, many aspects of life.
The question is whether or not we humans can or will learn from the thing we created. According to interviews, one of the first emotions the humans felt when they lost games to AlphaGo was gut-wrenching shock and disappointment. People were physically ill when they learned that a “computer” had beaten them. I suppose that stands to reason, though not because of the mechanical/programming aspect of the loss, but because those are the emotions most people feel when they are bested and removed from their place atop the leaderboards of their respective sports. Remember: this was *NOT* a machine that beat the humans – it was a series of programmers who taught the machine to “think.” It was still humans that beat the humans.
So what…now? Now, we learn. We learn how the program beat the humans. What paths did the computer take that humans had not taken – and WHY had humans not taken those paths before? How much of what happened will come back to the statement, “I just never thought to try that. I just never knew I could play that way?” And, then, how will we answer the question, “WHY NOT?” Why hadn’t people played the way the computer did? What made the moves so out-of-the-box that they confounded human players?
I have long argued that AI will never best humans in certain types of games because humans are highly unpredictable (or can be, anyway). In this situation, was the computer unpredictable – or had humans come to predict only what they had always done? That is, did the humans simply see what they had been programmed to see? When it comes to AI, remember: these machines can only do what they are told to do – even if that means learning how to play by ALL the rules and not just the ones we limit ourselves to. So what now? Learn, play, lose. Repeat until you win.