Cracking Go
So I was playing with Facebook today, and I ran across a link to this IEEE Spectrum article entitled "Cracking Go". Much more complex than chess, the game of go provides an extremely difficult challenge for computers due to the vast number of possible moves at any time and the difficulty of evaluating board positions. Essentially, the author argues that, thanks to continually increasing computing power and improved search algorithms, computers will in the near future be able to beat the world's greatest masters at the game of go.
I don't disagree with him on that point. However, throughout the article he continually sings the praises of "brute force" techniques, touting them as superior to AI techniques that mimic human reasoning because they are simpler to program (leading to fewer bugs) and because they scale smoothly with computing power.
I don't disagree with him on those points either. Certainly, with sufficient processing power and clever pruning of decision trees and caching of information, brute force techniques can and will beat top-ranking go masters someday.
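To make concrete what "brute force with clever pruning" means, here is a minimal, purely illustrative sketch of minimax search with alpha-beta pruning over a toy game tree. The tree and its leaf scores are invented for the example; a real go engine would add position caching (transposition tables), move ordering, and a far more sophisticated evaluation function.

```python
# Toy game tree: each node is either a tuple of child nodes or a leaf score.
# This is an illustrative abstraction, not a real go (or chess) engine.
TREE = (
    ((3, 5), (6, 9)),
    ((1, 2), (0, -1)),
)

def alphabeta(node, maximizing=True, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: branches that provably cannot
    affect the final result are cut off without being explored."""
    if not isinstance(node, tuple):      # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # prune the remaining siblings
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

print(alphabeta(TREE))  # -> 5
```

The pruning is what makes the approach tractable: whole subtrees are skipped once it is clear the opponent would never allow them, which is exactly the "finesse" that lets brute force scale with hardware.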
But is this interesting?
I would say no. It would certainly be an impressive accomplishment from an engineering standpoint, but I don't think I would call it artificial intelligence. Or even a significant step towards artificial intelligence. Brute force, even brute force with finesse, isn't intelligent.
Man vs Machine
I'd like to mention two other topics addressing this point. First, another IEEE Spectrum article, "Psyching out Computer Chess Players" (which is shorter, and maybe a more interesting read for non-computer geeks than the previous article), described the current state of computer chess programs. Essentially, it says that chess grandmasters are still able to hold their own against today's most advanced computer opponents, even though computer hardware and software have both improved dramatically since Deep Blue's historic defeat of Kasparov in 1997.
The point the article brings up is that human players are good at long-term strategy, while computers are good at rapid calculation. Thus, anticomputer strategies often hinge on setting up many moves ahead for critical positions which are beyond the computer's prediction horizon. To me, the ability to do that is a clear sign of great intelligence on the part of the human players, and absolute lack of intelligence on the part of the computer. I really don't think that we can say computers have become much more "intelligent" as long as they rely on primarily brute-force methods.
Or can we? What does intelligence mean in the first place? Why do we care whether a computer can beat a human at chess? I always thought that what we cared about was how intelligent the computer was, and assumed that chess was a good test of intelligence. After all, it requires analysis, understanding, strategy, creativity, flexibility, and guts. In my mind, however, a brute-force approach has the effect of removing all of those interesting elements from the problem! Instead of creating machines which can reason, understand, doubt, guess, be creative, take risks, and plan ahead, we are instead creating soulless systems that are ruthlessly efficient at evaluating hundreds of millions of board positions per second. That's not creating intelligence... it's sidestepping the problem.
My Latest Addiction
I bought a Rubik's Cube the other day, and it said on the package, "If you can solve it without looking at a book, your IQ is at least 130." What does that mean, solve it without looking at a book? Does that mean to solve it through on-the-spot intuitive analysis? Clearly, looking at a book would give you all the algorithms you need to solve the cube "without thinking". But... what if you developed those same algorithms yourself, and then used them? Where does the "intelligence" happen? What if you really are using algorithms subconsciously, and you don't realize it? I think that happens often. I tend to find myself making the same sequences of moves over and over again to solve the same kinds of problems, without ever having stopped to think "ok, this is an algorithm for solving this situation".
Maybe part of the "intelligence" is in the process of creative synthesis, where random twists of the cube are distilled into general patterns. Certainly that could be done in a brute-force way, but not very efficiently... the efficiency comes from recognizing patterns and arriving at the algorithms through educated guesses and learning from mistakes. A billion monkeys with a billion cubes might by chance arrive at a complete set of algorithms necessary for solving a cube, but given the choice I'd rather hire three real smart people to do it.
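That "billion monkeys" approach can actually be sketched in a few lines: a brute-force breadth-first search that grinds through move sequences until it stumbles on one with the desired effect. The puzzle below is a deliberately tiny stand-in for the cube (four pieces, two made-up moves), so the names and moves are illustrative, not real cube notation.

```python
from collections import deque

# Toy stand-in for the cube: the "puzzle" is just an ordering of four
# pieces, and there are two primitive moves. Purely illustrative.
SOLVED = (0, 1, 2, 3)

def swap(state):    # exchange the first two pieces
    return (state[1], state[0]) + state[2:]

def rotate(state):  # cycle every piece one position to the left
    return state[1:] + state[:1]

MOVES = {"S": swap, "R": rotate}

def find_algorithm(target):
    """Brute-force breadth-first search for a move sequence that turns
    the solved state into `target` -- the 'billion monkeys' approach."""
    queue = deque([(SOLVED, "")])
    seen = {SOLVED}
    while queue:
        state, seq = queue.popleft()
        if state == target:
            return seq
        for name, move in MOVES.items():
            nxt = move(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, seq + name))
    return None

# Discover an "algorithm" that swaps only the last two pieces:
print(find_algorithm((0, 1, 3, 2)))
```

On a four-piece toy there are only 24 states, so the monkeys finish instantly; on a real cube, with on the order of 10^19 states, this is exactly the inefficiency the smart humans avoid by recognizing patterns instead of enumerating them.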
Another, and more essential, part of the "intelligence", I think, is the planning process. The problem definition. Who says we need algorithms in the first place? Seeing that need is the first important thing, and then you have to break the problem down into its component parts: algorithms for switching the corners, for twisting the corners, for interchanging the edges, etc. This top-level strategy is something that even a billion monkeys wouldn't come up with.
As for my personal cube technique, I'm teaching myself a new system for solving it. My old technique was to solve one side, then put the corners in place, then exchange the edge pieces last. It was pretty easy, and I could consistently solve the cube in under 5 minutes. Sometimes under 4. There wasn't much thinking involved, and I was limited mostly by my physical speed and the inefficiency of my algorithms. It was something that even one monkey (albeit a VERY well trained one) could do.
The new system I'm learning is a speed-cubing technique. Its strategy of building out from a corner rather than solving an entire face first makes a lot of sense. However, I have been solving one face of the cube since I was a little kid. The algorithms for doing that are so deeply ingrained in my mind that it's almost unconscious. This approach requires a different set of techniques. I find I'm much slower with the new system, but the process of solving the cube is much more interesting. It's less mechanical, and I have to use my intuition more, especially during the first stages. It's a real challenge to my mind, not in terms of memorizing 7-move sequences or anything, but actually in terms of pushing the limits of my creativity, my spatial visualization, my lookahead. I'm limited much less by physical speed now and mostly by cognitive load. I'm slowly but steadily improving, and I certainly don't envision any monkeys catching up with me on this task.
The Future
So, where is this leading? Well, regarding AI, I would have to side with those who are trying to develop thinking machines rather than brute-force systems. Granted, statistical and brute-force approaches are useful and effective for the right tasks. Perhaps it's a trade-off - the cold, efficient reliability of a pocket calculator (or a machine gun) versus the creativity, insight, ... and perhaps also the fallibility, of man.
On the other hand, maybe the ability of computers to do boring, uninteresting calculations at blazing speeds will provide a much-needed wake-up call to our world's educational systems. We will abandon much of our rote memorization in favor of challenging, stimulating problems that push the limits of our creativity, of our insight, of our intelligent reasoning. We will do this because we are forced to. Like Kasparov, we need to take every advantage of our uniquely human abilities to out-think the computers, which are catching up to us at a ferocious pace, nipping, as it were, at our cognitive heels.
We must adapt and evolve, or be made obsolete. Maybe that "How to Survive a Robot Uprising" book I keep under my pillow will turn out to be useful after all.
2 comments:
Hi Dylan,
I agree with you on the inferior nature of brute force. It severely limits the force of serendipity. Casual mistakes and outright errors can often lead to powerful creations that would never have been stimulated by brute force.
BTW, Boston has a greater appreciation for Matsuzaka after his performance in the world series. I think he has lots more to show us too. Lots of fun to beat those Rockies!
Penny
Let me start by admitting I didn't read the whole thing. I succumbed to the urge to skim.
At any rate, I will put forth as a conjecture that the question of what intelligence is or is not is an intrinsically human one, and that we can't divorce it from the way we, as humans, solve problems.
So is a computer brute-forcing go "intelligent" in a human sense? No. Is it intellectually stimulating to design or study such a system? No. Is it "sexy" (in the engineering sense)? No.
But it may turn out to be the most appropriate strategy for a computing engine. They may, in fact, become so good at playing go with this strategy that they will one day study us, and marvel at why we would waste years of study developing abstract theories of strategy and tactics for a problem with a finite solution set.
Of course, the idea of chess or go as the ultimate man vs. machine challenge is hugely biased. It's not a test of which is "better" or "smarter", it's ultimately a test of OUR ability to get machines to think the way we do. So in that sense, yeah, brute-force is entirely side-stepping the point of the test.
But that will likely be of little consolation to the go master who devotes 20 years of his life to the game, and STILL loses to 'bots on the 'net (okay, not yet, but someday).