Reality-based computer go
From my position in the armchair, meaning you should take my comments with several grains of salt, I’m still thinking that for computer go there is useful work to be done in the area of goal structures, rich intermediate abstractions, and reasoning based on these. Perhaps this is considered old-fashioned today, but I’m biased, I suppose, by my experience as a mid-level human go player; I just don’t see how else programs will get to 1-dan and beyond.
Lately, though, I’ve been thinking about a different (complementary?) approach, which I call “reality-based computer go”. This is based on a couple of research directions in other fields. One is CG. For instance, to make the latest “Matrix” sequel, they developed a new CG approach which involves “painting” or “molding” actual photographic content onto computer-generated models (see related article). (This is not really new per se—people have been “painting” clothes on models for a long time now, for instance.) The point is that compared to previous approaches, where they tried to model everything down to the hairs on somebody’s chin, now they get the hairs “for free” just by distorting a picture of a real actor’s face (with real hair) to map onto the mathematical model of the face. Voilà—much more realistic-looking results at less cost (and modeling the hairs is expensive).
A similar direction can be seen in music synthesizers (of which I know virtually nothing). It seems that the latest approach is to take actual recorded sounds and transform and blend them, instead of trying to create sounds totally from scratch mathematically. Same idea.
I’ve got a passing interest in computational linguistics, and it seems to me that the same model should be applicable there as well. Of course, people have been doing corpus-based CL for years. Statistical and corpus-based approaches do somewhat presage the “paint reality onto the model” idea, but in practice they are still basically limited to post-processing model-based output (in the CG analogy, “smoothing”), to creating word- or phrase-level dictionaries, or to dealing with local problems such as disambiguation. We have “example-based MT”, but this has not yet reached the stage of being generally applicable. It seems attractive to me to consider “painting” linguistic content onto mathematically-generated language models.
In the go area, and I realize this is abstract in the extreme, we should consider “painting” low-level go content (individual moves and sequences) onto a higher-level model-based framework. (I suppose you could make the case that this even mimics a possible human mental structure involved in playing go—a higher-level “thought”-based process and a lower-level “pattern”-based process.) Leaving aside long-term research topics like what the higher-level framework is (well, obviously it’s the goal structures and rich intermediate abstractions I mentioned above), the low-level go content to be painted onto that framework, just as in the CG case, is derived from “reality”—in this case, game collections. In the CG case, in order to be able to morph and stretch and snap the content onto the model, the photographic/reality images need to be “marked”—for example, with points giving the location of Keanu Reeves’s chin. So in the go case, we also need to develop libraries of reality-derived content with the appropriate mapping indicators that show how that content is fitted onto the model.
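To make the “marking” idea slightly less abstract, here is one possible shape such a library entry could take—a minimal Python sketch, where every field name and anchor label is invented for illustration, not any existing format:

```python
# Hypothetical shape for "marked" reality content: a move sequence mined
# from game records, plus anchor points saying how it snaps onto slots in
# a higher-level model. All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class MarkedSequence:
    # Moves as (color, (x, y)) pairs, e.g. ("B", (3, 4)).
    moves: list
    # Mapping from a model-level slot name to a board point, analogous
    # to the marked points on a photographed face in the CG case.
    anchors: dict = field(default_factory=dict)

frag = MarkedSequence(
    moves=[("B", (3, 4)), ("W", (3, 5)), ("B", (4, 5))],
    anchors={"group_head": (3, 4), "direction_of_play": (9, 4)},
)
```

The anchors are what would let an engine “stretch and snap” the fragment onto a different local position, the way the CG pipeline maps a marked photograph onto a face model.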
I don’t claim to be fully up on current research based on professional game collections (in CL terms, “corpora”), but I’d like to do a research project, or work with someone on one, which attempts a broad-based analysis of professional games in terms of low-level move sequences. To do that, we need a “vocabulary” for types of moves. Then the “grammar” (allusion to CL intended) is a series of rules or empirical patterns tying together those vocabulary items. Now, instead of arbitrarily imposing our own vocabulary (“hane”, “tobi”), the initial phase of the analysis should be based on well-known cluster analysis techniques which will result in identifying the vocabulary based on co-occurrence patterns. (A fascinating by-product would be if this process actually identified new groupings or types of moves not identified as such by humans yet.) One type of grammar that could then be developed from this vocabulary is an n-gram grammar; this type of approach has already found wide application in computational linguistics. A computer go engine based on this type of thinking would be more focused on sequences of moves which make sense together. At a minimum, such a low-level vocabulary and grammar could be effective in move generation, or choosing or optimizing possible moves found by “traditional” techniques.
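As a sketch of what the vocabulary-induction phase might look like, here is a toy k-means clustering over hypothetical move features. The (dx, dy) encoding (each move reduced to its offset from the previous move) and the choice of k are assumptions purely for illustration—real features would be far richer:

```python
# Toy vocabulary induction: cluster moves by local-context features so
# that the "vocabulary" falls out of the data rather than being imposed.
# Features and k are hypothetical; this is plain k-means over toy data.
import random

def kmeans(vectors, k, iters=20):
    """Plain k-means over lists of floats; returns a cluster label per vector."""
    random.seed(0)
    centers = random.sample(vectors, k)
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, v in enumerate(vectors):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])),
            )
        # Update step: each center becomes the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Hypothetical move features: (dx, dy) offset from the previous move.
# A one-point jump might be (0, 2), a hane-like contact move (1, 1), etc.
moves = [[0, 2], [0, 2], [1, 1], [1, 1], [0, 2], [1, 0], [1, 0], [1, 1]]
labels = kmeans(moves, k=3)
```

Identical local contexts land in the same cluster, and each cluster becomes one induced “word” of the vocabulary—possibly, as noted above, one with no existing human name.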
A trivial example of this is where Black pushes and White extends. A more sophisticated example might be the case where Black commonly makes a peep on one side of a one-point jump before jumping himself on the other side.
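The push-and-extend case is exactly what a bigram grammar would pick up. A toy sketch—the vocabulary labels and the three miniature “game records” below are invented for illustration, not drawn from real data:

```python
# Toy bigram grammar over a hand-labeled move vocabulary. With a real
# corpus, these counts would come from professional game collections.
from collections import Counter, defaultdict

games = [
    ["push", "extend", "push", "extend", "hane"],
    ["peep", "connect", "push", "extend"],
    ["push", "extend", "peep", "connect"],
]

# Count how often move type b follows move type a.
bigrams = defaultdict(Counter)
for game in games:
    for a, b in zip(game, game[1:]):
        bigrams[a][b] += 1

def most_likely_reply(move):
    """Most frequent follow-up to a move type in the toy corpus."""
    return bigrams[move].most_common(1)[0][0]

print(most_likely_reply("push"))  # "extend" in this toy corpus
```

A move generator consulting such a table would naturally propose sequences that “make sense together”; longer n-grams, as in CL, would capture patterns like the peep-before-jump example above.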