Hacker News

This doesn't seem to be the main reason why Go is harder than chess for computers. It was noted that even in 9x9 Go, with a comparable branching factor to Chess, traditional Go programs are still no stronger than on big boards. The main difficulty in Go is that board positions are much harder to evaluate: in chess, a reasonable evaluation function can cut the search depth significantly, whereas in Go no such function seems to exist.
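For concreteness, a chess-style evaluation function can be as crude as a weighted material count. A toy sketch (the piece values and board representation here are illustrative, not taken from any real engine):

```python
# Toy static evaluation: weighted material count.
# Uppercase = White pieces, lowercase = Black; score is from White's view.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# White has K+R+P against Black's K+N+P: up the exchange.
print(evaluate(["K", "R", "P", "k", "n", "p"]))  # → 2
```

Even a heuristic this simple lets a shallow search rank chess positions sensibly, which is exactly what seems to be missing in Go.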


>It was noted that even in 9x9 Go, with a comparable branching factor to Chess, traditional Go programs are still no stronger than on big boards.

Are they not? MoGo beat 9-dan pros on 9x9 in 2011: https://www.lri.fr/~teytaud/mogo.html


Well, I guess it was more true before the advent of Monte Carlo Tree Search. Even so, note that in the 2011 MoGoTW games it was playing blind Go (which helps the computer), and out of four games it won two against a 9p player and lost one to a 5p player. That is perhaps better than MoGo's performance on 19x19, but it still isn't very good: it doesn't seem much better than MoGo on 13x13, and it performs much worse than computer Chess despite a similar branching factor.


The branching factor is much larger: around 75 legal moves after the opening, while chess has at most around 30.

Fuego actually beat a pro in 2008 using MCTS.


The branching factor of 9x9 Go isn't 75. It could be 75 in the early game, but the average is somewhere between 40 and 50, versus about 35 in chess. State-space complexity is also considerably higher in Chess than in 9x9 Go.
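The difference those numbers make is easy to check with back-of-the-envelope arithmetic: a game tree has roughly b**d leaves, so even a modest increase in the branching factor b blows the tree up at the depths chess engines actually reach. Using the factors cited in this thread:

```python
# Rough game-tree size b**d at a 14-ply search, for the branching
# factors mentioned above (35 for chess, 40-50 average for 9x9 Go,
# 75 in the early game).
for b in (35, 45, 75):
    print(f"b={b}: {b**14:.2e} leaves")
```

Going from 35 to 45 multiplies the 14-ply tree by more than an order of magnitude, and 75 by several more.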

Not sure what you meant regarding MCTS; I never said anything about MCTS not being able to beat pros.


This evaluation function does exist, and it's better than the super-simple chess evaluation function.

See, a chess program needs to generate a lot of valid moves (see Deep Blue, which won largely because it had simple but extremely fast hardware move generators), evaluate them, and do a very deep search, up to 14 plies, over relatively few alternatives. Russian chess programmers were stronger in those days; they came up with AVL trees, for example. But hardware won.
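That deep-but-narrow style of search is essentially minimax with alpha-beta pruning on top of a static evaluation. A bare-bones sketch (the `moves`, `step`, and `evaluate` callbacks are placeholders standing in for a real engine's move generator and evaluation function):

```python
# Plain alpha-beta search: a deep search over few alternatives, cutting
# off branches the opponent would never allow. All game-specific logic
# is injected via the moves/step/evaluate callbacks (placeholders here).
def alphabeta(state, depth, alpha, beta, maximizing, moves, step, evaluate):
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in children:
            best = max(best, alphabeta(step(state, m), depth - 1,
                                       alpha, beta, False, moves, step, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:  # beta cutoff
                break
        return best
    best = float("inf")
    for m in children:
        best = min(best, alphabeta(step(state, m), depth - 1,
                                   alpha, beta, True, moves, step, evaluate))
        beta = min(beta, best)
        if alpha >= beta:  # alpha cutoff
            break
    return best

# Trivial game: the state is a number, each move adds +1 or -1,
# and the evaluation is the number itself.
value = alphabeta(0, 2, float("-inf"), float("inf"), True,
                  lambda s: [1, -1], lambda s, m: s + m, lambda s: s)
print(value)  # → 0 (the minimizer undoes whatever the maximizer gains)
```

The pruning is what makes depth 14 feasible in chess: a decent evaluation function orders and cuts lines early, which is precisely the lever that was missing in Go.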

In Go it's completely different. A move generator makes no sense at all, and neither does a depth-14 search. There aren't a few alternatives; there are too many. What you need is good overall pattern matching over areas of interest and an evaluation of those areas. And we saw that this approach outplayed Lee Sedol, who couldn't quite follow the recalculation of those areas.

As in chess, AlphaGo learned the easy things, such as that the center is more important than the corners, something Lee forgot during the game. But it's not a deep search; it's a very broad search with a very complicated evaluation function. A neural net is perfect for such a function.
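The broad-search idea can be sketched in miniature (much simplified: AlphaGo actually uses Monte Carlo Tree Search guided by policy and value networks, whereas here plain random rollouts on a toy game stand in for the learned evaluation):

```python
import random

# Flat Monte Carlo on a toy game (Nim: take 1-3 stones, the player who
# takes the last stone wins). Every legal move is tried ("broad"), and
# random rollouts play the role of the evaluation function.
def rollout(stones, my_turn):
    """Finish the game with random moves; return True if we win."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        my_turn = not my_turn
    return not my_turn  # whoever just emptied the pile won

def best_move(stones, playouts=2000):
    win_rate = {}
    for take in range(1, min(3, stones) + 1):
        wins = sum(rollout(stones - take, my_turn=False)
                   for _ in range(playouts))
        win_rate[take] = wins / playouts
    return max(win_rate, key=win_rate.get)

random.seed(0)
print(best_move(7))  # picks 3, leaving the opponent a multiple of 4
```

No position is searched deeply; the statistics over many cheap evaluations do the work, which is the same division of labor AlphaGo pushes much further with neural nets.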

> whereas in Go no such function seems to exist.

It does exist: it's the neural net, a simple pattern recognizer that learns more and more over time.


AlphaGo has a learned evaluation function for each move.

An evaluation function exists, but it is not as simple as it can be for chess.



