> Seeing the way chess computers have evolved, this won't be far into the future.

With chess, there were two breakthroughs. First, there was Deep Blue, which threw massive hardware resources at the problem and achieved world champion level play.

That was interesting, of course, but didn't really do anything for human chess, because most humans did not have access to the necessary hardware.

The second breakthrough came when the developers of chess programs running on commodity desktop computers improved their algorithms to the point that they could play at (and far beyond) Deep Blue's level, even though they could only search about 1/100th as many positions per second.
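A toy sketch of the general principle at work here: smarter pruning lets a slower searcher match a brute-force one. This is not AlphaGo's or any real engine's code (real engines layer many more techniques on top, like null-move pruning and transposition tables); it just compares plain minimax against alpha-beta pruning on a synthetic game tree, where both get the same answer while alpha-beta examines far fewer positions.

```python
# Illustrative only: a random game tree, not real chess. Node ids use
# k-ary heap numbering so every position has a unique integer label.
import random

BRANCHING, DEPTH = 5, 6

def leaf_value(node_id):
    # Deterministic pseudo-random leaf score, same for both searchers.
    return random.Random(node_id).randint(-100, 100)

def minimax(node_id, depth, maximizing, counter):
    counter[0] += 1  # count every position examined
    if depth == 0:
        return leaf_value(node_id)
    best = -1000 if maximizing else 1000
    for move in range(BRANCHING):
        val = minimax(node_id * BRANCHING + move + 1, depth - 1,
                      not maximizing, counter)
        best = max(best, val) if maximizing else min(best, val)
    return best

def alphabeta(node_id, depth, alpha, beta, maximizing, counter):
    counter[0] += 1
    if depth == 0:
        return leaf_value(node_id)
    if maximizing:
        val = -1000
        for move in range(BRANCHING):
            val = max(val, alphabeta(node_id * BRANCHING + move + 1,
                                     depth - 1, alpha, beta, False, counter))
            alpha = max(alpha, val)
            if alpha >= beta:
                break  # prune: the minimizing side will never allow this line
        return val
    val = 1000
    for move in range(BRANCHING):
        val = min(val, alphabeta(node_id * BRANCHING + move + 1,
                                 depth - 1, alpha, beta, True, counter))
        beta = min(beta, val)
        if beta <= alpha:
            break  # prune symmetrically for the minimizing side
    return val

mm_nodes, ab_nodes = [0], [0]
mm = minimax(0, DEPTH, True, mm_nodes)
ab = alphabeta(0, DEPTH, -1000, 1000, True, ab_nodes)
assert mm == ab              # identical result...
assert ab_nodes[0] < mm_nodes[0]  # ...from far fewer positions
print(mm_nodes[0], ab_nodes[0])
```

The pruning never changes the answer, only the work done to reach it, which is roughly why a commodity machine with a better algorithm can keep up with special-purpose hardware doing brute force.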

That was when humans started being able to really use computers to help the humans understand chess.

The breakthroughs in chess algorithms on commodity computers had little, if anything, to do with the Deep Blue breakthrough. The two are just too different.

Can AlphaGo be made available on hardware that top human Go players have access to, or is AlphaGo to Go as Deep Blue was to chess?



The fact that Lee Sedol's hardware is a couple of pounds of wetware running on a peanut butter sandwich suggests the answer to your question is yes.

Whether those insights will come soon or not is the big question.


Hey, that wetware is ten times as powerful as what AlphaGo has to work with. Give or take a few orders of magnitude. And Lee's brain is only devoting a portion of that to Go.


> Hey, that wetware is ten times as powerful as what AlphaGo has to work with.

I don't think that's actually true. The hardware AlphaGo runs on is probably a lot more powerful than a single human brain; the big difference is in the software.

See the difference between the very best chess programs of a decade ago versus the ones now.


Related: the highest-rated (by Elo) chess program is open source: https://en.wikipedia.org/wiki/Stockfish_(chess)


Seems like somewhere in-between. They created a novel approach that is scalable and improves as you throw more hardware at it. And Google is throwing a lot of hardware at it based on their past matches with hundreds of CPUs and GPUs. I think the fact that they have been so mum about what hardware they're using suggests it's quite extreme, but hopefully they release more details soon.

It's somewhat interesting to think about the differences in marketing between IBM and Google: IBM was marketing hardware and HPC with Deep Blue, but Google is marketing AI, when so much of their advances in AlphaGo are enabled by distributed systems and HPC running billions of games to train deep neural networks. It feels a little smoke-and-mirrors, which is probably why they won't release much until after they get enough marketing value from this tournament :)


AlphaGo uses much simpler hardware for play than for training. I think the Go associations can afford to run the hardware.


It's only a couple hundred GPUs for training. You can afford to rent that in the cloud for probably a hundred bucks or less per game.



