H. G. Muller wrote on Tue, May 13, 2008 01:58 PM UTC:
Jianying Ji:
| Two suggestions for settling debates such as these. First, distributed
| computing, to provide as much data as possible. And second, Bayesian
| statistical methods, to provide statistical bounds on the results.
Agreed: one first needs to generate data. Without data, there isn't even
a debate, and everything is just idle talk. What bounds would you expect
from a two-game dataset? And what if these two games were actually the
same?
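Just to make that concrete, here is a minimal sketch of the kind of
Bayesian bound one could compute, under assumptions of my own choosing: a
uniform Beta(1,1) prior on the expected score, each draw counted as half
a win plus half a loss (a crude but common simplification), and scipy
available:

    from scipy.stats import beta

    def score_interval(wins, losses, draws=0, cred=0.95):
        # Posterior for the expected score under a uniform Beta(1,1)
        # prior, each draw counted as half a win and half a loss.
        a = 1 + wins + 0.5 * draws
        b = 1 + losses + 0.5 * draws
        lo = beta.ppf((1 - cred) / 2, a, b)
        hi = beta.ppf(1 - (1 - cred) / 2, a, b)
        return lo, hi

    print(score_interval(2, 0))  # ~ (0.29, 0.99): even 2-0 proves next to nothing

With only two games the 95% interval spans most of the range, which is
exactly why a two-game 'dataset' settles nothing.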
But the problem is that the proverbial fool can always ask more than
anyone can answer. If, by recruiting all PCs in the world, we could
generate 100,000 games at an hour per move, an hour per move would of
course not be 'good enough'. It would have to be at least a week per
move. Or, if that turned out to be feasible, 100 years per move.
And even 100 years per move would of course be no good, because the
computers would still not be able to search into the end-game: they would
search only about 12 ply deeper than at 1 hour per move. So what's the
point?
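That '12 ply' figure is simple arithmetic, assuming an effective
branching factor of about 3 per ply, which is in the typical range for
modern alpha-beta engines (the exact value is my assumption here):

    import math

    hours = 100 * 365.25 * 24  # 100 years in hours: ~876,600x the 1-hour budget
    ebf = 3.0                  # assumed effective branching factor per ply
    print(math.log(hours) / math.log(ebf))  # ~12.5 extra ply of search depth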
Not only is this an end-of-the-rainbow-type endeavor; even if you did get
there, and generated the perfect data, where the outcome under perfect
play is 100% certain and proven for each position, what then? For simple
end-games we are already in a position to reach perfect play, through
retrograde analysis (tablebases).
So why not start there, to show that such data is of any use whatsoever,
in this case for deriving end-game piece values? If you have the EGTBs
for KQKAN and KAKBN, how would you extract a piece value for A from them?
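One naive attempt, and I am not claiming it would give a meaningful
answer, would be to compare the expected score for the stronger side over
all legal positions in each tablebase. The probing helpers here are
hypothetical placeholders, not any real EGTB API:

    def expected_score(tb):
        # Average result for the stronger side over all legal positions.
        # enumerate_positions() and probe() are hypothetical helpers
        # standing in for whatever tablebase access one actually has.
        wins = draws = losses = 0
        for pos in enumerate_positions(tb):
            wdl = probe(tb, pos)  # +1 win, 0 draw, -1 loss
            if wdl > 0:
                wins += 1
            elif wdl < 0:
                losses += 1
            else:
                draws += 1
        return (wins + 0.5 * draws) / (wins + draws + losses)

    # One could then compare expected_score('KQKAN') with
    # expected_score('KAKBN') and offer the difference as a 'value' --
    # but whether an average over mostly nonsensical positions measures
    # piece value at all is exactly the open question.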