Kevin Pacey wrote on Tue, Dec 15, 2015 07:07 PM UTC:
Fwiw, I once looked at a review of the chess program Houdini (some
version), whose author had succeeded in making it very selective in
choosing (i.e. pruning down to) promising candidate moves to examine more
deeply at a given level: about 5 of the legal moves per turn, whereas the
average number of legal moves in a chess position is thought to be 35.
With this selectivity, such an engine can see 40 ply ahead in many
important lines. Pruning that aggressive can apparently risk throwing out
a vital sequence once in a while, but it seems the risk is low enough to
be worth it for such chess engine programmers.
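
Just to illustrate why that selectivity matters (a back-of-envelope sketch
of my own, not taken from the Houdini review; the one-billion node budget
is an arbitrary number for illustration): for a fixed budget of positions
an engine can evaluate, the reachable depth is roughly the logarithm of
the budget divided by the logarithm of the effective branching factor, so
cutting the branching factor from 35 to 5 more than doubles the depth.

    import math

    def reachable_depth(node_budget, branching_factor):
        # Approximate depth (in ply) reachable for a fixed node budget,
        # assuming a uniform game tree with the given effective
        # branching factor: branching_factor ** depth == node_budget.
        return math.log(node_budget) / math.log(branching_factor)

    budget = 10**9  # one billion nodes; an arbitrary illustrative budget
    for b in (35, 5):
        print("branching factor %2d: about %.1f ply"
              % (b, reachable_depth(budget, b)))

    # branching factor 35: about 5.8 ply
    # branching factor  5: about 12.9 ply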
I was wondering if the above might mean that a program pruning to 5 of 35
moves (a 1/7 ratio) could typically keep that same 1/7 ratio of candidate
moves in a chess variant, even one with significantly more than 35 legal
moves available to a side on average. Note that 5/35 is also not far from
(sq. root (35))/35 as a ratio, and a square-root rule could favour humans
less than a flat 1/7 ratio: for a variant averaging 100 legal moves, the
truncated value of 100/7 is 14 moves examined more deeply, which is more
than the 10 moves (out of 100) that a square-root rule would keep. Least
favourable of all to humans would be an engine that could always prune
down to exactly 5 continuations to look at more deeply, no matter how many
legal moves per turn there are on average in a given chess variant. So it
may be vital what Houdini's pruning to 5 of 35 means exactly when
extrapolating to engines for chess variants. Sorry if my terminology is
not up to snuff.
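
To make the three possibilities concrete, here is a small sketch (my own
illustration; the legal-move counts are hypothetical) of how many
candidate moves each pruning model would keep per turn:

    import math

    def kept_moves(legal_moves):
        # Candidate moves kept per turn under three hypothetical
        # pruning models discussed above.
        return {
            "flat 1/7 ratio": legal_moves // 7,  # truncated, as above
            "sqrt rule": round(math.sqrt(legal_moves)),
            "constant 5": 5,
        }

    for n in (35, 50, 100):
        print(n, "legal moves ->", kept_moves(n))

    # 35 legal moves -> {'flat 1/7 ratio': 5, 'sqrt rule': 6, 'constant 5': 5}
    # 50 legal moves -> {'flat 1/7 ratio': 7, 'sqrt rule': 7, 'constant 5': 5}
    # 100 legal moves -> {'flat 1/7 ratio': 14, 'sqrt rule': 10, 'constant 5': 5}

The fewer candidates kept per node, the deeper the search for the same
node budget, so the constant-5 model would be the least favourable to
humans.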
Also fwiw, as a veteran (Canadian) chess master, I have some feeling for
the concern that strong chess engines have caused players, tournament
directors and organizers over the years, especially due to possible
computer-assisted cheating. Also, on a Canadian chess message board some
years ago, one poster noted that upon hearing chess engines were stronger
than people, some non-chess-players he had met immediately lowered their
opinion of chess and chess players.