H. G. Muller wrote on Mon, May 26, 2008 10:10 PM UTC:
| I just cannot understand how any rational, intelligent man could
| believe that introducing chaos (i.e., randomness) is beneficial
| (instead of detrimental) to achieving a goal defined in terms of
| filtering-out disorder to pinpoint order.
It would be very educational, then, to acquaint yourself with the current
state of the art in Go programming, where Monte-Carlo techniques are the
most successful paradigm to date...
| When you reduce the power of your algorithm in any way to
| filter-out inferior moves, you thereby reduce the average
| quality of the moves chosen and consequently, you reduce
| the playing strength of your program- esp. at long time controls.
Exactly. This is why I _enhance_ the power of my algorithm to filter out
inferior moves: an inferior move has a smaller probability than a better
move of drawing a random bonus large enough to put it on top. It thus has a
lower probability of being chosen, which enhances the average quality of
the moves, and thus the playing strength. At any time control.
It is a pity that this suppression of inferior moves is only probabilistic,
and that some inferior moves can by sheer luck still penetrate the filter.
But I know of no deterministic way to achieve the same thing. So something
is better than nothing, and I settle for the inferior moves only getting a
lower chance to pass. Even if that chance is not zero, it is still better
than letting them pass unimpeded.
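To give an idea of the principle (a minimal sketch only, not Joker80's
actual code; the 10 centi-Pawn margin and the names score[] and pickMove
are made up for the illustration): when every move gets a small random
bonus added to its score before the maximum is taken, a move that trails
the best one by more than the margin can never be picked, and a move with
a smaller deficit is picked with a probability that shrinks as the deficit
grows.

    #include <stdlib.h>

    #define MARGIN 10   /* maximum random bonus, in centi-Pawn */

    /* Pick the move with the highest (score + random bonus).
       A move more than MARGIN below the best one can never win;
       smaller deficits win with a probability that shrinks as
       the deficit grows.                                        */
    int pickMove(int score[], int nrOfMoves)
    {
        int i, best = 0, bestVal = -1000000;
        for (i = 0; i < nrOfMoves; i++) {
            int val = score[i] + rand() % (MARGIN + 1);
            if (val > bestVal) bestVal = val, best = i;
        }
        return best;
    }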
| In any event, the addition of the completely-unnecessary module of
| code used to create the randomization effect within Joker80 that
| you desire irrefutably makes your program larger, more complicated
| and slower. Can that be a good thing?
Everything you put into a Chess engine makes it larger and slower. Yet
taking almost everything out only leaves you with a weak engine like
micro-Max 1.6. The point is that putting code in can also make the engine
smarter, improve its strategic understanding, reduce its branching ratio,
etc. So whether it is a good thing does not depend on whether it makes the
engine larger, more complicated, or slower. It depends on whether the
engine still fits in the available memory, and from there produces better
moves in the same time. Which larger, more complicated and slower engines
often do. As always, testing is the bottom line.
Actually, the 'module of code' consists of only 6 instructions, as I derive
the pseudo-random number from the hashKey.
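Just to sketch how cheap that can be (this is not the actual Joker80 code;
the multiplier and the number of bits taken are arbitrary choices for the
illustration): a few bits of the position's hash key, scrambled by a single
multiplication, already serve as an adequate random bonus, so no separate
random-number generator is needed.

    #include <stdint.h>

    /* Derive a small pseudo-random bonus (0..15 centi-Pawn) from the
       position's hash key. The multiplication by an odd constant
       scrambles the bits; the shift then selects the top 4 of them. */
    static int randomBonus(uint64_t hashKey)
    {
        hashKey *= 0x9E3779B97F4A7C15ULL;
        return (int)(hashKey >> 60);
    }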
But the point you are missing is this: I have a theoretical understanding
of how Chess engines work, and am therefore able to extrapolate their
behavior with high confidence from what I observe under other conditions
(i.e. at fast TC). Just like I don't have to travel to the Moon and back
to know its distance from the Earth, because I understand geometry and
triangulation. So if including a certain evaluation term gives me more
accurate scores (and thus a more reliable selection of the best move) from
8-ply search trees, I know that it can only give better moves from 18-ply
search trees as well. For the latter is nothing but millions of 8-ply
search trees grafted onto the tips of a mathematically exact 10-ply minimax
propagation of their scores towards the root.
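In terms of a plain depth-limited negamax this is easy to see (a toy sketch
only, with a dummy game in which each node has two moves and a fake
evaluation, just to show the shape of the recursion): a call with depth 18
recurses for 10 plies doing nothing but exact minimax bookkeeping, and what
it then performs at the tips are ordinary depth-8 searches of exactly the
kind that was tested.

    /* Fake evaluation of an abstract 'position' (just a number). */
    static int evaluate(unsigned pos)
    {
        return (int)(pos * 2654435761u % 201) - 100;
    }

    /* Toy negamax: a depth-18 call recurses until depth 8 is reached,
       where it performs ordinary 8-ply searches; the 10 plies above
       them merely combine those scores by exact minimax.             */
    static int search(unsigned pos, int depth)
    {
        int i, score, best = -1000000;
        if (depth == 0) return evaluate(pos);   /* leaf: static eval */
        for (i = 1; i <= 2; i++) {              /* two dummy 'moves' */
            score = -search(3 * pos + i, depth - 1);
            if (score > best) best = score;
        }
        return best;
    }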
Anyway, it is of no interest to me to throw months of valuable CPU time at
answering questions I already know the answer to.