Including Piece Values on Rules Pages [Subject Thread]
H. G. Muller wrote on Sun, Mar 10 04:13 PM UTC in reply to Kevin Pacey from 03:26 PM:

Sure, methods can be wrong, and therefore have to be validated as well. This holds more for true computer studies using engines than for selecting positions from a game database and counting those. The claim that piece A does not have a larger value than piece B if good players who have A instead of B cannot beat each other more often than not is not really a method; it is the definition of value. So counting the number of wins is by definition a good method. The only thing that might require validation is whether the person applying this method is able to count. But there is a point where healthy skepticism becomes paranoia, and this seems far over the edge.
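
To illustrate how little machinery this "method" actually needs: once you have the results of games between comparable players where one side had piece A in place of piece B, the comparison is just counting the score. Below is a minimal Python sketch; the result encoding and the numbers are hypothetical, purely to show the arithmetic.

    def score_fraction(results):
        """results: game outcomes for the side that had piece A instead of
        piece B, encoded as 1 = win, 0.5 = draw, 0 = loss."""
        return sum(results) / len(results)

    # Hypothetical sample: 40 wins, 35 draws, 25 losses out of 100 games
    results = [1] * 40 + [0.5] * 35 + [0] * 25
    s = score_fraction(results)
    print(f"score = {s:.1%}")                 # 57.5%
    print("A > B" if s > 0.5 else "A <= B")   # any excess over 50% means A is worth more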

Extracting similar statistics from computer-generated games has a much larger potential for error. Is the level of play good enough to produce realistic games? How sensitive are the resulting statistics to misconceptions the engines might have had? One would expect someone publishing results from a new method to have investigated those issues, and the method, applied to the orthodox pieces, should of course reproduce the classical values.
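
That last sanity check can be as simple as comparing the measured orthodox values against the textbook ones. A quick sketch of what such a comparison could look like (the "measured" numbers here are placeholders, not results from any actual run):

    CLASSICAL = {"N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0}   # in Pawn units, Pawn = 1

    def check_against_classical(measured, tolerance=0.5):
        """Flag any orthodox piece whose measured value deviates from the
        classical value by more than the tolerance (all in Pawn units)."""
        for piece, classical in CLASSICAL.items():
            dev = abs(measured[piece] - classical)
            status = "OK" if dev <= tolerance else "SUSPECT"
            print(f"{piece}: measured {measured[piece]:.2f} vs classical {classical:.1f} -> {status}")

    # Placeholder values standing in for the output of some measurement run
    check_against_classical({"N": 3.1, "B": 3.3, "R": 5.0, "Q": 9.5})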

For the self-play method I use to derive empirical piece values, I of course investigated all that before I started to trust any results. I played games with Pawn odds at many different time controls, as well as with some selected imbalances (such as BB vs. NN), to see whether the number of excess wins stayed the same fraction of the Pawn-odds advantage. (It did.) And whether the results for a B-N imbalance differed when using an engine that thought B > N from those obtained with one that thought N > B. (They didn't.)
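
The calibration behind that test fits in a few lines: the excess score produced by a known one-Pawn advantage fixes the scale, and any other imbalance is then expressed in Pawn units by dividing its excess score by that of the Pawn. A sketch with made-up percentages:

    def value_in_pawns(excess_score, pawn_odds_excess):
        """Convert the excess score (score - 0.50) of a material imbalance
        into Pawn units, using Pawn odds as the calibration yardstick."""
        return excess_score / pawn_odds_excess

    # Made-up numbers: Pawn odds scores 62% (excess 0.12) at some time control,
    # while the BB-vs-NN imbalance scores 56% (excess 0.06) at that same control
    print(value_in_pawns(0.06, 0.12))   # 0.5 -> the imbalance is worth about half a Pawn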

New methods don't become valid because more people apply them; if they all do the same wrong thing, they will all confirm each other's faulty results. You validate a method by recognizing its potential for error, and then testing whether it actually suffers from it.