Comments by HansAberg

Aberg variation of Capablanca's Chess. Different setup and castling rules. (10x8, Cells: 80)
💡📝Hans Aberg wrote on Mon, Apr 28, 2008 08:35 PM UTC:
H.G.Muller:
| I am not sure what 'brute force' you are referring to.

See
  http://en.wikipedia.org/wiki/Computer_chess

| What do you mean by 'classical theory'?

What was used before the days of computers. A better term might be 'deductive theory', as opposed to a 'statistical theory': one that aims at finding the best play by reasoning, though limited in scope due to the complexity of the problem.

| What does it matter anyway how the piece-value system for normal Chess
| was historically constructed?

It is designed to merge with the other empirical reasoning developed for use by humans.

You might have a look at a program like Eliza:
  http://en.wikipedia.org/wiki/ELIZA
It does not have any true understanding, but it takes a while for humans to discover that. The computer chess programs are similar in nature, but they can outdo a human by searching through many more positions. If a human is compensated for that somehow (perhaps: allowed to use computers, to take back moves at will, or to play a new variant), then I think it will not be so difficult for humans to beat the programs. In such a setting, a statistical approach will fail sorely, since the human will merely play towards special cases not covered by the statistical analysis. The deductive theory will be successively strengthened until, in principle, the ideal best theory emerges. This latter approach seems to have been cut short, though, by the emergence of very fast computers that can exploit the weakness of human thinking, namely the inability to make large numbers of very fast but simple computations.
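
To make the contrast with Eliza concrete: the 'skill' of such programs is of the following mechanical kind (a minimal, runnable sketch in Python; a toy take-away game stands in for chess, since a full move generator would not fit here):

  # Exhaustive game-tree search, no understanding: players alternately
  # remove 1-3 stones, and whoever takes the last stone wins. The score
  # is from the viewpoint of the side to move: +1 won, -1 lost.
  def negamax(stones):
      if stones == 0:
          return -1  # side to move has nothing left to take: lost
      return max(-negamax(stones - k) for k in (1, 2, 3) if k <= stones)

  print(negamax(4))  # -1: with best play, 4 stones is lost for the mover

A chess engine does the same thing with a vastly larger move generator and an evaluation function at the horizon; the search itself contains no understanding.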

💡📝Hans Aberg wrote on Tue, Apr 29, 2008 12:58 PM UTC:
H.G.Muller:
| Larry Kaufman has applied the method on (pre-existing) Human
| Grand-Master games, to determine piece values for 8x8 Chess.

If I look at:
http://home.comcast.net/~danheisman/Articles/evaluation_of_material_imbalance.htm
he says things like:
  [...] an unpaired bishop and knight are of equal value [...], so
  positional considerations [...] will decide which piece is better.
and also see the section 'Applications'.

In other words, he is using a statistical approach merely as a point of departure for developing a theory which combines point values with other reasoning, such as positional judgement.

The values he gives, though, are interesting:
  P=1, N=3¼, B=3¼, BB=+½, R=5, Q=9¾
where BB is the bishop pair.
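
For illustration, a material count using these values might look as follows (a sketch; the piece-list encoding is my own assumption, not anything from Kaufman's paper):

  # Kaufman's values in pawn units; the bishop-pair bonus BB is
  # added once when a side holds both bishops.
  VALUES = {'P': 1.0, 'N': 3.25, 'B': 3.25, 'R': 5.0, 'Q': 9.75}
  BISHOP_PAIR = 0.5

  def material(pieces):
      # pieces: a side's list of piece letters, e.g. ['Q', 'R', 'B', 'B', 'N']
      total = sum(VALUES[p] for p in pieces)
      if pieces.count('B') >= 2:
          total += BISHOP_PAIR
      return total

  print(material(['B', 'B']) - material(['B', 'N']))  # 0.5: exactly the pair bonus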

💡📝Hans Aberg wrote on Tue, Apr 29, 2008 09:49 PM UTC:
H.G.Muller:
| If you read Larry Kaufman's paper, you see that he continues
| quantifying the principal positional term ...

This is nothing new: this was done in classical theory. He just uses statistical input in an attempt to refine the classical theory.

| The piece values Kaufman gets are very good. And, in so far I tested
| them, they correspond exactly to what I get when I run the piece
| combinations from shuffled openings.

You can sync your method against his values, to get piece value suggestions. But that is about all you get out of it.

💡📝Hans Aberg wrote on Wed, Apr 30, 2008 02:07 PM UTC:
H.G.Muller:
| Define 'suggestions'. What I get are a set of piece values from which
| you can accurately predict how good your winning chances are, all other
| things being equal or unknown.

You do not get a theory that predicts winning chances, as chess isn't random. If the assumption is that opponents will play in a style statistically similar to the analyzed data, then it might be used for predictions.

It is clear that Larry Kaufman does not think of his theory in terms of 'x pawns ahead leads to a winning chance p'. You can analyze your data and make such statements, but it is incorrect to conclude that this yields a valid chess theory predicting future games - it only refers to the data of the past games you have analyzed.
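
For what it is worth, a statement like 'x pawns ahead gives winning chance p' would typically be formalized with an Elo-style logistic curve fitted to the analyzed games (a sketch; the scale constant here is an assumed figure for illustration, not one from Kaufman):

  SCALE = 4.0  # assumed: pawns of advantage per tenfold odds in expected score

  def expected_score(pawn_advantage):
      # logistic mapping from material edge to expected score in [0, 1]
      return 1.0 / (1.0 + 10.0 ** (-pawn_advantage / SCALE))

  print(expected_score(1.0))  # ~0.64 under these assumptions

The point of the objection above is precisely that such a curve summarizes the games it was fitted to, and need not transfer to opponents playing differently.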

💡📝Hans Aberg wrote on Wed, Apr 30, 2008 03:02 PM UTC:
Rich Hutnik:
| 2. When pitting one side against another, if the sides are unbalanced,
| this system should allow a balancing in points for handicapping reasons
| of the forces.

Games where both sides have equal material are also unbalanced, since in general there is an advantage in playing the first move.

💡📝Hans Aberg wrote on Wed, Apr 30, 2008 08:24 PM UTC:
H.G.Muller:
| Chess as we play it is a game of chance...

The main point is that such a statistical analysis is only valid with respect to a certain group of games, as long as the players stick to a similar strategy. The situation is like that of pseudo-random numbers, where in one case it was discovered that if the successive numbers generated were plotted in triples, they fell into a series of sloped planes. Such a thing can be exploited. So there results a cycle of making better pseudo-random generators and methods to detect their flaws, without ever arriving at true random numbers. A similar situation arises in cryptography.
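
The pseudo-random example referred to is the classic RANDU generator: its consecutive triples satisfy an exact linear relation, so plotted in 3D they fall on a small number of parallel planes. This is easy to verify:

  # RANDU: x_{k+1} = 65539 * x_k mod 2^31. Since 65539 = 2^16 + 3,
  # every triple obeys x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2^31).
  M = 2 ** 31

  def randu(seed, n):
      xs, x = [], seed
      for _ in range(n):
          x = (65539 * x) % M
          xs.append(x)
      return xs

  xs = randu(1, 1000)
  assert all((xs[k+2] - 6*xs[k+1] + 9*xs[k]) % M == 0
             for k in range(len(xs) - 2))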

In chess, the best strategy is trying to beat whatever underlying statistical theory the opponent is playing by. When playing against computer programs this is not so difficult, because one tries to figure out which material exchanges the opposing program favors and shuns, and then tries playing into situations where that valuation is not valid. Now, this requires that the human player gets the chance to fiddle around with the program interactively for some time in order to discover such flaws - learning this through tournament practice is a slow process - plus a way to beat the computer's superior combinatorial skills if the latter is allowed to do a deeper search by brute force.

| Anyway, you cannot know what Kaufman thinks or doesn't.

His stuff looks like all the other chess theory I have seen, except that he uses a statistical analysis as an input, attempting to fine-tune the theory. By contrast, you are the only person I have seen who thinks of it as a method to predict the average outcome of games. You might benefit from asking him, or others, about their theories - this is how it looks to me.

You might still compute values and percentages and present them as your analysis of past games in a certain category, but there is a gap in the reasoning when claiming this will hold as a prediction for games in general.

💡📝Hans Aberg wrote on Fri, May 2, 2008 01:08 PM UTC:
H.G.Muller:
| If piece values cannot be used to predict outcomes of games, they
| would be useless in the first place.

If one is materially behind, one knows one had better win the middle game, or win material back before coming into the end-game, unless the latter is a special case.

| Why would you want to be an exchange or a piece ahead, if it might
| as frequently mean you are losing as that you are winning?

This is indeed what happens with programs focusing too much on material, or with weak players starting a piece ahead.

| Precisely knowing the limitations of your opponent allows
| you to play a theoretically losing strategy (e.g. doing bad trades) in
| order to set a trap.

Sure, this seems essentially to be the effect of a brute-force search on a very fast computer.

| In general, this is a losing strategy, as in practice
| one cannot be sufficiently sure about where the opponent's horizon will
| be.

In human tournament play, a top player either plays against opponents with a lower horizon, or against well-known opponents whose playing style has been thoroughly analyzed. In the first case, there is not much need to adapt one's play, as one can see deeper; but in the latter case, one certainly does choose strategies adapted to the opponent. Now, with computer programs, at least in the past, the GMs were pitted against programs they did not know that well, which ran in special versions and on very fast computers when tried against humans. So humans did not get much of a chance to develop better strategies. But this may not matter if no strategy allows them to handle the theoretically faulty combinations the computer gets away with by relying on a somewhat deeper search.

| Fact is that I OBSERVE that the piece values I have given below
| do statistically predict the outcome of games with good precision.

You only observe past events, not the future, and a statistical prediction is only valid for a true stochastic variable, or in situations that continue to behave as such. But don't worry about this:

If you find methods to duplicate the analysis by Larry Kaufman, and use them to compute values for various pieces on boards like 8x8, 10x8, or 12x8, then it seems simple enough to modify engines to play different chess variants (if protocols like UCI are extended to cope with them).
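
As an illustration of what such a protocol extension might look like (a purely hypothetical dialog; an option named 'UCI_Variant' is my assumption here, not part of the UCI specification):

  GUI> uci
  ENG> option name UCI_Variant type combo default chess var chess var capablanca
  ENG> uciok
  GUI> setoption name UCI_Variant value capablanca
  GUI> position startpos moves ...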

I think though the real test will be when humans play against those programs.

💡📝Hans Aberg wrote on Fri, May 2, 2008 05:21 PM UTC:
H.G.Muller:
| Fairy-Max is already able to play most Chess variants, and WinBoard
| protocol already supports those variants.

I just found Jose-Chess, which supports both Xboard and UCI protocols, and which may have a future (right now it is somewhat buggy), as it is open source.

| Many engines are now able to play Capablanca-type variants under
| WinBoard protocol, some of them quite strong.

Perhaps the Dragon knight D = K+N and what you call the Amazon M = Q+N should be included. I am thinking about a 12x9 variant
  R D N B A Q K M B N C R
which has the property that all pawns are protected, and which tries to keep a material balance on both king sides. On a 12x10 board, one might use a rule that pawns can move 2 or 3 steps, if that does not make them cross the middle line.

| I have no interest in convincing anyone to use my empirically derived
| piece values.

The normal thing would be that the values are just published, with indications of how they were derived. Different authors may have different values, if using different methods to derive them.
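
The claim that all pawns are protected in the 12x9 array above can be checked mechanically. With the back rank full and a pawn on every second-rank square, a back-rank piece can only defend a second-rank pawn by a single step or a knight jump, so each piece type reduces to a set of file offsets (a sketch under those assumptions):

  # File offsets at which a back-rank piece defends a second-rank pawn.
  DEFENDS = {
      'R': {0}, 'B': {-1, 1}, 'N': {-2, 2},
      'Q': {-1, 0, 1}, 'K': {-1, 0, 1},
      'A': {-1, 1, -2, 2},      # Archbishop = B + N
      'C': {0, -2, 2},          # Chancellor = R + N
      'D': {-1, 0, 1, -2, 2},   # Dragon knight = K + N
      'M': {-1, 0, 1, -2, 2},   # Amazon = Q + N
  }

  array = 'R D N B A Q K M B N C R'.split()
  for f in range(len(array)):  # pawn on file f
      assert any(f - g in DEFENDS[p] for g, p in enumerate(array)), f
  print('all pawns protected')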

💡📝Hans Aberg wrote on Fri, May 2, 2008 09:42 PM UTC:
H.G.Muller:
| Indeed, I plan to submit a paper to the ICGA Journal discussing the
| piece values and the empirical statistical method used to obtain them.

You might have a look at things like:
  http://en.wikipedia.org/wiki/Perfect_information
  http://en.wikipedia.org/wiki/Complete_information
  http://en.wikipedia.org/wiki/Nash_equilibrium
  http://en.wikipedia.org/wiki/Prisoner's_dilemma
Your claims are similar to the idea that chess players under some circumstances reach a Nash equilibrium. This might happen, say, if the players focus only on simple playing strategies in which piece values have an important role, and are unable to switch to a different one. Note that the prisoner's dilemma leads to such an equilibrium when repeated, because players can punish past defections. In chess, this might happen if chess players are unable to develop a more powerful playing theory, say due to the complexity. - Just an input, to give an idea of what reasoning one might expect to support claims of predictions.
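
For reference, the one-shot prisoner's dilemma equilibrium can be exhibited by brute force (a sketch with the textbook payoff numbers, nothing chess-specific):

  # Payoffs (row, column) for Cooperate/Defect. (D, D) is the unique
  # Nash equilibrium: neither player gains by deviating alone, even
  # though (C, C) would be better for both.
  P = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
       ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

  def is_nash(r, c):
      return (all(P[(r, c)][0] >= P[(a, c)][0] for a in 'CD') and
              all(P[(r, c)][1] >= P[(r, b)][1] for b in 'CD'))

  print([rc for rc in P if is_nash(*rc)])  # [('D', 'D')]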

💡📝Hans Aberg wrote on Sat, May 3, 2008 09:46 AM UTC:
H.G.Muller:
| Note that a Nash equilibrium in a symmetric zero-sum game must be the
| globally optimum strategy.

Chess isn't entirely symmetric, since there is in general a small advantage in making the first move. But for players (or games) adhering throughout to a piece-value theory as a main deciding factor, perhaps such a balance may occur. The only world champion who was able to do that, winning by playing with very small positional and material advantages, was perhaps Karpov. Kasparov learned to break through that heavily positional playing, in part by training against the Swedish GM Andersson, who specialized in a similarly hyper-defensive style. A more normal way of winning is at some point making material sacrifices in exchange for a strong initiative, particularly combined with a mating attack, and then winning either by a successful mate or via some material gains, simplifying into a winning end-game. Perhaps when determining piece values, such games should be excluded. And since computers are not very good at such strategies, perhaps such game exclusion occurs naturally when letting computers play against themselves.

💡📝Hans Aberg wrote on Sat, May 3, 2008 11:50 AM UTC:
H.G.Muller:
| As piece values are only useful as strategic guidelines for quiet
| positions, they cannot be sensitive to who has the move.

At the beginning of the game, white is thought to have a slight advantage, and the first task of black will be attempting to neutralize it. It might be possible to set a piece value on that positional advantage, just as when reasoning in terms of getting positional compensation for a sacrifice. Somewhat less than a pawn, perhaps. If one knows the black/white winning statistics, one might be able to set a value on it that way. It may not be usable for a computer program, as the program does not change sides, but only computes the relative values of moves.
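
Given such winning statistics, the conversion into pawn units could be done by inverting a logistic model of the kind sketched earlier (the scale constant and the 54% score are assumed figures for illustration):

  import math

  SCALE = 4.0  # assumed pawns per tenfold odds, as in the earlier sketch

  def advantage_in_pawns(white_score):
      # invert E = 1 / (1 + 10**(-d / SCALE)) for the advantage d
      return SCALE * math.log10(white_score / (1.0 - white_score))

  print(advantage_in_pawns(0.54))  # ~0.28 pawns: indeed well below a pawn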

💡📝Hans Aberg wrote on Sat, May 3, 2008 04:43 PM UTC:
H.G.Muller:
| But the point is that this does not alter the piece values.

Right, though that might be just a preferred way to structure theory, because it suits human thinking. Essentially: define contexts, and attach values to them. First define piece values in neutral settings. Then observe that the bishop pair gets an added value. Then try to figure out values for good and poor bishops. And so on. By contrast, computers tend to be very poor at handling such contexts, so other methods might be more suitable for programs.
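
A sketch of that layered structure (the bonus figures are placeholders, not established theory):

  # Layered evaluation: neutral base values first, then contextual
  # corrections added one refinement at a time.
  BASE = {'P': 1.0, 'N': 3.0, 'B': 3.0, 'R': 5.0, 'Q': 9.0}

  def evaluate_side(pieces, good_bishops=0, poor_bishops=0):
      score = sum(BASE[p] for p in pieces)       # neutral setting
      if pieces.count('B') >= 2:
          score += 0.5                           # bishop-pair context
      score += 0.25 * good_bishops - 0.25 * poor_bishops  # finer context
      return score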
