I'm not sure that the 'newcomer vs. veteran' thing is that important,
but if you guys feel strongly about it, I'm not going to argue too
strenuously. While I don't have time to be a full-fledged judge, I am
willing to give my input as a veteran CV designer on the judging
process, so that it will be handled fairly and equitably. I am also
willing to play-test my own two entries in the contest against any
judge by email (see earlier comment). I have a few more ideas:
Perhaps you should break the judging into four groups rather than
three: three groups with eight games each and one with nine. Eleven
games apiece seems like an awful lot to me, and depending on how many
judges you can get, you may be able to distribute things more easily
this way. (Just how many games should come out of each group into the
second round of judging is something you can all decide for yourselves.)
I am assuming that you will have more than one judge giving input on a
particular group. For example, if you have 12 preliminary-round judges
to cover the four groups, you have three judges per group. (Another
advantage of going with four initial groups rather than three is, as I
have already pointed out, that there are four different games using
7 files and 12 ranks, four different games using a 10 by 10 board with
the middle 16 squares as a 'no-entry zone', and four games submitted
by members of the Newton family; each of the four games in each of
those subsets could be placed in a different preliminary-round group.)
preliminary round groups.) It makes sense to have two judges from the same group play-test the same games with each other, either by email or in real-time. (With internet communication what it is today, moves can be sent in algebraic notation via instant message, using ZILLIONS to record the game, if the direct interactive ZILLIONS in-game link cannot [for whatever reason] be used.) I would also hope that each judge play-test each game they are responsible for at least TWICE. One time does not necessarily give one a good feel for a game, and often once a game has been played for the first time one might begin to pick up on certain strategies once they get into the flow of the game that had not occurred to them simply from reading the rules
page and looking at the initial setup, which they will then be better
prepared to use (either for attack, or to defend against) during the
second time around. After that, if someone wants to play-test it even
further than that, that's up to them based on how much free time they
have on their hands. I also think that it's always helpful whenever a
game's inventor can volunteer to play-test his own game with a judge
(as I have offered), since who else has a better initial feel for a game
than the creator itself, and I hope that as time goes by more entrants
will come forward to volunteer to play-test their own games with judges.
Even the games which do not yet have a ZILLIONS OF GAMES zrf file
available can be play-tested by email, by including an ASCII diagram
with each move transmitted, altered by hand in each reply to reflect
the new position that results from the move being sent. (By the way,
that might also be another initial criterion for separating games into
the preliminary-round groups: the games which do not have ZILLIONS OF
GAMES zrf files available should be distributed equally amongst the
groups.)
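
Just to make the grouping arithmetic concrete, here is a rough Python
sketch of that kind of distribution. The game names, feature tags, and
the count of 33 entries are placeholders only (the real entry list
would be substituted); the point is simply that each four-game subset
gets dealt out one per group, the no-zrf games get spread the same way,
and the rest are filled in so the groups come out 8, 8, 8 and 9.

    from itertools import cycle

    NUM_GROUPS = 4

    # Placeholder entry list: (name, shared-feature tag or None).
    # 4 + 4 + 4 + 4 + 17 = 33 entries, matching three groups of eleven.
    games = (
        [(f"7x12 game {i}", "7x12") for i in range(1, 5)]
        + [(f"10x10 no-entry game {i}", "10x10") for i in range(1, 5)]
        + [(f"Newton family game {i}", "newton") for i in range(1, 5)]
        + [(f"no-zrf game {i}", "no-zrf") for i in range(1, 5)]
        + [(f"other game {i}", None) for i in range(1, 18)]
    )

    groups = [[] for _ in range(NUM_GROUPS)]

    # First pass: deal each constrained subset round-robin, one game per group.
    for tag in ("7x12", "10x10", "newton", "no-zrf"):
        subset = [name for name, t in games if t == tag]
        for name, g in zip(subset, cycle(range(NUM_GROUPS))):
            groups[g].append(name)

    # Second pass: put each remaining game into the currently smallest group,
    # so the final sizes come out 8, 8, 8 and 9.
    for name, tag in games:
        if tag is None:
            min(groups, key=len).append(name)

    for i, grp in enumerate(groups, start=1):
        print(f"Group {i} ({len(grp)} games): {', '.join(grp)}")

Of course, this only illustrates the bookkeeping; the actual assignment
could just as easily be done by hand.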

Finally, I was wondering if anyone was going to make any suggestions as
to what criteria judges should use when evaluating games in the
contest. I don't mean to say that anything should be written in stone,
or that anyone MUST use certain criteria when evaluating games; judges
should be allowed a certain amount of flexibility and freedom in
deciding which criteria they feel are most important, and obviously
different judges will weigh different criteria differently. But I
wonder if anyone will come out and state POSSIBLE criteria that could
be used, or if you would prefer that judges figure that out for
themselves without any outside influence. I would certainly be willing
and able to give my two cents' worth in this forum as to which criteria
should be used in evaluating games if I am asked to, but for now I will
keep my mouth shut in case you'd all rather everyone stay silent on
that topic. Please let me know.

I'll tell you this much, though: if you ask me to state the various
criteria I think are important in evaluating games, I'll tell you which
ones I think are more important than others, but I won't try to come up
with any kind of elaborate points-scoring system to give games grades
or scores. To me that's a bit too scientific, and it won't be
appropriate or useful for judges who may personally disagree that a
particular criterion I stated deserves more (or less) emphasis than I
would seem to give it.