Arimaa Forum (http://arimaa.com/arimaa/forum/cgi/YaBB.cgi)
Arimaa >> Events >> 2013 Arimaa Challenge
(Message started by: Fritzlein on Mar 11th, 2013, 5:31pm)

Title: 2013 Arimaa Challenge
Post by Fritzlein on Mar 11th, 2013, 5:31pm
The first screening games of 2013 have already been played, and the bots are off to a slow start.  All four developers said there were not many improvements from last year, so most of the increase in performance should come from running on eight cores instead of four, but is it possible that they will have a lower performance than in 2012?

Year  Pairs  Decisive  Winner / Score / Perf   Loser / Score / Perf
----  -----  --------  ---------------------   --------------------
2007     12         2      bomb / 2 / 2087       Zombie / 0 / 1876
2008     16         7      bomb / 6 / 1918        sharp / 1 / 1576
2009     23         7  clueless / 5 / 1910       GnoBot / 2 / 1792
2010     25        11    marwin / 6 / 2065     clueless / 5 / 1960
2011     40        11    marwin / 6 / 2110        sharp / 5 / 2109
2012     33         7  briareus / 5 / 2232       marwin / 2 / 2128
2013     25         6    marwin / 4 / 2121      ziltoid / 2 / 2055

Title: Re: 2013 Arimaa Challenge
Post by rbarreira on Mar 11th, 2013, 6:20pm

on 03/11/13 at 17:31:45, Fritzlein wrote:
most of the increase in performance should come from running on eight cores instead of four


I think ziltoid2013 only sees a small increase in nodes per second vs briareus2012, and this is with a doubling of the number of threads (which hurts the search). So the hardware may be about as powerful as last year's, at least for my bot.


on 03/11/13 at 17:31:45, Fritzlein wrote:
but is it possible that they will have a lower performance than in 2012?


Maybe yes, if marwin and ziltoid are pretty similar to their 2012 versions and more people got used to their weaknesses.

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Mar 13th, 2013, 1:41pm
Congrats to Max on scoring a clean sweep!

Marwin draws first blood with two victories this morning, and now has a calculable performance rating, but ziltoid is still scoreless.  Together they are 2-8, counting my half-game as a victory for humanity.

Title: Re: 2013 Arimaa Challenge
Post by tize on Mar 13th, 2013, 2:03pm

on 03/11/13 at 17:31:45, Fritzlein wrote:
All four developers said there were not many improvements from last year

I don't think I said that (or maybe I did, it's hard to remember everything that one says).  ???
But I made a lot of changes (note the word changes here and not improvements :)  ), so it might be true for marwin that not many improvements were made.
Of course many of them don't affect the rating much (or even measurably), like the new improved handling of repetitions.

I have not made any measurements of the changes, but maybe I will.

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Mar 13th, 2013, 2:16pm

on 03/13/13 at 14:03:30, tize wrote:
I don't think I said that (or maybe I did, it's hard to remember everything that one says).  ???
But I made a lot of changes (note the word changes here and not improvements :)

Oh, I apologize for misquoting you.  I wasn't paying close attention, and I just reported my general impression of developer chat, so I certainly forgot who said what.  Perhaps all of your changes will indeed make a noticeable difference.

By the way, did you see the same lack of improvement from 4 cores to 8 cores that Ricardo did?  Maybe parallelization gets increasingly difficult as the number of cores increases.
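The classic back-of-the-envelope model for this is Amdahl's law. A quick sketch (purely illustrative - the parallel fraction is made up, and no bot necessarily scales this way):

```python
# Amdahl's law: predicted speedup when a fraction p of the work
# parallelizes perfectly and the rest (1 - p) stays serial.
def amdahl_speedup(p, cores):
    return 1.0 / ((1.0 - p) + p / cores)

# Even with 90% of the search parallelized, doubling 4 -> 8 cores
# gains noticeably less than 2x:
for cores in (1, 2, 4, 8):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

With these made-up numbers, going from 4 to 8 cores buys only about a 1.5x speedup, so the curve does flatten as cores are added.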

Title: Re: 2013 Arimaa Challenge
Post by rbarreira on Mar 13th, 2013, 2:26pm

on 03/13/13 at 14:16:45, Fritzlein wrote:
By the way, did you see the same lack of improvement from 4 cores to 8 cores that Ricardo did?  Maybe parallelization gets increasingly difficult as the number of cores increases.


I don't think my observations of performance on the 2013 challenge hardware have much to do with the general difficulty of parallelization (although it definitely plays a part too), because I have run my bot on 32-64 core machines with a nice speedup.

This year's CPU is based on the AMD Bulldozer architecture, in which each pair of cores shares a module with common instruction fetchers/decoders, in addition to the shared L2/L3 cache (the latter was already common in other architectures). This means that, depending on the particular software, going from 4 threads to 8 threads on an 8-core CPU can easily result in a speedup well below 2x.

Title: Re: 2013 Arimaa Challenge
Post by tize on Mar 13th, 2013, 2:28pm
No problem, it might also be that you were correct, if we talk about improvements.

As for the 8 cores: I didn't do any testing to see how much faster he got (just tested that it worked and that he used all 8 ). And it depends heavily on the position for marwin, since I only split at the root level and run the first move in "single core mode". This means that marwin switches between using 8 cores and 1 (slightly simplified...). But my guess is that marwin got 50% faster. (The number is scientifically pulled from thin air.) :)

Title: Re: 2013 Arimaa Challenge
Post by foggy on Mar 13th, 2013, 4:13pm
Actually, I don't think this year's HW is much of an improvement. There are 8 cores instead of 4, but a Bulldozer "core/unit" has at least 1.5x less performance compared to Intel's.

I was expecting i7 architecture this year. According to Fritz measurements (I mean the chess program, not the arimaa player/TD :) ), which I suppose should be close to arimaa bots, i7 is better than Bulldozer's 8 cores (sharing decoders and cache makes them more like Intel's hyperthreading than like full cores).

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Mar 16th, 2013, 1:21pm
We now have two sweeps of the bots, equaling last year's total.  Ziltoid is finally on the board with a win and a calculable performance rating, but marathon wins by Nombril and novacat keep it down to a paltry 1659.  Meanwhile Harren knocked marwin down to 1972.

There's a long way to go yet in the screening, and the bots' performance ratings are quite likely to rise, but it already seems like a long shot for them to equal their excellent results from last year.

Marwin remains 0.5 points ahead by virtue of winning the only decisive pair so far.

Title: Re: 2013 Arimaa Challenge
Post by omar on Mar 19th, 2013, 5:24am

on 03/13/13 at 16:13:32, foggy wrote:
Actually, I don't think this year's HW is much of an improvement. There are 8 cores instead of 4, but a Bulldozer "core/unit" has at least 1.5x less performance compared to Intel's.

I was expecting i7 architecture this year. According to Fritz measurements (I mean the chess program, not the arimaa player/TD :) ), which I suppose should be close to arimaa bots, i7 is better than Bulldozer's 8 cores (sharing decoders and cache makes them more like Intel's hyperthreading than like full cores).


I would assume more cores would be better for the bots than slightly faster cores. But that's just a guess.

If any of the bot developers have some benchmark program that I can use to gauge the hardware performance I would like to start using it. Maybe something that runs for about a minute on some fixed game position and prints out the number of nodes that were evaluated. It should auto detect the number of cores and memory and adjust itself to the hardware (perhaps using a launch script that checks the hardware and starts the benchmark program with the best parameters for the hardware).

It would be good to start using something like this and keeping track of the hardware improvement year to year. Wish we had started doing this earlier.

I can use one of the bots to do this, but just wanted to check if anyone already had a benchmark program they were using.
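For what it's worth, here is the sort of launch script I had in mind, sketched in Python. The benchmark binary, its flags, and the position file are all hypothetical - a real bot would need its own equivalents:

```python
# Detect the hardware, then start a (hypothetical) benchmark binary
# with matching parameters.  "arimaa_bench" and all of its flags are
# made up for illustration.
import os

cores = os.cpu_count() or 1
try:
    # Total RAM in MB (Unix-only sysconf keys).
    mem_mb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") // (1024 * 1024)
except (ValueError, OSError, AttributeError):
    mem_mb = 2048  # fall back to a conservative guess

hash_mb = max(64, mem_mb // 4)  # let the benchmark use roughly 1/4 of RAM

cmd = ["./arimaa_bench",
       "--threads", str(cores),
       "--hash-mb", str(hash_mb),
       "--seconds", "60",
       "--position", "fixed_midgame.pos"]
print("would run:", " ".join(cmd))
# To actually launch:  import subprocess; subprocess.run(cmd)
```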

Title: Re: 2013 Arimaa Challenge
Post by lightvector on Mar 19th, 2013, 7:23am
Note that the number of nodes is probably *not* the right thing to measure. For a fixed search depth, the average number of nodes searched increases with the number of threads, because parallelism makes the search less efficient. For example, searching two branches of the tree in parallel will be worse than searching them sequentially if searching one of them first would have provided better alpha/beta bounds for the second, or even a beta cutoff so that the second branch need not have been searched at all.

Instead, you probably want to measure the time taken to reach a given fixed depth. Although if a bot has some unsafe pruning heuristics and such that depend heavily on dynamically gathered information in the search, that might not exactly be right either. It might also differ slightly from the actual effective strength due to how a bot handles cases where the final depth is only partially searched, rather than fully searched. But probably these are second-order and not too big of a deal.
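A minimal sketch of the time-to-fixed-depth idea, with a dummy uniform-tree search standing in for a real bot (toy_search is obviously not any bot's actual search):

```python
# Time how long it takes to complete a search to a fixed depth.
# The thing being timed is the whole fixed-depth search, not the
# node count, which is the metric suggested above.
import time

def toy_search(depth, branching=8):
    """Visit a uniform tree; stands in for a real alpha-beta search."""
    if depth == 0:
        return 1
    return sum(toy_search(depth - 1, branching) for _ in range(branching))

def time_to_depth(depth):
    start = time.perf_counter()
    nodes = toy_search(depth)
    return time.perf_counter() - start, nodes

elapsed, nodes = time_to_depth(5)
print(f"depth 5: {nodes} leaves in {elapsed:.3f}s")
```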

As for whether 8 cores is better, I haven't looked in detail, but for sharp, above 3-4 cores I recall that the loss becomes very noticeable, so that 8 cores gives far less than 8x the effective search power. Although some of that might simply be sharp being underoptimized - I did implement a threading framework that gives a lot of freedom to choose any parallelization policy, but have spent very little time tuning the policy so far. Probably other developers could provide better stats.

Title: Re: 2013 Arimaa Challenge
Post by rbarreira on Mar 19th, 2013, 7:38am
I don't have any benchmark script. I usually just try ziltoid on a position a few times and find the maximum achieved NPS. It can take quite a few tries due to parallel non-determinism, and some positions can be bad for this. In particular, if one move takes much longer to calculate than all others the bot might be using just one thread for quite a while. For most positions this is not the case.

My feeling for the last few years is this: The 2010 hardware was exactly as powerful as the 2011 hardware (the X3360 (http://ark.intel.com/products/33933/Intel-Xeon-Processor-X3360-12M-Cache-2_83-GHz-1333-MHz-FSB), AFAIK is exactly like the Q9550 (http://ark.intel.com/products/33924/Intel-Core2-Quad-Processor-Q9550-12M-Cache-2_83-GHz-1333-MHz-FSB) CPU except for being a server part). The 2012 hardware is about as powerful as the 2013 one as I said earlier (for my bot).

So the remaining question is how much of a boost happened between 2011 and 2012. According to most of these benchmarks (http://www.anandtech.com/bench/Product/363?vs=50), it seems to be around 40%. Both are quad-core CPUs, so parallelization should be a non-issue in this particular comparison.

My not-so-scientific guess is that, for my bot, between 2010 to 2013 the hardware got around 40% faster. But it's hard to give a concrete number without trying benchmarks again.


on 03/19/13 at 07:23:18, lightvector wrote:
Instead, you probably want to measure the time taken to reach a given fixed depth.


I agree this is probably the best way. Either the minimum or the median time of several tries should be taken, to account for non-determinism.
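As a sketch of that aggregation step (the timed workload here is a stand-in, not a real bot search):

```python
# Repeat a timed run several times and report the minimum and the
# median, so a single noisy run (parallel non-determinism, OS jitter)
# doesn't skew the benchmark.
import statistics
import time

def timed_run(search):
    start = time.perf_counter()
    search()
    return time.perf_counter() - start

def benchmark(search, tries=7):
    times = [timed_run(search) for _ in range(tries)]
    return min(times), statistics.median(times)

# Example with a stand-in workload:
best, typical = benchmark(lambda: sum(range(200_000)))
print(f"min {best:.4f}s  median {typical:.4f}s")
```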

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Mar 22nd, 2013, 10:29pm
In the general man vs. machine contest, there has been some tit-for-tat resulting in humanity staying well ahead of where it was last year.  In the bot vs. bot contest, marwin has opened up a commanding 2.5-point lead by beating both RmznA and arimaa_master, each of whom turned around and beat ziltoid.

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Mar 25th, 2013, 6:50pm
Since last update, marwin went 0-2 while ziltoid was 3-1.  This hasn't changed marwin's 2.5-point lead yet, but the incomplete pairs are now more favorable to ziltoid.  Humanity, meanwhile, continues to score better overall than last year, although the bots are inching up from their dismal opening to currently weigh in at 2034 and 1892 respectively.

Title: Re: 2013 Arimaa Challenge
Post by Boo on Mar 26th, 2013, 10:35am
What if some players end up having played only 3 screening games? I think the results are calculated in a weird way. E.g. both aaaa and arimaa_master have played 3 games, 2 against ziltoid and 1 against marwin. Both won 1 game against ziltoid and lost the 2 other games. However, the score is 1-1 for aaaa and 0-1 for arimaa_master. Why does the colour of a game have such a big impact on the final result? I think the same amount of points should be assigned to marwin and ziltoid in such a case.

Title: Re: 2013 Arimaa Challenge
Post by tize on Mar 26th, 2013, 5:50pm
That would be even stranger, as one bot would then need two games to get one point while the other bot needs only one game. Winning all games and winning half of the games are not equal.

The matching of the color is just a simple way to enforce pairs of games in the score; it's not the color that is the important part, it's the order...

But I do agree that unfinished pairs make the scoring look strange.
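A simplified sketch of the pair scoring as I understand it: a completed pair where exactly one bot won is decisive and gives that bot a point, and incomplete pairs are dropped. (The official rules may differ in details, half-points for instance.)

```python
# Toy model of decisive-pair scoring.
def score_pairs(pairs):
    """pairs: list of (bot_a_won, bot_b_won), or None for an incomplete pair."""
    a = b = 0
    for pair in pairs:
        if pair is None:          # only one game of the pair was played
            continue
        a_won, b_won = pair
        if a_won and not b_won:
            a += 1
        elif b_won and not a_won:
            b += 1
        # both-won / both-lost pairs are not decisive: no points
    return a, b

# Boo's example: one human's pair is decisive for bot A, the other
# human left the pair incomplete, so it counts for nobody.
print(score_pairs([(True, False), None]))  # -> (1, 0)
```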

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Mar 26th, 2013, 6:44pm

on 03/26/13 at 10:35:00, Boo wrote:
What if some players end up having played only 3 screening games? I think the results are calculated in a weird way. E.g. both aaaa and arimaa_master have played 3 games, 2 against ziltoid and 1 against marwin. Both won 1 game against ziltoid and lost the 2 other games. However, the score is 1-1 for aaaa and 0-1 for arimaa_master. Why does the colour of a game have such a big impact on the final result? I think the same amount of points should be assigned to marwin and ziltoid in such a case.

For maximum fairness, screening games should always be played in pairs.  The "play bots" page tells people not to play one game of a pair unless they can play both.  Nevertheless, in real life it isn't always possible for people to know whether they will have time to complete every pair, so every year there are several uncompleted pairs.

The only year in which this has been an issue was 2011, when marwin won by half a point but the uncompleted pairs favored sharp.  If all of those pairs had been finished, there was a good chance sharp would have won the Screening.  But what can we do about it?  Throwing away an incomplete pair is unfair, but counting an incomplete pair seems even more unfair, since the other bot didn't have the same chance.

http://arimaa.com/arimaa/forum/cgi/YaBB.cgi?board=events;action=display;num=1299781791;start=60

I expect that at some point the Challenge Screening will move away from the current open format to an invitation-only format with, say, 15 people hand-picked by Omar who each commit to play all 4 games.  The main reason for this change would be to prevent abuse by sock-puppet accounts, but a secondary reason would be to increase the chances that every pair that gets started also gets completed.

Title: Re: 2013 Arimaa Challenge
Post by Boo on Mar 27th, 2013, 4:27am

Quote:
But what can we do about it?

1) You can count only those players who have played all 4 games.
2) You can change the bot strength evaluation method into calculating their performance instead. Something like:

Quote:
the bots are inching up from their dismal opening to currently weigh in at 2034 and 1892 respectively.


It is now a weird system. One game played - too little data. 2 games played - ok, enough data for strength evaluation. 3 games played - too little data again (???). But logically, the more games are played, the more exact the strength evaluation should be. It should not be that playing more games (3) makes the strength evaluation more obscure than playing fewer games (2).

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Mar 27th, 2013, 7:39am

on 03/27/13 at 04:27:00, Boo wrote:
2) You can change the bot strength evaluation method into calculating their performance instead.

The case against using performance rating rather than raw results is that performance rating relies on the gameroom ratings of the human players, which are notoriously inaccurate and can even change significantly between when they play one bot and when they play the other.
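For concreteness, one common way to compute a performance rating is the linear approximation below: average opponent rating plus 400 times the score margin per game. I don't know whether the gameroom uses exactly this formula.

```python
# FIDE-style linear approximation of a performance rating.
# Not necessarily the formula the gameroom uses; for illustration only.
def performance_rating(opponent_ratings, wins, losses):
    games = len(opponent_ratings)
    avg = sum(opponent_ratings) / games
    return avg + 400 * (wins - losses) / games

# A bot scoring 3-1 against opponents averaging 1900:
print(round(performance_rating([1850, 1900, 1950, 1900], 3, 1)))  # -> 2100
```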


Quote:
It is now a weird system. One game played - too little data. 2 games played - ok, enough data for strength evaluation. 3 games played - too little data again (???).

With 3 games played, the first two are still used.  But you suggested above that even two is still too few, and only players with all four games should count?

Title: Re: 2013 Arimaa Challenge
Post by Boo on Mar 27th, 2013, 9:52am

Quote:
The case against using performance rating rather than raw results is that performance rating relies on the gameroom ratings of the human players, and the gameroom ratings of the human players are notoriously inaccurate, and can even change significantly between when they play one bot to when they play the other.


Rating inaccuracies will cancel each other out as the number of played games (opponents) increases.
'can even change significantly' - yes (for one game), but again this change is chaotic, and its influence decreases as the number of games played increases. I think performance is the best way to compare bots, as it counts every game, though both bots should play as close to the same number of games as possible. (Now all new players start with ziltoid, if it is idle.)


Quote:
But you suggested above that even two is still too few, and only players with all four games should count?


Yes, I suggested that as an alternative.


Quote:
With 3 games played, the first two are still used.


Yes, but the 3rd game is not used at all. Though as I understand from "Games where only one of the bots were played are not counted.", the 3rd game should be counted, as both bots were played.

EDIT - there is a 3rd alternative - compare total win% of each bot. E.g. now ziltoid has 8/21 = 38.1% and marwin has 5/13 = 38.5%.

Title: Re: 2013 Arimaa Challenge
Post by tize on Mar 27th, 2013, 1:57pm
That would compare a win against a weak player with a win against a strong player, without trying to account for the difference in those wins.

By only counting game pairs it's ok to just count wins (or win %). But if all games should be counted then a more advanced system must be used, like a normal rating.

Title: Re: 2013 Arimaa Challenge
Post by browni3141 on Mar 27th, 2013, 3:01pm

on 03/27/13 at 13:57:40, tize wrote:
That would compare a win against a weak player with a win against a strong player, without trying to account for the difference in those wins.

By only counting game pairs it's ok to just count wins (or win %). But if all games should be counted then a more advanced system must be used, like a normal rating.

I'd be wary of using ratings in the Screening. Improving players and bot-bashers will have very inaccurate ratings. Is it fair to consider ratings when a rapidly improving player's rating of 1830 is not reflective of his current strength and he plays in the screening? Or when a 2300 bot-basher whose true strength is closer to 1800 plays?

Title: Re: 2013 Arimaa Challenge
Post by Boo on Mar 27th, 2013, 3:34pm
Yes, using ratings has a luck factor involved. But I think its impact is much smaller than now, when a bot gets a point in a 3-game series essentially based on a coin flip. The current result is 2-5 for marwin, and 2 of marwin's 5 points were won on a coin flip. The result could easily be 2-3 if ziltoid had guessed the winning colour.  Isn't that too much luck? And how many players who lose/win 300 pts in a month are in the screening?

Title: Re: 2013 Arimaa Challenge
Post by browni3141 on Mar 27th, 2013, 5:40pm

on 03/27/13 at 15:34:15, Boo wrote:
Yes, using ratings has the luck factor involved. But I think it is of much lesser impact, than it is now, when a bot gets a point in a 3 game series essentially based on a coin flip. The current result is 2-5 for marwin, and 2 points out of 5 for marwin are won on a coin flip. The result could easily be 2-3, if ziltoid had guessed the winning colour.  Isn't it too much luck? And how many players who lose/win 300pts in a month are in the screening?

I don't understand what you mean, Boo. How were any games "won on a coin flip?"

Title: Re: 2013 Arimaa Challenge
Post by browni3141 on Mar 28th, 2013, 2:09am
Poor marwin has lost 6 of its last 7 games, bringing its gameroom rating down to 2133, below ziltoid's. It seems ziltoid has been faring much better in the recent match-ups, losing only 2 of 7. Of course I chose the number 7 in an unfair way, but it is still looking like a close race!
It's funny that novacat has now won both of his games by elimination, and he's the only one to win a screening game by elimination so far. Maybe it has something to do with his style?

Title: Re: 2013 Arimaa Challenge
Post by Boo on Mar 28th, 2013, 2:31am

Quote:
I don't understand what you mean, Boo. How were any games "won on a coin flip?"


I'm talking about the 3-game series, not a single game.
E.g. against arimaa_master (the same applies to RmznA): ziltoid won with silver and lost with gold, and thus marwin leads, because it won with gold. If ziltoid had won with gold and lost with silver, it would be 1-1 as opposed to the current 0-1. What is the difference between those two scenarios?

Title: Re: 2013 Arimaa Challenge
Post by novacat on Mar 28th, 2013, 7:27am

on 03/28/13 at 02:31:51, Boo wrote:
E.g. against arimaa_master (The same applies to RmznA). Now ziltoid has won with silver and lost with gold and thus marwin leads, because it won with gold. If ziltoid had won with gold and lost with silver, it would be 1-1 as opposed to the current 0-1. What is the difference between those two scenarios?

The difference is that when bot_ziltoid played silver, it was the first game ever between the player and the bot on the current hardware.  The game with bot_ziltoid as gold was the second encounter.  Bot_ziltoid may have learned from its first game.  We would certainly assume that for the human if the results were reversed.  

Also, the human player may have decided to try a more risky strategy the second time around since they already beat the bot the first time.  It is not unprecedented for someone to play the bots in less than ideal conditions just for fun, and these people are typically conscientious enough to play the two bots in the same manner.  

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Mar 28th, 2013, 9:58am
Since my last update, ziltoid went 2-1 while marwin went 1-4.  This includes ziltoid's first point of the screening to pull within 1.5 of marwin.  Ziltoid also pulls closer in performance rating, 1908 to 1993.  It would be quite an unexpected coup for humanity to beat down both bots to have a sub-2000 performance rating by the end of the screening.

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Mar 30th, 2013, 7:25pm
A late flurry of activity brings the number of humans completing the full screening to eight: Thiagor, Max, aaaa, arimaa_master, Fritzlein, gthreepwood, RmznA, and supersamu.  The total number of games played probably won't match that of last year, but it is great to see so many people finishing what they start.  Indeed, there are presently only two incomplete pairs: 722caasi and mightyfez.

After supersamu completed his sweep, the bots went on a five-game winning streak, pushing their ratings up to 2069 and 1978 respectively.  Ziltoid picked up a point, but marwin got it right back, to keep its lead at 1.5 points with just under 24 hours remaining.  Things are looking bleak for ziltoid unless 722caasi and mightyfez each finish their pairs by beating marwin.

Title: Re: 2013 Arimaa Challenge
Post by browni3141 on Mar 30th, 2013, 11:58pm

on 03/30/13 at 19:25:39, Fritzlein wrote:
Things are looking bleak for ziltoid unless 722caasi and mightyfez each finish their pairs by beating marwin.

That's too bad, I have a formula for ziltoid which I think is pretty much infallible half the time.  Even a beginner could do it, although some of the experts might have difficulty.
I haven't tested it at 2 minutes/move, and I don't plan to.
Can you figure it out without looking at my game history? (or the chat archive)
Consider it a riddle! You've got three clues :)

Title: Re: 2013 Arimaa Challenge
Post by harvestsnow on Mar 31st, 2013, 2:59am

Quote:
infallible half the time

You could start two games and relay the moves to make it play against itself. Though that would theoretically work for any bot, and wouldn't comply with the challenge format.

"Experts would have difficulty" because they would resent some of the bot's choices :D

Title: Re: 2013 Arimaa Challenge
Post by browni3141 on Mar 31st, 2013, 3:46am

on 03/31/13 at 02:59:18, harvestsnow wrote:
You could start two games and relay the moves to make it play against itself. Though that would theoretically work for any bot, and wouldn't comply with the challenge format.

"Experts would have difficulty" because they would resent some of the bot's choices :D

Very clever! Unfortunately it's wrong, though. It doesn't take into account one of the clues.
My strategy will work playing a single game at a time.
"I haven't tested it at 2 minutes/move, and I don't plan to. " was a subtle clue.

Title: Re: 2013 Arimaa Challenge
Post by clyring on Mar 31st, 2013, 7:17am
I really don't want to stall for eight hours in any games as Silver if I can possibly avoid it, brownipi.

Title: Re: 2013 Arimaa Challenge
Post by rbarreira on Mar 31st, 2013, 7:31am

on 03/31/13 at 07:17:29, clyring wrote:
I really don't want to stall for eight hours in any games as Silver if I can possibly avoid it, brownipi.


Especially not with an unstable Internet connection.

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Apr 1st, 2013, 4:26pm
The bots won all five games on the final day to finish with a combined ten-game winning streak.  Marwin ends 12-13 with a 2121 performance rating, and ziltoid ends 13-15 with a 2055 performance rating.  The strong finish offset the weak start, leading to a respectable overall showing for the bots.

All three uncompleted pairs were wins for ziltoid, which is unlucky for ziltoid, but marwin clearly performed better in total, and takes home the 1.5-point victory in the official standings.  My personal guess is that marwin will also pose the greater challenge to the defenders.

I'm looking forward to the official Challenge games!

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Apr 13th, 2013, 4:00pm
Now that browni has won his second game, the Challenge is defended.  Thanks, browni, for keeping my money safe for another year.  This win was particularly impressive for its style.  If you can out-slug a bot in a wild, tactical position, what hope is there for silicon?

http://arimaa.com/arimaa/gameroom/comments.cgi?gid=263067

Title: Re: 2013 Arimaa Challenge
Post by chessandgo on Apr 14th, 2013, 3:19am
Congrats browni!

Fritz, I'm not sure that game was so tactical. Browni's opening was on the aggressive side, but all the rest was fundamentally sound strategic play (cut the caMel away from your attacking horse, camel hostage, goal attack in a weakened quadrant ...). It was all very well executed though! I hope you win again with a significant material handicap, browni.

Title: Re: 2013 Arimaa Challenge
Post by clyring on Apr 26th, 2013, 8:07pm
All three of the minimatches have now been won in favor of the humans. Let the festivities begin!

Title: Re: 2013 Arimaa Challenge
Post by Fritzlein on Apr 26th, 2013, 9:40pm
Congrats on fighting back for the victory in the game and the mini-match, clyring.  After computers seemed to gain ground on humans for four consecutive years (2009, 2010, 2011, 2012), it is good to see signs that we humans have re-opened the gap a little bit in 2013.

Title: Re: 2013 Arimaa Challenge
Post by omar on Apr 27th, 2013, 6:43am
Big congrats to the defenders for pulling off an 8-1 victory. You are helping to carry on the legacy of the Man vs Machine match and show that humans can still outsmart computers even on a chess board.



Arimaa Forum » Powered by YaBB 1 Gold - SP 1.3.1!
YaBB © 2000-2003. All Rights Reserved.