Arimaa Forum (http://arimaa.com/arimaa/forum/cgi/YaBB.cgi)
Arimaa >> Off Topic Discussion >> AI Challenge Contest
(Message started by: Janzert on Oct 24th, 2011, 2:19am)

Title: AI Challenge Contest
Post by Janzert on Oct 24th, 2011, 2:19am
(aka the contest previously called The Google AI Contest) ;)

I know several here have participated in the last few contests, Planet Wars and Tron. We've just launched the latest contest, this time featuring ants and hills. Underneath, it features most of the core concepts of a traditional real-time strategy game.

Feel free to come take a look at aichallenge.org (http://aichallenge.org). Hope you find it interesting and come join the fun.

Janzert

Title: Re: AI Challenge Contest
Post by aaaa on Oct 24th, 2011, 1:59pm
Interesting. Having more than two players in a game for once adds a dynamic not seen in the earlier contests, although it remains to be seen whether the various possibilities for collusion add to the appeal of the challenge or actually detract from it.

Title: Re: AI Challenge Contest
Post by Hippo on Oct 24th, 2011, 5:36pm

on 10/24/11 at 02:19:40, Janzert wrote:
(aka the contest previously called The Google AI Contest) ;)

I know several here have participated in the last few contests, planetwars and tron. We've just launched the latest contest. This time featuring ants and hills. Underneath it features most of the core concepts from a traditional real time strategy game.

Feel free to come take a look at aichallenge.org (http://aichallenge.org). Hope you find it interesting and come join the fun.

Janzert


I hope to be able to stay cool. But the rules are interesting ;): exploration, fight avoidance, hill attacking, map construction, map guessing.

Title: Re: AI Challenge Contest
Post by Fritzlein on Oct 24th, 2011, 10:08pm
How are the rankings calculated?  I notice that players now have a "skill" instead of an Elo score.  The exact formula in use will have a substantial impact on correct strategy, particularly in multiplayer games, but I didn't find a description of the ranking algorithm in a brief pokeabout.  I did, however, notice a game in which the top player came in third out of five and slightly improved his rating.  I can't think of a reasonable system in which an average result against competition that is weaker than you on average results in a rating gain.

Title: Re: AI Challenge Contest
Post by Janzert on Oct 25th, 2011, 1:14am
The rankings are calculated using TrueSkill (http://research.microsoft.com/en-us/projects/trueskill/). Basically, the skill displayed is a conservative estimate of the player's actual skill (i.e. best estimate minus 3 standard deviations). So for a player with few games (pretty much everyone at this point), even if the best estimate of your skill goes down, the reduction in uncertainty can more than make up for it in your displayed skill. Hovering over a skill display on the site will show the breakdown into best estimate (mu) and uncertainty (sigma).

Having said that, I neither implemented it nor have ever gotten deeply into the TrueSkill math, so I may very well have misrepresented it above. ;)
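The arithmetic behind that "conservative estimate" can be sketched in a few lines. This is only an illustration using TrueSkill's usual defaults (mu = 25, sigma = 25/3); the contest's exact constants may differ:

```python
# Sketch of the displayed-skill formula Janzert describes: best estimate
# (mu) minus 3 standard deviations (sigma). The prior values below are
# TrueSkill's usual defaults; the contest's configuration may differ.

def displayed_skill(mu: float, sigma: float) -> float:
    """Conservative skill: best estimate minus three standard deviations."""
    return mu - 3.0 * sigma

# A brand-new player starts at the prior, so the displayed skill is ~0.
before = displayed_skill(25.0, 25.0 / 3.0)

# After a mediocre result, mu may drop a little, but sigma shrinks a lot,
# so the displayed skill can still rise -- matching Fritzlein's observation
# of a third-place finish that improved the top player's rating.
after = displayed_skill(24.0, 6.0)

assert after > before
```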

Janzert

[Edit: Correct number of stddevs subtracted for conservative skill estimate]

Title: Re: AI Challenge Contest
Post by Hippo on Dec 19th, 2011, 2:50pm
So even when I wanted to stay cool, I spent a lot of time coding ... let us see how I finish ... http://aichallenge.org/profile.php?user=6888.

Rabbits participated as well, just by testing the demo bot.
Who else participated?

Title: Re: AI Challenge Contest
Post by Dolus on Dec 19th, 2011, 4:07pm
I participated with just the demo bot. I never had time to even try programming anything. :(

Title: Re: AI Challenge Contest
Post by omar on Dec 20th, 2011, 10:56pm
Congrats Hippo; placing 92 out of almost 8000 contestants is great.

If you or Janzert are in touch with the organizers can you suggest a simplified version of Arimaa for a future contest. Maybe with just EMHDRRRR per side to make it go faster.

I could sponsor the contest by providing some servers.

Title: Re: AI Challenge Contest
Post by ingwa on Dec 21st, 2011, 8:19am
Without having done the math, I'm pretty sure that a 4x4 playing field is too small.  It could be precalculated in a not-so-giant table. (Besides with 8 pieces per side there would be no room for movement, the board would be full.)

I think that 6x6 could work, though, for instance using EMHHDD+6R.

Title: Re: AI Challenge Contest
Post by megajester on Dec 21st, 2011, 12:10pm

on 12/21/11 at 08:19:52, ingwa wrote:
Without having done the math, I'm pretty sure that a 4x4 playing field is too small.  It could be precalculated in a not-so-giant table. (Besides with 8 pieces per side there would be no room for movement, the board would be full.)

I think that 6x6 could work, though, for instance using EMHHDD+6R.

I think that exact configuration might make things too cramped.

32 pieces on 64 squares becomes
24 pieces on 36 squares:
75% of the pieces on 56% of the squares.

Keeping the same density, 56% of the pieces works out to 9 pieces per side. How about EMHH+5R?

Title: Re: AI Challenge Contest
Post by Dolus on Dec 21st, 2011, 1:15pm

on 12/21/11 at 08:19:52, ingwa wrote:
Without having done the math, I'm pretty sure that a 4x4 playing field is too small.  It could be precalculated in a not-so-giant table. (Besides with 8 pieces per side there would be no room for movement, the board would be full.)

I think that 6x6 could work, though, for instance using EMHHDD+6R.


You could change the 4x4 board to have a border of trap squares, maybe even with the added caveat that adjacent friendly pieces don't "protect" your pieces from the traps. It would definitely make the setup a major part of the game, possibly giving a significant advantage to Silver. You might need to decrease the movement to 2 steps per turn.

But I'm also not sure that Omar had in mind a smaller board by his suggestion. I could be wrong, but I imagined he was referring to one of his variants of Arimaa where you use the same board with fewer pieces.

Title: Re: AI Challenge Contest
Post by Hippo on Dec 21st, 2011, 3:42pm

on 12/20/11 at 22:56:27, omar wrote:
Congrats Hippo; placing 92 out of almost 8000 contestants is great.

If you or Janzert are in touch with the organizers can you suggest a simplified version of Arimaa for a future contest. Maybe with just EMHDRRRR per side to make it go faster.

I could sponsor the contest by providing some servers.


My prediction is slightly behind 30th; anything better would be nice :).

I expect I will be the only ply-1 bot in the top 35.
Unfortunately I had no time to debug the sped-up food gathering and start thinking about alpha-beta.
... One more month, or at least 14 days ...

Title: Re: AI Challenge Contest
Post by Hippo on Dec 24th, 2011, 9:20am
So, great success at the end. I finished in 23rd place, after being 20th at the start of the last game. A bit of bad luck in the second-to-last game: I finished last while dominating the board, 5 turns from finishing 2nd and around 30 turns from finishing first.

I lost at least twice due to using the default parser, which was not able to finish parsing the first turn's server input in time.

Title: Re: AI Challenge Contest
Post by Tuks on Dec 24th, 2011, 1:59pm
Congrats! You were pretty stable at that position; you stuck around there for the last 24 hours.

Title: Re: AI Challenge Contest
Post by Hippo on Dec 25th, 2011, 1:01pm

on 12/21/11 at 15:42:42, Hippo wrote:
My prediction is slightly behind 30th; anything better would be nice :).

I expect I will be the only ply-1 bot in the top 35.
Unfortunately I had no time to debug the sped-up food gathering and start thinking about alpha-beta.
... One more month, or at least 14 days ...


Finally, after reading xiathis's code and description ... it was a P1 bot as well ;) and in many aspects it was much simpler than mine ... good job by him.

Title: Re: AI Challenge Contest
Post by Fritzlein on Dec 25th, 2011, 6:09pm
Congratulations, Hippo, on your 23rd-place finish!

Title: Re: AI Challenge Contest
Post by omar on Dec 28th, 2011, 3:13pm

on 12/21/11 at 08:19:52, ingwa wrote:
Without having done the math, I'm pretty sure that a 4x4 playing field is too small.  It could be precalculated in a not-so-giant table. (Besides with 8 pieces per side there would be no room for movement, the board would be full.)

I think that 6x6 could work, though, for instance using EMHHDD+6R.


Sorry, I should have been clearer about this. The board would still be the usual 8x8 Arimaa board; only the number of pieces in the starting setup would be smaller. Fewer starting pieces tend to make games go faster, but I've never tried it with EMHDRRRR per side, so it would need to be play-tested a bit. I proposed this based on Karl's guess that having one of each non-rabbit piece would make the game sharper. Perhaps this could be play-tested by bots: a bot playing about 100 games against itself, thinking for just 2 ply, should provide enough data to give us an idea of how many moves a typical game takes with this set of pieces.

Title: Re: AI Challenge Contest
Post by omar on Dec 28th, 2011, 3:20pm

on 12/24/11 at 09:20:24, Hippo wrote:
So, great success at the end. I finished in 23rd place, after being 20th at the start of the last game. A bit of bad luck in the second-to-last game: I finished last while dominating the board, 5 turns from finishing 2nd and around 30 turns from finishing first.

I lost at least twice due to using the default parser, which was not able to finish parsing the first turn's server input in time.


Amazing finish. It's good to know we have such great AI researchers working on the Arimaa challenge.

Title: Re: AI Challenge Contest
Post by Janzert on Dec 28th, 2011, 7:11pm

on 12/20/11 at 22:56:27, omar wrote:
If you or Janzert are in touch with the organizers can you suggest a simplified version of Arimaa for a future contest. Maybe with just EMHDRRRR per side to make it go faster.


I guess you could say I'm in touch with the organizers since I now am one of the primary organizers. ;)

Unfortunately, as much as I like Arimaa, I don't think it's a very good fit for the challenge. Unlike Arimaa, where strategic depth is a primary design concern, we need just enough depth to keep the game from being "solved" within a few months' time. Not that extra depth would hurt; it's just not a primary constraint. A few of the primary constraints for a challenge game are:


  • Visually interesting and exciting replays with only minimal explanation of the game
  • Very easy for a new programmer to write a basic bot and obvious directions to improve it
  • A full game in no more than 10 minutes of time for each player and the average game in less than 5, preferably less than 2

Title: Re: AI Challenge Contest
Post by omar on Jan 7th, 2012, 7:15pm
Cool, I didn't know you were that involved in organizing it. Unfortunately none of the primary constraints have much to do with AI, but I understand why they are still important for attracting a large number of participants. Nevertheless, given such constraints I would suggest CCRRRR with 2 seconds per move. If you try such games with your own bot, you'll see they go very fast, are easy to understand, and are visually exciting to watch. If a sample bot is provided where participants just need to change the eval function, I think it could work.


Title: Re: AI Challenge Contest
Post by Fritzlein on Jan 8th, 2012, 11:23am
I was with Janzert in thinking Arimaa didn't fit the AI contest very well.  It's too slow, too opaque, and relies too heavily on a body of strategic knowledge.

Reduced-material Arimaa, however, is a great idea. The replays are visual and can be understood with as little explanation as Ants, or less. (Ants' combat rules are more opaque than Arimaa freezing and trapping, and Ants' scoring is more opaque than getting a rabbit across the board.) The time control can easily be set to keep games down to a couple of minutes. And a basic bot would involve only a slightly fussier protocol than the other AI challenges.

However, I would strongly recommend ECRRRR or even EDCRRR for the material instead of CCRRRR.  The elephant increases the variety of piece interactions, makes the endgame even sharper, and (importantly for us) uses the iconic Arimaa piece in the promotion of Arimaa.  Keeping the same ratio of half-rabbits, half non-rabbits also has an attraction in my mind, not only to echo the full game, but to keep elimination on the table as a victory criterion.  3&3 would keep the capture side of Arimaa alive, whereas 2&4 would make it all about the goals.  And finally, more pieces would slow the game down, giving more time and more moves for superior strategy to manifest.  (Are my intuitions correct?  I haven't played a lot of reduced-material Arimaa.)

Also we would have to revisit the rule for deciding games that aren't over after 120 moves.  The easiest idea would be to accept draws.  Second easiest would be furthest-advanced rabbit, ties broken by next furthest, etc., with silver winning if both sides' rabbits are equally far advanced.  I would not like the current material rule for deciding draws, because I expect most draws in this format will not have any captures.  Not that I would expect many draws in such a sparse endgame, but bots will find ways to be clueless, and rabbit advancement is a way to reward a bot that is even minimally less clueless than its opponent.
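The furthest-advanced-rabbit tiebreak could look roughly like this. This is a hypothetical sketch: the function name and board encoding (ranks = rows advanced from each side's home row) are invented for illustration, not from any real Arimaa engine:

```python
# Hypothetical sketch of the tiebreak proposed above: compare each side's
# rabbit advancements in descending order; the first difference decides,
# and Silver wins a complete tie.

def tiebreak_winner(gold_ranks, silver_ranks):
    """Ranks count rows advanced from each side's home row (0-7 on 8x8)."""
    g = sorted(gold_ranks, reverse=True)
    s = sorted(silver_ranks, reverse=True)
    for ga, sa in zip(g, s):
        if ga != sa:
            return "gold" if ga > sa else "silver"
    return "silver"  # all rabbits equally advanced: Silver wins, per the proposal

print(tiebreak_winner([5, 2, 1], [5, 3, 0]))  # second-furthest rabbit decides: silver
print(tiebreak_winner([4, 4, 4], [4, 4, 4]))  # complete tie: silver
```

Since most of these draws would have no captures, both sides would usually bring the same number of rabbits to the comparison; with unequal counts the rule would need one more decision about how to pad the shorter list.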

A possible objection would be "why Arimaa" rather than another abstract strategy board game.  Plenty of people will be able to think of obscure abstract strategy games that look cooler (just from the rules) and haven't been deeply studied yet.  Last month I got an e-mail from boardspace.net announcing Volo: "Strategy and tactics for Volo are completely unknown - there are some obvious hex-like connections, but the rearrangement of the "birds" makes chains of connections difficult to maintain.  It's fair to say that no one plays it well yet, and it's hopelessly difficult for the robots."  So why Arimaa and not Volo?

One differentiator is that the $10,000 Arimaa Challenge lends the introductory challenge some lustre.  Participants can think, "After I win this, I will win the big bucks."  Another bonus is that Arimaa is sufficiently well studied that it is unlikely to break in the middle of the challenge.  Brand new games that haven't been beaten on as heavily as Arimaa are more likely to break under scrutiny.  It would be very awkward for a game to exhibit a strong drawing tendency or a strong player advantage for one side midway through the contest; there are few games with a legitimate claim to hold up on both counts given a rush of attention.

One of the charms of reduced-material Arimaa as far as I am concerned is that it will look simpler than it is.  There won't be hundreds of pieces flying around like Planet Wars and Ants.  Some people will even be tempted to think that they can achieve perfect play.  (12-piece tablebases?  Ha!)  But as experimentation progresses, folks will discover that correct play can be very subtle and visually non-obvious.  There will be plenty of discrimination between the best and second-best bot.

Another attraction is the elimination of randomized playing fields.  All previous contests needed an unsightly element of luck to keep the battles fresh, and participants legitimately complained that their bots did better on some maps but worse on others, so the map selection influenced their chances of winning.  Ugly.  Arimaa needs no such thing, thanks to the free setup and huge branching factor.

The only downside I can think of is one that Janzert didn't mention in his list, but which I bet the organizers will weigh heavily: with Arimaa, people would be starting further from scratch.  For "new" problems, human intuition is more important than effectively implementing what is already known to work.  Board games in general (and Arimaa in particular) are too well studied for anyone to start from nothing and have a fighting chance.  In particular, static goal detection will be essential, and anyone without it won't be able to compete.  To level the playing field, one would almost have to release a static goal detector for everyone to use.

But with that possible objection aside, I think reduced material would make a fine AI challenge.  There is no great body of "strategic knowledge" about how to play Arimaa endgames.  Nobody will have to read books to figure out our current theory.  Our existing material evaluators probably stink when it is down to EDCRRR.  We humans are clueless, so clueless developers starting out might have intuitions just as good as any in the Arimaa community today.

I am quite curious, Janzert, as to how you see reduced-material Arimaa meshing with the Google AI challenge.

Title: Re: AI Challenge Contest
Post by clyring on Jan 8th, 2012, 12:21pm

on 01/08/12 at 11:23:00, Fritzlein wrote:
One differentiator is that the $10,000 Arimaa Challenge lends the introductory challenge some lustre.  Participants can think, "After I win this, I will win the big bucks."  Another bonus is that Arimaa is sufficiently well studied that it is unlikely to break in the middle of the challenge.  Brand new games that haven't been beaten on as heavily as Arimaa are more likely to break under scrutiny.  It would be very awkward for a game to exhibit a strong drawing tendency or a strong player advantage for one side midway through the contest; there are few games with a legitimate claim to hold up on both counts given a rush of attention.
With only 6 pieces in a multiple-rabbit endgame, it would surprise me if there wasn't a significant first-player advantage. That's hardly enough to close every file and leaves a defender without much leverage with which to resist once their opponent has advanced rabbits on both wings. IMO in order to make a more seriously playable variant without multiple minor pieces, there need to be either few enough rabbits that the elephant and minor pieces alone can hold them back or enough rabbits to serve as a serious defense on their own.

Title: Re: AI Challenge Contest
Post by Fritzlein on Jan 8th, 2012, 2:29pm

on 01/08/12 at 12:21:22, clyring wrote:
With only 6 pieces in a multiple-rabbit endgame, it would surprise me if there wasn't a significant first-player advantage.

I agree that tempo is more important in an endgame than when the board is full.  Jdb verified that Gold has a forced win in ER vs ER, as explained in this thread (http://arimaa.com/arimaa/forum/cgi/YaBB.cgi?board=talk;action=display;num=1217960526).   On the other hand, with EDCRRR on each side,  Silver can align e vs D, d vs. C, and c vs. E.  The ability to be winning two of three fights might well compensate Silver for moving second.  Indeed, I would not be surprised if it was a forced win for Silver.

But of course the point is to have the game be so balanced and so complicated that we can't tell which side has the advantage.  I would be surprised if EDCRRR Arimaa showed a 50 Elo advantage for either side at the level that bots are able to play it.

Arimaa has a natural "pie rule" to balance winning chances: swap sides after the setup.  At present, however, we can't implement the pie rule, because we don't know whether the option to swap should occur after the setup of Gold or the setup of Silver! :D

Title: Re: AI Challenge Contest
Post by Fritzlein on Jan 8th, 2012, 3:08pm

on 12/28/11 at 19:11:23, Janzert wrote:
Unlike Arimaa where strategic depth is a primary concern for its design, we need just enough to keep the game from being "solved" in a few months time.

It is interesting that Tron, with its super-simple rules, was nowhere close to being solved after a few months, although draws became enough of an issue to force many maps out of rotation.  With Planet Wars and Ants, it seems the game was prevented from being solved by having lots and lots of units.  The only downside of that as far as I can see is that humans can't play games once the number of units gets too large.

Is there any yearning for simplicity and elegance on AI contests?  In particular, is it a plus for the battleground to be a game that humans could compete at, if not outright vanquish the computers?  Perhaps coders don't care at all what un-assisted humans can do.

Title: Re: AI Challenge Contest
Post by christianF on Jan 9th, 2012, 4:19am

on 01/08/12 at 15:08:08, Fritzlein wrote:
It is interesting that Tron, with its super-simple rules, was nowhere close to being solved after a few months, although draws became enough of an issue to force many maps out of rotation.
...
Is there any yearning for simplicity and elegance on AI contests?

What about Havannah (http://mindsports.nl/index.php/arena/havannah/)?
Of the six or seven articles and theses listed there, I'll only mention Playing and Solving Havannah (https://www.cs.ualberta.ca/news-events/event-calendar/2011/timo-ewalds-msc-thesis-presentation-playing-and-solving-havannah), an MSc thesis presentation by Timo Ewalds, M.Sc. Student, Department of Computing Science - University of Alberta, with Jonathan Schaeffer as supervisor. Timo is the creator of Castro, as far as I know the strongest Havannah bot to date.

Havannah is uniform and doesn't need loads of different pieces and an exploding branch density to be extremely hard to solve.

Never mind Symple (ah yes, you already do that ;) ), that will be the featured game at the 2013 CodeCup Challenge (http://www.codecup.nl/intro.php).

You ask for simplicity ...  ???

Title: Re: AI Challenge Contest
Post by omar on Jan 9th, 2012, 9:55am
Actually, Havannah would also be a great game for the AI challenge. This year especially it would be perfect, since the Havannah challenge match is coming up, which we've been anticipating for 10 years since Christian announced the challenge in 2002. Even on a base-5 board Havannah is not yet solved, and the game is guaranteed to finish in 30 moves. I think the base-4 board was solved last year to be a win for the first player. The base-5 game should be sufficiently hard that it can't be solved during the contest, and even if it were, the organizers could easily change the contest to base 6 midway. The person who solved it would have achieved something they could write an academic paper about.

As Fritzlein mentioned, the nice thing about using games like Arimaa for the AI-challenge is that after the contest is over there is still a bigger challenge waiting. This applies for Havannah too.

Title: Re: AI Challenge Contest
Post by rabbits on Jan 9th, 2012, 3:03pm

on 01/08/12 at 11:23:00, Fritzlein wrote:
Another attraction is the elimination of randomized playing fields.  All previous contests needed an unsightly element of luck to keep the battles fresh, and participants legitimately complained that their bots did better on some maps but worse on others, so the map selection influenced their chances of winning.  Ugly.  Arimaa needs no such thing, thanks to the free setup and huge branching factor.


I actually like the idea of randomized playing fields in reduced-material Arimaa!  It would reduce the advantage that current bot developers have over people who have never heard of Arimaa, and the maps would be interesting!  The grid wouldn't have to be 8x8.  Figuring out how to represent the board would be an interesting problem in itself, since bitboards would not always be easily applicable.  Traps could be in different places.  Some squares could be designated as walls, where no piece may step.  If you want to be creative, you could introduce "sticky squares", which a piece cannot step onto and off of in the same turn.  There are probably other interesting tweaks.

Title: Re: AI Challenge Contest
Post by christianF on Jan 9th, 2012, 3:22pm

on 01/09/12 at 15:03:13, rabbits wrote:
Traps could be in different places.  Some squares could be designated as walls, where no piece may step.  If you want to be creative, you could introduce "sticky squares," on which a piece cannot step on and off in the same turn.  There are probably other interesting tweaks.

Loads of them; that's how we came to 2000+ chess variants. But bot performance is only meaningful within the context of strong human opposition, so IMO deviating into forests full of variants isn't the way to go. Small Arimaa is OK, if players and developers can agree on that, but I fear 'Arimaaish' will open the floodgates. ;)

Title: Re: AI Challenge Contest
Post by Tuks on Jan 11th, 2012, 5:38am
If the games are fast enough, you could just have each bot play as both Gold and Silver against a specific opponent before moving on to the next one. Then people will never be able to use the excuse "he played Gold, and I had a much better chance of winning if I had played Gold, based on my 80% win ratio as Gold."
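That color-balanced scheduling is straightforward to sketch. This is illustrative only, not any real contest code; the bot names are just examples from this thread:

```python
# Illustrative sketch of the pairing Tuks suggests: every pair of bots
# meets twice, once with each color, so no one can blame the color draw.
from itertools import combinations

def schedule(bots):
    """Yield (gold, silver) matchups; each pair plays once per color."""
    for a, b in combinations(bots, 2):
        yield (a, b)  # a plays Gold
        yield (b, a)  # colors swapped

games = list(schedule(["hippo", "migi", "xiathis"]))
# 3 pairs x 2 colors = 6 games; every bot plays Gold twice and Silver twice.
assert len(games) == 6
```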

Title: Re: AI Challenge Contest
Post by Migi on Jan 11th, 2012, 8:32am
Congratulations Hippo on your 23rd place! My bot only made it to 31st place. I don't really know why it even got that high, though, because I only worked on it for like 3 days. Near the end of the contest, I made a second version of my bot, starting (almost) from scratch, which was mostly the same but with some small improvements here and there. I worked about a week on that, and locally it could beat my first version hands down, but to my surprise it performed pretty badly against other bots.

So in the end I just decided not to upload my second version.

I think the main reason my 2nd version is worse is the code I added to keep ants closer together and closer to my hills. The first version spreads out its ants a lot more, and often some ants randomly walk into an undefended hill. Also, some of its ants end up behind enemy lines, which could be confusing the battle logic of some bots, and also allows me to surround enemy ants better. I also spent very little time on combat logic and focused mostly on exploring and food gathering. I don't really know why, though; better combat could have given me some easy Elo points.

About Arimaa (or a variant) being the next AI challenge game, I wouldn't get my hopes up. These (https://github.com/aichallenge/aichallenge/wiki/Game-Criteria) are the criteria they are using to select the next game. I'm afraid Arimaa falls a bit short on the "easy" criterion. A lot of people in this challenge are new to programming and just want to use the challenge as a fun way to learn programming. I think they would be scared off by a game like Arimaa. Also, I'm not sure many people would find it "familiar" and "fun to watch".

A lot of people also liked the multi-player aspect of Ants. I personally didn't, but it looks like the two games receiving the most consideration right now (Risk and multiplayer Asteroids) are also multi-player. So all in all I don't think we'll be seeing Arimaa as the next Google AI challenge game, but I don't think that's a bad thing; it's a completely different challenge with a completely different audience. And there already is an Arimaa challenge, why would we need another one? ;)

Title: Re: AI Challenge Contest
Post by Hippo on Jan 11th, 2012, 2:12pm

on 01/11/12 at 08:32:07, Migi wrote:
Congratulations Hippo on your 23rd place! My bot only made it to 31st place. I don't really know why it even got that high, though, because I only worked on it for like 3 days. Near the end of the contest, I made a second version of my bot, starting (almost) from scratch, which was mostly the same but with some small improvements here and there. I worked about a week on that, and locally it could beat my first version hands down, but to my surprise it performed pretty badly against other bots.

I think that's because of the code I added to keep ants closer together and closer to my hills. The first version spreads out its ants a lot more, and often some ants randomly walk into an undefended hill. Also some of its ants end up behind enemy lines, which could be confusing the battle logic of some bots, and also allows me to surround enemy ants better. I also spent very little time on combat logic, and focused mostly on exploring and food gathering. I don't really know why though, better combat could have given me some easy ELO points.


Wow, I had not noticed another Arimaa player was so close to me ;). I have replayed some of your bot's games. There were some points that could easily be improved ...
1) You had no code to break a stalemate; even when your bot is far ahead, it is not able to break a weak defense. ... But only a small number of bots had "army tactics" implemented.
2) I have seen your bot run a single ant into a place attacked by a pair of enemy ants ... I was trying to avoid such things from version 4, but I only removed the last bugs in version 14 ... it was very important to minimize unnecessary losses.
3) Your bot's ant distribution was bad ... it was worse than mine, and mine was bad as well. (I had hoped the symmetric food gathering would help me with distribution, but xiathis's simple idea dominated.)
4) You timed out too often.
I am fascinated that with all these issues you finished 31st. I spent a lot of time coding and debugging. It seems you spent all your development time very efficiently. Congratulations. (I had hoped for place 31 for myself at the start of the final tournament.)

It seems your bot got a lot of points thanks to giving high priority to walking to enemy hills, which was successful against unaware opponents. And your main strategy was similar to mine ... to win by eating faster than the opponents.

Our games in final: 1 (http://aichallenge.org/visualizer.php?game=337201&user=7617) 2 (http://aichallenge.org/visualizer.php?game=340145&user=7617) 3 (http://aichallenge.org/visualizer.php?game=341439&user=7617) 4 (http://aichallenge.org/visualizer.php?game=341452&user=7617) 5 (http://aichallenge.org/visualizer.php?game=342802&user=7617) 6 (http://aichallenge.org/visualizer.php?game=343040&user=7617) 7 (http://aichallenge.org/visualizer.php?game=346310&user=7617).
You had a fantastic finish ...

Title: Re: AI Challenge Contest
Post by Migi on Jan 11th, 2012, 7:33pm

on 01/11/12 at 14:12:09, Hippo wrote:
2) I have seen your bot run a single ant into a place attacked by a pair of enemy ants ... I was trying to avoid such things from version 4, but I only removed the last bugs in version 14 ... it was very important to minimize unnecessary losses.

I know. As I said, I spent very little time on battle logic. All ants are given orders individually, and they assume that all my other ants will stay where they are, but that's obviously not always true. If 2 of my ants are fighting 2 enemy ants, one might decide to attack, thinking it's at least an even trade, and the other might have seen some food nearby and runs away to go get it, leaving the first ant to die.
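The failure mode described here, where each ant assumes its teammates stay put, can be shown with a toy sketch (function and parameter names are hypothetical, not Migi's actual code):

```python
# Toy illustration of per-ant greedy orders: each ant decides alone,
# counting allies as if they will hold their positions.

def choose_order(ant, nearby_food, friendly_support, enemy_count):
    """Greedy decision for one ant; 'friendly_support' counts allies assumed static."""
    if nearby_food:
        return "gather"                      # food outranks fighting
    if friendly_support + 1 >= enemy_count:  # looks like at least an even trade
        return "attack"
    return "retreat"

# A 2-vs-2 skirmish: ant A assumes B will stand and fight...
order_a = choose_order("A", nearby_food=False, friendly_support=1, enemy_count=2)
# ...but B sees food and leaves, so A ends up attacking 1-vs-2 and dies.
order_b = choose_order("B", nearby_food=True, friendly_support=1, enemy_count=2)

assert (order_a, order_b) == ("attack", "gather")
```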


on 01/11/12 at 14:12:09, Hippo wrote:
1) You had no code to break a stalemate; even when your bot is far ahead, it is not able to break a weak defense. ... But only a small number of bots had "army tactics" implemented.

To break stalemates you have to charge with all your ants at the same time. My simple battle logic that only looks at one ant at a time just couldn't do that.

There were some small improvements in this area for my second version, but all in all not much. A P1 minimax search, which in retrospect probably wouldn't have taken very long to write, would have done a lot better for these first 2 points.


on 01/11/12 at 14:12:09, Hippo wrote:
3) Your bot's ant distribution was bad ... it was worse than mine, and mine was bad as well. (I had hoped the symmetric food gathering would help me with distribution, but xiathis's simple idea dominated.)

Yup. There were many big improvements in this area for my second version, which had some really good exploring, food gathering and ant distribution.


on 01/11/12 at 14:12:09, Hippo wrote:
4) You timed out too often.

I know, but I don't really know why. My main loop checks if it's within 10ms of the maximum time and if so, it stops. The loop itself takes only like 0.1ms, so timeouts shouldn't ever happen. But they did. Maybe an issue with the contest servers? I don't know.
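A minimal version of the time-budget check being described might look like this. The 10 ms margin comes from this post and the 500 ms turn time from later in the thread; everything else is an illustrative guess:

```python
# Sketch of a turn-time check: do cheap units of work until we are
# within a safety margin of the turn limit. Names and constants are
# illustrative, not from any contestant's actual bot.
import time

TURN_TIME = 0.500      # seconds allowed per turn (assumed)
SAFETY_MARGIN = 0.010  # stop this close to the limit

def think(turn_start: float) -> int:
    """Spin through work units until the safety margin is reached."""
    iterations = 0
    while time.monotonic() - turn_start < TURN_TIME - SAFETY_MARGIN:
        iterations += 1  # one ~0.1 ms unit of work, per the description
    return iterations

# If the budget is already spent, no work happens at all:
assert think(time.monotonic() - TURN_TIME) == 0
```

Clock granularity and scheduler or GC pauses can easily eat a margin this small, which is one plausible reason the timeouts still happened.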

Anyway, my 2nd version was much faster and never got even close to the timeout limit, so it didn't have that issue either.


on 01/11/12 at 14:12:09, Hippo wrote:
I am fascinated that with all these issues you finished 31st.

It seems your bot got a lot of points thanks to giving high priority to walking to enemy hills, which was successful against unaware opponents.

This fascinated me too. I knew that my first version had all the flaws you mentioned above, and I knew version 2 fixed or improved most of them (and other things as well). But when running both versions on the TCP servers, it was clear that my 2nd version was overall just worse. A lot worse. It just didn't score any points.

My first version expands and gathers food pretty fast, suicides some ninja ants into enemy lines and occasionally scores a few points with that, and then it dies. My second version is too defensive; it builds up a huge army but doesn't have the multi-ant battle logic to do anything with it. Unfortunately I came to this conclusion too late in the contest (just a few hours before the deadline) to still make big changes.

So that's why I had to upload the version I only worked on for 3 days and had to throw away about 5 days of work. Note to self: don't start over from scratch unless you know for sure you'll have enough time to fix all issues that come up, no matter how big.


on 01/11/12 at 14:12:09, Hippo wrote:
You had a fantastic finish ...

Thanks, you too.

Title: Re: AI Challenge Contest
Post by Hippo on Jan 12th, 2012, 1:02am
I had a safety time of turnTime/5. (I expected that to be 200 ms, as I was somehow convinced the turn time was 1000 ms.) But sections of my code calculated the time by which they should finish in order to leave room for further tasks, since some lower-priority tasks should finish before higher-priority ones ... fortunately I made these times proportional to the time remaining after parsing.
I was very wary of timeouts, and I had no experience with profiling in Java. It could be that 10% of my computation time was spent in the routine that checks the time remaining. I was very surprised by how much computation 500 ms allows on current hardware. (Compared to the situation 20 years back ;)).

I would never go down to a 10 ms safety time, as DOS/Windows used to measure time in units of 11 ms.



Arimaa Forum » Powered by YaBB 1 Gold - SP 1.3.1!
YaBB © 2000-2003. All Rights Reserved.