|
||||||||||||||||
Title: Corner cases Post by lightvector on Feb 16th, 2008, 2:46pm I just finished debugging my goal tree, and just for fun, here are some interesting situations that I found. It's these kinds of cases that make it difficult to get a completely correct implementation. And there are a LOT of them, although they are usually not as strange as the ones here. In all of them, it is lowercase (Silver) to capture or goal in 4 steps. Goal: (not possible in this position...) Code:
(but it is possible now!) Code:
Sacrifice to make room. Code:
Make the trap safe first. Code:
Remove defender. Code:
Remove defender. Code:
Capture: Sacrificial capture 1 Code:
Sacrificial capture 2 (this one I don't generate in the tree, because it is unlikely to ever be good) Code:
The numerous conditions that have to be checked around frozen pieces and traps made the goal tree a real pain to write. The traps in particular created about a dozen additional special cases that needed to be checked for detecting 4-step goals. But I'm glad to say that the goal tree is finally working now. There are possibly a few exceptional cases it is missing, but in tests, it seems bug-free and accurate. Yay! Now we'll see how much I can improve the evaluation before the CC. |
||||||||||||||||
Title: Re: Corner cases Post by Fritzlein on Feb 16th, 2008, 3:12pm Your sacrificial capture #1 I have seen in live games, and even made use of myself once if I remember correctly. I've never seen #2, but it could be useful in a bizarre goal defense situation when it's worth giving up two little pieces to kill a rabbit and buy time. These special cases are very amusing, and illustrate well why a decision tree is so hard to implement. The fact that you have done it and care to do it right is very hard core. I'm impressed. I deduce that Bomb's goal decision tree is imperfect, not only from occasional strange play by Bomb but also from debugging output when I analyze with it. I hope very much that you (or anyone) can dethrone Bomb this year, because competition would whet Fotland's appetite for participation. Since he dropped out a few years ago we've been waiting for someone to raise the bar. |
||||||||||||||||
Title: Re: Corner cases Post by fotland on Mar 6th, 2008, 1:52am Bomb's goal tree has many bugs. It's only really accurate at 3 steps. If Bomb loses I'll probably work on it again, but I also plan to put a lot of time into Monte Carlo Go this year. David |
||||||||||||||||
Title: Re: Corner cases Post by lightvector on Mar 6th, 2008, 10:09am By the way, if anyone's interested in the way I tested my tree: I simply pulled all the Arimaa game records up to the end of 2007, filtered them down to only the games that ended in goal, and played batches of them randomly to within the last few recorded moves. Then, I compared the results of a 4-ply alpha-beta search on each position to the eval given by the goal tree, coding it to notify me whenever there was a discrepancy. I tested at first in batches of about a hundred games at a time, finding errors in my goal tree every few dozen games. As I fixed all of these, the errors quickly became quite rare, perhaps only every few hundred positions or so. I also did some experimentation with deliberately introducing an error into the code to see how many positions it would take to detect the error - it was usually at most a few hundred games, and for commonly hit parts of the tree, only a few dozen games. After my goal tree passed something like 8000 consecutive tests without errors, I decided it was good enough. I might repeat testing later with a deeper search, like 6 ply, so that the goal tree gets tested hundreds of times on each position, but I think it's fine for now. Anyways, it should be accurate except possibly in some very exotic positions, and for 3-step goals, the tree is correct beyond any doubt. It actually only took a few hours to do all this, since I already have testing code that I run routinely after each change I make to the search. It was really quite quick to test, after the long slog of writing the tree in the first place. (I tested my capture tree in a similar way) |
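For illustration, here is a minimal sketch of the kind of regression harness described above (not lightvector's actual code). The helpers load_goal_games, play_to_near_end, alphabeta, and goal_tree_eval are hypothetical names, and GOAL_SCORE is an assumed sentinel for a found goal. Code:
import random

GOAL_SCORE = 1_000_000  # assumed sentinel the search returns when it finds a forced goal

def test_goal_tree(games, num_positions=100, depth=4):
    """Compare a shallow alpha-beta search against the static goal tree."""
    mismatches = []
    for _ in range(num_positions):
        game = random.choice(games)                         # a game record that ended in goal
        pos = play_to_near_end(game, random.randint(1, 3))  # stop a few moves before the end
        search_sees_goal = alphabeta(pos, depth) >= GOAL_SCORE
        tree_sees_goal = goal_tree_eval(pos)                # static goal detection under test
        if search_sees_goal != tree_sees_goal:
            mismatches.append(pos)                          # flag the discrepancy for debugging
    return mismatches

goal_games = load_goal_games("games/", up_to_year=2007)     # only games that ended in goal
print(test_goal_tree(goal_games, num_positions=8000))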
||||||||||||||||
Title: Re: Corner cases Post by 99of9 on May 25th, 2008, 5:55pm on 02/16/08 at 14:46:56, lightvector wrote:
I have a growing appreciation of your work, lightvector! I decided to finally put some static goal detection into gnobot. My approach is to draw out every possible type of goal in 1 step, 2 steps, 3 steps, and 4 steps, and code them. If I've missed any or found some false positives, I can debug for them at the end. 1 step was easy! One case only :-). 2 steps was also pretty tame: 4 cases, so not too much code. [As an example of what a case is, the four are: FF, SF, VF, UF, where F means the rabbit moves forward, S means the rabbit moves sideways, V means vacate a square for the next piece to move into, and U means unfreeze the next piece to move.] 3 steps is already an annoyingly large set. I think I can list all the possibilities (around 20... I don't have my list on me at the moment), but each one takes about 50 lines of code, so it'll take me a while to implement them all. 4 steps looks truly scary. I've started a list of the cases, and even the obvious ones get me up to 56 cases, and I haven't even used pushes or pulls yet. I'm starting to wonder if I have to write a general function that can cover any case I ask of it, which might even need to be recursive (which effectively turns it into a sort of limited quiescence search). So, lightvector, or anyone else who's done this: how many lines of code have you written to satisfy yourself that you're catching almost all of the 4-step goals? |
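To make the FF/SF/VF/UF split concrete, here is a skeletal sketch of a 2-step goal test (gold goaling north). It is not 99of9's actual code: the helper predicates (squares_on_rank, rabbit_at, empty, frozen, friendly_at, would_be_frozen, and so on) are hypothetical, and the freezing and trap subtleties this thread is about are deliberately left out. Code:
def goal_in_two_steps(board):
    """Skeleton of the four 2-step cases; freezing/trap corner cases are elided."""
    for sq in squares_on_rank(board, 7):                 # rabbits one rank from goal
        if not rabbit_at(board, sq):
            continue
        goal = north(sq)
        if not frozen(board, sq):
            # SF: step sideways to a file whose goal square is open
            for side in (east(sq), west(sq)):
                if side and empty(board, side) and empty(board, north(side)) \
                        and not would_be_frozen(board, sq, side):
                    return True
            # VF: a friendly piece vacates the goal square, then the rabbit steps in
            if friendly_at(board, goal) and has_free_step(board, goal):
                return True
        else:
            # UF: a friendly piece steps adjacent to unfreeze the rabbit, which then goals
            if empty(board, goal) and can_unfreeze_in_one_step(board, sq):
                return True
    for sq in squares_on_rank(board, 6):                 # rabbits two ranks from goal
        # FF: step forward twice through two empty squares
        if rabbit_at(board, sq) and not frozen(board, sq) \
                and empty(board, north(sq)) and empty(board, north(north(sq))) \
                and not would_be_frozen(board, sq, north(sq)):
            return True
    return False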
||||||||||||||||
Title: Re: Corner cases Post by BlackKnight on May 25th, 2008, 9:25pm on 05/25/08 at 17:55:51, 99of9 wrote:
I never did that for goals, but when I wrote some code for botV I did this for captures. The code is not complete and once even led to an elephant 'sacrifice' when botV tried to drag some piece behind itself through the trap but no other piece of botV was securing the trap. ;) That piece of code has about 600 lines. Rather than coding all those cases, botCat had a list of all moves ever played, in 2x6x64 (color x type x square) separate files. Then it went through all pieces on the board, opened the respective file, and tried all moves. Of course, all the moves ever played are only a small subset, so Cat missed tons of good moves.

I recently tried to describe the capture moves in a more general format, and wrote a little tool to mirror, rotate and translate the moves to all four traps. I didn't finish the list, but some of the items are:
// 1 push to victim (straight)
// 1 push to victim (L-shaped)
// 1 pull to victim (straight)
// 1 pull to victim (L-shaped)
// 1 push to hand-holding friend of victim
// 1 pull to hand-holding friend of victim (straight)
// 1 pull to hand-holding friend of victim (L-shaped)
// pull-push (L, L)
// pull-push (straight, L)
// push-push (straight, straight)
// push-push (L, straight)
// push-push (straight, L)
// push-push (L, L) far
// push-push (L, L) close
// pull-push (L, L)
// pull-push (straight, straight)
// approach-push (straight, straight)
// approach-push (L, straight)
// approach-push (straight, L)
// approach-push (L, L)
same for pull
8 combinations for appr-appr-push
8 combinations for appr-appr-pull
There must be plenty of other combinations like unfreezing the killer, stepping aside to let the killer through, approaching the hand-holding friend, ...

For example, '// 1 push to victim (straight)' with Z > v, described as
vc7s vc6x Zc8s
gives:
vc7s vc6x Zc8s
vd6w vc6x Ze6w
vc5n vc6x Zc4n
vb6e vc6x Za6e
vc4s vc3x Zc5s
vd3w vc3x Ze3w
vc2n vc3x Zc1n
vb3e vc3x Za3e
vf7s vf6x Zf8s
vg6w vf6x Zh6w
vf5n vf6x Zf4n
ve6e vf6x Zd6e
vf4s vf3x Zf5s
vg3w vf3x Zh3w
vf2n vf3x Zf1n
ve3e vf3x Zd3e

Even with this list, the bot program would still need to make sure that Z is not frozen, that Z > v, and that no friend of v is protecting the trap, plus more things for more complicated captures.

Actually, I liked the code in botV better. For example: it checked whether a possible victim was standing close to an unprotected trap, then defined the squares next to the victim as target squares, and tried to reach those within at most two steps with a stronger piece so that it could perform the capture at the end. So botV really had a 'plan'. ;) |
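As a rough illustration of the rotate/translate idea (not BlackKnight's actual tool, and with mirroring handled similarly), the sketch below expands a capture template written relative to the c6 trap into all four 90-degree rotations at all four traps, using the step format from the example above ('vc7s' = piece v on c7 steps south, 'vc6x' = v is removed on the c6 trap); the parsing and formatting helpers are assumptions. Code:
TRAPS = [(2, 5), (2, 2), (5, 5), (5, 2)]                      # c6, c3, f6, f3 as (col, row)
ROT_DIR = {'n': 'e', 'e': 's', 's': 'w', 'w': 'n', 'x': 'x'}  # 90-degree clockwise turn

def parse_step(step):
    piece, col, row, d = step[0], step[1], step[2], step[3]
    return piece, (ord(col) - ord('a'), int(row) - 1), d

def fmt_step(piece, sq, d):
    return f"{piece}{chr(sq[0] + ord('a'))}{sq[1] + 1}{d}"

def rotate_about(sq, center):
    dx, dy = sq[0] - center[0], sq[1] - center[1]
    return (center[0] + dy, center[1] - dx)                   # rotate 90 degrees about the trap

def expand_template(template, base_trap=(2, 5)):
    """Return the template in all four rotations, translated to every trap."""
    base_steps = [parse_step(s) for s in template.split()]
    variants = []
    for trap in TRAPS:
        offset = (trap[0] - base_trap[0], trap[1] - base_trap[1])
        steps = base_steps
        for _ in range(4):
            variants.append(" ".join(
                fmt_step(p, (sq[0] + offset[0], sq[1] + offset[1]), d)
                for p, sq, d in steps))
            steps = [(p, rotate_about(sq, base_trap), ROT_DIR[d]) for p, sq, d in steps]
    return variants

# expand_template("vc7s vc6x Zc8s") reproduces the 16 expansions listed above;
# legality checks such as "Z is not frozen" and "Z > v" still come afterwards.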
||||||||||||||||
Title: Re: Corner cases Post by lightvector on May 26th, 2008, 4:06pm Currently, it's about 3000 lines. But please don't count on that as any sort of guide. It almost certainly could be shorter. I bet I could get it to around 1000 lines or so. If I had planned better from the beginning, I could probably have written it with only that many lines to begin with. However, I haven't touched it since I got it working, because it's not the current bottleneck (it's only called in the main search, not in quiescence, making it reasonably cheap), and because it would be more trouble than it's worth if I accidentally broke something. Luckily, it is well-organized, so I can easily add cases or locate the relevant code if problems arise. As much as possible is handled statically, but the tree does do some recursion (mostly in the 4-step cases), otherwise it would have been impossible to write. This probably has a significant cost in speed, but as long as the runtime is still reasonable, there's no problem, since again, the tree is not the bottleneck. Sadly, I'm finding that goal detection is the easiest problem. There's always a definitely correct answer. All you have to do is put in the effort, and you are guaranteed to have results. To me, it's waaaay harder to get a bot to behave well in vague strategic situations, which is precisely the way in which OpFor has succeeded. Hats off to Janzert. =) Also, don't let the 4-step cases put you off from at least writing a tree to handle up to 3 steps. 4 is probably not a huge improvement over 3, as evidenced by Bomb's strong goal search. As long as you have something passable, the real potential for improvement definitely lies elsewhere. |
||||||||||||||||
Title: Re: Corner cases Post by Fritzlein on May 27th, 2008, 2:18pm on 05/26/08 at 16:06:16, lightvector wrote:
But generating 4-step goals and captures serves not only to improve static eval but also to extend selective search, so the extra effort can have a double payoff, right? |
||||||||||||||||
Title: Re: Corner cases Post by 99of9 on May 27th, 2008, 2:38pm on 05/27/08 at 14:18:45, Fritzlein wrote:
I can see how that would work for captures. For goals, there's not much point selectively searching past where you see a goal, so I don't see how it would work. [EDIT] I suppose you could search deeper every time you saw that either side was *threatening* a goal. But at the moment I don't see that being selective enough for my liking. |
||||||||||||||||
Title: Re: Corner cases Post by 99of9 on May 27th, 2008, 3:36pm on 05/25/08 at 21:25:01, BlackKnight wrote:
Thanks for bringing this up; I expect this will be my next task once goals are working. Quote:
Yes, my preliminary look suggests that there are quite a few cases, but the outlook is not as bad as for the four-step goals. |
||||||||||||||||
Title: Re: Corner cases Post by 99of9 on May 27th, 2008, 3:45pm on 05/26/08 at 16:06:16, lightvector wrote:
I'm already past 1000 and have only done half of the 3-steps! Quote:
Fair enough, no point fiddling if it's not the bottleneck. Quote:
This is why I was silly to ignore it so far! My rationale went something like: if you can't win the opening, you'll always be on the back foot, so there's no point working on goals until you can win the opening. But now I see that even if the bot does win the opening, humans can often score an easy win by forcing a rabbit through while the bot is winning material. Quote:
True. Obviously we all have a long way to go on this front. This is the idea of Gnobot's total evaluation rewrite. I've tried to include every influence a strategic human would consider. I think the tough thing for me will be how to combine them all well. Quote:
I suppose it depends on how deep your search goes. If you get to exactly 12 steps, then I would imagine 4-step goal detection would be very useful, but if you search to 13 steps, then 3-step goals are all you really need! I'm committed to 3-step goals, and will reevaluate after that. |
||||||||||||||||
Title: Re: Corner cases Post by jdb on May 27th, 2008, 3:47pm Clueless has about 1600 lines of code to test for 4-step goals. The 4-step case is recursive. Most of the time, examining the following three things is enough to quickly know if a goal is possible.
1) The goal square
2) The square on the seventh rank touching the goal square
3) The distance to the closest player rabbit
If there is an enemy piece on the goal square, it takes at least two steps to remove it. If there is a player piece on the goal square, it takes at least one step to remove it. The same applies for the square on the seventh rank, except if the piece is a friendly rabbit. The sum of these three items can be quickly computed. Most of the time this sum is greater than 4. Of course, if the sum is 4 or less, the fun begins. |
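As a rough sketch of that pre-screen (not Clueless's actual code, and with hypothetical board-query helpers), the cheap lower bound might look like this for gold goaling on rank 8; only when the bound is 4 or less does the expensive recursive test run. Code:
def goal_step_lower_bound(board, goal_sq):
    """Cheap lower bound on the steps needed to goal on goal_sq."""
    cost = 0
    # 1) the goal square itself
    if enemy_at(board, goal_sq):
        cost += 2                        # an enemy piece must be pushed or pulled off
    elif friendly_at(board, goal_sq):
        cost += 1                        # a friendly piece must step aside
    # 2) the seventh-rank square touching the goal square
    seventh = south(goal_sq)
    if enemy_at(board, seventh):
        cost += 2
    elif friendly_at(board, seventh) and not friendly_rabbit_at(board, seventh):
        cost += 1                        # a friendly rabbit here costs nothing extra
    # 3) distance from the closest friendly rabbit to the goal square
    cost += min((step_distance(r, goal_sq) for r in friendly_rabbits(board)), default=99)
    return cost

def goal_in_four(board, goal_sq):
    # only fall into the expensive recursive 4-step test when the bound allows it
    return goal_step_lower_bound(board, goal_sq) <= 4 and full_four_step_test(board, goal_sq)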
||||||||||||||||
Title: Re: Corner cases Post by jdb on May 27th, 2008, 3:55pm on 05/27/08 at 15:45:47, 99of9 wrote:
Having full 4-step goal detection is much better than 3-step goal detection in a goal race situation. Consider a position with 4 steps remaining. With 4-step goal detection, it's a single call to the goal test routine to see if the player has a winning move. With 3-step goal detection, potentially all the 1- and 2-step moves need to be generated and checked by the 3-step routine. This will likely be 10-20 times slower than doing it with a 4-step goal routine. |
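A hedged sketch of the two approaches being compared (goal_in_n_steps, generate_single_steps, and apply_step are stand-ins, not any bot's actual routines): Code:
def winning_move_exists_4step(board):
    return goal_in_n_steps(board, 4)              # one static call

def winning_move_exists_3step(board):
    if goal_in_n_steps(board, 3):
        return True
    # Fall back to generating prefix moves and re-asking the 3-step routine.
    # (Depending on how that routine is written, 2-step prefixes may be needed
    # as well, which is where the rough 10-20x slowdown comes from.)
    for step in generate_single_steps(board):
        if goal_in_n_steps(apply_step(board, step), 3):
            return True
    return False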
||||||||||||||||
Title: Re: Corner cases Post by 99of9 on May 27th, 2008, 3:57pm on 05/27/08 at 15:47:28, jdb wrote:
Yes, good point, pre-screening is quite useful here. I do it based on rabbits rather than on target squares, and basically try to test for the most easily violated conditions in order. |
||||||||||||||||
Title: Re: Corner cases Post by Janzert on May 27th, 2008, 8:17pm Thanks for the kind words, lightvector. All the strategy tips I've managed to pick up from the community in my years of hanging around are hopefully finally being put to good use. :) Of course, right now in the sharp-opfor postal game, sharp isn't looking too shabby in the strategy department while opfor is imploding. While OpFor still sorely needs a good goal search as well, it does have a decent trap search. I also think that both of these can be used to benefit a bot in very similar ways. OpFor finds all 4-step and some 6-step traps. This information is then used in three different ways: move ordering, quiescence search, and static eval. For move ordering, capture moves are tried right after the hash move. The capture moves of course form the basis of the quiescence search. In both of these I originally just tried steps that led to a capture within the current move (i.e. capture length <= steps left), but I found it beneficial in both cases to try all the captures. Finally, of course, in the static eval, my ability to capture opponent pieces is good and the opponent being able to capture mine is bad. Janzert |
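A minimal sketch of the move-ordering idea (hash move first, then every statically detected capture, then the rest); the parameter names are placeholders rather than OpFor's actual API. Code:
def ordered_moves(board, hash_move, capture_moves, other_moves):
    """Yield moves for the search: hash move, then captures, then everything else."""
    seen = set()
    if hash_move is not None:
        seen.add(hash_move)
        yield hash_move                           # transposition-table move first
    for mv in capture_moves(board):               # all captures, even ones longer than
        if mv not in seen:                        # the steps remaining this turn
            seen.add(mv)
            yield mv
    for mv in other_moves(board):                 # remaining moves in default order
        if mv not in seen:
            yield mv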
||||||||||||||||