Title: Evaluation and current depth Post by GorgeTranche on Jan 13th, 2014, 3:20am

Hello guys, I am rewriting the evaluation function of my bot right now and I am wondering about a few things. Up to now, when I test my bot and its evaluation, I switch between a depth of 4 and a depth of 8, because those are the depths at which a move is completed. But there are huge performance differences between the two, and, most importantly, big differences in the time used per move. So I am thinking: shouldn't I write different evaluation functions for different depths, or am I missing something here? For example, if I run my bot with depth = 8 it is going to be too cautious, because it thinks a situation is unsafe when it actually is safe, since it is my turn afterwards. And if I wanted to compromise and use a depth of, say, 6, would I have to evaluate differently because one side's move is only half finished? So how have you solved this problem? Thanks a lot in advance, Maurice
||
Title: Re: Evaluation and current depth Post by JimmSlimm on Jan 13th, 2014, 3:38am

To my knowledge, this is a classic problem with shallow search depths when using alpha-beta: we can see the bot being too cautious or making pointless threats because of the shallow search. To solve it, you need to compensate for these shortcomings by either: 1. searching deeper (time consuming), or 2. using more heuristics in the evaluation function (like expert knowledge, decision trees). For capture threats and goal threats, I recommend using "Quiescence Search" (google it or look in this forum for details).
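[Editor's note: a minimal quiescence-search sketch to illustrate the suggestion above. The `Position` class here is a toy stand-in, not an Arimaa board: it just carries a static score and the positions reachable by the side to move's capture moves.]

```python
INF = 10**9

class Position:
    """Toy position: a static score (from the side to move's view) plus
    the positions reachable by the side to move's capture/threat moves."""
    def __init__(self, score, captures=()):
        self.score = score
        self.captures = captures

    def evaluate(self):
        return self.score

    def capture_moves(self):
        return self.captures

def quiescence(pos, alpha, beta):
    """Keep searching only 'noisy' moves (captures, goal threats)
    until the position is quiet, then return the static evaluation."""
    stand_pat = pos.evaluate()   # the side to move may decline all captures
    if stand_pat >= beta:
        return beta              # fail-hard beta cutoff
    alpha = max(alpha, stand_pat)
    for child in pos.capture_moves():
        score = -quiescence(child, -beta, -alpha)  # negamax sign flip
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

The "stand pat" score is what makes this address the over-cautiousness described in the thread: a good capture raises the returned value above the static eval, while a losing capture is simply declined, so evaluating at a depth where it is "your turn afterwards" no longer looks artificially unsafe.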
||
Title: Re: Evaluation and current depth Post by GorgeTranche on Jan 13th, 2014, 4:06am

Thanks for the answer! Especially for goal threats, a quiescence search is already planned and partly implemented. Right now I am concentrating on the evaluation function rather than on parallelization or quiescence search, because a faster or deeper search doesn't help if the evaluation is bad ;) So basically it all comes down to distinguishing between current search depths and trying to adjust constants and variables, right? ;)
||
Title: Re: Evaluation and current depth Post by lightvector on Jan 13th, 2014, 7:07am

Glad to see new developers getting into Arimaa! I think many bots use mostly the same evaluation function regardless of depth, except that there is a bonus for the player whose turn it is, representing the value of being up a tempo, or having the initiative. Although it's okay if some terms in the evaluation function differ a little as well, if your testing shows that such changes are good. Note that a (flat) tempo bonus won't matter if your search goes to a totally uniform depth, but will matter once you start doing quiescence search, extensions, or reductions. This is an example of how, in general, the design of the eval is not totally independent of the design of the search.

My bot does search to depths that end on partially completed turns, and the tempo bonus and quiescence search both "understand" that the state of the board being evaluated is, for example, "silver has played 2 steps and has 2 steps left". But some other bots do fine only searching to whole-turn depths, so really it's a choice you can test for yourself to see which way is better for your particular bot.

You might already be doing this, but one important way to boost the bot's effective depth, and to smooth out differences in performance versus time a little, is to make sure that when you do iterative deepening you can use the results of partial searches when you run out of time. In particular, at the root, always search the best move from the previous iteration first. Then, if time runs out and you've only partially completed a search at a certain depth, you can go ahead and use the best move found at that depth so far. Even if you haven't proven it's the absolute best move at that depth, you've at least proven that it's as good as or better than the first move searched, which is the move from the previous depth that you would have played otherwise. This lets your bot get major use out of searching even a little bit into a given iteration, even if it can't complete it in time.
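[Editor's note: the root loop described above can be sketched as follows. `score_move` stands in for a full alpha-beta search of one root move and is assumed to raise a `TimeUp` exception when the clock runs out; both names are hypothetical, not from any particular bot.]

```python
INF = 10**9

class TimeUp(Exception):
    """Raised by the underlying search when the clock runs out."""

def iterative_deepening(root_moves, score_move, deadline, max_depth=64):
    """Deepen one ply at a time. At each depth, the previous iteration's
    best move is searched first, so even a partially completed iteration
    has proven its current best move is at least as good as the move we
    would otherwise have played from the previous depth."""
    best_move = root_moves[0]
    for depth in range(1, max_depth + 1):
        # Previous best first, then the remaining moves in their old order.
        ordered = [best_move] + [m for m in root_moves if m != best_move]
        try:
            iter_best, iter_score = None, -INF
            for mv in ordered:
                score = score_move(mv, depth, deadline)  # may raise TimeUp
                if score > iter_score:
                    iter_best, iter_score = mv, score
                best_move = iter_best  # safe to adopt even mid-iteration
        except TimeUp:
            break  # keep the best move proven so far
    return best_move
```

Because `best_move` is updated after every root move rather than only at the end of an iteration, an interrupted iteration never loses information: the move it holds has already beaten (or matched) the previous depth's choice.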
||
Title: Re: Evaluation and current depth Post by GorgeTranche on Jan 13th, 2014, 7:28am

Glad that I found this topic for my bachelor thesis, as it is always way more fun to do something one likes :) The bonus idea is not bad, thanks. So at least for trap control I will use separate blocks for different search depths. All other non-positional evaluation terms won't care whose turn it is anyway. Where I am not sure yet is the fuss about iterative deepening... although I do see that if the search is stopped early, the result could be very bad and very wrong if whole parts of the tree aren't evaluated at all. Right now it is just alpha-beta. Even for move ordering I use an instance of the same alpha-beta search (without memory and move ordering). Other than good partial results, where is the benefit of using iterative deepening? Thanks for the fast replies :)
||
Title: Re: Evaluation and current depth Post by rbarreira on Jan 13th, 2014, 7:37am

on 01/13/14 at 07:28:35, GorgeTranche wrote:
There are two other big advantages: 1. You get better move ordering in the later iterations. 2. Your program will always use as much thinking time as it can. Without iterative deepening you're forced to guess what depth to search in order to use the available thinking time, which is very hard or impossible to do well.
||
Title: Re: Evaluation and current depth Post by GorgeTranche on Jan 14th, 2014, 7:08am

Yes, you're right. Originally the plan was to switch to MTD(f), since it provides the benefits of iterative deepening and (if the move ordering is good) outperforms alpha-beta, according to Aske Plaat: http://askeplaat.wordpress.com/534-2/mtdf-algorithm/ OK, right, it benefits from iterative deepening because it is called inside an iterative deepening framework ;D I probably will not get to do this for the bachelor thesis, but I will give it a try later on, I guess. And then parallelization. First the evaluation function :) Thanks a lot guys! Thumbs up!
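[Editor's note: for reference, the MTD(f) driver mentioned above is short. This sketch follows Plaat's description; `alphabeta_wm` is a hypothetical stand-in for a fail-soft alpha-beta search with a transposition table ("with memory"), which MTD(f) needs to converge efficiently, since it re-searches the same tree with a sequence of null windows.]

```python
INF = 10**9

def mtdf(root, first_guess, depth, alphabeta_wm):
    """Converge on the minimax value via null-window probes.
    alphabeta_wm(root, alpha, beta, depth) must be fail-soft (it may
    return values outside the window) and should cache results in a
    transposition table, or the repeated probes become very expensive."""
    g = first_guess
    lower, upper = -INF, INF
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alphabeta_wm(root, beta - 1, beta, depth)  # null-window probe
        if g < beta:
            upper = g   # probe failed low: true value <= g
        else:
            lower = g   # probe failed high: true value >= g
    return g
```

Inside an iterative deepening framework, `first_guess` is seeded with the value returned by the previous depth's search, which keeps the number of probes small when the value is stable between iterations.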
||
Arimaa Forum » Powered by YaBB 1 Gold - SP 1.3.1! YaBB © 2000-2003. All Rights Reserved. |