Arimaa Forum « Evaluation and current depth »


GorgeTranche (Forum Junior Member, Arimaa player #8904)
Evaluation and current depth
« on: Jan 13th, 2014, 3:20am »

Hello guys,

I am rewriting the evaluation function of my bot right now, and I am wondering about a few things...

Up to now, when testing my bot and its evaluation, I switch between a depth of 4 and a depth of 8, because those are the depths at which a move is completed.

But there are huge performance differences between these two, and, most importantly, huge differences in the time used per move...

So I am thinking: shouldn't I write different evaluation functions for different depths, or am I missing something here?

For example, if I run my bot with depth = 8, it is going to be too cautious, because it thinks a situation is unsafe when it actually is safe, since it is my turn afterwards...

And if I wanted to compromise and use a depth of, say, 6, would I have to evaluate differently because one side's move is only half finished?

So how have you solved this problem?

Thanks a lot in advance
Maurice
JimmSlimm (Forum Guru, Arimaa player #6348)
Re: Evaluation and current depth
« Reply #1 on: Jan 13th, 2014, 3:38am »

To my knowledge, this is a classic problem with shallow search depths when using alpha-beta: we can see the bot being too cautious, or making pointless threats, because of the shallow search.

To solve it, you need to compensate for these shortcomings by either:
1. searching deeper (time consuming), or
2. using more heuristics in the evaluation function (such as expert knowledge or decision trees).

For capture threats and goal threats, I recommend using "quiescence search" (google it or look in this forum for details).
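As a rough illustration (not from this thread), a quiescence search over forcing moves only could look like the sketch below. Positions are abstracted here as toy pairs of (static eval for the side to move, list of positions reachable by forcing moves); a real Arimaa bot would generate captures and goal threats from the board instead.

```python
INF = 10**9

def quiescence(pos, alpha, beta):
    """Search only forcing moves (captures, goal threats) until the
    position is quiet.  pos is a toy (static_eval, forcing_children)
    pair; static_eval is from the side to move's point of view."""
    stand_pat, forcing_children = pos
    if stand_pat >= beta:           # already good enough: fail-high
        return beta
    alpha = max(alpha, stand_pat)   # we may simply decline all captures
    for child in forcing_children:
        score = -quiescence(child, -beta, -alpha)  # negamax sign flip
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

The stand-pat bound assumes the side to move can always fall back on a quiet move that keeps the static score, which is the usual assumption behind this technique.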
GorgeTranche (Forum Junior Member, Arimaa player #8904)
Re: Evaluation and current depth
« Reply #2 on: Jan 13th, 2014, 4:06am »

Thanks for the answer!

A quiescence search is already planned, and partly implemented, especially for goal threats.
Right now I am concentrating on the evaluation function rather than on parallelization or quiescence search, because faster or deeper search doesn't help if the evaluation is bad ;)

So basically it all comes down to distinguishing between current search depths and adjusting the constants and variables accordingly, right? ;)
lightvector (Forum Guru, Arimaa player #2543)
Re: Evaluation and current depth
« Reply #3 on: Jan 13th, 2014, 7:07am »

Glad to see new developers getting into Arimaa!
 
I think many bots use mostly the same evaluation function regardless of depth, except that there is a bonus to the player whose turn it is, representing the value of being up a tempo, or having the initiative. Although it's okay if some terms in the evaluation function differ a little as well, if your testing shows that such changes are good.  
 
Note that a (flat) tempo bonus won't matter if your search goes to a totally uniform depth, but will matter once you start doing quiescence search, extensions, or reductions. This is an example of how in general, the design of the eval is not totally independent of the design of the search.
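A flat tempo bonus of this kind could be sketched as below; the weight, the units, and the idea of scaling it by remaining steps are made-up illustrations, to be tuned by testing against your own bot.

```python
STEPS_PER_TURN = 4   # an Arimaa turn is up to four steps
TEMPO_BONUS = 150    # hypothetical weight; tune against test games

def evaluate_with_tempo(static_score, side_to_move, steps_left):
    """static_score is from gold's point of view.  Add a bonus for the
    player on move, scaled down when part of the turn is already spent,
    so mid-turn leaves from extensions/reductions stay comparable."""
    bonus = TEMPO_BONUS * steps_left // STEPS_PER_TURN
    if side_to_move == "gold":
        return static_score + bonus
    return static_score - bonus
```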
 
My bot does search to depths that end on partially completed turns, and the tempo bonus and quiescence search both "understand" that the state of the board being evaluated is that, for example, "silver has played 2 steps and has 2 steps left". But some other bots do fine only searching to whole turn depths, so really it's a choice you can consider testing for yourself to see which way is better for your particular bot.
 
You might already be doing this, but one important way to boost the bot's effective depth and smooth out differences in performance versus time a little is that when you do iterative deepening, make sure you can use the results of partial searches when you run out of time. In particular, at the root, always search the best move from the previous iteration first. Then, if time runs out and you've only partially completed a search at a certain depth, you can go ahead and use the best move found at that depth so far. Because even if you haven't proven it's the absolute best move at that depth, you've at least proven that it's as good or better than the first move searched, which is the move from the previous depth that you would have played otherwise. This lets your bot get major use out of being able to search even a little bit into a given iteration, even if it can't complete it in time.
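The root-level scheme described above can be sketched as follows; `score_fn` and `out_of_time` are stand-ins for a full alpha-beta search of one root move and a clock check, not anyone's actual implementation.

```python
INF = 10**9

def search_root(moves, score_fn, max_depth, out_of_time=lambda: False):
    """Iterative deepening at the root.  Each iteration searches the
    previous iteration's best move first, so if time runs out mid-depth,
    any replacement best move has already been proven at least as good
    as the move we would otherwise have played."""
    best_move = moves[0]
    for depth in range(1, max_depth + 1):
        # previous best move goes to the front of the ordering
        ordered = [best_move] + [m for m in moves if m != best_move]
        iter_best, iter_score = None, -INF
        for move in ordered:
            if out_of_time():
                break                        # keep partial results
            score = score_fn(move, depth)    # stand-in for alpha-beta
            if score > iter_score:
                iter_best, iter_score = move, score
        if iter_best is not None:
            best_move = iter_best            # usable even if incomplete
        if out_of_time():
            return best_move
    return best_move
```

Even an aborted iteration can only replace the stored best move with one that scored higher at the deeper depth, which is exactly the guarantee described above.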
GorgeTranche (Forum Junior Member, Arimaa player #8904)
Re: Evaluation and current depth
« Reply #4 on: Jan 13th, 2014, 7:28am »

Glad that I found this topic for my bachelor thesis, as it is always way more fun to do something one likes :)

The bonus idea is not bad, thanks.

So at least for trap control I will write separate blocks for different search depths. All other non-positional evaluation terms won't care whose turn it is anyway.

Where I am not sure yet is the fuss about iterative deepening...

Although I do see that if the search is stopped early, the result could be very bad and very wrong if whole parts of the tree aren't evaluated at all.

Right now it is just alpha-beta. Even for move ordering I use an instance of the same alpha-beta search (without memory and move ordering...).

Other than good partial results, what is the benefit of using iterative deepening?

Thanks for the fast replies :)
rbarreira (Forum Guru, Arimaa player #1621)
Re: Evaluation and current depth
« Reply #5 on: Jan 13th, 2014, 7:37am »

on Jan 13th, 2014, 7:28am, GorgeTranche wrote:

Other than good partial results, what is the benefit of using iterative deepening?

There are two other big advantages:

1. You get better move ordering in the later iterations.
2. Your program will always use as much thinking time as it can. Without iterative deepening, you are forced to guess what depth to search in order to use the available thinking time, which is very hard or even impossible to do well.
GorgeTranche (Forum Junior Member, Arimaa player #8904)
Re: Evaluation and current depth
« Reply #6 on: Jan 14th, 2014, 7:08am »

Yes, you're right.

Originally the plan was to switch to MTD(f), as it provides the benefits of iterative deepening and (if the move ordering is good...) outperforms plain alpha-beta, according to Aske Plaat: http://askeplaat.wordpress.com/534-2/mtdf-algorithm/

OK, right: it benefits from iterative deepening because it is called inside an iterative deepening framework :D
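For reference, the core of MTD(f) as Plaat describes it is a loop of zero-window alpha-beta calls that narrows a bound interval around the minimax value. A toy sketch, over a tree of nested lists where each leaf number is the negamax score for the side to move at that leaf:

```python
INF = 10**9

def alphabeta(node, alpha, beta):
    """Fail-soft negamax alpha-beta over a toy tree: a node is either a
    number (leaf value for the side to move) or a list of child nodes."""
    if not isinstance(node, list):
        return node
    best = -INF
    for child in node:
        best = max(best, -alphabeta(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:
            break
    return best

def mtdf(root, guess):
    """MTD(f): repeated zero-window searches converge on the true value,
    tightening a lower/upper bound pair around it."""
    g, lower, upper = guess, -INF, INF
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alphabeta(root, beta - 1, beta)   # zero-window probe
        if g < beta:
            upper = g    # search failed low: value is at most g
        else:
            lower = g    # search failed high: value is at least g
    return g
```

A real implementation also needs a transposition table so the repeated re-searches stay cheap, which is exactly the "memory" part Plaat emphasizes.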
 
I probably will not get to do this for the bachelor thesis, but I will give it a try later on, I guess. And then parallelization.

First the evaluation function :)

Thanks a lot guys, thumbs up!
Arimaa Forum » Powered by YaBB 1 Gold - SP 1.3.1!
YaBB © 2000-2003. All Rights Reserved.