Arimaa Forum » Bot Development » Sharp 2015 Challenge paper
lightvector (Arimaa player #2543)
Sharp 2015 Challenge paper
« on: Oct 3rd, 2015, 4:09pm »

Paper published in the ICGA Journal!
 
Uploaded and openly available with permission from the journal:
http://icosahedral.net/downloads/djwu2015arimaa_color.pdf
 
Wu, D. (2015). Designing a Winning Arimaa Program. ICGA Journal, Vol. 38, No. 1, pp. 19-41.  
 
I wonder what the 2016 CC (if it does happen) will look like. :)
chessandgo (Arimaa player #1889)
Re: Sharp 2015 Challenge paper
« Reply #1 on: Oct 4th, 2015, 10:51am »

Congrats :)

Fritzlein (Arimaa player #706)
Re: Sharp 2015 Challenge paper
« Reply #2 on: Oct 4th, 2015, 11:48am »

What an excellent paper to complement an excellent bot!  Who knew that lightvector would be as accomplished at writing as he is at coding?
 
I would have guessed, before reading, that sharp's leap in strength did not have much to do with improved evaluation, but it seems evaluation improvements played a critical role. A weaker player than lightvector would probably have had difficulty identifying and encapsulating the strategic flaws that he fixed. It is interesting how much (and what sort of) human domain expertise is necessary to create a machine expert.
 
The improvements to search are what I find most fascinating, in particular the innovations that allowed sharp to be more selective.  It has been my intuition for some time that it would be important to do extensions/reductions on some basis other than static evaluation or even shallow evaluation.  To my way of thinking some moves are "critical" for evaluating a position even if they look superficially bad, and these moves must be examined.
 
I am therefore delighted to read section 5.3.  Lightvector calls the moves that deserve special attention "tactical" rather than critical, but the point is the same.  Most moves can be left out of the deepest search without significantly impacting playing strength, but a few moves simply must be examined to greatest depth, and the method of choosing these moves is not superficial evaluation.  I note that human domain knowledge was again required to craft this move selection heuristic.
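
To make the idea concrete, here is a minimal sketch of that kind of selectivity in a generic negamax searcher. Everything here is hypothetical: Position, Move, isTactical, and the reduction thresholds are illustrative names and numbers, not sharp's actual code. The point is only that the tactical flag, not the static evaluation, decides which moves escape reduction:

    #include <vector>

    struct Position;  // engine-defined
    struct Move;      // engine-defined
    bool isTactical(const Position& pos, const Move& m);         // engine-defined
    int  search(Position& pos, int depth, int alpha, int beta);  // engine-defined
    void makeMove(Position& pos, const Move& m);                 // engine-defined
    void undoMove(Position& pos, const Move& m);                 // engine-defined

    int searchMoves(Position& pos, int depth, int alpha, int beta,
                    const std::vector<Move>& moves) {
        int idx = 0;
        for (const Move& m : moves) {
            int childDepth = depth - 1;
            // Reduce late, quiet moves; never reduce tactical ones,
            // even if static evaluation says they look superficially bad.
            if (idx >= 4 && depth >= 3 && !isTactical(pos, m))
                childDepth -= 1;
            makeMove(pos, m);
            int score = -search(pos, childDepth, -beta, -alpha);
            undoMove(pos, m);
            if (score >= beta) return beta;  // fail-high cutoff
            if (score > alpha) alpha = score;
            ++idx;
        }
        return alpha;
    }

The interesting design choice is exactly the one the paper makes: the exemption is keyed on domain-specific tactical criteria rather than on how the move scores.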
 
Not only does this paper give a rich sampling of features that figured prominently in sharp's victory, it also leaves me with a sense of oceans of small-but-cumulatively-immense features that are not discussed.  I am left to wonder whether it is possible that lightvector spent even more time developing sharp than I spent developing my human Arimaa expertise.
 
Congratulations, lightvector, on a well-earned, monumental accomplishment, and thank you for sharing your insights!

half_integer (Arimaa player #8819)
Re: Sharp 2015 Challenge paper
« Reply #3 on: Nov 7th, 2015, 5:36pm »

Can someone add the link to the papers page of the Arimaa site? It obviously belongs there.
Migi (Arimaa player #4643)
Re: Sharp 2015 Challenge paper
« Reply #4 on: Jan 22nd, 2016, 10:34am »

Thank you for sharing this well-written and very interesting paper, and congratulations on winning the challenge!
 
One question I had after reading it, though, is why you use a simple "always-replace" scheme for the transposition table. With always-replace, it seems to me that you end up doing a lot of unnecessary recalculation. Entries for nodes near the root will very frequently be overwritten by leaf nodes, and every time that happens you have to re-search that whole subtree (with better alpha-beta bounds, sure, and some nodes still cached, but still). Surely a slightly smarter replacement scheme could prevent much of this at negligible cost in both memory and computation?
 
It just seems like such low-hanging fruit. As you said yourself, it's the simplest replacement scheme there is, which stands in stark contrast to all the other heuristics in Sharp, which you've clearly spent massive amounts of work on, to great effect. So for this low-hanging fruit, did you really not try any other replacement schemes, or did everything you tried perform worse than simple always-replace?
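
For instance, a standard two-tier ("depth-preferred plus always-replace") bucket would protect the expensive entries near the root while still refreshing recent ones. A minimal sketch, with hypothetical entry fields rather than sharp's actual layout:

    #include <cstdint>

    // Illustrative entry; real engines pack bounds, move, age, etc.
    struct TTEntry {
        std::uint64_t key   = 0;
        std::int16_t  score = 0;
        std::int8_t   depth = -1;
        std::uint8_t  flags = 0;
    };

    struct TTBucket {
        TTEntry deep;    // replaced only by an entry searched at least as deep
        TTEntry recent;  // unconditionally overwritten, as in plain always-replace

        void store(const TTEntry& e) {
            if (e.depth >= deep.depth) deep = e;   // protect expensive subtrees
            else                       recent = e; // shallow results go here
        }
        const TTEntry* probe(std::uint64_t key) const {
            if (deep.key == key)   return &deep;
            if (recent.key == key) return &recent;
            return nullptr;
        }
    };

The extra probe and compare cost almost nothing, and leaf-node stores can no longer evict a deep entry.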
 
lightvector (Arimaa player #2543)
Re: Sharp 2015 Challenge paper
« Reply #5 on: Jan 24th, 2016, 10:04pm »

Good question. I haven't tried anything else yet, but I've observed almost no improvement in performance, and almost no decrease in the number of nodes searched, from increasing the size of sharp's transposition table. If the replacement scheme were important, simply making the table larger should also show an improvement, and it doesn't.
 
It makes sense: sharp's evaluation is very heavy. I forget the precise numbers, but order-of-magnitude-wise I get only about 100k nodes/sec on a single thread. With a table of, say, 1G entries (x 16 bytes/entry = 16 GB), that means it takes on the order of 10000 cpu-seconds before a random table entry receives, in expectation, a single write. Even with 10 threads that's about 16 minutes, and the CC time control is only 2 minutes/move. That's consistent with the observation that a bigger table only barely decreases the number of nodes searched to reach a given depth.
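
The back-of-envelope arithmetic, spelled out with the rough figures above (order-of-magnitude guesses, not measurements, and assuming every searched node issues one uniformly distributed store):

    #include <cstdio>

    int main() {
        const double nodesPerSec  = 1e5;  // ~100k nodes/sec on one thread
        const double tableEntries = 1e9;  // 1G entries x 16 bytes = 16 GB
        // Expected time until a given entry is overwritten once:
        const double cpuSecPerOverwrite = tableEntries / nodesPerSec;    // 1e4
        const double minutesOn10Threads = cpuSecPerOverwrite / 10 / 60;  // ~16.7
        std::printf("%.0f cpu-sec per overwrite, %.1f minutes on 10 threads\n",
                    cpuSecPerOverwrite, minutesOn10Threads);
        return 0;
    }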
 
A smarter scheme could still be worth a few percent in average performance, so while it's far from the biggest issue, it's certainly worth implementing and testing for the next version.
knarl (Arimaa player #1648)
Re: Sharp 2015 Challenge paper
« Reply #6 on: Feb 7th, 2016, 3:53am »

Just wondering: has any profiling been done on the multithreading implementation? If there's any blocking going on, it would be interesting to port it to a non-blocking thread pool implementation...
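
For what it's worth, in parallel alpha-beta engines a common source of blocking is locking around the shared transposition table, and the standard lock-free workaround is Hyatt-style "lockless hashing": store key ^ data and validate on probe, so a torn read from two racing threads is detected and simply treated as a miss. A generic sketch, not a claim about sharp's implementation:

    #include <atomic>
    #include <cstdint>

    struct LocklessSlot {
        std::atomic<std::uint64_t> key{0};   // stored as key ^ data
        std::atomic<std::uint64_t> data{0};

        void store(std::uint64_t k, std::uint64_t d) {
            key.store(k ^ d, std::memory_order_relaxed);
            data.store(d, std::memory_order_relaxed);
        }
        bool probe(std::uint64_t k, std::uint64_t& out) const {
            std::uint64_t d = data.load(std::memory_order_relaxed);
            if ((key.load(std::memory_order_relaxed) ^ d) != k)
                return false;  // mismatch or torn write: treat as a miss
            out = d;
            return true;
        }
    };

A corrupted entry then costs at most one wasted re-search, which is far cheaper than a lock on every probe.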