Arimaa Forum « Big Perceptron bot, move encoding and BM »


NIC1138 (Arimaa player #65536)
Big Perceptron bot, move encoding and BM
« on: Apr 28th, 2006, 1:34am »
Hi there... I know it's not very exciting to some people, but I would like to see a bot based on a big perceptron neural network... Do you think we would need a very big one to get interesting results? At the very least, I would like to see it learn to make 4-step captures, for example... Just watching the network learn the rules would already be interesting!
 
 
Thinking about this, I ran into another interesting question. I always see people talking about moves instead of plies... But the truth is that there is a lot of redundancy in moves, isn't there?... There are many situations where the order of the steps doesn't matter: when we have 4 free pieces, we can move them in 4! = 24 different orders and end up with the same result. In some captures, we can move our piece in 2 different directions, or use two of our pieces, and end up with the same result...
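A quick illustration of this redundancy (a toy sketch, not full Arimaa rules — the four steps are deliberately chosen not to interact):

```python
# Toy demonstration: 4 single steps that don't interact can be played in
# 4! = 24 orders, yet every order reaches the same final position.
from itertools import permutations

# Hypothetical independent steps: (from_square, to_square) pairs.
steps = [((0, 0), (0, 1)), ((2, 2), (2, 3)),
         ((4, 4), (4, 5)), ((6, 6), (6, 7))]
start = {(0, 0): 'R', (2, 2): 'C', (4, 4): 'D', (6, 6): 'H'}

def apply_step(board, step):
    src, dst = step
    board = dict(board)          # copy so each ordering starts fresh
    board[dst] = board.pop(src)
    return board

results = set()
orders = 0
for order in permutations(steps):
    board = start
    for s in order:
        board = apply_step(board, s)
    results.add(frozenset(board.items()))
    orders += 1

print(orders, len(results))      # 24 orderings, 1 distinct position
```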
 
I would like to see an MLP, or something like it, encoding states and plies (4-step moves)... It could also be fun to see what happens when we start to strangle the encoding bottleneck!...
 
I would also like to know whether an MLP could generalize a move with respect to the relative strength of the pieces, instead of their absolute strength values, and whether it could exploit symmetries.
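To make the relative-strength idea concrete, here is one hypothetical preprocessing step (my own illustration, not a scheme from this thread): re-rank the pieces by strength among those still on the board, so the input never mentions absolute types.

```python
# Map absolute piece letters to ranks relative to the pieces still present,
# so "strongest remaining piece" is always rank 0, whether it is the
# elephant or, after trades, the camel.
ORDER = ['E', 'M', 'H', 'D', 'C', 'R']   # strongest to weakest

def relative_ranks(pieces):
    """pieces: piece letters on the board -> {letter: relative rank}."""
    present = sorted(set(pieces), key=ORDER.index)
    return {p: i for i, p in enumerate(present)}

print(relative_ranks(['E', 'M', 'H', 'C', 'R']))  # elephant is rank 0
print(relative_ranks(['M', 'H', 'C', 'R']))       # camel is now rank 0
```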
 
This may not be hard AI, but it is very interesting from the learning-systems and information-theory point of view...
 
And since we are here... Does anybody know Boltzmann Machines? I saw someone talking about simulated annealing... I'm a fan of the BM!...
99of9 (Gnobby's creator, player #314)
Re: Big Perceptron bot, move encoding and BM
« Reply #1 on: Apr 28th, 2006, 1:54am »
I've only played with small neural nets before. I dare not apply a NN to a problem like Arimaa, with so many inputs and such poorly defined training sets.
 
on Apr 28th, 2006, 1:34am, NIC1138 wrote:
And since we are here... Does anybody know Boltzmann Machines? I saw someone talking about simulated annealing... I'm a fan of the BM!...

I was the one talking about SA.  Since you mentioned Boltzmann Machines, I've just read about them on Wikipedia.  I'm glad you brought it up!  Wikipedia suggests that they have not been very successful at anything but the simplest problems so far...
leo
Re: Big Perceptron bot, move encoding and BM
« Reply #2 on: Apr 28th, 2006, 8:21am »
Yes, a specific neural network can handle all kinds of symmetry, including temporal symmetry, but I imagine that only a whole bunch of very well interconnected networks could do interesting work. Except for whole-board analysis, it seems more efficient to me to split the board into areas to turn, flip, and treat in separate specialized modules. Piece-type blindness would be useful too for some kinds of quick recognition, such as trap neighborhoods and population dispersion.
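The "turn and flip" part is cheap to implement: Arimaa's rules are symmetric under a left-right mirror, so a module trained on one wing could be fed the other wing reflected. A minimal sketch, assuming the board is stored as a dict (my representation, not leo's):

```python
# Reflect a position left-right so one specialized module can serve both wings.
def mirror_lr(board, width=8):
    """board: {(row, col): piece}; returns the left-right mirrored position."""
    return {(r, width - 1 - c): p for (r, c), p in board.items()}

board = {(0, 0): 'R', (3, 2): 'c'}            # hypothetical position
assert mirror_lr(mirror_lr(board)) == board   # mirroring twice restores it
print(mirror_lr(board))                       # {(0, 7): 'R', (3, 5): 'c'}
```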
 
I once considered using NN for Arimaa but I've opted for quicker hard-coded techniques to handle symmetry, temporal sequence memorization, and any type of learning I plan to implement. I'd be very interested in seeing a NN Arimaa player though.
leo
Re: Big Perceptron bot, move encoding and BM
« Reply #3 on: Apr 28th, 2006, 8:36am »
on Apr 28th, 2006, 1:34am, NIC1138 wrote:
Thinking about this, I ran into another interesting question. I always see people talking about moves instead of plies... But the truth is that there is a lot of redundancy in moves, isn't there?... There are many situations where the order of the steps doesn't matter: when we have 4 free pieces, we can move them in 4! = 24 different orders and end up with the same result. In some captures, we can move our piece in 2 different directions, or use two of our pieces, and end up with the same result...

 
This is an aspect that can be tackled well by manipulating goals and subgoals. Disconnected subgoals can be handled in any order, local motions have alternative paths, and complex operations (clearing the way by pushing, or rescuing a frozen piece so it can perform an action...) require strict sequential treatment.
IdahoEv (Arimaa player #1753)
Re: Big Perceptron bot, move encoding and BM
« Reply #4 on: Apr 29th, 2006, 6:12am »
Using ANNs for Arimaa is not as difficult a problem as chess, but it's still extremely difficult.   Among the basic problems is how to present the input to the network ... you can't just have a cell with a number from 0-6 representing 0=empty 1=rabbit, 2=cat, etc.   I've spent about three months so far just on the problem of how to represent the board to a network.
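One representation people use instead of a 0-6 number per square (an assumption on my part, not necessarily IdahoEv's scheme) is one-hot "planes": a separate 8x8 binary board per piece type and colour, so the network never has to decode an arbitrary numbering.

```python
import numpy as np

TYPES = 'RCDHME'   # rabbit, cat, dog, horse, camel, elephant
# 6 gold planes (uppercase) + 6 silver planes (lowercase) = 12 planes
PLANES = {p: i for i, p in enumerate(list(TYPES) + [t.lower() for t in TYPES])}

def encode(board):
    """board: {(row, col): piece letter} -> 12x8x8 array of 0s and 1s."""
    x = np.zeros((12, 8, 8), dtype=np.float32)
    for (r, c), p in board.items():
        x[PLANES[p], r, c] = 1.0
    return x

x = encode({(1, 0): 'R', (7, 4): 'e'})   # hypothetical two-piece position
print(x.shape, x.sum())                  # (12, 8, 8) 2.0 — one cell per piece
```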
 
And then the question becomes ... what are your outputs?   Is the network going to output a coordinate pair and a direction for each step, telling which piece to move?   Or will you have an output layer the size of the board, with four output cells per board square representing the four possible move directions  ... with a winner-take-all network picking one piece to move one direction?
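The second output scheme described here, four cells per square with winner-take-all, decodes like this (a sketch under that assumption):

```python
import numpy as np

DIRS = ['n', 's', 'e', 'w']

def decode_step(outputs):
    """outputs: (8, 8, 4) activations -> ((row, col), direction) of the winner."""
    r, c, d = np.unravel_index(np.argmax(outputs), outputs.shape)
    return (r, c), DIRS[d]

out = np.zeros((8, 8, 4))
out[3, 4, 2] = 0.9            # hypothetical strongest activation
print(decode_step(out))       # ((3, 4), 'e')
```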
 
I think one giant perceptron will produce extremely poor results.  To start with, a fully connected network requires 64*M*N weights in the first layer, where N is the number of first-layer neurons and M is the number of replicates of the board you need to represent the current board state.   I think the absolute minimum is 4 replicates (white rabbits, white officers, black rabbits, black officers), so that's 256*N weights in the first layer alone.   Call it 1024 first-layer neurons to describe different possible board states, and suddenly you're adapting 2^18 weights in the first layer, plus a similar number in the output layer assuming you are trying to output move coordinates.   All this from a training set of only a few million board positions that have ever been explored in the history of the game ... there's just not enough data to train it.
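Working the first-layer count through explicitly, under the four-replicate assumption (N = 1024 is the hypothetical width used above):

```python
# 64 squares x 4 board replicates = 256 inputs, fully connected to N neurons:
# one weight per (input, neuron) pair.
squares, replicates = 64, 4
N = 1024                       # hypothetical first-layer width
inputs = squares * replicates  # 256
first_layer = inputs * N
print(first_layer)             # 262144 = 2**18 weights in the first layer
```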
 
A perceptron, though, ignores the repeating internal structure of Arimaa, and leveraging repeating structure can chop several dimensions out of your search space.  (For example, since a cat freezes a rabbit anywhere on the board, there's no need for a network to learn this fact independently at every location.)
 
Other structures with  repeating sub-structures  and feature detection ... maybe.   I am currently working on a few ideas that look like giant customized neocognitrons, but frankly I'm skeptical they will work at all, and certain they will be incredibly slow if they do.   I will definitely finish writing a basic bot first (to be called bot_Zombie because it *doesn't* use a brain) before I even consider tackling the ANN approach in earnest.
NIC1138 (Arimaa player #65536)
Re: Big Perceptron bot, move encoding and BM
« Reply #5 on: Apr 30th, 2006, 9:38am »
Good, great answers! =)
 
First, about the BM: in the classical articles it is used to solve the TSP and other combinatorial problems, so there is nothing "inherently weak" about it!... The most interesting aspect is that we can control the simulated annealing to use all the available time. The quality of the play would depend strongly on the time available, in an obvious and easy-to-program way... If we could program that (probably huge) BM, it would be really nice to see it getting dumber as the play time shrinks!
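The anytime behaviour is easy to show even without a BM: a simulated-annealing loop that cools as its clock runs out, here on a toy objective (all names and parameters are my own sketch, not a bot design):

```python
import math, random, time

def anneal(cost, neighbour, start, budget_s, t0=1.0):
    """Minimise cost(), spending exactly budget_s seconds; cools as time runs out."""
    best = cur = start
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        frac = max(1e-9, (deadline - time.monotonic()) / budget_s)
        t = t0 * frac                       # temperature tracks remaining time
        cand = neighbour(cur)
        d = cost(cand) - cost(cur)
        if d < 0 or random.random() < math.exp(-d / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
    return best

random.seed(0)
# Toy objective: walk an integer toward 37. A bigger budget gives closer answers,
# a tiny budget gives a "dumber" one — the graceful degradation described above.
sol = anneal(lambda x: abs(x - 37), lambda x: x + random.choice([-1, 1]), 0, 0.05)
print(sol)
```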
 
About leo's idea of a bunch of interconnected ANNs... I gave that some thought... I was trying to see exactly how we could spend fewer resources that way. I imagined something like this to start: separate networks trained to play according to different strategies. First one network would learn the opening, then another the endgame, then another the middle game, perhaps with separate networks for attack/defense (?)...
 
Where do we save resources?... Well, for example, in the beginning of the game we seldom move the rabbits, while they are the key pieces in the endgame (although they don't move frequently there either)... The fewer pieces we have to look at, the fewer neurons and connections we need; the learning gets easier and everything gets quicker...
 
I think this would perhaps form what they call a "committee of networks", is that it?... The problem becomes when to switch between the networks! Perhaps another network could do this, I don't know; I would start by switching after a fixed number of moves, just to see what happens...
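The fixed-move-count switch is a one-function sketch; the three "networks" below are stubs standing in for trained models, and the thresholds are hypothetical:

```python
def opening_net(board):  return 'opening move'
def middle_net(board):   return 'middlegame move'
def endgame_net(board):  return 'endgame move'

def committee(board, move_number, switch1=10, switch2=40):
    """Pick a specialist by move count; a learned switcher could replace this."""
    if move_number < switch1:
        return opening_net(board)
    if move_number < switch2:
        return middle_net(board)
    return endgame_net(board)

print(committee(None, 3))    # opening move
print(committee(None, 55))   # endgame move
```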
 
 
Now, about IdahoEv's post: I've been doing the same calculations!... I still don't know whether the -6..6 numbering scheme over the 64 squares of the board would be a good place to start, but it is certainly the cutest scheme... Another scheme would be to enter the row and column of each piece, with a "0" for captured...
 
Whatever the input is, the important thing to see is that the output of the hidden layer can be trained to be an encoding of the board state... And if we restrict the networks to the beginning or the end of the game, we can reduce the number of those neurons, since there are fewer states to encode, and we will get different encodings!... It would be interesting to see if this really happens...
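The hidden-layer-as-encoding idea is the autoencoder picture. A miniature version, using a tied-weight linear autoencoder on random vectors (not real Arimaa positions — just to show the bottleneck learning a compact code that reconstructs its input):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))    # 200 fake "board vectors", 16 features
k = 4                                 # bottleneck width: the knob to strangle
W = 0.1 * rng.standard_normal((16, k))

def recon_error(W):
    R = X @ W @ W.T                   # encode to k dims, decode back to 16
    return float(((R - X) ** 2).mean())

before = recon_error(W)
lr = 0.01
for _ in range(500):                  # plain gradient descent on squared error
    E = X @ W @ W.T - X               # reconstruction residual
    W -= lr * (X.T @ E @ W + E.T @ X @ W) / len(X)
after = recon_error(W)

print(after < before)                 # True: the code captures part of the state
```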
 
Anyway, I'll finish my client first, then I'll start to mess around with this, so we can argue over real scientific evidence instead of crazy oneiric speculations!...
Arimaa Forum » Powered by YaBB 1 Gold - SP 1.3.1!
YaBB © 2000-2003. All Rights Reserved.