Arimaa Forum: Evolving a bot


Topic: Evolving a bot  (Read 6031 times)
thomastanck
Forum Senior Member
Re: Evolving a bot
« Reply #30 on: Apr 10th, 2011, 7:03am »

Hey, I think you should try to evolve neural networks, though; they are usually better than rule-based approaches.
Here are a couple links for you to start out:
http://en.wikipedia.org/wiki/Artificial_neural_network
http://www.youtube.com/watch?v=oU9r64tc7yE
 
Neural networks may take longer to evolve, though. I am not very sure myself, but I think you should consider them.

Thomas Tan, a very bad Arimaa player.
Quirky1
Forum Newbie
Genetic ramblings - Very long winded -
« Reply #31 on: Dec 28th, 2011, 9:29pm »

Is this thread dead or is there some development in this area still going on?
 
--- Here is a question or three and some ramblings, feel free to ignore ---
 
I have a basic question regarding this. How do you make the code mutate? With a problem whose answer is composed of a known number of parameters, you can test, evolve, and apply a fitness function, but then you "only" end up fine-tuning the order of moves or the values of the input parameters. How do you use this against an active opponent, and not just a static environment (or at least one that is predictable, with a finite number of possible states)?
 
If you had a very large, well-defined library of good potential rules and a way of prioritizing between them, I guess that priority order could be evolved. But someone has to come up with those rules and hard-code them before we can optimize/evolve their priority order, right?
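Evolving such a priority order is at least easy to sketch. Everything below is hypothetical: the rule names are made up, and the fitness function is a placeholder where a real setup would play games with the rules tried in that order and return a win rate. A minimal (1+1) hill-climbing sketch over permutations:

```python
import random

# Hypothetical hand-coded rule library; each name stands in for a rule function.
RULES = ["defend_trap", "advance_rabbit", "pull_piece", "freeze_piece", "block_lane"]

def mutate(order):
    """Swap two entries: a simple mutation over permutations."""
    i, j = random.sample(range(len(order)), 2)
    child = list(order)
    child[i], child[j] = child[j], child[i]
    return child

def evolve(fitness, order, generations=200):
    """(1+1) hill climbing: keep a mutated order only if it scores no worse."""
    best, best_fit = list(order), fitness(order)
    for _ in range(generations):
        child = mutate(best)
        f = fitness(child)
        if f >= best_fit:
            best, best_fit = child, f
    return best
```

Note that this only reorders hand-written rules; it never invents a new one, which is exactly the limitation being asked about.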
 
I don't understand how the AI could evolve "new" rules, or strange (out-of-the-box, non-human) ways of thinking, or hard-to-follow code?
 
Am I missing something basic in Genetic Programming or evolutionary programming?
 
I guess it could be theoretically possible to develop some sort of framework for coming up with random rules based on a limited (but still quite complex) world like Arimaa, and representing those rules in a genetic/binary form. But the work needed would be staggering, since there are so many possible types of rules that could (and probably should, if we are to reach a good level of AI) be used. We would need positional rules, material rules, historical rules and, most daunting of all, combinatorial rules like: if piece x and piece y are positioned in one of these n positions, we could respond with this specific move. The combinations not only of specific rules, but also of the types of rules, are daunting to say the least. And the memory needed to keep the useless, redundant and obsolete rules "alive" until they are removed by natural selection must be staggering. We really need some sort of computational cost in the fitness function, preferably in the meta-framework, that keeps track of how often each rule is actually used, so we can remove dead weight, genetically speaking.
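The bookkeeping part of that last idea, at least, is cheap. A rough sketch (rule names, thresholds and the rule interface are all made up) of tracking how often each rule actually fires, so rarely used rules can be pruned:

```python
from collections import Counter

usage = Counter()  # how often each rule actually produced a move

def first_matching_move(rules, position):
    """Try (name, rule) pairs in priority order; record which one fired."""
    for name, rule in rules:
        move = rule(position)
        if move is not None:
            usage[name] += 1
            return move
    return None

def prune(rules, games_played, min_rate=0.01):
    """Drop rules that fire in fewer than min_rate of games: a blunt form of
    the 'computational cost' pressure suggested above."""
    return [(name, rule) for name, rule in rules
            if usage[name] >= min_rate * games_played]
```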
 
I still can't wrap my head around genetic programming in this particular problem domain (Arimaa feels way too complex for it).
 
Maybe if we limited the genetic part to an evaluation function, with a goal part, a material-gain part and a positional part, and a number of variables for each, and then let the genetics find the perfect variant of that evaluation function. That would be doable, I guess, but then the answer would be really predictable, since the evaluation function itself would be the real AI part (actually the whole bot) and the fine-tuning would only be the finishing touch. But such a bot would never be able to evolve beyond that function's parameters, or take anything outside the scope of that specific function into account?
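That restricted version is easy to make concrete. A minimal sketch, assuming a linear evaluation over hand-coded features (the feature names and the fitness are placeholders; a real fitness would come from playing games):

```python
import random

def evaluate(features, weights):
    """Hand-coded eval: a weighted sum over e.g. material, goal threats, traps."""
    return sum(f * w for f, w in zip(features, weights))

def mutate(weights, sigma=0.1):
    """Perturb one randomly chosen weight with Gaussian noise."""
    child = list(weights)
    i = random.randrange(len(child))
    child[i] += random.gauss(0.0, sigma)
    return child

def tune(fitness, weights, steps=300):
    """(1+1) evolution strategy: keep the mutated child only if it is no worse."""
    best, best_fit = list(weights), fitness(weights)
    for _ in range(steps):
        child = mutate(best)
        f = fitness(child)
        if f >= best_fit:
            best, best_fit = child, f
    return best
```

As the post says, this only polishes the constants: the feature set itself, which is the real "intelligence", stays fixed.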
 
And if that more practical approach is chosen, why would the bot need to learn how to move the pieces? That should be hard-coded in the search tree and evaluation function from the start.
 
In short, I'm amazed you even got the pieces to move more than once, and my guess is that you had to hard-code that pretty hard.
 
I can just picture a toddler trying to play chess, getting cattle-prodded hard enough to cause amnesia each time he makes a wrong move, while we expect him to learn and develop new complex strategies from that.
 
In short: are we expecting too much from this approach, or is my knowledge of GP waaaay too shallow, so that I should shut up and read up on the subject first?
 
The idea is so cool and I really want it to be plausible, but is it? Are any of the bots learning anything new, or are they just adjusting some of the parameters in the evaluation function based on statistics from earlier experience or from the game database?
 
What would be interesting, though, is to create a framework for testing existing bots genetically. Each bot would take a certain number of input parameters: one for each material-weight constant, one for each positional-weight constant, and maybe some parameters for prioritizing between certain rules when they conflict. Maybe the likelihood of a random, non-optimal move (to make the bot non-deterministic) could also be an input parameter.
 
This particular parameter setup will be the specific genetic code for that type of bot.
 
The framework would then keep track of the different individuals (and their particular setup of constants - their DNA) and their mutations; natural selection would come from playing against the original bot (with the constants set to humanly defined optima), then a series of other normal bots, and maybe against the other genetically evolved individuals.
 
Such a framework should be useful to all bots. When you start up the framework, the input should be the definition and number of the parameters (the DNA) the bot under test uses. The framework doesn't need to differentiate between the parameters; it only needs to know how many there are, their types, and their lower and upper bounds (though those could be set to a standard 0.0-1.0, and the bot would have to convert each gene to an appropriate number before its constants are set internally).
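The 0.0-1.0 convention keeps the framework generic. A minimal sketch of what the genome side and the bot-side wrapper might look like; the parameter names and ranges are invented bot constants, not anything from an actual Arimaa bot:

```python
import random

# The framework only needs count and bounds; names/ranges here are made up.
PARAM_SPECS = [
    ("rabbit_value",  0.5, 3.0),
    ("cat_value",     1.0, 4.0),
    ("random_move_p", 0.0, 0.2),
]

def random_genome():
    """Every gene lives in the standard 0.0-1.0 range, as suggested above."""
    return [random.random() for _ in PARAM_SPECS]

def mutate(genome, sigma=0.05):
    """Gaussian mutation, clipped back into [0, 1]."""
    return [min(1.0, max(0.0, g + random.gauss(0.0, sigma))) for g in genome]

def decode(genome):
    """The bot-side wrapper: map generic genes onto the bot's own constants."""
    return {name: lo + g * (hi - lo)
            for g, (name, lo, hi) in zip(genome, PARAM_SPECS)}
```

With this split, the framework never sees bot-specific units; each bot ships its own `decode`.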
 
I assume that today the bots are tested in some sort of framework (I am very new to this site), but that the tuning of the evaluation functions is done manually, with the coders tweaking the rules as well as the constants? Or do we already have a framework for helping with the fine-tuning as well?
 
I read some other post on the relative value of a rabbit and a cat and how that relative value changes over time. In this framework, that relative value would need at least two genes/input parameters: one for the value and one for the change coefficient.
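The two-gene encoding is just a linear drift. A tiny illustration (the numbers are purely illustrative, not tuned Arimaa values):

```python
def piece_value(base, coeff, move_number):
    """Two genes per piece type: a base value plus a per-move drift, so e.g.
    a rabbit can gain value as the game goes on."""
    return base + coeff * move_number

# Hypothetical genes base=1.0, coeff=0.02: a rabbit worth 1.0 on move 0
# has drifted up to 2.0 by move 50.
rabbit_early = piece_value(1.0, 0.02, 0)
rabbit_late = piece_value(1.0, 0.02, 50)
```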
 
The new thing here would be that all bots are not created equal, so there is no single objective answer to the relative value of the two pieces. One bot might be really good at moving rabbits and using them for support all over the board, while another bot is a cat expert. One is aggressive and seldom runs out of rabbits, while the other plays a slow attrition game with many exchanges, so it almost always comes down to the number of rabbits in the end. With a "genetics"-inspired approach one could find a good tuning of the constants for each individual bot, without having to make do with the common sense of the forum, statistics from played games in the database, or objective truths.
 
In short: why value a piece for its ability to set up an elephant blockade, if your bot doesn't know how to set one up to begin with?
 
One might also consider splitting the evaluation function in two: one for the opponent, with one set of constants, and one for the bot itself. It means more genes to keep track of, but you can value the bot's rabbits in one way and the generic opponent's in another.
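In code that split is just two weight vectors instead of one. A minimal sketch, assuming the same linear weighted-sum evaluation style as before (feature vectors are placeholders):

```python
def evaluate(own_features, opp_features, own_weights, opp_weights):
    """Asymmetric evaluation: the bot's own rabbits (say) can carry a
    different weight than the generic opponent's rabbits."""
    own = sum(f * w for f, w in zip(own_features, own_weights))
    opp = sum(f * w for f, w in zip(opp_features, opp_weights))
    return own - opp
```

The cost is doubling the genome length for evaluation constants, which is exactly the "more genes to keep track of" trade-off.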
 
And the current bots wouldn't need to be reprogrammed in a major way. With one more input parameter, a wrapper to receive it and set the constant to be tested would be all that is needed to get started. Then the number of constants optimized by the framework could be increased one at a time.
 
It's late and I got a taaaaad long winded, sorry about the ramblings.
dht
Forum Full Member
Re: Evolving a bot
« Reply #32 on: Oct 17th, 2013, 7:06am »

http://arimaa.com/arimaa/gameroom/comments.cgi?gid=278748
 
??
Fritzlein
Forum Guru
Re: Evolving a bot
« Reply #33 on: Oct 17th, 2013, 10:57am »

How to spot a genetic algorithm...


Arimaa Forum » Powered by YaBB 1 Gold - SP 1.3.1!
YaBB 2000-2003. All Rights Reserved.