[Hypothetical babbling] - AI
Hi dee Ho peeps!
First of all, I am not at all familiar with The Battle for Wesnoth's source, so I am speaking purely hypothetically. But with my summer holidays having just ended, I am not yet able to get back to work... Just thinking of looking at my work's source code makes me feel nauseous. So I have time to do some hypothetical babbling. If you're busier than me, feel free to just skip this topic - it will probably not contain anything of real value.
Anyway, I have been playing with the thought of creating a game AI with so-called neural networks. If you're not familiar with the concept, I'll briefly explain the basic idea. (I have not really studied the topic too deeply myself either, so comments from more experienced people are welcome.)
The idea is that you know the 'starting conditions'. In this case that would be the number and positions of enemy troops, the terrain, your own troops, etc. Next you need to decide what to do. The options are, of course, to move some of your own fellows or stay where you are, attack or retreat, etc.
A neural network is used to make this decision. The funny thing is that you do not really write any fancy rules for how the decision is made. Instead you have a kind of 'neural network' with a decent number of 'jumps'. You select a path through the jumps to some decision. If the decision proves to be correct (you defeat the enemy, or you win the battle, or...), you add some 'weight' to the jumps you used, so that the next time you encounter a somewhat similar situation, this route has a slightly better chance of being chosen than some other route. If the decision turned out to be bad, you decrease the weight of those 'jumps'.
Then you simulate different battle situations and keep correcting the balance of the jumps according to the results. In other words, you are teaching your AI.
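To make this a bit more concrete, here is a tiny Python sketch of what I'm imagining: a plain lookup table of weights stands in for the real network, the situation names are completely made up, and simulate_battle is an imaginary function that would play the game out and report the result.

Code:

import random

# Made-up situations and actions, just to illustrate the 'jumps' idea; a plain
# lookup table of weights stands in for a real neural network.
ACTIONS = ["attack", "hold", "retreat"]

weights = {}   # one weight per (situation, action) 'jump'

def choose_action(situation):
    # Pick an action with probability proportional to its current weight.
    w = [weights.setdefault((situation, a), 1.0) for a in ACTIONS]
    return random.choices(ACTIONS, weights=w)[0]

def reinforce(used_jumps, won, step=0.1):
    # Strengthen the jumps we used if the battle was won, weaken them if lost.
    for jump in used_jumps:
        weights[jump] = max(0.01, weights[jump] + (step if won else -step))

def play_one_battle(simulate_battle):
    # simulate_battle is imaginary: it would play the game out and return True on a win.
    used = []
    for situation in ["enemy_near_keep", "own_unit_wounded"]:   # made-up states
        used.append((situation, choose_action(situation)))
    won = simulate_battle(used)
    reinforce(used, won)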
Of course this is just a rough explanation, but somehow I am excited by the thought of an AI which is learning in every game.
It would allow interesting battles between AIs developed by different gamers...
Now the big question... Does anyone see any chance of creating such an AI? What would it take? Does anyone immediately see a fatal flaw in this idea? Could it be possible even in theory?
If only I didn't have my job, studies, life... :p
Oh, and finally... I must say this...
I am impressed by The Battle for Wesnoth: a freeware project which has grown to dimensions that most commercial projects can only dream of reaching.
Thumbs Up!!!!
CWF-Freeware
There are only 10 types of people: those who understand binary, and those who don't.
My C blogs:
C - Suomeksi
C Programmer's diary
Neural networks often need VERY many repetitions to learn something, especially if they are not set up optimally. If the problem is complex and not presented to the NN in a "good" way, it might not learn it at all.
If you have no prior experience with neural networks, I suggest you start experimenting with some VERY simple tasks (FAR below the complexity of BfW) to see how well they can learn them. This might give you an idea of whether such an approach is feasible for a BfW AI.
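For example, something on this scale: a tiny two-layer net learning XOR with plain numpy. Even this toy problem already needs thousands of passes before the net gets it right, which gives a feeling for the repetitions involved. (Just a sketch, nothing Wesnoth-specific.)

Code:

import numpy as np

# Toy problem: learn XOR, which a single-layer perceptron cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):                 # even this toy task needs thousands of passes
    h = sigmoid(X @ W1 + b1)              # hidden activations
    out = sigmoid(h @ W2 + b2)            # network output
    # Backpropagation of the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should end up close to [[0], [1], [1], [0]]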
Thanks for the input, AA.
And I must say that I did not really dream of creating an AI for Wesnoth (in the near future); I simply do not have the time required to study these things.
This has just been a thought I have been playing with.
And yes, I know something about the number of repetitions needed... But I assume it could be done by simulating the game without too much human interaction.
Running simulations for a few months to a few years could yield a decent AI (I assume). But of course you're right about the need for careful design. Still... I cannot get the thought out of my head.

Unless I'm mistaken, it should be possible to make Wesnoth autonomously run a set match between two AIs, for example, and report the outcome. If this is correct, it would probably be much easier to make small adjustments to the current AI, pit the new version against the current AI or any other AIs floating around (bruteforce.py comes to mind) a couple dozen times, look at the results, tweak the AI parameters again, repeat, etc.
Would probably be quite a lot of work still.
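Something along these lines, for instance. I'm assuming a wrapper script here (run_one_ai_match.sh, which does not exist) that would start one headless AI-vs-AI game with the given parameters and print the winning side; the parameter names are made up too.

Code:

import random
import subprocess

# Imaginary wrapper script: run_one_ai_match.sh does not exist, but the idea
# is that it starts one head-less AI-vs-AI game with the given parameters and
# prints the number of the winning side.
def run_match(params):
    args = [f"{name}={value}" for name, value in params.items()]
    result = subprocess.run(["./run_one_ai_match.sh"] + args,
                            capture_output=True, text=True)
    return result.stdout.strip() == "1"    # True if the tweaked side won

# Very naive hill-climbing over a couple of made-up AI parameters.
params = {"aggression": 0.5, "caution": 0.25}

for generation in range(100):
    candidate = {name: max(0.0, min(1.0, value + random.uniform(-0.1, 0.1)))
                 for name, value in params.items()}
    wins = sum(run_match(candidate) for _ in range(24))   # a couple dozen games
    if wins > 12:                  # keep the tweak only if it won more than half
        params = candidate
    print(generation, wins, params)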
Naturally I was thinking of that kind of 'teaching' (AI vs. AI). Tweaking the parameters, as you put it, should also be automated.
A drill of AI vs AI isn't going to achieve anything but an AI that can counter the quite predictable current AI. If anything, this teaching should be done by experienced MP players...
Try some Multiplayer Scenarios / Campaigns
Rhuvaen wrote:
A drill of AI vs AI isn't going to achieve anything but an AI that can counter the quite predictable current AI. If anything, this teaching should be done by experienced MP players...

True, but if we had 1-2 reasonably competent AIs that function differently from the default AI (bruteforce.py would be one, AFAIK), then the AI that can beat them all is quite likely also harder for human players to beat.
Of course, teaching an AI to beat other AIs will never teach it how to beat human players as such, but a noticeable improvement would already be good, and I don't think the possibility of that happening can be ruled out here.

I know this is a huge request, manuel... But if you ever try it out, could you please occasionally show me your progress? I am always keen on learning something new (and I am getting obsessed with the thought). I feel NNs could be effective in some places at least, since I personally cannot see a clear causality in all the actions.
Ps. I do like the atmosphere in this community. I have seen too many programming-oriented forums where members try to prove their worth by shooting down every new idea straight away. I have not had that impression here. You fellows seem to be generally nice and open-minded, friendly, but still willing to state your own opinion. So thumbs up for the community too!
(Although I should have guessed this: if something is done in a bad atmosphere, the results are rarely (or never?) as good as the Wesnoth game.)

Mazzie wrote:
I know this is a huge request, manuel... But if you ever try it out, could you please occasionally show me your progress? I am always keen on learning something new (and I am getting obsessed with the thought). I feel NNs could be effective in some places at least, since I personally cannot see a clear causality in all the actions.

I'm new to the Wesnoth community and I have the same feeling; people here are very nice to newcomers. I'm working on the sound classes of Wesnoth just to get used to the source code and game structure. As soon as I get a better knowledge of the source I will start a simple AI, and maybe try to put some simple NN aids in some points if possible. I think the point is to use it in internal layers of the AI, and only on simple and easy tasks, like the evaluation of an AI parameter. Trying to connect the NN directly to the game state would be very difficult. If I make any progress I will let you know.
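For example, what I mean by evaluating a single AI parameter is something as small as this sketch: the input features are invented, and the random weights would of course have to be trained first.

Code:

import numpy as np

# Sketch: a tiny network whose only job is to turn a few numbers the AI
# already computes (the three ratios below are invented) into a single
# parameter such as 'aggression'. The random weights here would of course
# have to be trained first.
rng = np.random.default_rng()
W1 = rng.normal(scale=0.1, size=(3, 5))   # 3 inputs -> 5 hidden units
W2 = rng.normal(scale=0.1, size=(5, 1))   # 5 hidden -> 1 output

def evaluate_aggression(gold_ratio, unit_count_ratio, avg_hp_ratio):
    x = np.array([gold_ratio, unit_count_ratio, avg_hp_ratio])
    h = np.tanh(x @ W1)                    # hidden layer
    z = (h @ W2).item()
    return 1.0 / (1.0 + np.exp(-z))        # squash the result into 0..1

# The rest of the AI would use this value exactly like the hand-set
# aggression parameter it uses today.
print(evaluate_aggression(1.2, 0.9, 1.1))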

zookeeper wrote:
True, but if we had 1-2 reasonably competent AIs that function differently from the default AI (bruteforce.py would be one, AFAIK), then the AI that can beat them all is quite likely also harder for human players to beat. Of course, teaching an AI to beat other AIs will never teach it how to beat human players as such, but a noticeable improvement would already be good, and I don't think the possibility of that happening can be ruled out here.

I guess it would easily take 100 000+ games to train an NN-AI. Also, learning progress is often very slow at first, so if it doesn't work, you will probably only find out after having played maybe 20 000 games. I doubt we can find any experienced MP players who are willing to endure this.

Having different AIs compete, as Zookeeper suggested, is probably a very good idea, especially if most of them are still actively and independently being developed. That way the different AIs provide a convenient benchmark for each other's strength, which makes it easier to judge whether changes between versions are effective or not.
Regarding the hybrid-NN idea: I think this one might be quite promising. The default AI is fairly good at the "detail decisions", i.e. choosing how to move and attack with single units, but it is less effective at the "big picture", i.e. overall strategy.
So maybe it would be possible to make a hybrid NN, where the NN only sets the more general goals (e.g. 'capture the keep to the west'), while the details of implementing these goals are decided in a more conventional way.
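Just to illustrate the split, a rough sketch: the NN part is only the goal choice, everything else stays conventional code. All goal names and features here are invented, and the weights would have to be learned rather than random.

Code:

import numpy as np

# Sketch of the split: the NN only picks one general goal per turn from a
# fixed list; the detail work stays in conventional, hand-written routines.
# The goal names, features and weights below are all invented.
GOALS = ["capture_west_keep", "defend_own_keep", "push_centre", "consolidate"]

rng = np.random.default_rng()
W = rng.normal(scale=0.1, size=(4, len(GOALS)))   # would be learned, not random

def pick_goal(features):
    # features: e.g. [gold_ratio, unit_ratio, village_ratio, turn_fraction]
    scores = np.asarray(features) @ W
    return GOALS[int(np.argmax(scores))]

def execute_goal(goal, game_state):
    # Conventional hard-coded AI routines would do the actual moving,
    # attacking and recruiting needed to pursue the chosen goal.
    pass

print("This turn the NN suggests:", pick_goal([1.1, 0.8, 1.3, 0.4]))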
- Viliam
This idea seems interesting, but difficult to implement. In theory it should be possible.
First, the neural networks could learn by observing the current AI. When they become good enough, we could take the current AI away and replace it with evolution: the neural networks would have random mutations and fight against each other; the loser would die, the winner would replicate. This would allow the neural networks to randomly find some improvements over the current AI.
To add more computing power, it could be possible to download definitions of neural networks, run the AI-vs-AI battles on a home computer, and then upload the resulting networks to a server. As an incentive for donating CPU time, the neural networks could remember the names of the people who let them train on their computers, and when fighting on the internet, they would use the names of their sponsors for their units.
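The evolution step itself would be quite simple, roughly like this sketch: a flat list of numbers stands in for a real network definition, and fight() is only a placeholder for running an actual AI-vs-AI match between two of them.

Code:

import random

# Sketch of one evolution step. A flat list of numbers stands in for a real
# network definition, and fight() is only a placeholder for running an actual
# AI-vs-AI Wesnoth match between two of them.
def mutate(weights, rate=0.05):
    return [w + random.gauss(0, rate) for w in weights]

def fight(weights_a, weights_b):
    # Placeholder: in reality this would play one game and return True if
    # the first network won.
    return random.random() < 0.5

population = [[random.gauss(0, 1) for _ in range(100)] for _ in range(16)]

for generation in range(1000):
    a, b = random.sample(range(len(population)), 2)
    winner, loser = (a, b) if fight(population[a], population[b]) else (b, a)
    # The loser dies; the winner replicates with a small random mutation.
    population[loser] = mutate(population[winner])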

A crucial thing for making an NN learning algorithm successful is to keep the number of parameters that must be learned (e.g. the number of required neurons)
a) as small as possible: the time needed to learn can explode when there are too many neuronal weights that must be adjusted;
b) not too small: if the network has too few neurons, it might not be able to learn more complex patterns.
Therefore, an NN-AI should not be trained to learn anything for which reasonable solutions can be hard-coded. The hybrid idea pretty much comes down to only presenting the NN with those choices for which it is really needed.
Also, an NN-AI will probably fare better if it is not one amorphous all-purpose thing but consists of many highly specialised modules (like the human brain).
A "simple" hybrid approach could be to make an NN which decides which units to recruit, while the standard AI handles everything else. This module might then later be extended by other NN modules that accomplish different tasks.
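A rough sketch of such a recruitment-only module: the unit names and costs are only examples, the input features are invented, and the weights shown as random would have to be trained.

Code:

import numpy as np

# Sketch of a recruitment-only NN module: score each recruitable unit type
# from a few summary features and recruit the best one we can afford.
# Unit names, costs, features and the random weights are just examples.
UNITS = {"Spearman": 14, "Bowman": 14, "Cavalryman": 17, "Mage": 20}

rng = np.random.default_rng()
W = rng.normal(scale=0.1, size=(3, len(UNITS)))   # trained weights would go here

def recruit_choice(gold, enemy_melee_ratio, enemy_ranged_ratio, water_fraction):
    features = np.array([enemy_melee_ratio, enemy_ranged_ratio, water_fraction])
    scores = features @ W
    # Take the highest-scoring unit type that we can actually pay for.
    for idx in np.argsort(scores)[::-1]:
        name = list(UNITS)[idx]
        if UNITS[name] <= gold:
            return name
    return None   # too poor to recruit anything this turn

print(recruit_choice(16, enemy_melee_ratio=0.7, enemy_ranged_ratio=0.3, water_fraction=0.1))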
Angry Andersen wrote:
A crucial thing for making an NN learning algorithm successful is to keep the number of parameters that must be learned (e.g. the number of required neurons) [...] A "simple" hybrid approach could be to make an NN which decides which units to recruit, while the standard AI handles everything else.

Very good idea. I totally agree.