Something is UP.

 Art Developer
 Posts: 2220
 Joined: April 2nd, 2004, 10:19 pm
 Contact:
turin wrote:@Cod: Do you really think the RNG on your machine is broken? Can you do some kind of stat test to see if it is?
Nope, that's not how it works.
Ankka wrote:In Civilization 3 the RNG was made so that if you did everything in the same order again you got exactly the same results.
I wonder if it's the same in BfW: if you do no other battles before the mage one it will always be the same result..?
Actually, I think it is. I saved a turn at the end of the Clans campaign where I have three characters (Li'sar, a paladin, and the elf hero guy, forgot his name) that are able to kill him in one turn without taking any damage. I attacked him directly in the same manner several times, all resulting in him dying to elfwads's fireballs. However, if I first clicked a tile and attacked him after that, the elf would miss once, letting him survive and requiring my paladin to kill him.
Each attempt gave exactly the same result depending on what exactly I did that turn; even randomly clicking on a button or on a tile influences what happens. It was obvious that if you did EXACTLY the same things, exactly the same results were generated.
This was quite a few versions ago, so perhaps it has changed; but it appeared that EVERY possible thing you do influences the results: the order of movement, and whether you move somewhere directly or move halfway and continue after that.
Apparent weirdness in the RNG in Wesnoth might not actually be the RNG's fault; it might be the wrapper code:
For instance:
size_t choice = get_random() % num_choices;
[game_events.cpp, line 728 here]
I remembered something about doing rand() % foo being a bad idea, so I googled to see if I could find something about that, and found this:
http://www.robertjacobs.fsnet.co.uk/random.htm
About the possibly poor quality of the default RNG:
http://members.cox.net/srice1/random/crandom.html
About doing rand() % foo:
All random number generator programs have limitations and some are better than others. The version of rand() that comes with your C++ compiler will in all probability be a pretty simple generator and wouldn't be appropriate for scientific use, but it can still be used for some purposes. It may well be random enough for use in simple programs and games. If you want something a little better, you'll have to look beyond the C++ standard library, but for now we'll concern ourselves with the functions C++ provides.
The google search also found this:
There are several ways to generate numbers in a specified range, some better than others. One method that you'll probably come across is the following:
random_integer = (rand()%10);
This will generate numbers in the range 0 to 9. To use a nonzero baseline you can do the following:
random_integer = 20 + (rand()%10);
This will generate numbers in the range 20 to 29.
The above method of producing pseudo-random numbers is poor because it only uses the low-order bits of the number returned from rand(), and these may be less random for some pseudo-random number generator implementations.
For a better method you should prefer the following:
random_integer = 20 + int(10.0 * rand()/(RAND_MAX+1.0));
Don't be put off by the apparent complexity, it's remarkably simple to use and is superior to the previous version. If we now substitute this into our earlier program we have the following:
http://members.cox.net/srice1/random/crandom.html
Note: Do NOT use
y = rand() % M;
as this focuses on the lower bits of rand(). For linear congruential random number generators, which rand() often is, the lower bytes are much less random than the higher bytes. In fact the lowest bit cycles between 0 and 1. Thus rand() may cycle between even and odd (try it out). Note rand() does not have to be a linear congruential random number generator. It's perfectly permissible for it to be something better which does not have this problem.
Would it be hard to try another RNG for a release and see if the results change?
Hope springs eternal.
Wesnoth acronym guide.
I've done some tests, and the rand() in mingw seems to be pretty good, comparable to the MT (Mersenne Twister).
My test program does srand 100 times, and each time then picks 100000 random numbers from 0 to 31:
For srand:
It does ALG_RAND first, and stores the seed so we can use the exact same seed with the MT.
if (algorithm==ALG_RAND) {
seeds[srand_test]=time(NULL);
srand(seeds[srand_test]);
} else if (algorithm==ALG_MT) {
init_genrand(seeds[srand_test]);
}
For rand:
MAX_TEST_VALUE is 32. I'm doing % because that's what's in the Wesnoth source.
int res;
if (algorithm==ALG_RAND) {
res = rand() & 0x7FFFFFFF;
} else if (algorithm==ALG_MT) {
res = genrand_int31();
}
[...]
int num = res % MAX_TEST_VALUE;
It sums up the number of times each value (from 0 to 31) is picked, and then calculates the mean and standard deviation for the total. The idea is that the lower the standard deviation is, the closer it is to being 'perfect' statistically.
Neither one seems to have a problem with bit cycling (it flips half the time and half the time it doesn't, which is exactly what's supposed to happen).
Total results for algorithm rand:
bitcycling=4998900
notbitcycling=5001100
Mean: 312500.000000
Standard deviation: 6575.665366
Time taken: 468
Total results for algorithm MT:
bitcycling=4976000
notbitcycling=5024000
Mean: 312500.000000
Standard deviation: 4870.125768
Time taken: 657
The MT seems to be slower, but also consistently has a lower standard deviation.
Interestingly, if I seed with srand_test (the for loop counter for srand) instead of time(NULL), we get a MUCH lower standard deviation:
I'll post the code if you want.
Total results for algorithm rand:
bitcycling=4999212
notbitcycling=5000788
Mean: 312500.000000
Standard deviation: 580.153428
Time taken: 546
Total results for algorithm MT:
bitcycling=4998565
notbitcycling=5001435
Mean: 312500.000000
Standard deviation: 550.790682
Time taken: 594
Edit:
I've tried this now with MAX_TEST_VALUE set to 42, and also with it set to 12. I get similar results: The mean is different, but the same for rand and MT still, and the stddev is different too of course. The general pattern holds though (MT being slower, but having a lower standard deviation when seeding with time(NULL), and standard deviation being lower for both algorithms if I seed with srand_test instead of time(NULL)).
With MAX_TEST_VALUE set to 73 (A prime number I pulled out of thin air), and seeding with srand_test:
Interestingly, rand has the lower standard deviation there. It's also (slightly) lower with 12 and 42 when seeding with srand_test.
Total results for algorithm rand:
bitcycling=4999212
notbitcycling=5000788
Mean: 136986.301370
Standard deviation: 381.067451
Time taken: 485
Total results for algorithm MT:
bitcycling=4998565
notbitcycling=5001435
Mean: 136986.301370
Standard deviation: 444.792662
Time taken: 656
Edit again:
When seeding with time(NULL), the standard deviation varies quite a bit, and sometimes the MT has the lower one and sometimes rand does.
Keeping in mind that this only tests the RNG that's in mingw, I suppose this tells us:
1. Seeding with a number, perhaps a low number, apparently gives more evenly distributed results than seeding with time(NULL).
2. We haven't checked for runs of identical numbers, or checked the number of times that happens against the number of times it'd be predicted to happen on average.
3. Neither the MT nor rand is better at giving more evenly distributed results; MT wins with some, rand with others, and it depends on the seed too.
4. Bit-flipping every time doesn't happen with the rand algorithm in mingw or with the MT.
Is it? I didn't know that.
100 with time(NULL) seed:
Total results for algorithm rand:
bitcycling=4998000
notbitcycling=5002000
Mean: 100000.000000
Standard deviation: 2896.722286
Time taken: 484
Total results for algorithm MT:
bitcycling=4974900
notbitcycling=5025100
Mean: 100000.000000
Standard deviation: 3372.951230
Time taken: 609
100 with srand_test seed:
Total results for algorithm rand:
bitcycling=4999212
notbitcycling=5000788
Mean: 100000.000000
Standard deviation: 320.714359
Time taken: 484
Total results for algorithm MT:
bitcycling=4998565
notbitcycling=5001435
Mean: 100000.000000
Standard deviation: 305.735049
Time taken: 656
We've had this discussion before too.
SL wrote:Apparent weirdness in the RNG in Wesnoth might not actually be the RNG's fault; it might be the wrapper code:
For instance:
size_t choice = get_random() % num_choices;
[game_events.cpp, line 728 here]
I remembered something about doing rand() % foo was a bad idea
My contention has always been that rand()%foo works fine for our purposes. It would be a problem for a cryptographic program, but for us it will be fine. All tests have shown this too.
David
“At Gambling, the deadly sin is to mistake bad play for bad luck.” - Ian Fleming

 Posts: 99
 Joined: April 14th, 2005, 2:32 pm
 Location: Blacksburg, VA, USA
Clustering of random events is *normal*. If you haven't extensively studied statistics, or applied meticulous statistical tests, streaks almost always look more significant than they are.
If you flip a fair coin ten times in a row, how likely is it that you will get at least one streak of length 4 or longer? How likely is it that you will get 7+ of the same result out of the ten flips? If you repeat the ten-flip test 200 times, how likely is it that you will get at least one ten-of-a-kind?
Try answering those questions based on your intuitive feel for how unlikely they are, and then work out the actual probabilities, and you might get an idea how unreliable intuition about probability is, and why this thread comes up so often and the same people always say "show me the proof".
For the record, I don't think unlikely results happen more often than they ought to, but there are some situations in Wesnoth where an only slightly unlikely result has a disproportionate effect on the situation in the game (e.g. killing a unit in one swing and opening a path for another unit to reach your wounded). This is especially likely to happen in the early scenarios of some campaigns where your all-level-1 army faces off against numerous level 2 units; even slight unluckiness can result in a ruinous casualty rate because there is such a huge power and survivability gap between levels 1 and 2, and it seems a *lot* of campaigns have orc or troll enemies that do high per-hit damage with a small number of swings, leading to much more variability of actual damage from expected damage than you'd expect from something like an Elvish Fighter or Archer. If one side completely misses one enemy and kills one enemy, and the other moderately wounds two enemies, it's obvious which side has the advantage. Sustained damage forces units to retreat. Sudden damage kills them before they *can* retreat.
Being able to predict how fast your units will take damage makes them much easier to keep alive, and you can't earn experience when you're dead. Losing a 30 exp unit in one turn because two trolls each hit with both swings is what makes people come here and complain about the RNG. All the work they put into channeling experience to that unit, positioning it so it is on favorable terrain and could only be reached from two directions, etc... gone without giving you a chance to do anything about it. It's very frustrating even if you *do* understand the probabilities involved.
Here's the thing, Doc, it's normal to get streaks. You'll... oh, never mind, someone else just explained this.
I've compiled the test program with the MS Visual C++ Toolkit 2003, which is the command-line-only version of the VS.NET 2003 C++ compiler. I'm getting similar results to what I posted before.
Actually, in fact, the standard deviation with the srand_test seed is identical to the one mingw gives. Maybe they use the same algorithm.
I haven't tried vcpp6.
The usual problem with C library RNGs is not long-term averages, it's short-term correlations. I'd recommend the survey at http://choices.cs.uiuc.edu/~akapadia/pr ... ject2.html for a quick orientation.
The Mersenne Twister has had its correlations weeded out over 623 consecutive iterations. It pretty much requires the C/C++ right-shift operator and unsigned long types, however.
The minimum standard linear congruential generators (MINSTD, MINSTD2, MINSTD3, see linked overview for definitions) all are fairly decent.
Doc: If you reload from the save game, and he dies, and you reload again, and he dies, etc., that would make sense if the state of the RNG was being saved in the saved game. Of course I have no idea if it is or not... IIRC someone said earlier that it was, and someone else said it wasn't, so, damned if I know.
Forbidden.
You don't have permission to access /~akapadia/project2/project2.html on this server.
Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
 Doc Paterson
 Drake Cartographer
 Posts: 1973
 Joined: February 21st, 2005, 9:37 pm
 Location: Kazakh
 Contact:
I'm convinced.
zaimoni wrote:The usual problem with C library RNGs is not long-term averages, it's short-term correlations.
I will not tell you my corner / where threads don't get locked because of mostly no reason /
because I don't want your hostile disease / to spread all over the world.
I prefer that corner to remain hidden /
without your noses. Nosebane, Sorcerer Supreme
I'd just like to chime in on this a bit. I have a bit of a stats background, but I'm by no means proficient in it, and have no proof that there is anything wrong with the RNG. I also know that randomness is just that, and it's difficult to predict probability, especially with what is often a low sample size. That's why I've openly refrained from making comments about it, because #1 I don't know if what I'm seeing is actual randomness, and #2 I don't know how to prove that something is wrong. (Being a non-technical guy, the best I could do would be to create a spreadsheet, play a bunch of games, note down every case of a certain unit attacking another in the same conditions, yada yada, and when I have 500 events recorded then calculate... that's the only way I could do it, and Wesnoth is supposed to be my "recreation time".)
Chris Byler wrote:Clustering of random events is *normal*. If you haven't extensively studied statistics, or applied meticulous statistical tests, streaks almost always look more significant than they are.
If you flip a fair coin ten times in a row, how likely is it that you will get at least one streak of length 4 or longer? How likely is it that you will get 7+ of the same result out of the ten flips? If you repeat the ten-flip test 200 times, how likely is it that you will get at least one ten-of-a-kind?
Try answering those questions based on your intuitive feel for how unlikely they are, and then work out the actual probabilities, and you might get an idea how unreliable intuition about probability is, and why this thread comes up so often and the same people always say "show me the proof".
For the record, I don't think unlikely results happen more often than they ought to, but there are some situations in Wesnoth where an only slightly unlikely result has a disproportionate effect on the situation in the game (e.g. killing a unit in one swing and opening a path for another unit to reach your wounded). This is especially likely to happen in the early scenarios of some campaigns where your all-level-1 army faces off against numerous level 2 units; even slight unluckiness can result in a ruinous casualty rate because there is such a huge power and survivability gap between levels 1 and 2, and it seems a *lot* of campaigns have orc or troll enemies that do high per-hit damage with a small number of swings, leading to much more variability of actual damage from expected damage than you'd expect from something like an Elvish Fighter or Archer. If one side completely misses one enemy and kills one enemy, and the other moderately wounds two enemies, it's obvious which side has the advantage. Sustained damage forces units to retreat. Sudden damage kills them before they *can* retreat.
Being able to predict how fast your units will take damage makes them much easier to keep alive, and you can't earn experience when you're dead. Losing a 30 exp unit in one turn because two trolls each hit with both swings is what makes people come here and complain about the RNG. All the work they put into channeling experience to that unit, positioning it so it is on favorable terrain and could only be reached from two directions, etc... gone without giving you a chance to do anything about it. It's very frustrating even if you *do* understand the probabilities involved.
However, over the long term (i.e. watching a lot of games) I've seen some very peculiar things happen. One game in particular between myself and Elvish Pillager, when he used only assassins and I used only grunts, showed this to me. Essentially, at one point in the game my expected damage (EV) was double the damage I had actually dealt... I think it was 78 to 150. For the rest of the game I was 80-100 damage points below my EV, if not more. Statistically it can happen, but I've seen this situation repeated a little too often for what I would consider random. Maybe that's a bit of selection bias (only remembering the times that it does happen, and not remembering the amount of times it doesn't); I can't be sure... but it is something that has been bothering me for some time. I, as others, can voice my suspicions, but even I take mine with a grain of salt.
Maybe what I'm trying to say in my last post is that there needs to be a way to prove that there is a problem, between people's anecdotal evidence and the vagaries of stats... I wouldn't have a clue how to do it other than my old-fashioned way... is Ott's little program effective in doing this? (I'm not sure.)
Whoops... the URL gives an HTTP 403. The link has not been fixed at http://choices.cs.uiuc.edu/~akapadia/pubs.html either.
I have a local PDF archive from August 28, 2004. Not sure if it's a copyright violation to upload the PDF, though.
Definitions for the MINSTDs as a linear congruential generator:
* modulus m: 2147483647 (2^31 - 1)
* additive offset b: 0
* multiplicative constant a:
** MINSTD: 16807
** MINSTD2: 48271
** MINSTD3: 69621
All of these are mostly decorrelated for four consecutive iterations, by spectral test (for linear congruential generators only).