Luck in multiplayer
Moderators: Forum Moderators, Developers
 pyrophorus
 Posts: 513
 Joined: December 1st, 2010, 12:54 pm
Re: Luck in multiplayer
iceiceice wrote: It's obvious to me that if the seed is truly random, then the next call of "random_next()" will be truly random with negligible error.
Maybe it's obvious to you, but not to me. As I wrote earlier:
pyrophorus wrote: Hi !
Anyway, I have a question about all the mathematical arguments which were given. They all rely on the implicit assumption that the RNG gives something very close to a normal distribution. This is certainly the case on long sequences using the same seed. But what if the RNG is reseeded very often? IMO, it's not obvious the results still follow a normal distribution.
As one can see, the tests are applied to sequences obtained from the same seed, and long enough to get a rather strong probability. If you reseed a generator every ten trials, you can't deduce anything a priori about the result from the PRNG's properties. While it is impossible to give a mathematical
proof that a generator is indeed a random bit generator, the tests described here help detect
certain kinds of weaknesses the generator may have. This is accomplished by taking a sample
output sequence of the generator and subjecting it to various statistical tests. Each statistical
test determines whether the sequence possesses a certain attribute that a truly random
sequence would be likely to exhibit; the conclusion of each test is not definite, but rather
probabilistic.
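To give a flavor of what such statistical tests compute, here is a sketch (in Python, for illustration; the function name is mine, not the suite's reference code) of the simplest test in the NIST suite, the monobit frequency test:

```python
import math

def monobit_pvalue(bits):
    """NIST-style frequency (monobit) test: for a truly random bit
    sequence the statistic is approximately standard normal."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1 for each 1, -1 for each 0
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))  # p-value; very small => suspicious

# A constant sequence fails badly; a perfectly alternating one passes
# this particular test (it would fail others, e.g. the runs test).
assert monobit_pvalue([1] * 1000) < 0.01
assert monobit_pvalue([0, 1] * 500) > 0.99
```

Note that the test, as the quoted passage says, is probabilistic: it can only flag sequences as suspicious, never prove randomness.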
That said, my point is not to say the random system in Wesnoth is not random enough, but only that it's not enough to choose a good PRNG: one must be aware of the way it is used. Reseeding too often, or using it on various (uncorrelated) events, can give bad results even if the PRNG is a good one.
Friendly,
Campaign Return to Noelren *** HowTos: WML filtering, WML variables
Please help to update the Guide to UMC Campaigns
Re: Luck in multiplayer
pyrophorus: When I wrote this, it was based on specific source code review of our current random number generator. The state of our generator is a single integer, and it is fully reset when we reseed. In general that means that if we seed it with a fully random seed, the next draw will be fully random.
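A quick sketch of the point about state, in Python. This models a rand()-style LCG (the constants are the classic 1103515245/12345 pair; this is a hypothetical stand-in, not Wesnoth's actual simple_rng code):

```python
MASK32 = 0xFFFFFFFF

class SimpleRng:
    """Model of a rand()-style linear congruential generator."""
    def __init__(self, seed):
        self.state = seed & MASK32   # the ENTIRE state: one 32-bit integer

    def seed(self, s):
        self.state = s & MASK32      # reseeding erases all history

    def next(self):
        self.state = (self.state * 1103515245 + 12345) & MASK32  # wraps mod 2^32
        return (self.state >> 16) & 0x7FFF   # 15-bit output, 0..32767

a = SimpleRng(42)
for _ in range(1000):
    a.next()                   # run it for a while...
a.seed(42)                     # ...then reseed: all that history is gone
b = SimpleRng(42)
assert a.next() == b.next()    # identical state => identical next draw
```

Because the whole state is one integer, the distribution of the next draw is determined entirely by the distribution of the seed, which is the claim above.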
Here's a pithy quote from an LCG manufacturer that expresses a similar principle: http://wellington.pm.org/archive/200704 ... ss/#slide8
"We guarantee that each number is random individually, but we don't guarantee that more than one of them is random." (early LCG vendor)
The issue you are speaking of may arise with more sophisticated RNG's that may have a very large state; in that case simple reseeding may not fully randomize the state, and if you want this stronger guarantee you may have to use a series of seeds.
However, if you would like to follow the observation in the source code, it is relatively simple to do so.
Like the pithy quote says, if you take two samples of rand, all bets are off, and I hope whatever code it is never touches your bank account numbers.
But the first sample is actually pretty damn good. And if you reseeded with truly random seeds between every sample, then in fact your sequence would be truly random. So frequent reseeding is in this case good, not bad.
Your advice about being knowledgeable of what is needed to properly seed a PRNG is quite good.
Edit: Actually, it is completely trivial that 1103515245 has a multiplicative inverse mod 2^32. This is because they are relatively prime: 2^32 is a power of two... and 1103515245 is odd. So now we have two proofs
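The claim is easy to check mechanically as well. A Python sketch (using the classic rand() multiplier; `pow(a, -1, M)` computes the modular inverse and raises an error if none exists):

```python
a, b, M = 1103515245, 12345, 2**32

a_inv = pow(a, -1, M)          # exists because gcd(a, M) = 1 (a is odd)
assert (a * a_inv) % M == 1

# Hence F(x) = (a*x + b) mod M is a bijection on 32-bit integers:
# its inverse is F_inv(y) = a_inv * (y - b) mod M.
x = 0xDEADBEEF
y = (a * x + b) % M
assert (a_inv * (y - b)) % M == x
```

Since F has an inverse, it is a permutation of the 32-bit integers, which is the property the reseeding argument relies on.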
Edit: Actually Pyrophorus, I take back what I said, and thank you for asking this question a second time.
What I wrote would be true if we were actually using the code in this man page.
In fact, wesnoth uses a "tweaked" version of rand. For instance, 3 years ago someone decided that our seed should not be fully random, instead the leading bit should always be 0. (github: https://github.com/wesnoth/wesnoth/comm ... f60f5a5fa0. see the lines which introduce & 0x7FFFFFFF)
I would like to tell you that that would not change the proof significantly, it's only one bit less entropy... or something.
But in fact, that argument doesn't hold water. Flipping one bit in the outcome of a random generator could be very significant, leading to difficult-to-detect bias in the most significant bits of attack values, bias in trait values ... and the mathematics to figure out what the actual consequences would be seems involved. And since we are using rand and not a high quality generator, I wouldn't make any assumptions.
So in fact, no, I think you are right: in the current implementation, frequent reseeding of the wesnoth RNG could cause significant problems that we have no way to evaluate from first principles. The only way to figure it out really would be to do a bunch of numerical experiments. So our current RNG may actually be, in some respects, worse than rand().
This is yet another argument for why we should be using a library.
Re: Luck in multiplayer
iceiceice wrote: In fact, wesnoth uses a "tweaked" version of rand. For instance, 3 years ago someone decided that our seed should not be fully random, instead the leading bit should always be 0. (github: https://github.com/wesnoth/wesnoth/comm ... f60f5a5fa0. see the lines which introduce & 0x7FFFFFFF)
I don't see how the & 0x7FFFFFFF could have any effect. Both simple_rng::get_next_random and rand() return a value not higher than 32767, so the first bit is already null. (rand() returns nothing higher than RAND_MAX, but that is usually defined as 32767.)
Scenario with Robots SP scenario (1.11/1.12), allows you to build your units with components, PYR No preparation turn 1.12 MP mod that allows you to select your units immediately after the game begins.
Re: Luck in multiplayer
gfgtdf: Let F denote the function next = a * next + b, where a and b are those constants.
The good reseeding property (of rand) exists because F is a permutation on the space of all 32 bit integers, that's what the argument was based on.
However, if we don't give it something that at least resembles the uniform distribution as input, then it might not be a permutation on the input space anymore, so the next state has some bizarre distribution.
For concreteness: I know that F( {0,1}^32 ) = {0,1}^32. But what is F( 0 {0,1}^31 )? I have no idea, (edit: but it's definitely not 0 {0,1}^31). It's some ridiculously large set that is potentially hard to describe.
What if those strings all have, say, the 15th bit stuck? Then that bit will be broken in the output. What if they all have the lowest order bit stuck? What if they introduce some kind of correlation?
In general when you take one bit of entropy away from a distribution of strings in a potentially arbitrary way, it is as if you allowed an adversary to choose a bit to flip, after the string is sampled. If that adversary is a jerk, he could ruin the statistics, in obvious or subtle ways.
Edit: Lest the point be lost: the point is not that I think an adversary is ruining the game, the point is that I can no longer reason meaningfully from first principles to see that this is okay, and if I want to know that I would have to do numerical experiments. Using something like this without doing tests is basically the same as crossing your fingers and hoping for the best. So it's a much worse standard than if we used an established library, and were careful not to screw around with the seeds.
Edit:
gfgtdf: To specifically answer your question: Yes, the top bit of the seed could potentially affect the next random output returned, because when we multiply by that number "a", it will overflow and wrap around to the other bits that we return. So in more concrete terms, the distribution we would actually sample would be something like: half the time uniform + 2^31 * a, half the time uniform. If there's overflow in that addition I think even the low order bits could change... anyways, even if not in the first random draw, there would definitely be some kind of new deviation from near-uniform introduced by the change, perhaps in the second, third or fourth draw, and all subsequent. Edit: I'm not sure if what I've written here is completely sensible / accurate; I'm not going to try anymore to figure out what the distribution is when you fix the seed bit. But anyways it should be clear that if I have a seed s, and I sample a random number r with it as the first draw, then if I go back and twiddle the top bit, it will change r by something like adding 2^31 * 1103515245. According to my compiler that % 32768 is 30457. But when you add it, it will occur before the % 32768, and the overflow will be occurring while it is an unsigned long, so the effect won't exactly be a permutation of the possible values of r, it will be some other weird function.
Re: Luck in multiplayer
Sure, changing the top bit could have an effect. But just show me one line where we have a & 0x7fffffff that actually changes the value it is applied to.
Re: Luck in multiplayer
How about this line?
https://github.com/wesnoth/wesnoth/blob ... e.cpp#L951
Edit: Link to an actual commit, for posterity: https://github.com/wesnoth/wesnoth/blob ... e.cpp#L951
Re: Luck in multiplayer
iceiceice wrote: Okay, maybe rand() is like that, but the man page specifically points out random(3) as a portable alternative.
I am sure that "rand()" never returns a value where the first bit is set, because rand() always returns a positive value (the first bit is the sign bit here because rand doesn't return an unsigned), so & 0x7FFFFFFF is a nop.
Maybe on some possibly existing systems where ints are 64-bit and RAND_MAX % 0x7FFFFFFF != 0 it could have an effect :s .
But since the old code already relied on ints being 32 bits, that was no problem.
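The no-op claim is easy to check on a model. Assuming rand()-style outputs in 0..32767 (RAND_MAX = 32767, as discussed above), the mask clears only bit 31, which is never set in such a value:

```python
# If rand() only ever returns 0..32767, & 0x7FFFFFFF changes nothing:
RAND_MAX = 32767
assert all((v & 0x7FFFFFFF) == v for v in range(RAND_MAX + 1))

# By contrast, on a 32-bit value with bit 31 set, the mask does change it:
assert (0x80000001 & 0x7FFFFFFF) == 0x00000001
```

So whether the mask matters hinges entirely on whether a value with bit 31 set can ever reach it, which is exactly what the posts above are arguing about.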
Re: Luck in multiplayer
Ok, but what that means then is that rand is not suitable to generate seeds for rand. You can't expect an RNG to work unless you seed it with something random (like /dev/random) or pseudorandom (very difficult to distinguish from /dev/random). (Edit: If you use time(NULL), it's probably fine... if you are starting at a random time.) If one of the seed bits is frozen, it's not random, and the consequences are typically unpredictable. With any RNG.
Edit: I knew about the & 0x7FFF... thing for a while now, but at the time I guess I wasn't thinking about it. I didn't know our algorithm was actually rand until this thread. I don't know what the specs are for rand, maybe the designers actually intended that it would only have a 15 bit seed, idk. But if they didn't say that then I would say we shouldn't do it. It's quite possible that this is actually fine, and there are more sophisticated arguments for this kind of thing, I am no expert. But at least for rand it seems to make it much harder to reason about if you don't fully seed it.
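For what it's worth, the "seed it with something random" advice looks like this in practice. A Python sketch (os.urandom reads from the OS entropy pool, the rough analogue of /dev/random):

```python
import os
import random

# Draw the seed from the OS entropy source rather than from another
# PRNG or a low-entropy clock.
seed = int.from_bytes(os.urandom(4), "big")   # 32 random bits from the OS
rng = random.Random(seed)
roll = rng.randrange(100)                     # e.g. a 0..99 to-hit roll
assert 0 <= roll < 100
```

The point is only about where the seed comes from; the generator itself is unchanged.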
 pyrophorus
 Posts: 513
 Joined: December 1st, 2010, 12:54 pm
Re: Luck in multiplayer
iceiceice wrote: pyrophorus: When I wrote this, it was based on specific source code review of our current random number generator. The state of our generator is a single integer, and it is fully reset when we reseed. In general that means that if we seed it with a fully random seed, the next draw will be fully random.
Here's a pithy quote from an LCG manufacturer that expresses a similar principle: http://wellington.pm.org/archive/200704 ... ss/#slide8
"We guarantee that each number is random individually, but we don't guarantee that more than one of them is random." (early LCG vendor)
The problem is what you call random. IMO, your quote makes no sense, because one number individually cannot be said to be random. Only sequences of numbers (or bits) can. Take a look at the PRNG test suite designed by NIST and you shall see they don't search for any mathematical proof, but compute various tests on sequences and see how much they deviate from the results given by a normal distribution. And because of the law of large numbers, this comparison is meaningless on short sequences.
Then, the example you give can be flawed in the same way: if you use two instances of the same linear PRNG, seeding the second one with the result of the first, say, every ten trials, then if Sn is the seed, you'll obtain this kind of thing from the second:
Sn Sn+1 Sn+2 Sn+3 Sn+4 Sn+5 Sn+6 Sn+7 Sn+8 Sn+9 (reseeding here with Sn+1 from the first PRNG) Sn+2 Sn+3 Sn+4 Sn+5 Sn+6 Sn+7 Sn+8 Sn+9 Sn+10 ... and so on.
which is not a really good random sequence, because it shows a repeating sliding pattern.
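This overlap is easy to reproduce under a simplifying assumption: that the generator's output is its entire state (true of a textbook LCG, though real rand() returns only 15 of its 32 state bits). A hypothetical Python sketch:

```python
# Seed a copy of an LCG with an output of the same LCG: the two walk the
# same orbit, so the "reseeded" stream just replays a shifted window of
# the first stream -- pyrophorus's sliding pattern.
M = 2**32
def step(x):
    return (x * 1103515245 + 12345) % M   # classic rand()-style constants

g1 = 42
stream1 = []
for _ in range(20):
    g1 = step(g1)
    stream1.append(g1)

g2 = stream1[0]          # "reseed" a second generator from the first
stream2 = []
for _ in range(10):
    g2 = step(g2)
    stream2.append(g2)

assert stream2 == stream1[1:11]   # sliding overlap: no new randomness
```

With a real rand(), the output is only part of the state, so the effect is less blatant, but the general warning about cascading identical generators stands.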
And again, I'm not criticising the Wesnoth random system (I didn't look at the code, and it seems to work rather well), but the idea one could blindly assume it has exactly the same behavior as a normal distribution and use probability results to describe it. There's absolutely no proof of it, even asymptotically.
Friendly,
Re: Luck in multiplayer
pyrophorus wrote: The problem is what you call random. IMO, your quote makes no sense, because one number individually cannot be said to be random. Only sequences of numbers (or bits) can. ... And because of the law of large numbers, this comparison is meaningless on short sequences.
Hmm, well I would prefer not to discuss interpretations of probability theory, but in short: when I speak about a "distribution", I have already introduced an abstraction, a limiting object, which indeed only makes sense in terms of a large, hypothetical numerical experiment, using the law of large numbers as you point out. This is the common meaning of the term "probability distribution". So when a person discusses whether, say, a coin or a six-sided die is fair, they might phrase it "this number is random" (or not), but of course it is a turn of phrase, a shorthand, to discuss a sequence of hypothetical samples drawn repeatedly. The tests of NIST are irrelevant to a question like "is this coin fair". And obviously "is this coin fair" / "is a particular bit random" is a meaningful question.
The quote only makes sense in the context of a random seed. If you don't believe it is meaningful to ask if a seed is random...
pyrophorus wrote: Take a look at the PRNG test suite designed by NIST and you shall see they don't search for any mathematical proof, but compute various tests on sequences and see how much they deviate from the results given by a normal distribution.
The tests at NIST are only one tiny piece of the puzzle.
pyrophorus wrote: ... but the idea one could blindly assume it has exactly the same behavior as a normal distribution and use probability results to describe it. There's absolutely no proof of it, even asymptotically.
All science is based on approximations. Random number generation is based on assumptions like "this seed is (approximately) uniformly distributed". It's like anything else... frictionless springs may not be real, but the cars seem to work anyways...
Re: Luck in multiplayer
It seems to me that one consequence of the way the random number generator is seeded for each attack is that there are certain sequences of hits and misses that can never occur.
For (a slightly contrived) example, suppose a cuttle fish were to attack another cuttle fish with a melee attack. Assume both cuttle fish are at full hit points, so each has 10 strikes. Then there would be 20 strikes in total, each of which has 2 possible outcomes (hit or miss), so there would be 2^20 possible hit/miss sequences. But there are only 2^15 possible seeds for the random number generator, so some (actually most) of these sequences could never be generated.
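This counting argument can be checked empirically on a rand()-style model (the constants, the 50% hit chance, and the output function here are illustrative assumptions, not Wesnoth's actual code):

```python
# 20 hit/miss flags per attack sequence give 2^20 = 1048576 possibilities,
# but with only 2^15 seeds at most 2^15 = 32768 of them can ever occur.
M = 2**32

def sequence(seed, n=20):
    """Pack the n hit/miss outcomes for a given seed into one integer."""
    state, bits = seed, 0
    for _ in range(n):
        state = (state * 1103515245 + 12345) % M
        draw = (state >> 16) & 0x7FFF        # 15-bit output, 0..32767
        bits = (bits << 1) | (draw < 16384)  # hit iff draw below 50% threshold
    return bits

distinct = {sequence(s) for s in range(2**15)}
assert len(distinct) <= 2**15   # pigeonhole: most of the 2^20 never occur
print(len(distinct), "of", 2**20, "possible sequences reachable")
```

Whatever the generator does, enumerating all 2^15 seeds can produce at most 2^15 distinct sequences, so at least 31/32 of the 2^20 hit/miss patterns are unreachable.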

 Posts: 8
 Joined: May 5th, 2014, 7:16 pm
Re: Luck in multiplayer
So I understand that there's a steady stream of grumbling about losses due to bad luck in Wesnoth. Has anybody ever tried to write a tool for analysing how lucky each side actually was during the course of the game? I know there's the simple information about actual vs. expected damage, but that is just a fairly primitive sanity check on the PRNG. As one of the earlier posts in this thread mentions, sometimes an unlikely set of hits (or misses) occurs at a point where it has a real impact on the outcome; sometimes it's just an inconvenience that doesn't really matter.
What I'm wondering is, has anyone written a tool to try to distinguish/measure this kind of difference? I think this would be a fun thing to do, and I'm willing to take a stab at it, not least because doing it successfully would mean inventing ways to look at the game that are also relevant to writing/improving an AI. After all, if you can accurately conclude that the (non)death of that orc in the central village was super unlucky because otherwise your side would have won, then you have identified a critical strategic element that your AI should also be aware of.
It would also be kind of cool because it would provide some automated feedback to people posting their replays and asking for analysis/advice. Imagine you could run this tool and it said, yes you were somewhat lucky on turns 6 and 14 but really unlucky on turn 12.
I posted here because I get the impression some of you have been around for a while and might be able to answer the question (has this been tried? i.e. is there some prior work I can piggyback off?), and you might also have suggestions for how such a tool might work, and what you would like it to do. I'm under no illusions that this would be quick, easy, or satisfy everyone, but I still think it would be cool.
(At the risk of being excessively long winded, this same problem exists with similar games, e.g. Risk. Winning a battle for a country that makes/breaks a continent is more significant than usual, so it would be interesting to have a tool to identify (even after the fact) such crucial moments and how luck impacted them.)
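As a starting point for such a tool, here is a sketch of the "how unlikely was that" computation for a single exchange (Python; the function name and the common-p simplification are mine, and a real tool would use the Poisson-binomial distribution over the true per-strike chances):

```python
from math import comb

def tail_prob(hit_chances, observed_hits):
    """P(at least `observed_hits` hits) if every strike were fair.
    Simplification: use the average chance p and a plain binomial;
    the exact distribution for unequal chances is Poisson-binomial."""
    n = len(hit_chances)
    p = sum(hit_chances) / n
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(observed_hits, n + 1))

# Six strikes at 60% to hit, all six landed: how lucky was the attacker?
luck = tail_prob([0.6] * 6, 6)
assert abs(luck - 0.6**6) < 1e-12   # = 0.046656, i.e. under 5%
```

Aggregating such probabilities per turn would give exactly the "you were somewhat lucky on turns 6 and 14" report described above; weighting them by strategic importance is the genuinely hard part.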
Re: Luck in multiplayer
gnombat:
That's a good observation, but I actually wouldn't say that by itself it indicates a problem. It's usually the case that when you use a pseudorandom generator, you are stretching a short seed to a long sequence, and that means that only a tiny fraction of the sequences are "actually" possible.
It also depends on what your goals are. If you just want the *results* of a single attack to be very nearly random, meaning like, the hp of the participants, that should be achieved even with very little randomness, and even with a completely braindead random generator. It's true that if you made a histogram of the precise order of hits and misses, and took millions and millions of samples, it wouldn't quite be perfect, for the reasons you point out. But for the actual game I would say that's pretty much irrelevant.
Here's a different way to look at it. Suppose we want to use 15 bits to drive the cuttlefish attack with 2^20 possible outcomes, as in your example. So only 1/32 of the result sequences will actually be possible. Suppose we allow a cryptographic adversary to select the most screwed up pseudorandom generator he can: he can arbitrarily select 2^15 strings of length 20 which will correspond to each possible seed. The only rule is he can't pick the same string twice, since any real generator would guarantee at least that. How much bias can he create? Actually, using an estimate based on the normal approximation, I think it's very little, only about 2 hits out of 20, which might be surprising. (In fact I would bet that you could successfully derandomize a single long attack sequence even with something really poor, like a pairwise independent hash function or something like that, much worse than our generator.) Long attack patterns aren't harder to fool than short ones; in a sense they are easier, because the outcomes are more tightly concentrated about the mean anyways, so they are fairly predictable even with true randomness. I think the RNG's task is made much harder by a series of many "short" attacks with frequent user interaction in between, than it is by a few long cuttlefish attacks.
(FWIW, you can probably make less contrived examples using berserk units that have a similar number of hits.)
forceOfHabit:
I think it would be extremely difficult to quantify how much luck helped someone in a game, perhaps as hard as writing a very good AI imo.
You could perhaps succeed for something other than default wesnoth, by like, aggregating the total death risk that someone takes with their leader. For an RPG that would tell you I guess the total death risk their strategy incurred. Strictly speaking I guess you could do that in default wesnoth too but it would largely miss the point I think, since like you say the game is really lost when that orc in the central village died.
 pyrophorus
 Posts: 513
 Joined: December 1st, 2010, 12:54 pm
Re: Luck in multiplayer
iceiceice wrote: ... The tests of NIST are irrelevant to a question like "is this coin fair". And obviously "is this coin fair" / "is a particular bit random" is a meaningful question.
Curiously, tossing a coin is exactly the abstract model this test suite uses. And of course yes, applying the test suite to a large set of coin tosses can tell you if the coin is fair or not.
iceiceice wrote: The quote only makes sense in the context of a random seed. If you don't believe it is meaningful to ask if a seed is random...
It seems you don't see it's only escaping the problem, asserting a seed is random without any proof of it. If you obtain the seed from a PRNG, its randomness is questionable as long as you don't explain how you seed the seed generator itself, and so on. Cascading generators will not automatically give you more randomness. It's just incantation.
iceiceice wrote: The tests at NIST are only one tiny piece of the puzzle.
It's easy to say so. Please elaborate...
iceiceice wrote: All science is based on approximations. Random number generation is based on assumptions like "this seed is (approximately) uniformly distributed". It's like anything else... frictionless springs may not be real, but the cars seem to work anyways...
Science (or more accurately, physics at least) is not so simple: it uses various models, more and more sophisticated, and uses the simplest one giving results matching experience: you don't need relativistic mechanics to describe car movements, because Newtonian mechanics is accurate enough and simpler to use. But I doubt car designers use a frictionless spring model (more elaborate models exist, of course, and are science). The NIST test suite follows exactly the same approach: randomness is actually defined by your requirements: what is suitable for some applications is not for others. You can be satisfied with a system in a particular context. This doesn't mean it's the best possible randomness or can be used as a definition of randomness.
Friendly,
Re: Luck in multiplayer
iceiceice wrote: gnombat: That's a good observation, but I actually wouldn't say that by itself it indicates a problem. It's usually the case that when you use a pseudorandom generator, you are stretching a short seed to a long sequence, and that means that only a tiny fraction of the sequences are "actually" possible.
True, any pseudorandom number generator is going to have only a finite number of possible sequences; it just seems that the current Wesnoth implementation is unnecessarily reducing the amount of entropy in the generator (and hence the number of possible sequences) by reseeding it. The generator itself has 32 bits of state, so it could potentially generate 2^32 possible sequences, but every time it gets reseeded this drops to 2^15.