D&D and probability
Nov. 23rd, 2004 11:53 pm

The opponents had proposed entering the arena with no preparatory spells cast and then rolling dice each round to see when to start the attack. (Until then we'd be able to take defensive measures.) This led to a discussion of how many rounds of prep to expect for any given die-rolling proposal. (That's "expect" as in "expected value", not "expect" as in "really think this will happen every time", of course.)
This is a simple probability problem, if you're up on your probability theory. The three people who explicitly tried to answer the question used three different approaches.
I, being rather distant in time from formal math of any sort, took the intuitive approach: for N equally-probable outcomes, the expected value is N/2. (I'll justify this error in comments if asked. There was actual reasoning behind it, not just a guess.) Ralph wrote a perl script to run simulations, and came up with the same results as Kevin, who applied formal theory. Sure, it's obvious that the answer is log(0.5)/log(1-1/N), right? Um, yeah. Sure. Whatever you say. If he didn't have the perl script backing him up, I'd be more argumentative.
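Ralph's perl script isn't reproduced here, but a minimal sketch of that sort of simulation (the 1-in-20 chance and the trial count are just illustrative choices) would be something like:

#!/usr/bin/perl
# Roll until a 1-in-$n event happens, many times over, and report the mean
# and median number of rolls next to the closed-form figures discussed here.
use strict;
use warnings;

my $n      = 20;        # e.g. a match on two d20s: 1-in-20 chance per round
my $trials = 100_000;

my @counts;
for (1 .. $trials) {
    my $rolls = 1;
    $rolls++ while int(rand($n)) != 0;   # keep rolling until the event happens
    push @counts, $rolls;
}

my $total = 0;
$total += $_ for @counts;
my @sorted = sort { $a <=> $b } @counts;

printf "mean:   %.2f rolls (intuition N/2 would say %.1f; it's actually N = %d)\n",
    $total / $trials, $n / 2, $n;
printf "median: %d rolls (log(0.5)/log(1-1/N) = %.2f)\n",
    $sorted[ int(@sorted / 2) ], log(0.5) / log(1 - 1 / $n);

With the chance set to 1 in 10 instead, the formula works out to about 6.6, which is the figure that comes up below.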
I think this is an illustration of why formal math and I do not get along. I started college life as a math major, but it didn't stick. I'm just fine if I can relate a theory to actual facts I can really observe and understand, but if it's all "sure, whatever you say" to me, then I have trouble conjuring up the correct invocations from memory on demand, and that's what the exams were all about. In a real-world context I do not yet understand why the probability of a 1-in-10 event is 6.6 rather than 5.5. [Edited to add: Sloppy writing -- I meant the expected number of trials before a 1-in-10 event occurs, not the probability.] And if I can't understand it, there's no reasoning involved -- just number-crunching.
This is kind of bizarre when you think about some of the theoretical contexts I'm perfectly comfortable working in. But there you have it: why advanced calculus and everything that followed seemed completely foreign to me.
Oh, as for what actually happened? We wanted to get about 10-11 rounds of prep, so we opted for a match on 2 d20s. The expected value from that would have been 12-13 rounds, which was good enough. It actually went 48. So much for theory. :-)
(no subject)
Date: 2004-11-24 05:24 am (UTC)

0.1(1) + 0.1(2) + 0.1(3) + ... + 0.1(10)
which is 5.5.
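That sum treats the match as guaranteed to land somewhere in the first ten rolls, with each roll used at most once. If misses can repeat, the series runs on past ten with geometric weights; a quick sketch of the two sums:

# 0.1(1) + ... + 0.1(10), versus the sum over all k of k * 0.1 * 0.9^(k-1)
my $p = 0.1;
my ($capped, $open) = (0, 0);
$capped += $p * $_                      for 1 .. 10;     # 5.5
$open   += $_ * $p * (1 - $p)**($_ - 1) for 1 .. 1000;   # converges to 1/p = 10
printf "capped at ten rolls: %.2f   misses can repeat: %.2f\n", $capped, $open;

(The 6.6 figure is a different animal again: it's the point where the cumulative chance of a match passes 50%, as worked out further down.)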
(no subject)
Date: 2004-11-24 07:11 am (UTC)

(1-p)^i = 1/2
i ln (1-p) = ln(1/2)
i = ln(1/2) / ln(1-p).
Okay, some discrete/continuous handwaving going on, but it's close.
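Plugging in the two chances at issue here:

# i = ln(1/2) / ln(1-p) for the cases discussed above
printf "1 in 10: %.2f rolls   1 in 20: %.2f rolls\n",
    log(0.5) / log(0.9), log(0.5) / log(0.95);
# prints 6.58 and 13.51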
(no subject)
Date: 2004-11-24 08:56 am (UTC)

Consider Pr(X = k). Then we have k "failures" before our first "success." The probability of this sequence, given probability p of success (and writing q = 1-p), is pq^k. Now Pr(X<=k) = \sum_{i=0}^k Pr(X = i) = \sum_{i=0}^k pq^i = p(1-q^(k+1))/(1-q) = p(1-q^(k+1))/p = 1-q^(k+1), where we eliminated the summation by remembering the equation for the sum of a geometric series. We thus have 1-q^(k+1) = 0.5 => q^(k+1) = 0.5 => log(0.5) = (k+1)log(q) => (k+1) = log(0.5)/log(q) = log(0.5)/log(1-p). This figure differs from the one you reported presumably because you rolled the dice at the end of a round, ensuring at least one round of preparation.
Note that X has a negative binomial distribution (with p = 1/20 and r = 1), so truly lazy students of probability theory would just look up its mean (and perhaps its median) somewhere.
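For the lazy route, with p = 1/20 the looked-up figures come out as follows (a quick check using the standard formulas rather than any table):

my $p = 1 / 20;
printf "mean failures before a success: %.0f (i.e. %.0f rolls on average)\n",
    (1 - $p) / $p, 1 / $p;
printf "50%% point: %.2f rolls\n", log(0.5) / log(1 - $p);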
(no subject)
Date: 2004-11-24 06:29 am (UTC)

Your intuition is giving the (almost) correct answer to a slightly different problem: number of trials to get a specified outcome, for N equally probable outcomes without repetition. If you're drawing from a deck of cards, on what draw will you pull the ace of spades? Expectation is (52+1)/2.
If you're rolling a 52-sided die, though, or putting the drawn card back in the deck each time, you can get the same "wrong" outcome more than once, so it takes longer to get the "right" one -- expectation is N, and it's interesting that this result is simpler-looking than the other, but not directly accessible to any intuition I've got.
One roll either hits (probability 1/n, so one roll total) or misses (probability (n-1)/n), leaving you back where you started with one roll spent:
E = (1/n)(1) + ((n-1)/n)(1 + E) = 1 + ((n-1)/n)E
(1/n)E = 1
E = n
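A simulation sketch of the two problems side by side (deck size and die size as in the card example):

#!/usr/bin/perl
# Draw-without-replacement (position of one card in a shuffled deck) versus
# roll-with-replacement (rolls of a 52-sided die until a chosen face comes up).
use strict;
use warnings;
use List::Util qw(shuffle);

my $n      = 52;
my $trials = 100_000;

my ($deck_total, $die_total) = (0, 0);
for (1 .. $trials) {
    # Without replacement: where does card number 1 land in a shuffled deck?
    my @deck = shuffle(1 .. $n);
    my $pos  = 1;
    $pos++ while $deck[$pos - 1] != 1;
    $deck_total += $pos;

    # With replacement: how many rolls of a d52 until face 1 comes up?
    my $rolls = 1;
    $rolls++ while int(rand($n)) != 0;
    $die_total += $rolls;
}

printf "shuffled deck: %.2f (theory: (%d+1)/2 = %.1f)\n",
    $deck_total / $trials, $n, ($n + 1) / 2;
printf "52-sided die:  %.2f (theory: %d)\n", $die_total / $trials, $n;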
(no subject)
Date: 2004-11-24 06:57 am (UTC)

Imagine a long (infinite?) stream of random rolls of an n-sided die.
On average, 1/n of those will be the right outcome. (Without loss of generality, call that right outcome a roll of 1.)
These 1's break the stream up into subsequences of 0 or more non-1's terminated by a 1. The average length of these subsequences is n, because there's a single 1 per subsequence, and if the average length is not n, you don't get 1/n 1's per roll. (Kind of a handwave, but I think that's solid.)
Rolling the dice until you get a 1 is just picking a random subsequence from that stream, so the average number of rolls is n.
I dunno if this is intuitive to anyone else--I invite you and Monica to weigh in on whether this makes sense.
(no subject)
Date: 2004-11-24 07:07 am (UTC)

Hmm, until we get to "Rolling the dice until you get a 1 is just picking a random subsequence from that stream". My intuition pipes up and says that it's picking a subsequence, but starting at some point in the middle, and not getting charged for the part before that point. Also it's not picking uniformly over all subsequences (if that made sense, which really it doesn't), but weighted by their length.
I guess I can say, since this is a memoryless system, that without loss of generality we can imagine that the previous roll was a 1, so you do pick a subsequence "uniformly", and start at its beginning. But, eh, that veers away from intuitively apparent for me.
(no subject)
Date: 2004-11-24 07:21 am (UTC)

But suppose that you were actually going to try to count up the averages by hand. You'd reset your counter, roll until you got a 1, and then add the current value of the counter as an observation. Then you'd reset the counter and start again. So for almost all your observations, you are in fact observing when the previous roll was a 1.
Actually, I think that this might be a more intuitive argument: Suppose that you are actually counting observations and adding them up. To do this, you roll the dice a whole bunch of times--call it M times. Each time a 1 comes up, you record a new observation and start counting again. M/n of the die rolls are 1's, so there are M/n observations. Every roll of the dice fits into some observation, so the total of all the observations is M. So the average is M/(M/n) = n.
How's that for intuitive?
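A quick sketch of that bookkeeping, with an arbitrary M and n:

#!/usr/bin/perl
# Roll an n-sided die M times; every 1 closes out an observation, and the
# average observation length is (total rolls) / (number of 1's).
use strict;
use warnings;

my $n = 20;
my $M = 1_000_000;

my $ones = 0;
for (1 .. $M) {
    $ones++ if int(rand($n)) == 0;
}
printf "M/(M/n): %d / %d = %.2f (n = %d)\n", $M, $ones, $M / $ones, $n;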
(no subject)
Date: 2004-11-24 03:16 pm (UTC)

That seems intuitive at first glance, but another part of my intuition is nagging me about the following: we are suggesting that on average, each possible outcome will come up once and the one we want will be the last one. My intuition wants the desired one to fall randomly within the sequence (thus my initial argument for N/2). But that doesn't account for repetition, and as I said in another comment, not all repetition is equal. Non-ones are allowed to repeat infinitely; ones stop the sequence.
(no subject)
Date: 2004-11-24 03:10 pm (UTC)

I think the flaw in this reasoning is that you're not really pulling a subset out of a bazillion trials. Rather, as soon as you get the desired answer, you stop. So wrong answers can repeat, but correct ones never do.
(no subject)
Date: 2004-11-24 07:40 am (UTC)

Let's see, theory says 48 or higher will happen... (19/20)^48 = 9% of the time. :)
(no subject)
Date: 2004-11-24 08:33 pm (UTC)

Variance matters, but this isn't a real distribution; it may be more intuitive to express the variance as a series of goalposts, like this:
With 3d4 (a 1 in 16 chance of all three matching):
The top 75% mark is 4.45 (i.e., there's a 75% chance that the time will be above that)
The 50% mark is 10.74
The top 25% mark is 21.48
With 2d20 (a 1 in 20 chance):
The 75% mark is 5.61
The 50% mark is 13.51
The 25% mark is 27.07
With the 1-in-10 chance we used last time:
The 75% mark is 2.73
The 50% mark is 6.58
The 25% mark is 13.16
(I actually think that there should be an extra +1 for all of these, since there's always at least one roll. But that doesn't change things too much.)
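All of these come from the same formula as above, log(q)/log(1-p) with q = 0.75, 0.5, 0.25; a sketch that reproduces the goalposts, up to rounding:

#!/usr/bin/perl
# Number of rounds by which the match has happened with probability 25%, 50%,
# and 75% -- i.e. a 75%, 50%, or 25% chance of still being in prep.
use strict;
use warnings;

my %chance = ('3d4 match' => 1/16, '2d20 match' => 1/20, '1 in 10' => 1/10);

for my $label (sort keys %chance) {
    my $p = $chance{$label};
    printf "%-10s  75%%: %5.2f   50%%: %5.2f   25%%: %5.2f\n",
        $label, map { log($_) / log(1 - $p) } 0.75, 0.50, 0.25;
}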
(no subject)
Date: 2004-11-24 04:18 pm (UTC)

Actually, in each trial the probability is 1 in 10, but what you're asking for is the number of trials it would take to increase the cumulative chance above 50%.
For example, take a d10 v. d10
The chance of a match on trial one is 10% (no surprise!)
The chance of a match *by* trial two is the sum of the chances of a match on trial one and on trial two. Trial one's prob is 10%, as above, so the chance that trial two actually happens (i.e., that trial one missed) is 90%. (We are assuming the dice rolling ends on a match; otherwise your reasoning works!)
So the chance of a match on trial two is the chance of trial two happening (90%) times the chance of a match (always 10%).
This gives 9%. Adding it to the chance of a match on trial one gives 19%. Note that this is < 20% ;-)
The chance of a match happening by trial three or earlier is:
10% + (90%)*(10%) + (81%)*(10%) = 10% + 9% + 8.1% = 27.1%
Note that this is < 30% ;-)
So by five trials, you will have some number < 50%. Thus you'll need to do more than five trials to get above a 50% chance of a match "on or before the nth trial". Actually, since the question was phrased in terms of time before the match happens, it's more like saying it's better than 50% that a match *won't* happen on or before roll number 5, and thus that you have a better than 50% chance of getting 5 rolls of prep.
Make some sense?
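Carrying that running total out a few more trials makes the crossover visible (a quick sketch for the d10-vs-d10 case; the 50% line falls between the sixth and seventh trial, which is where the 6.58 figure comes from):

# Cumulative chance of a d10-vs-d10 match on or before trial k: 1 - 0.9^k,
# i.e. the running sum 10% + 9% + 8.1% + ... from above.
my $p = 0.1;
for my $k (1 .. 8) {
    printf "by trial %d: %4.1f%%\n", $k, 100 * (1 - (1 - $p)**$k);
}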
(no subject)
Date: 2004-11-24 06:58 pm (UTC)

Aha! Thank you. That makes perfect sense!
(Where were you during my freshman year in the math department? :-) )
(no subject)
Date: 2004-11-24 10:36 pm (UTC)

Probably learning calculus on my own in 10th grade, or reading Martin Gardner's books on math puzzles while in Jr. High ;-)
(no subject)
Date: 2004-11-24 04:53 pm (UTC)

You forgot to take into account the innate perversity of gaming dice. ;->