D&D and probability
The opponents had proposed entering the arena with no preparatory spells cast and then rolling dice each round to see when to start the attack. (Until then we'd be able to take defensive measures.) This led to a discussion of how many rounds of prep to expect for any given die-rolling proposal. (That's "expect" as in "expected value", not "expect" as in "really think this will happen every time", of course.)
This is a simple probability problem, if you're up on your probability theory. The three people who explicitly tried to answer the question used three different approaches.
I, being rather distant in time from formal math of any sort, took the intuitive approach: for N equally-probable outcomes, the expected value is N/2. (I'll justify this error in comments if asked. There was actual reasoning behind it, not just a guess.) Ralph wrote a Perl script to run simulations, and came up with the same results as Kevin, who applied formal theory. Sure, it's obvious that the answer is log(0.5)/log(1-1/N), right? Um, yeah. Sure. Whatever you say. If he didn't have the Perl script backing him up, I'd be more argumentative.
I think this is an illustration of why formal math and I do not get along. I started college life as a math major, but it didn't stick. I'm just fine if I can relate a theory to actual facts I can really observe and understand, but if it's all "sure, whatever you say" to me, then I have trouble conjuring up the correct invocations from memory on demand, and that's what the exams were all about. In a real-world context I do not yet understand why the probability of a 1-in-10 event is 6.6 rather than 5.5. [Edited to add: Sloppy writing -- I meant the expected number of trials before a 1-in-10 event occurs, not the probability.] And if I can't understand it, there's no reasoning involved -- just number-crunching.
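Ralph's Perl script isn't reproduced here, but a Python sketch of the same kind of simulation (function and variable names are my own) shows where both numbers come from: the sample mean of "trials until a 1-in-n event" lands near n, while log(0.5)/log(1-1/n) appears to match the sample median — the number of rounds needed for a 50% chance that the event has happened.

```python
import math
import random

def trials_until_hit(n, rng):
    """Roll a fair n-sided die until a 1 comes up; return how many rolls it took."""
    count = 1
    while rng.randint(1, n) != 1:
        count += 1
    return count

rng = random.Random(42)   # fixed seed so the run is repeatable
n = 10
samples = sorted(trials_until_hit(n, rng) for _ in range(100_000))

mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
formula = math.log(0.5) / math.log(1 - 1 / n)

print(f"mean over 100k trials: {mean:.2f}")     # comes out near n = 10
print(f"sample median:         {median}")       # near log(0.5)/log(1 - 1/n)
print(f"log(0.5)/log(1-1/n):   {formula:.2f}")  # about 6.58 for n = 10
```

For n = 10 the mean lands near 10 and the median near 6-7, which suggests the 6.6 in the post and the 5.5 intuition were answering two different questions, neither of which is the mean.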
This is kind of bizarre when you think about some of the theoretical contexts I'm perfectly comfortable working in. But there you have it: why advanced calculus and everything that followed seemed completely foreign to me.
Oh, as for what actually happened? We wanted to get about 10-11 rounds of prep, so we opted for a match on 2 d20s. The expected value from that would have been 12-13 rounds, which was good enough. It actually took 48 rounds. So much for theory. :-)
no subject
Your intuition is giving the (almost) correct answer to a slightly different problem: number of trials to get a specified outcome, for N equally probable outcomes without repetition. If you're drawing from a deck of cards, on what draw will you pull the ace of spades? Expectation is (52+1)/2.
If you're rolling a 52-sided die, though, or putting the drawn card back in the deck each time, you can get the same "wrong" outcome more than once, so it takes longer to get the "right" one -- expectation is N. It's interesting that this result is simpler-looking than the other, yet not directly accessible to any intuition I've got.
E = (1/n)(1) + ((n-1)/n)(1 + E) = 1 + ((n-1)/n) E
E - ((n-1)/n) E = 1
(1/n) E = 1
E = n
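The recurrence's answer can also be checked directly against the series definition of the expectation -- a minimal Python sketch, truncating the infinite sum at 10,000 terms (plenty, since the tail shrinks geometrically):

```python
# E = sum over k >= 1 of k * P(first success on roll k)
#   = sum over k >= 1 of k * (1/n) * (1 - 1/n)^(k - 1)
def expected_rolls(n, terms=10_000):
    """Truncated series for the expected number of rolls to hit a 1-in-n event."""
    p = 1 / n
    return sum(k * p * (1 - p) ** (k - 1) for k in range(1, terms + 1))

for n in (6, 10, 20):
    print(n, expected_rolls(n))  # each sum comes out essentially equal to n
```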
no subject
Imagine a long (infinite?) stream of random rolls of an n-sided die.
On average, 1/n of those will be the right outcome. (Without loss of generality, call that right outcome a roll of 1.)
These 1's break the stream up into subsequences of 0 or more non-1's terminated by a 1. The average length of these subsequences is n, because there's a single 1 per subsequence, and if the average length is not n, you don't get 1/n 1's per roll. (Kind of a handwave, but I think that's solid.)
Rolling the dice until you get a 1 is just picking a random subsequence from that stream, so the average number of rolls is n.
I dunno if this is intuitive to anyone else--I invite you and Monica to weigh in on whether this makes sense.
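The stream argument is easy to check by simulation -- a Python sketch, with the die size, stream length, and seed chosen arbitrarily:

```python
import random

rng = random.Random(1)
n, stream_len = 6, 600_000
stream = [rng.randint(1, n) for _ in range(stream_len)]

# Split the stream into subsequences, each terminated by a roll of 1.
# (Any incomplete tail after the last 1 is discarded.)
lengths = []
current = 0
for roll in stream:
    current += 1
    if roll == 1:
        lengths.append(current)
        current = 0

avg = sum(lengths) / len(lengths)
print(f"{len(lengths)} subsequences, average length {avg:.2f}")  # near n = 6
```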
no subject
Hmm, until we get to "Rolling the dice until you get a 1 is just picking a random subsequence from that stream". My intuition pipes up and says that it's picking a subsequence, but starting at some point in the middle, and not getting charged for the part before that point. Also it's not picking uniformly over all subsequences (if that made sense, which really it doesn't), but weighted by their length.
I guess I can say, since this is a memoryless system, that without loss of generality we can imagine that the previous roll was a 1, so you do pick a subsequence "uniformly", and start at its beginning. But, eh, veers away from intuitively apparent for me.
no subject
But suppose that you were actually going to try to count up the averages by hand. You'd reset your counter, roll until you got a 1, and then add the current value of the counter as an observation. Then you'd reset the counter and start again. So for almost all your observations, you are in fact observing when the previous roll was a 1.
Actually, I think that this might be a more intuitive argument: Suppose that you are actually counting observations and adding them up. To do this, you roll the dice a whole bunch of times--call it M times. Each time a 1 comes up, you record a new observation and start counting again. M/n of the die rolls are 1's, so there are M/n observations. Every roll of the dice fits into some observation, so the total of all the observations is M. So the average is M/(M/n) = n.
How's that for intuitive?
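That counting argument translates almost line for line into a Python sketch (M and n are arbitrary): roll M times, let every 1 close out an observation, and the average observation length is just M divided by the number of 1's.

```python
import random

rng = random.Random(7)
n, M = 10, 1_000_000
ones = sum(1 for _ in range(M) if rng.randint(1, n) == 1)

# Every roll belongs to exactly one observation, so the observations
# total M rolls split among `ones` observations.
print(f"{ones} observations, average length {M / ones:.2f}")  # near n = 10
```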
no subject
That seems intuitive at first glance, but another part of my intuition is nagging me about the following: we are suggesting that on average, each possible outcome will come up once and the one we want will be the last one. My intuition wants the desired one to fall randomly within the sequence (thus my initial argument for N/2). But that doesn't account for repetition, and as I said in another comment, not all repetition is equal. Non-ones are allowed to repeat infinitely; ones stop the sequence.
no subject
I think the flaw in this reasoning is that you're not really pulling a subset out of a bazillion trials. Rather, as soon as you get the desired answer, you stop. So wrong answers can repeat, but correct ones never do.