# How come you get bell-shaped curves in generalized logic

When I learned a little probability theory in school, we arrived at bell-shaped ‘gaussian’ curves as the result of mixing a bunch of ‘random variables’ together. Theorems showed that they tended to spread out into that well-known bell shape. At least, that’s how I remember the theory going.

But probability theory as logic has no ‘random variables’, so how does it end up with the same ‘gaussian’ functions? They turn up as a way of saying that some facts are unknown. If you think about it, this is exactly what the idea of a ‘random variable’ is trying to capture -- facts that are there, but which for some reason we don’t know exactly and can only make an educated guess at. It is not the rolling of the dice that makes them a ‘random variable’, but the fact that we don’t know which sides will end up on top until after we have rolled the dice. This is what makes an abstract view necessary. Suppose, for instance, that we know there are 5d6 lying somewhere, and someone has arranged them purposely rather than rolled them, but hasn’t told us the arrangement. Then exactly the same ‘random variable’ applies here as in the case of rolled dice; it is the not-knowing that makes a probability problem what it is, not the fact that rolling was involved. (Rolling dice is probably about as deterministic a process as one can imagine, anyway, akin to shooting pool. Few would doubt that it is simply an intractable problem in Newtonian dynamics.)
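The 5d6 picture can be checked directly. Here is a quick Python sketch (mine, not from the original post): if all we know is that five ordinary dice are lying somewhere, every one of the 6⁵ = 7776 arrangements is equally plausible to us, regardless of whether they were rolled or placed on purpose. Counting how many arrangements give each total already produces the beginnings of a bell shape.

```python
from itertools import product
from collections import Counter

# Our state of knowledge: any of the 6**5 = 7776 arrangements
# of five six-sided dice is equally plausible, whether the dice
# were rolled or deliberately set down by someone who won't tell us.
counts = Counter(sum(faces) for faces in product(range(1, 7), repeat=5))
total = 6 ** 5  # 7776 equally plausible arrangements

# Crude text histogram of the probability of each possible total (5..30).
for s in range(5, 31):
    p = counts[s] / total
    print(f"{s:2d} {'#' * round(p * 400)}")
```

The histogram peaks at totals 17 and 18 and falls off symmetrically toward 5 and 30, and adding more dice would only smooth it further toward the gaussian shape. Nothing in the computation refers to rolling; only the counting of what we don’t know.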