the variance of the sampling distribution of means will be equal to only a fraction of the population variance. It will be equal to

$$\sigma_M^2 = \frac{1}{n}\sigma^2$$

or simply,

$$\sigma_M^2 = \frac{\sigma^2}{n}$$
The mathematical proof of this statistical fact is in most mathematical statistics texts. A version of the proof can also be found in Hays (1994). The idea, however, can be easily and perhaps even more intuitively understood by recourse to what happens as n changes. We consider first the most trivial and unrealistic of examples to strongly demonstrate the point. Suppose that we calculate the sample mean from a sample size of n = 1, sampled from a population with μ = 10.0 and σ² = 2.0. Suppose the sample mean we obtain is equal to 4.0. Therefore, the sampling variance of the corresponding sampling distribution is equal to:

$$\sigma_M^2 = \frac{\sigma^2}{n} = \frac{2.0}{1} = 2.0$$
That is, the variance in means that you can expect to see if you sampled an infinite number of means based on samples of size n = 1 repeatedly from this population is equal to 2. Notice that 2 is exactly equal to the original population variance. In this case, the variance in means is based on only a single data point.
Consider now the case where n > 1. Suppose we now sampled a mean from the population based on sample size n = 2, yielding

$$\sigma_M^2 = \frac{\sigma^2}{n} = \frac{2.0}{2} = 1.0$$
What has happened? The variance in sample means has decreased to one-half of the original population variance (i.e., 1/2 of 2 is 1). Why is this decrease reasonable? It makes sense, because we already know from the law of large numbers that as the sample size grows larger, our estimate gets closer and closer to the true value of the parameter being estimated. That is, for a consistent estimator, our estimate of the true population mean (i.e., the expectation) should get better and better as sample size increases. This is exactly what happens as we increase n: our precision in estimating the quantity increases. In other words, the sampling variance of the estimator decreases. It's less variable; it doesn't "bounce around as much" on average from sample to sample.
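To make the σ²/n result concrete, the following is a minimal simulation sketch (not from the text; the number of replications and the additional sample sizes of 10 and 100 are arbitrary choices). It draws repeated samples of several sizes from the running example's population (μ = 10.0, σ² = 2.0) and compares the empirical variance of the sample means with σ²/n.

```python
# Minimal simulation sketch (not from the text): the variance of sample means
# shrinks in proportion to 1/n. Population parameters follow the running
# example (mu = 10.0, sigma^2 = 2.0); the replication count is arbitrary.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2 = 10.0, 2.0
reps = 100_000

for n in (1, 2, 10, 100):
    # Draw `reps` samples of size n and compute each sample's mean.
    means = rng.normal(mu, np.sqrt(sigma2), size=(reps, n)).mean(axis=1)
    print(f"n = {n:3d}   empirical var of means = {means.var():.4f}   "
          f"theoretical sigma^2/n = {sigma2 / n:.4f}")
```

For n = 1 the empirical variance of the means is close to the population variance of 2.0, and it is roughly halved at n = 2, mirroring the worked example above.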
Analogous to how we defined the standard deviation as the square root of the variance, it is also useful to take the square root of the variance of means:

$$\sigma_M = \sqrt{\frac{\sigma^2}{n}} = \frac{\sigma}{\sqrt{n}}$$
which we call the standard error of the mean, $\sigma_M$. The standard error of the mean is the standard deviation of the sampling distribution of the mean. Lastly, it is important to recognize that
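In practice σ is rarely known, so the standard error is usually estimated by substituting the sample standard deviation s for σ. The short sketch below illustrates this plug-in estimate, s/√n; the data values are invented purely for illustration and are not from the text.

```python
# Sketch: estimating the standard error of the mean from a single sample
# using s / sqrt(n). The data values below are invented for illustration.
import numpy as np

y = np.array([9.1, 10.4, 11.2, 8.7, 10.9, 9.8])
n = len(y)

s = y.std(ddof=1)           # sample standard deviation (n - 1 in denominator)
se_mean = s / np.sqrt(n)    # estimated standard error of the mean
print(f"mean = {y.mean():.3f}, estimated standard error = {se_mean:.3f}")
```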
2.12 CENTRAL LIMIT THEOREM
It is not an exaggeration to say that the central limit theorem, in one form or another, is probably the most important and relevant theorem in theoretical statistics, which makes it quite relevant to applied statistics as well.
We borrow our definition of the central limit theorem from Everitt (2002):
If a random variable y has a population mean μ and population variance σ², then the sample mean, $\bar{y}$, based on n observations, has an approximate normal distribution with mean μ and variance σ²/n, for sufficiently large n.
Asymptotically, the distribution of the sample mean converges to that of a normal distribution as n → ∞. A multivariate version of the theorem can also be given (e.g., see Rencher, 1998, p. 53).7
The relevance and importance of the central limit theorem cannot be overstated: it allows one to know, at least on a theoretical level, what the distribution of a statistic (e.g., the sample mean) will look like for increasing sample size. This is especially important if one is drawing samples from a population whose shape is not known or is known a priori to be nonnormal. Normality of the sampling distribution, for adequate sample size, is still assured even if samples are drawn from nonnormal populations. Why is this relevant? It is relevant because if we know what the distribution of means will look like for increasing sample size, then we know we can compare our obtained statistic to a normal distribution in order to estimate its probability of occurrence. Normality assumptions are also typically required for assuming independence between the sample mean and the sample variance.
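As a brief illustration of the theorem (not from the text), the simulation sketch below draws sample means from a deliberately skewed parent population, here an exponential distribution, which is my choice of nonnormal population rather than the book's. As n increases, the skewness of the distribution of sample means moves toward 0, the skewness of a normal distribution.

```python
# Sketch: central limit theorem with a skewed (exponential) parent population.
# The choice of population and sample sizes is illustrative, not the text's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
reps = 50_000

for n in (2, 10, 50, 200):
    means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    print(f"n = {n:3d}   skewness of sample means = {stats.skew(means):.3f}")
```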
2.13 CONFIDENCE INTERVALS
Recall that a goal of statistical inference is to estimate functions of parameters, whether a single parameter, a difference of parameters (for instance, in the case of population differences), or some other function of parameters. Though the sample mean $\bar{y}$ is a useful point estimate of the population mean μ, a point estimate by itself tells us nothing about its precision; a confidence interval pairs the estimate with a range of values within which we can be reasonably confident the true parameter lies.
We can say that over all samples of a given size n, the probability is 0.95 for the following event to occur:

$$\bar{y} - 1.96\sigma_M \le \mu \le \bar{y} + 1.96\sigma_M \quad (2.2)$$
How was (2.2) obtained? Recall the calculation of a z‐score for a mean:

$$z_M = \frac{\bar{y} - \mu}{\sigma_M} = \frac{\bar{y} - \mu}{\sigma/\sqrt{n}}$$
Suppose now that we want to have 0.025 of the area in each tail of the normal distribution. This corresponds to a z‐score of 1.96, since the probability of obtaining a z‐score beyond ±1.96 (in either tail) is 2(1 − 0.9750021) = 0.0499958, which is approximately 5% of the total curve. So, from the z‐score, we have

$$P\left(-1.96 \le \frac{\bar{y} - \mu}{\sigma_M} \le 1.96\right) = 0.95$$
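The following short sketch (not from the text) reproduces the tail-area arithmetic above and then forms a 95% interval of the form in (2.2); the sample mean, sample size, and population variance used are invented for illustration.

```python
# Sketch: the 1.96 tail-area arithmetic and a 95% interval as in (2.2).
# The values ybar = 10.3, n = 25, sigma^2 = 2.0 are invented for illustration.
import numpy as np
from scipy import stats

z = stats.norm.ppf(0.975)                    # ~1.959964
tail_area = 2 * (1 - stats.norm.cdf(1.96))   # 2(1 - 0.9750021) ~ 0.0499958
print(f"z = {z:.6f}, area beyond +/-1.96 = {tail_area:.7f}")

ybar, sigma2, n = 10.3, 2.0, 25
se = np.sqrt(sigma2 / n)                     # standard error of the mean
lower, upper = ybar - 1.96 * se, ybar + 1.96 * se
print(f"95% interval for mu: ({lower:.3f}, {upper:.3f})")
```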
We