Sunday, February 12, 2012

More on Shortest Confidence Intervals

I was (pleasantly) surprised by the number of "hits" my recent post on "Minimizing the Length of a Confidence Interval" attracted. As has often been the case, a lot of visitors came by way of  Mark Thoma's excellent blog, Economist's View. (Thanks, Mark!) 

In that post, one of the things I discussed was the issue of constructing a "shortest length" confidence interval when the distribution of the pivotal statistic that's used to construct the interval is asymmetric. In such cases we have a more difficult task on our hands than when the distribution is symmetric and uni-modal, and so we usually fall back on "equal tails" confidence intervals in the asymmetric case.

I'm not going to repeat the previous post! Instead, I'm going to share a few lines of R code that I've put together to deal with this issue in the case of an asymmetric distribution that's of great practical importance to econometricians.

Now, in that earlier post I mentioned what we need to do in order to construct a "shortest length" confidence interval using a pivotal statistic whose sampling distribution is asymmetric. Let's use one that follows a Chi-square distribution, for illustrative purposes.

We generally construct an "equal tails" confidence interval. That is, if the confidence level is to be 100(1 - α)%, we choose the "cut-off" quantiles for the distribution so as to get 100(α/2)% in each of the left and right tails of the distribution. Given the asymmetry of the Chi-square density, the "equal tails" interval is not the shortest one that we could possibly construct while retaining the same coverage probability of 100(1 - α)%. It works very well, of course, for large degrees of freedom, as the asymmetry of the Chi-square distribution decreases as the degrees of freedom increase.
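As a quick illustration in R, the "equal tails" cut-offs come straight from the qchisq() quantile function. Here they are for the v = 10, 90% case that's used later in this post:

# Equal-tails cut-offs for a 100(1 - alpha)% interval; chi-square pivot with v d.f.
alpha <- 0.10
v <- 10
qchisq(c(alpha / 2, 1 - alpha / 2), df = v)     # 3.940299 and 18.30704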

I explained in the earlier post how to get the shortest possible interval in cases like this:
"We choose the upper and lower quantiles from the (uni-modal) distribution so as to ensure that the height to the density function is the same in each case, while still ensuring that our choice gives us the desired confidence level. As long as the chosen quantiles "straddle" the median of the distribution, we'll have the shortest confidence interval.

In practice, this is going to take quite a lot of effort! The mathematical problem that we face is one of solving two complicated equations for two unknowns - the latter being the two quantiles. One of the equations says that the value of the density function has to be the same when evaluated at the two unknowns. The other equation says that the sum of two areas under the density (two integrals) has to equal one minus the desired confidence level."
Incidentally, the median of the Chi-square distribution with (say) v degrees of freedom is approximately v[1 - 2/(9v)]³. For any v, we can compute the median exactly, of course, as the quantile that cuts off 50% of the area under the density to the left (and hence to the right, too).
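In R, the approximation and the exact median are easy to compare; for example, with v = 10:

# Median of a chi-square distribution with v degrees of freedom
v <- 10
v * (1 - 2 / (9 * v))^3     # approximation: about 9.348
qchisq(0.5, df = v)         # exact median: 9.341818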

So, using the above information, here's a picture that represents our problem, and its solution, for a particular case:
[Figure: the Chi-square density with v = 10, showing the "shortest length" 90% cut-off points cL and cU, at which the density heights are equal.]
For v = 10, the lower and upper cut-off points (cL and cU) that we should use when constructing a 90% confidence interval are 3.017327 and 16.710795 respectively. At each of these quantiles, the height of the density is the same, and equal to 0.02387. For the record, the median of this particular Chi-square distribution is 9.341818, and this value is "straddled" by the two cut-off points of 3.0 and 16.7.

These calculations were done with just a few lines of R code that you can find on the Code page associated with this blog. You can use that code to perform the same calculations for any desired confidence level, and any degrees of freedom for the Chi-square distribution. (If you're not an R user, you can still open the R file with any text editor, to take a look.)
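If you'd like to see the gist of it right away, here's a minimal sketch of the same calculation. (The function and variable names below are my own, and this isn't necessarily the code from the Code page.) It uses uniroot() to solve for the lower cut-off, forcing equal density heights while holding the coverage at 100(1 - α)%:

# Shortest 100(1 - alpha)% cut-offs for a chi-square pivot with v d.f.
# (a rough sketch; assumes v > 2, so the density is uni-modal with its mode above zero)
shortest.chisq <- function(v, alpha = 0.10) {
  # For a trial lower cut-off cL, the upper cut-off that preserves the
  # coverage is the quantile cutting off pchisq(cL, v) + (1 - alpha):
  upper <- function(cL) qchisq(pchisq(cL, v) + (1 - alpha), v)
  # The shortest interval equates the density heights at the two cut-offs:
  height.gap <- function(cL) dchisq(cL, v) - dchisq(upper(cL), v)
  cL <- uniroot(height.gap, lower = 1e-6, upper = qchisq(alpha / 2, v))$root
  c(lower = cL, upper = upper(cL))
}

shortest.chisq(v = 10, alpha = 0.10)     # approximately 3.017 and 16.711

The trick is that, once the coverage constraint pins down the upper cut-off as a function of the lower one, the problem collapses to a single equation in a single unknown.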

By way of comparison,  for  v = 10, the "equal tails" cut-off points for a 90% coverage probability are  3.940299 and 18.30704. These give 5% in each of the left and right tails of the distribution.

In addition, you can see a table of illustrative cut-off points for the Chi-square distribution for constructing "shortest length" confidence intervals on this blog's Data page.

Now, let's look at a couple of examples where this information can be used in the construction of a "shortest-length" confidence interval.

Students usually encounter the Chi-square distribution for the first time when they learn how to construct a confidence interval for the variance of a Normal population, under simple random sampling. You'll recall that the pivotal statistic that we use in this case is χ² = (n - 1)s²/σ². This statistic has a sampling distribution that is Chi-square with (n - 1) degrees of freedom.

I'm actually going to look at the related problem of constructing a "shortest length" confidence interval for the "precision" of the population - that is, the parameter τ = 1/σ².

To construct a traditional "equal tails" 2-sided confidence interval for τ, with a confidence level (or "coverage probability") of 100(1 - α)%, we'd form the interval [cL / ((n - 1)s²) , cU / ((n - 1)s²)]. Here, cL is the quantile that "cuts off" 100(α/2)% of the area under the Chi-square density in the left tail; and cU is the quantile that cuts off 100(α/2)% of the area in the right tail.

(If this interval has limits that appear to be "inverted" relative to what you're used to, it's because the interval is for τ, not for σ² itself.)

So, using the picture we had earlier, a 90% "shortest length" confidence interval for τ, when v = 20 (say), would be the interval [9.78589 / ((n - 1)s²) , 29.87586 / ((n - 1)s²)]. The length of this interval is approximately 20.09 / [(n - 1)s²]. In contrast, if we constructed the "equal tails" interval in this case we'd use "cut-off" values of 10.85081 and 31.41043, and the length of that interval would be a little greater - namely, approximately 20.56 / [(n - 1)s²].
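Just to make this concrete, here's how the two intervals could be computed with the hypothetical shortest.chisq() sketch from above, applied to some purely illustrative simulated data:

# Illustrative simulated data: 90% intervals for tau = 1/sigma^2, with v = n - 1 = 20
set.seed(123)
x  <- rnorm(21, mean = 0, sd = 2)   # so the true value of tau is 1/4
n  <- length(x)
ss <- (n - 1) * var(x)              # (n - 1)s^2

shortest.chisq(v = n - 1, alpha = 0.10) / ss   # "shortest length" interval for tau
qchisq(c(0.05, 0.95), df = n - 1) / ss         # "equal tails" interval (a little wider)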

The second example I have is one that relates to the "coefficient of variation" of a Normal population. That is, C = σ / |μ|. We'd usually estimate this parameter by using the sample coefficient of variation, c = s / |m|, where m is the sample mean, and s is the sample standard deviation.

You'll no doubt recall that the coefficient of variation is unit-less and adjusted for location, so it provides a useful measure when we want to compare variability across different populations. Sometimes we use C² and c², instead of C and c. Either way, the coefficient of variation finds application in economics - for example, to compare income inequality across countries, as in Sala-i-Martin (2002), and elsewhere.

I'm again going to focus on "precision", rather than "variability", so I'm going to work with C⁻², rather than C² itself.

Suppose that we want to construct a confidence interval for C⁻². To do this we need to construct a pivotal statistic, based on c⁻², and (most importantly) we need to know the sampling distribution of that statistic. There's a long and very interesting literature dealing with this sampling distribution. A more recent contribution is that of Bao (2009).

The exact sampling distributions of c and c⁻² are very complicated (see Hendricks and Robey, 1936), even if we are sampling from a Normal population. However, McKay (1932) established that a Chi-square approximation can be used in certain circumstances. Specifically, if the population is Normal, and if C is less than (about) one-third, then the statistic, [nc²(C⁻² + 1) / (1 + c²)], is essentially Chi-square distributed with (n - 1) degrees of freedom. The quality of this approximation was verified numerically by Fieller (1932) and Pearson (1932).

Yes - numerically, in 1932! That would have been lots of fun!

So, in this case you can see right away that a 100(1 - α)% confidence interval for C⁻² is of the form [cL(1 + c²)/(nc²) - 1 , cU(1 + c²)/(nc²) - 1]. Again, we can choose cL and cU so as to construct an "equal tails" interval; or we can choose these cut-offs so as to construct a "shortest length" interval. Let's consider a 95% confidence interval this time. If n = 16, so that the degrees of freedom are (n - 1) = 15, then the cut-off points used for constructing the "shortest length" interval will be 5.31713 and 25.90030, whereas those for the "equal tails" interval will be 6.26214 and 27.48839.

In this case, the lengths of the two confidence intervals for C⁻² are 20.58317(1 + c²)/(nc²) and 21.22625(1 + c²)/(nc²), respectively.
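As a final, purely illustrative sketch (again re-using the hypothetical shortest.chisq() function from above, with simulated data), those limits could be computed along the following lines:

# Illustrative simulated data: 95% intervals for C^(-2) via McKay's approximation
set.seed(123)
x  <- rnorm(16, mean = 10, sd = 2)   # population C = 0.2 < 1/3, so the approximation is reasonable
n  <- length(x)
c2 <- var(x) / mean(x)^2             # c^2, the squared sample coefficient of variation
k  <- (1 + c2) / (n * c2)

shortest.chisq(v = n - 1, alpha = 0.05) * k - 1   # "shortest length" interval for C^(-2)
qchisq(c(0.025, 0.975), df = n - 1) * k - 1       # "equal tails" interval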

With a little thought, you'll be able to guess why I worked with confidence intervals for "precision", rather than "variability", in these two examples. Similar results apply for confidence intervals based on pivotal statistics with other asymmetric sampling distributions that econometricians also use a lot, such as the F distribution.

Some more on this in a later post, perhaps.


Note: The links to the following references will be helpful only if your computer's IP address gives you access to the electronic versions of the publications in question. That's why a written References section is provided.


References

Bao, Y., 2009. Finite-sample moments of the coefficient of variation. Econometric Theory, 25, 291-297.

Fieller, E. C., 1932. A numerical test of the adequacy of A. T. McKay's approximation.  Journal of the Royal Statistical Society, 95, 699-702.

Hendricks, W. A. and K. W. Robey, 1936. The sampling distribution of the coefficient of variation. Annals of Mathematical Statistics, 7, 129–132.

McKay, A. T., 1932. Distribution of the coefficient of variation and the extended "t" distribution. Journal of the Royal Statistical Society, 95, 695-698.

Pearson, E. S., 1932. Comparison of A. T. McKay's approximation with experimental sampling results. Journal of the Royal Statistical Society, 95, 703-704.

Sala-i-Martin, X., 2002. The disturbing "rise" of global income inequality. Working Paper 8904, NBER.

© 2012, David E. Giles
