Taking the voodoo out of multiple regression

Valerio Filoso (2013) writes:

Most econometrics textbooks limit themselves to providing the formula for the $\beta$ vector of the type

$$\beta = (X'X)^{-1}X'Y.$$

Although compact and easy to remember, this formulation is a sort of black box, since it hardly reveals anything about what really happens during the estimation of a multivariate OLS model. Furthermore, the link between the $\beta$s and the moments of the data distribution disappears, buried in the intricacies of matrix algebra. Luckily, an enlightening interpretation of the $\beta$s in the multivariate case exists and has relevant interpreting power. It was originally formulated more than seventy years ago by Frisch and Waugh (1933), revived by Lovell (1963), and recently brought to a new life by Angrist and Pischke (2009) under the catchy phrase regression anatomy. According to this result, given a model with $K$ independent variables, the coefficient $\beta_k$ for the $k$-th variable can be written as

$$\beta_k = \frac{\mathrm{cov}(y, \tilde{x}_k)}{\mathrm{var}(\tilde{x}_k)}$$

where $\tilde{x}_k$ is the residual obtained by regressing $x_k$ on all remaining $K-1$ independent variables.

The result is striking since it establishes the possibility of breaking a multivariate model with $K$ independent variables into $K$ bivariate models, and also sheds light on the machinery of multivariate OLS. This property of OLS does not depend on the underlying Data Generating Process or on its causal interpretation: it is a mechanical property of the estimator which holds because of the algebra behind it.

From $\beta_k = \frac{\mathrm{cov}(y, \tilde{x}_k)}{\mathrm{var}(\tilde{x}_k)}$, it's easy to also show that

$$\beta_k = \frac{\mathrm{cov}(\tilde{y}, \tilde{x}_k)}{\mathrm{var}(\tilde{x}_k)}$$

I’ll stick to the first expression in what follows. (See Filoso sections 2-4 for a discussion of the two options. The second, in which $\tilde{y}$ is the residual from regressing $y$ on the same remaining $K-1$ variables, is the Frisch-Waugh-Lovell theorem; the first is what Angrist and Pischke call regression anatomy.)
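Both identities are easy to check numerically. Here is a minimal sketch in NumPy (the simulated data-generating process and variable names are my own, purely for illustration): residualise $x_k$, and then also $y$, on the remaining regressors, and compare the covariance-variance ratios with the coefficient from the full multivariate regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 1000, 3
X = rng.normal(size=(n, K))
X[:, 1] += 0.5 * X[:, 2]                      # make the regressors correlated
y = 1.0 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=n)

Xc = np.column_stack([np.ones(n), X])         # add a constant
b_full = np.linalg.lstsq(Xc, y, rcond=None)[0]

k = 1                                          # check the coefficient on the second regressor
others = np.delete(Xc, k + 1, axis=1)          # constant plus the remaining regressors
x_tilde = X[:, k] - others @ np.linalg.lstsq(others, X[:, k], rcond=None)[0]
y_tilde = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]

cov = lambda a, b: np.cov(a, b)[0, 1]
print(b_full[k + 1])                                     # multivariate OLS coefficient
print(cov(y, x_tilde) / np.var(x_tilde, ddof=1))         # regression anatomy
print(cov(y_tilde, x_tilde) / np.var(x_tilde, ddof=1))   # Frisch-Waugh-Lovell
```

All three printed numbers coincide, as the theorem promises.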

Multiple regression with $K \geq 3$ (a constant and two or more variables) can feel a bit like voodoo at first. It is shrouded in phrases like “holding constant the effect of” and “controlling for”, which are veiled metaphors for the underlying mathematics. In particular, it’s hard to see what “holding constant” has to do with minimising a loss function. On the other hand, a simple $K=2$ regression has an appealingly intuitive 2D graphical representation, and the coefficients are ratios of familiar covariances.

This is why it’s nice that you can break a model with $K$ variables into $K$ bivariate models involving the residuals $\tilde{x}_k$. This is easiest to see in a model with $K=3$: $\tilde{x}_k$ is the residual from a simple $K=2$ regression. Hence a sequence of three simple regressions is sufficient to obtain the exact coefficients of the $K=3$ regression (see figure 2 below, yellow boxes).
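Here is a minimal sketch of one such three-regression sequence for $K=3$ (a constant, $x_2$ and $x_3$). The data and the particular ordering of the steps are my own and may not match the figure exactly, but each step is a simple regression, i.e. a ratio of a covariance to a variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x2 = rng.normal(size=n)
x3 = 0.6 * x2 + rng.normal(size=n)             # correlated regressors
y = 1.0 + 2.0 * x2 - 1.0 * x3 + rng.normal(size=n)

def slope(a, b):
    """Simple-regression slope of a on b (constant included): cov(a,b)/var(b)."""
    b_c = b - b.mean()
    return b_c @ (a - a.mean()) / (b_c @ b_c)

x2_tilde = (x2 - x2.mean()) - slope(x2, x3) * (x3 - x3.mean())  # 1. x2 on x3: residual
b2 = slope(y, x2_tilde)                                         # 2. y on that residual
b3 = slope(y - b2 * x2, x3)                                     # 3. y - b2*x2 on x3
b1 = y.mean() - b2 * x2.mean() - b3 * x3.mean()                 # intercept from the means

X = np.column_stack([np.ones(n), x2, x3])
print([b1, b2, b3])
print(np.linalg.lstsq(X, y, rcond=None)[0])                     # identical coefficients
```

Step 3 works because the residual of the full regression is orthogonal in sample to $x_3$, so once $b_2$ is exact, regressing $y - b_2 x_2$ on $x_3$ recovers $b_3$ exactly.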

Similarly, it’s possible to arrive at the coefficients of a $K>3$ regression by starting with only simple pairwise regressions of the original $K$ independent variables. I do this for $K=4$ in figure 1. From these pairwise regressions (in black and grey¹), we work our way up to three $K=3$ regressions of one $X$-variable on the two others (orange boxes), by regressing each $X$-variable on the residuals obtained in the first step. We obtain expressions for each of the $\tilde{x}_k$ ($g, f, q$ in my notation). We regress $Y$ on these (yellow box). Figure 1 also nicely shows that the number of pairwise regressions needed to compute multivariate regression coefficients grows with the square of $K$. According to this StackExchange answer, the total time complexity is $O(K^2 n)$, for $n$ observations.

Figure 1:
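The same cascade can be written for any $K$ as a recursion: $\tilde{x}_k$ is defined by a regression of $x_k$ on the $K-1$ others, which can itself be broken into bivariate pieces, and so on down to simple regressions. Below is a sketch of this idea (my own illustration; note the naive recursion has exponential cost in $K$, so it is for insight, not the $O(K^2 n)$ production algorithm).

```python
import numpy as np

def slope(a, b):
    """Simple-regression slope of a on b (constant included)."""
    b_c = b - b.mean()
    return b_c @ (a - a.mean()) / (b_c @ b_c)

def anatomy(y, X):
    """Slope coefficients of y on the columns of X (constant included),
    built entirely out of simple regressions via regression anatomy."""
    n, K = X.shape
    betas = np.empty(K)
    for k in range(K):
        xk = X[:, k]
        if K == 1:
            resid = xk - xk.mean()                  # base case: only the constant remains
        else:
            others = np.delete(X, k, axis=1)
            fitted = others @ anatomy(xk, others)   # recurse: x_k on the other regressors
            resid = (xk - xk.mean()) - (fitted - fitted.mean())
        betas[k] = slope(y, resid)                  # beta_k = cov(y, x~_k) / var(x~_k)
    return betas

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = 1.0 + X @ np.array([2.0, -1.0, 0.5, 0.25]) + rng.normal(size=400)
print(anatomy(y, X))
print(np.linalg.lstsq(np.column_stack([np.ones(400), X]), y, rcond=None)[0][1:])
```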

Judd et al. (2017) have a nice detailed walk-through of the $K=3$ case, pp. 107-116. Unfortunately, they use the more complicated Frisch-Waugh-Lovell method of regressing residuals on residuals. I show this method here (in green) and the method we’ve been using (in yellow), for $K=3$. As you can see, the former method needs two superfluous base-level regressions (in dark blue). But they should be equivalent, hence I use the same $\theta$ coefficients in the yellow and green boxes.

Figure 2:

I made this in PowerPoint, not knowing how to do it better. Here is the file.

  1. The grey ones are redundant and included for ease of notation. 

January 10, 2018

Diagrams of linear regression

I made a big diagram describing some assumptions (MLR1-6) that are used in linear regression. In my diagram, there are categories (in rectangles with dotted lines) of mathematical facts that follow from different subsets of MLR1-6. References in brackets are to Hayashi (2000).

[Diagram: the linear model]

A couple of comments about the diagram are in order.

  • $U$, $Y$ are $n \times 1$ vectors of random variables. $X$ may contain numbers or random variables. $\beta$ is a $K \times 1$ vector of numbers.
  • We measure: realisations of $Y$, (realisations of) $X$. We do not measure: $\beta$, $U$. We have one equation and two unknowns: we need additional assumptions on $U$.
  • We make a set of assumptions (MLR1-6) about the joint distribution $f(U, X)$. These assumptions imply some theorems relating the distribution of $b$ and the distribution of $\beta$.
  • In the diagram, I stick to the brute mathematics, which is entirely independent of its (causal) interpretation.¹
  • Note the difference between MLR4 and MLR4’. The point of using the stronger MLR4 is that, in some cases, provided MLR4 holds, MLR2 is not needed. To prove unbiasedness, we don’t need MLR2. For finite sample inference, we also don’t need MLR2. But whenever the law of large numbers is involved, we do need MLR2 as a standalone condition. Note also that, since MLR2 and MLR4’ together imply MLR4, MLR2 and MLR4 are never both needed. But I follow standard practice (e.g. Hayashi) in including them both, for example in the asymptotic inference theorems.
  • Note that since $X'X$ is a symmetric square matrix, $Q$ has full rank $K$ iff $Q$ is positive definite (see Wooldridge 2010, p. 57). Furthermore, if $X$ has full rank $K$, then $X'X$ has full rank $K$, so MLR3* is equivalent to MLR3 plus the fact that $Q$ is finite (i.e. the limit actually converges).
  • Note that given MLR2 and the law of large numbers, $Q$ could alternatively be written $E[X'X]$.
  • Note that whenever I write a $\mathrm{plim}$ and set it equal to some matrix, I am assuming the matrix is finite. Some treatments will explicitly say $Q$ is finite, but I omit this.
  • Note that by the magic of matrix inversion, $((X'X)^{-1})_{kk} = \frac{1}{\sum_{i=1}^n \tilde{x}_{ki}^2}$, where $\tilde{x}_k$ is the residual from regressing $x_k$ on the other columns of $X$ (when the only other column is the constant, this is $\frac{1}{\sum_{i=1}^n (x_{ki} - \bar{x}_k)^2}$).²
  • Note that these expressions are equal: $\frac{b_k - \beta_k}{se(b_k)} = \frac{(b_k - \beta_k)\sqrt{n-K}}{\sqrt{\hat{U}'\hat{U}\,((X'X)^{-1})_{kk}}}$. Seeing this helps with intuition. (There is a numerical check of the last two bullets after this list.)
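As a sanity check on the last two bullets, here is a minimal simulation (the data-generating process and variable names are my own, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 200, 3                                  # n observations, K columns incl. constant
beta = np.array([1.0, 2.0, -0.5])
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ beta + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                          # OLS estimate
U_hat = y - X @ b                              # residuals

k = 1  # the first non-constant regressor
# ((X'X)^{-1})_{kk} equals 1 / sum of squared residuals of x_k on the other columns
others = np.delete(X, k, axis=1)
x_tilde = X[:, k] - others @ np.linalg.lstsq(others, X[:, k], rcond=None)[0]
print(XtX_inv[k, k], 1 / (x_tilde @ x_tilde))

# The two expressions for (b_k - beta_k) / se(b_k) agree
s2 = (U_hat @ U_hat) / (n - K)
se_k = np.sqrt(s2 * XtX_inv[k, k])
print((b[k] - beta[k]) / se_k,
      (b[k] - beta[k]) * np.sqrt(n - K) / np.sqrt((U_hat @ U_hat) * XtX_inv[k, k]))
```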

The second diagram gives the asymptotic distribution of the IV and 2SLS estimators.³

[Diagram: IV and 2SLS]
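The diagram itself isn’t reproduced here, but the estimator it covers is standard. A minimal sketch of 2SLS under homoskedasticity (the simulated data and names are my own assumptions, not taken from the diagram): project $X$ on the column space of the instruments $Z$, then run OLS of $y$ on the projection.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # x is endogenous: correlated with u
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# First stage: fitted values X_hat = P_Z X; second stage: OLS of y on X_hat
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
b_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]

# Homoskedastic asymptotic variance estimate: s^2 (X' P_Z X)^{-1},
# with residuals computed from the original X, not X_hat
resid = y - X @ b_2sls
s2 = resid @ resid / n
se = np.sqrt(np.diag(s2 * np.linalg.inv(X_hat.T @ X_hat)))
print(b_2sls, se)    # the slope should be close to 2, unlike naive OLS of y on x
```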

I made this in PowerPoint, not knowing how to do it better. Here is the file.

  1. But of course what really matters is the causal interpretation.

As Pearl (2009) writes, “behind every causal claim there must lie some causal assumption that is not discernible from the joint distribution and, hence, not testable in observational studies”. If we wish to interpret $\beta$ (and hence $b$) causally, we must interpret MLR4 causally; it becomes a (strong) causal assumption.

    As far as I can tell, when econometricians give a causal interpretation it is typically done thus (they are rarely explicit about it):

    • MLR1 holds in every possible world (alternatively: it specifies not just actual, but all potential outcomes), hence $U$ is unobservable even in principle.
    • yet we make assumption MLR4 about $U$.

    This talk of the distribution of a fundamentally unobservable “variable” is a confusing device. Pearl’s method is more explicit: replace MLR1 and MLR4 with the causal graph below, where $:=$ is used to make it extra clear that the causation only runs one way. MLR1 corresponds to the expression for $Y$ (and, redundantly, the two arrows towards $Y$); MLR4 corresponds to the absence of arrows connecting $X$ and $U$. We thus avoid “hiding causal assumptions under the guise of latent variables” (Pearl). (Because of the confusing device, econometricians, to put it kindly, don’t always sharply distinguish the mathematics of the diagram from its causal interpretation.)

  2. Think about it! This seems intuitive when you don’t think about it, mysterious when you think about it a little, and presumably becomes obvious again if you really understand matrix algebra. I haven’t reached the third stage. 

  3. For IV, it’s even clearer that the only reason to care is the causal interpretation. But I follow good econometrics practice and make only mathematical claims. 

January 10, 2018

The expected value of the long-term future, and existential risk

I wrote an article describing a simple model of the long-term future. Here it is:

Summary:

A number of ambitious arguments have recently been proposed about the moral importance of the long-term future of humanity, on the scale of millions and billions of years. Several authors have advanced arguments for a cluster of related views, variously claiming that shaping the trajectory along which our descendants develop over the very long run (Beckstead, 2013), or reducing extinction risk, or minimising existential risk (Bostrom, 2002), or reducing risks of severe suffering in the long-term future (Althaus and Gloor, 2016) is of huge or overwhelming importance. In this paper, I develop a simple model of the value of the long-term future, from a totalist, consequentialist, and welfarist (but not necessarily utilitarian) point of view. I show how the various claims can be expressed within the model, clarifying under which conditions the long-term future becomes overwhelmingly important, and drawing tentative policy implications.

December 28, 2017