**Coefficient Table Use #2: Determining the Effects of the X-Variables**

In a regular least-squares regression, the effect of each X-variable is generally assessed by looking at the sign and magnitude of the corresponding regression coefficient. The table below shows just the regression coefficient estimates pulled from the coefficient table of the Kid Creative least-squares regression example (to see the entire regression output, click here):

Recall that in this example, Household Income was the Y-variable that was regressed on the X-variables listed in the table. (To review the set-up for this example, see Part 1.)

Suppose we are interested in the relationship between gender and household income. If we look in the table above at the regression coefficient for IsFemale we see it is –3997.272. This would generally be interpreted as indicating that being female is associated with having about $4,000.00 less in household income (other things being constant). Note that I have carefully used the words “is associated” (rather than a word like “causes”) and included the parenthetical “other things being constant” in this interpretation as I will discuss further below.

As another example and in a similar fashion, the coefficient of IsProfessional (11432.073) would generally be interpreted as indicating that being a professional is associated with an increased household income of about $11,400.00 (again, other things being constant).

In this way, the least-squares regression coefficients can be used to try to assess the impact of each of the X-variables on the Y-variable. What the regression coefficient shows is the estimate of the expected change in Y for a one-unit change in the corresponding X-variable.
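As a concrete sketch of this idea, the snippet below generates synthetic data from invented coefficients (roughly matching the magnitudes quoted above, but not the actual Kid Creative data) and then recovers those coefficients by least squares:

```python
import numpy as np

# Synthetic illustration (invented numbers, not the Kid Creative data):
# generate incomes from known coefficients, then recover them by least squares.
rng = np.random.default_rng(0)
n = 500
is_female = rng.integers(0, 2, n)        # dummy: 1 = female, 0 = not
is_professional = rng.integers(0, 2, n)  # dummy: 1 = professional, 0 = not

# "True" model: base income 40000, female effect -4000, professional effect +11400
income = (40000.0 - 4000.0 * is_female + 11400.0 * is_professional
          + rng.normal(0.0, 2000.0, n))

# Design matrix with a leading column of ones for the intercept
X = np.column_stack([np.ones(n), is_female, is_professional])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)

print(beta)  # approximately [40000, -4000, 11400]
```

Each entry of `beta` estimates the expected change in income for a one-unit change in the corresponding column of `X`, with the other columns held fixed.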

In the two particular examples above, the variables IsFemale and IsProfessional are indicator or “dummy” variables that indicate the presence or absence of a condition (e.g., “1” means female, “0” means not female). A change of one unit in this kind of X-variable therefore means a change of category (e.g., from male to female). Thus, the regression coefficient of a dummy variable shows the expected difference in the Y-variable between the corresponding categories.
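For a single dummy X-variable, this “expected difference between categories” is not just a loose interpretation: when the dummy is the only X-variable in the model, the least-squares coefficient equals the difference between the two group means exactly. A small sketch with made-up numbers:

```python
import numpy as np

# With one dummy X-variable plus an intercept, the least-squares coefficient
# is exactly the difference between the two group means (synthetic numbers).
rng = np.random.default_rng(2)
female = np.repeat([0, 1], 100)
income = np.where(female == 1, 36000.0, 40000.0) + rng.normal(0.0, 1500.0, 200)

X = np.column_stack([np.ones(200), female])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)

gap = income[female == 1].mean() - income[female == 0].mean()
print(beta[1], gap)  # identical up to floating-point error
```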

There are a number of important issues to note here. First, because linear regression assumes a linear relationship between the dependent Y-variable and the independent X-variables, the “effect” of changing a variable is the same no matter what the values of the other variables are. For example, according to the linear regression model, being female is associated with about $4,000 less household income no matter what the values of the other variables are. That is, being female costs you $4,000.00 whether you are a professional or not, whether you are retired or not, and so on.

It is a property of linear equations that the effect of changing an X-variable is the same no matter what the values of the other variables are. Loosely speaking, this means that the interpretation of the regression coefficient is the same “everywhere” (no matter what the other X-variables are). This is one of the reasons that linear functions are so appealing and why we would like to use linear functions to fit our data whenever possible. Linear functions are relatively easy to understand and interpret. In contrast, non-linear functions are often very difficult to understand and interpret because the effects of changing the X-variables depend on where on the surface of the function you are located.
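This “same effect everywhere” property can be seen directly from the fitted equation. Using hypothetical rounded coefficients of roughly the magnitudes discussed above:

```python
# In a linear model, flipping one dummy changes the prediction by the same
# amount no matter what the other X-variables are. Hypothetical coefficients:
b0, b_female, b_professional = 40000.0, -4000.0, 11400.0

def predict(is_female, is_professional):
    return b0 + b_female * is_female + b_professional * is_professional

# The "female effect" is -4000 whether the person is a professional or not:
effect_nonprof = predict(1, 0) - predict(0, 0)
effect_prof = predict(1, 1) - predict(0, 1)
print(effect_nonprof, effect_prof)  # -4000.0 -4000.0
```

A non-linear model (say, one with an interaction or a curved term) would not have this property: the two differences above would generally come out unequal.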

It is also important to understand that the regression coefficient shows the effect of changing the variable *conditional* on the values of the other X-variables (which means they stay fixed). While this statement sounds simple enough, there is more to this idea than might be immediately obvious. It may well be impossible in the real world to actually change the value of just one variable.

It is frequently the case that the X-variables in a regression are related to each other (perhaps causally). Thus, it might not make a lot of sense to think that you can change one variable without changing the others. For example, suppose I am looking at the household income for a high-school educated, non-professional person. It makes little sense to think about the effects on household income of an identical person who is a professional. Being a professional almost certainly requires more than a high-school education, so it makes little or no sense to think about changing their professional status without also changing their level of education.

The fact that the regression coefficients show the effects of each variable *conditional* on the other variables can make their interpretation non-intuitive. This is because what a regression coefficient for a particular X-variable really shows is the effect above and beyond the effects of the other variables. To say it another way, the regression coefficient shows the effect of the variable once the effects of all of the other variables have been removed. So what exactly does this mean? The Kid Creative regression output provides an example which may help to clarify things.

In the regression output, notice that the regression coefficient for “English” is *negative* (–$4,273.76). This seems counter-intuitive, since one would expect speaking English to be associated with more household income, not less, so I investigated a little.

First, I looked at what I considered to be a related variable, namely race (“White” or not). As expected, the coefficient for “White” is positive. As I thought about it, I began to suspect that virtually all of the “White” survey respondents would also be English speakers. If this is true, then there would be virtually no variation in the value of the English variable for white people (the idea being that if all white people speak English, then whenever White is 1, English is also 1).

Now if all of the variation in the English variable is occurring for non-white people, then the English variable may be capturing income differences between non-white racial groups. For example, it may be capturing the difference between income for African Americans (English speakers) and Hispanics (non-English speakers) or possibly Asians (also non-English speakers). Since it is at least plausible that African Americans might make less than Hispanics or Asians, it becomes plausible that the variable English might be associated with a decrease in household income.

All of this is speculative, as I have not done the detailed analysis necessary to see exactly what is going on. I did, however, as a sanity check, calculate the average income for English speakers (English = 1) and non-English speakers (English = 0). The average income for non-English speakers is $29,819.67. The average income for English speakers is $35,602.94. Thus, English speakers earn on average $5,783.27 more than non-English speakers. This difference is positive as expected. Comparing this $5,783.27 to the negative regression coefficient of –$4,273.76 illustrates why the conditional nature of regression coefficients can make their interpretation difficult.
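This kind of sign flip is easy to reproduce with synthetic data. The group sizes and means below are invented purely to mimic the story: nearly all White respondents speak English, so the English coefficient is identified mostly by variation among non-White respondents, and it can come out negative even though English speakers earn more on average overall:

```python
import numpy as np

# Invented groups mimicking the White/English story (not the real survey data):
# White English speakers, non-White English speakers, non-White non-English.
rng = np.random.default_rng(1)
n_we, n_ne, n_nn = 300, 60, 60
white   = np.concatenate([np.ones(n_we), np.zeros(n_ne), np.zeros(n_nn)])
english = np.concatenate([np.ones(n_we), np.ones(n_ne), np.zeros(n_nn)])
income  = np.concatenate([rng.normal(37000.0, 1000.0, n_we),   # White, English
                          rng.normal(28000.0, 1000.0, n_ne),   # non-White, English
                          rng.normal(30000.0, 1000.0, n_nn)])  # non-White, non-English

# Unconditional comparison: English speakers earn MORE on average ...
raw_gap = income[english == 1].mean() - income[english == 0].mean()

# ... yet conditional on White, the English coefficient is NEGATIVE.
X = np.column_stack([np.ones_like(income), white, english])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)

print(raw_gap > 0, beta[2] < 0)  # True True
```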

Finally, it is very important to keep in mind that most statistical analysis of non-experimental data shows association, not causation. Association is very different from causation for a lot of reasons. This is a huge topic that I cannot expand on fully here, but I will give one quick example.

The $4,000.00 “effect” of gender on household income could be due to many things. It might be due to discrimination or it might be due to any of a virtually unlimited number of variables that are not in the regression model. For example, females may work fewer hours than males due to family demands or for a host of other reasons such as a desire to have a more balanced life. Gender does not “cause” this difference. Gender may be associated with lower income in this data set, but this is very different from having identified a cause.

In quite a few instances in my discussion above, I have talked about the regression coefficient showing the “effect” of changing the X-variable. This is, in fact, how people tend to talk about the regression coefficients, but it is usually *wrong* because the regression shows association, not causation. Frankly, it is hard to talk about the interpretation of regression coefficients without using causal language. Whether causal language is used or not, it is very important not to forget that regressions generally show association, not causation. Interpreting them as showing causation is usually very, very risky.

As a final note, there is a relationship between the topic discussed here (the effects of the X-variables) and the topic discussed in Part 2 of this series (determining which X-variables matter). If there is no statistical evidence that a regression coefficient matters, then it really does not make a great deal of sense to interpret its impact as described in this article even though the corresponding estimated regression coefficient is not 0. In fact, in this article I have ignored the issue of our uncertainty about the true regression coefficient values. I will return to this in Part 5.

In summary, one of the common and important uses of regression coefficients is to try to assess the impact of the variables. While this is commonly done, often with a causal relationship implied, causal interpretation should be done with great caution.

In the next part of the background series, I will discuss another very important use of the regression coefficient table, namely making predictions. Click here to proceed to Part 4.

**Questions or Comments?**

Any questions or comments? Please feel free to comment below. I always want to improve this material.

I have had to implement a very simple “captcha” field because of spam comments, so be a bit careful about that. Enter your answer as a number (not a word). Also, save a copy of your comment before you submit it in case you make a mistake. You can use Ctrl-A and then Ctrl-C to put a copy on the clipboard. Particularly if your comment is long, I would hate for you to lose it.