I think the breezy way in which Rich Goldstein and Nick Cox treat the reparametrisation does not really explain what is going on here, and it seems to me that it falsely suggests the matter is trivial. I do not think what is going on is trivial, and if I did not have my own account of it, I would have ended up even more confused after reading Rich's and Nick's explanations.
Many people in statistics and econometrics treat reparametrisations as if they are something trivial, e.g., Nick in #15 wrote that "A quadratic is a quadratic; you are just parameterising it differently.-- with a side-effect on the intercept, which lacks inherent interest any way, so far as I can imagine." Rich in #12 wrote "yes, centering changes the constant - without the centering, the constant is meaningless (assume year was your only predictor; without centering the constant is the mean when year = 0 - does anyone really care?)"
I think these statements are misleading on two accounts:
1. Reparametrisations have serious consequences: some or all of your parameters change their meaning.
2. No, the effect is not only on the intercept. If you are interested in the partial derivative with respect to year, as you presumably should be, the reparametrisation changes the meaning of both the slope on year and the slope on year^2.
So here is my version of what is happening. As soon as we leave the realm of linear models, even in a way as trivial as including a quadratic term, the marginal effects are no longer constant.
E.g., if we are estimating E(y|year) = b*year + c*year^2, then d[E(y|year)]/d[year] = b + 2*c*year, so this marginal effect clearly depends on the level of year at which we measure it.
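To make this concrete, here is a small numeric sketch (in Python rather than Stata, and with made-up coefficients b = 3 and c = 0.5, chosen purely for illustration) of how the marginal effect b + 2*c*year moves with year:

```python
# Marginal effect of year in E(y|year) = b*year + c*year^2 is b + 2*c*year.
# Illustrative (made-up) coefficients:
b, c = 3.0, 0.5

def marginal_effect(year):
    """d[E(y|year)]/d[year] evaluated at a given year."""
    return b + 2 * c * year

# The marginal effect is not constant; it depends on where we evaluate it:
print(marginal_effect(0))     # 3.0
print(marginal_effect(1935))  # 1938.0
print(marginal_effect(1968))  # 1971.0
```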
When Rich subtracted the minimum value of year in the sample, he reparametrised the model so that the new parameters are easier to interpret. In particular, if you evaluate the derivative d[E(y|year)]/d[year] = b + 2*c*year at the minimum value of year, 1935, then in Rich's reparametrisation (where year enters as year - 1935) the derivative evaluates to b + 2*c*0 = b. Hence in Rich's reparametrisation the estimated slope on year, taken on its own and disregarding the estimated slope on year^2, is the marginal effect d[E(y|year)]/d[year] evaluated at year 1935.
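The same point can be checked numerically. Below is a sketch (again in Python, not Rich's Stata code) that generates noise-free data from a quadratic in year with made-up coefficients, fits the reparametrised model in t = year - 1935, and verifies that the estimated slope on t equals the marginal effect b + 2*c*1935 at the sample minimum:

```python
import numpy as np

# Noise-free data from a quadratic in year, so the check is exact.
# Made-up true coefficients: intercept 7, b = 3, c = 0.5.
years = np.arange(1935, 1969, dtype=float)
y = 7.0 + 3.0 * years + 0.5 * years**2

# Reparametrised model: regress y on t and t^2, where t = year - 1935.
t = years - 1935.0
X = np.column_stack([np.ones_like(t), t, t**2])
a_hat, b_hat, c_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Marginal effect of year at year = 1935, from the true coefficients:
me_1935 = 3.0 + 2 * 0.5 * 1935.0

# In the reparametrisation, the slope on t alone is that marginal effect,
# while the slope on t^2 is unchanged at c = 0.5:
print(round(b_hat, 3), me_1935)  # 1938.0 1938.0
```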
And as to why you cannot estimate the original model without reparametrising it, you can see from the following picture:
Code:
twoway function xsq = x^2, range(-1968 1968) xline(1935 1968)
What we see from this picture is that the relationship between year and year^2 over the range above 1935 is essentially non-stochastic and linear: the two variables are nearly perfectly collinear. Notice that this is not so in the range close to 0, where the relationship is visibly curved.
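To put a number on "basically perfectly collinear", here is a quick illustration (in Python) of the correlation between year and year^2, first over the sample range 1935-1968 and then over a range that straddles 0:

```python
import numpy as np

# Over the sample range, year^2 is almost an exact linear function of year,
# so the two regressors are nearly perfectly collinear:
years = np.arange(1935, 1969, dtype=float)
r_sample = np.corrcoef(years, years**2)[0, 1]
print(r_sample)  # extremely close to 1

# Over a range centred near 0 the relationship is genuinely curved,
# and the correlation collapses:
centred = np.arange(-17, 17, dtype=float)
r_centred = np.corrcoef(centred, centred**2)[0, 1]
print(r_centred)
```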