  • Command to test for time fixed effect in RE regression

    Dear all,

    For an FE regression, whether time fixed effects should be included can be tested with -testparm i.time-.

    Is there a counterpart command for testing the necessity of time fixed effects in an RE regression?

    Thank you very much!

  • #2
    Well, I personally don't like the use of this kind of testing procedure to decide what to include in a model, at least not generally. But if you're going to do that sort of thing, the command would be exactly the same in RE regression.
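
    As a sketch, assuming the panel has already been -xtset- and using hypothetical variables y and x, the same -testparm- call works after either estimator:

      xtset id time
      xtreg y x i.time, fe
      testparm i.time        // joint test of the time dummies after the FE model
      xtreg y x i.time, re
      testparm i.time        // the same command works after the RE model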

    • #3
      Dear Clyde,

      Thank you very much!

      • #4
        In my opinion there are some things you should always do in panel data analysis: fixed effects instead of random effects, year dummies in all cases, and robust standard errors clustered by group. Using random effects, leaving out the year dummies, or not clustering your errors is going to make your coefficients more likely to look significant, and I think it will always raise eyebrows even if you justify it with a test.
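
        A minimal sketch of that default specification, with hypothetical variables y, x, firm, and year:

          xtset firm year
          xtreg y x i.year, fe vce(cluster firm)   // firm fixed effects, year dummies, SEs clustered by firm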

        • #5
          I am only in partial agreement with what is said in #4.

          I definitely do not agree that fixed effects are always superior to random effects. I know this is the general preference in economics and finance, where there is also a corresponding strong preference for unbiased estimation. But I think that blindly applying this preference is no better than blindly applying any other rule of thumb. In particular, although random effects estimation may be inconsistent (it is inconsistent when the panel-level effects are correlated with the regressors), you have to consider that a slightly biased but more precise estimate (which may be what you get out of RE) may be more accurate than an unbiased but much less precise estimate (which may be what you get out of FE). I think there is a trade-off to be made between consistency and efficiency, and that neither one should automatically be considered to trump the other.
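
          One way to see that trade-off concretely (a sketch with hypothetical variables, assuming the data are already -xtset-) is to fit both models and put the coefficients and standard errors side by side:

            xtreg y x i.time, fe
            estimates store fe
            xtreg y x i.time, re
            estimates store re
            estimates table fe re, b se   // compare point estimates and standard errors across the two estimators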

          In finance and economics, year dummies (or quarter, or month, etc.) are the norm. There is a real basis for this: these variables are actually subject to appreciable unpredictable shocks over short periods of time, and you need to account for those. But in my discipline, epidemiology, that is usually not the case. Most of our variables are either stable over time, or exhibit consistent directional trends over time, with only negligible short-term variation. Using short-time-period indicator variables is generally not the best way to capture that in your model. So, again, I think you need to think about how your variable actually behaves over long and short time periods and model accordingly. In finance and economics, that doesn't usually require a lot of thought, but occasionally it might, and in other disciplines it almost always does.

          As for clustered standard errors, I agree, provided the number of clusters is sufficiently large. When the number of clusters is small, the cluster-robust vce does not provide a better estimate than the ordinary vce, and is sometimes actually worse. Unfortunately, there is no consensus, nor a good theoretical basis for deciding, how many clusters are sufficient. I think pretty much everyone agrees that with 10 or fewer clusters, cluster() is useless or worse, and that with 50 or more it is probably fine. In between, it becomes a matter of opinion.

          I do, however, endorse the point that doing this or that statistical test to justify a choice of model is, at best, a poor strategy and a last resort.

          • #6
            When you have data on firms within industries, including the industry mean of the dv (excluding the firm of interest) may be a better control for exogenous factors than including a year dummy, because it allows exogenous shocks to affect different industries differently. It does open the possibility that the firm's own dv (e.g., firm A's sales) influences the values for the rest of the firms, but the dummy approach probably suffers from this as well.
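
            As a sketch, a leave-one-out industry-year mean of the dv could be built along these lines (sales, industry, and year are hypothetical variable names):

              bysort industry year: egen double ind_total = total(sales)
              bysort industry year: gen ind_n = _N
              gen loo_ind_mean = (ind_total - sales)/(ind_n - 1)   // mean of the dv over the other firms in the industry-year; missing if the firm is alone in its cell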

            The year dummy approach assumes that whatever happens influences every firm the same way, which often does not make sense.

            There is also a substantive question about fixed versus random effects. Fixed effects uses only within-panel variation. If most of the variation is between panels (e.g., firms, people, or whatever differ from one another but do not change much over time), then RE may be a better choice. Indeed, a Mundlak approach may be even better, but it is not so commonly used. We cannot slavishly commit ourselves to fixed effects in panel data and give up any ability to examine stable inter-panel differences.
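
            A Mundlak-type (correlated random effects) model can be sketched by adding the panel-level means of the time-varying regressors to the RE specification (hypothetical variable names again):

              bysort firm: egen xbar = mean(x)              // panel mean of the time-varying regressor
              xtreg y x xbar i.year, re vce(cluster firm)
              test xbar                                     // a clearly nonzero xbar coefficient casts doubt on the simple RE assumption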

            I would disagree with Clyde on one point: even in finance and economics, it is necessary to think seriously about how variables behave over both the long and the short term. With the possible exception of stock price data, firms' long-run and short-run adjustments often differ.

            • #7
              Thank you all so much. It really helps.
