
  • Clustered standard errors in fixed effects model (T=2)

    Question: When T=2, given the equivalence of the first difference (FD) and fixed effects (FE) estimators, does it make sense to use clustered standard errors in the FE model?

    Say I have panel data of 48 states over two years.

    When I run the FD model, I use vce(robust) because I have 1 observation per state:
    Code:
        reg change_y change_x, vce(robust)  // (1)
    When I run the FE model, I use vce(cluster state):
    Code:
        xtreg y x, fe vce(cluster state)  // (2)
    But doesn't the equivalence of FD and FE imply that the standard errors should (theoretically) be the same?
    In practice, (1) and (2) produce different standard errors.

    If I were to run the FE model in a different way, with vce(robust):
    Code:
        reg y x i.state, vce(robust) // (3)
    then I get the same standard errors as in (1).

    This leaves me wondering whether clustering standard errors makes sense in FE models when T=2.

    Bottom line: which regression should I run, (2) or (3)=(1)?
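    For reference, here is a minimal simulated sketch (the seed, coefficient, and variable names are illustrative, not my actual data) that sets up the three regressions side by side as written above:
    Code:
        * simulate 48 states observed for two years
        clear
        set seed 12345
        set obs 48
        generate state = _n
        generate u = rnormal()          // state fixed effect
        expand 2
        bysort state: generate year = _n
        generate x = rnormal()
        generate y = u + 0.5*x + rnormal()
        xtset state year

        * (2) FE with clustered standard errors
        xtreg y x, fe vce(cluster state)

        * (3) FE via state dummies with robust standard errors
        reg y x i.state, vce(robust)

        * (1) FD with robust standard errors (one FD obs per state)
        bysort state (year): generate change_y = y - y[_n-1]
        bysort state (year): generate change_x = x - x[_n-1]
        reg change_y change_x, vce(robust)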

  • #2
    See https://www.nber.org/system/files/wo...003/w24003.pdf. The equivalence of FD and FE when T=2 applies to the coefficient estimates, not necessarily to the standard errors.

    I would argue you should cluster your standard errors to allow for dependence of observations within a state, following the mantra that it is better to be conservative than too liberal.

    But that's just my opinion; you'll find a lot more information in the paper mentioned above.



    • #3
      Jay:
      the main issue here is that when you call -robust- under -regress-, you correct for heteroskedasticity only.
      Conversely, when you invoke -robust- or -vce(cluster clusterid)- under -xtreg-, you actually deal with heteroskedasticity and/or autocorrelation within panels.
      That said, I'd go:
      Code:
      xtreg y x i.timevar, fe vce(cluster panelid)
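      As an aside, under -xtreg, fe- the -vce(robust)- option is itself implemented as clustering on the panel variable, so the following two calls (same hypothetical variable names as above) should report identical standard errors:
      Code:
      xtreg y x i.timevar, fe vce(robust)
      xtreg y x i.timevar, fe vce(cluster panelid)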
      Kind regards,
      Carlo
      (Stata 19.0)



      • #4
        Thank you, Maxence and Carlo!



        • #5
          Jay Euijung Lee:

          I suspect that your equation (1) includes a typo: it must have been -reg change_y change_x, vce(robust) noconstant- instead. You're right that given T = 2 per panel group, the FD and FE estimators should produce the same point estimates, and also right in expecting that combining -vce(robust)- with the FD estimator should be equivalent to combining -vce(cluster state)- with -xtreg, fe-. The reason they give you different standard errors is that Stata applies a small-sample correction factor when it calculates sandwich variance matrices, and this factor is computed slightly differently across commands.

          The small-sample correction factor for -reg change_y change_x, vce(robust) noconstant- is N / (N - k) = 48 / 47, where N is the number of observations in your FD regression and k is the number of coefficients in your FD output table (here N = 48 and k = 1). If I remember correctly, the small-sample correction factor for -xtreg y x, fe vce(cluster state)- is [G / (G - 1)] * [(N - 1) / (N - k)] = (48 / 47) * (95 / 94), where G is the number of clusters, N is the number of observations in your FE regression, and k is the number of coefficients in your FE output table (here G = 48, N = 96, and k = 2).

          So after running your FD regression, if you type:
          Code:
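          * divide out the FD factor sqrt(48/47), then apply the FE
          * cluster factor sqrt((48/47) * (95/94))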
          display _se[change_x] / sqrt(48/47) * sqrt((48 / 47) * (95 / 94))
          you'll obtain the same standard error as in your FE results.

          Both small-sample correction factors are ad hoc to some extent, and there's no reason to prefer one to the other. In a sufficiently large sample, they should be practically identical. If you happen to obtain quite different standard errors, I'm inclined to think that you should look for an alternative approach (e.g., percentile bootstrapping) with better finite-sample properties.
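          For instance, a minimal sketch of a percentile bootstrap on the FD form (the reps and seed are illustrative; change_y and change_x as in your post, with one FD observation per state so cluster resampling coincides with the pairs bootstrap):
          Code:
          bootstrap _b, reps(999) seed(2022) cluster(state): ///
              reg change_y change_x, noconstant
          estat bootstrap, percentile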
          Last edited by Hong Il Yoo; 23 Oct 2022, 17:33.

