  • Does using Driscoll-Kraay standard errors correct heteroskedasticity and autocorrelation problems?

    Hello guys,

    I am running a fixed-effects regression model. I have a heteroskedasticity problem and was wondering whether I can correct it by using Driscoll-Kraay standard errors.
    First I tried plain robust standard errors with the command " xtreg y x, fe vce(robust) ", and the model lost significance, meaning my p-values were > 0.1.
    When using Driscoll-Kraay standard errors the model didn't lose any significance; my p-values were 0.00. Am I solving my problem by using these Driscoll-Kraay standard errors?
    I am very new to Stata and panel data analysis and would appreciate the help!
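
    A minimal sketch of the two specifications being compared (id and year are assumed panel and time identifiers -- adjust the names to your data; xtscc is Daniel Hoechle's user-written command, installable with " ssc install xtscc "):

    Code:
    * declare the panel structure (id and year are placeholder names)
    xtset id year

    * fixed effects with heteroskedasticity-robust standard errors
    xtreg y x, fe vce(robust)

    * fixed effects with Driscoll-Kraay standard errors
    xtscc y x, fe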

  • #2
    The appropriate method depends on what panel you have.

    Driscoll-Kraay standard errors take care of heteroskedasticity and of both cross-sectional and time-series autocorrelation, but they require a large T.
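
    As a sketch, the usual diagnostics for those two problems (xttest3 and xtserial are user-written commands from SSC; y and x are placeholder variable names):

    Code:
    * modified Wald test for groupwise heteroskedasticity after FE
    * (ssc install xttest3)
    xtreg y x, fe
    xttest3

    * Wooldridge test for first-order serial correlation in panel data
    * (ssc install xtserial)
    xtserial y x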

    Comment


    • #3
      Thank you Joro! I currently have T = 16 years. If I understood correctly, it requires a minimum of T = 20 years for Driscoll-Kraay standard errors to take care of the problem. Am I right?

      Comment


      • #4
        No, there is no hard threshold like 20 or any other number.

        16 periods is a bit short, but it is also not crazy to apply the estimator with 16 time periods. You should be fine.

        Originally posted by Nicole Peredo View Post
        Thank you Joro! I currently have T = 16 years. If I understood correctly, it requires a minimum of T = 20 years for Driscoll-Kraay standard errors to take care of the problem. Am I right?

        Comment


        • #5
          Nicole: How large is your N? T = 16 is pushing it, as the D-K standard errors work entirely off the time series. It is like applying Newey-West HAC to a time series with T = 16. It's done, but the statistical properties cannot be very good. It's difficult to know whether the reduction in standard errors is simply due to the downward bias caused by assuming the serial correlation drops off after one or two periods -- which, I assume, is what you have specified in the HAC lag length.

          If N is large -- or at least quite a bit larger than T -- you can try using GLS applied to first differences to improve efficiency. But you still have to compute robust standard errors. Still, you may get enough of an efficiency gain.

          Comment


          • #6
            Thanks for the answers! Jeff, I am currently working with T = 19 and N = 64! So I suppose I should try using GLS then. Also, I used lag(4) with the D-K standard errors -- is that wrong?

            Comment


            • #7
              I would use something like this, where d2, ..., d19 are the time period dummies.

              Code:
              xtset id year
              xtgee D.(y x1 ... xK d2 ... d19), corr(uns) vce(robust)
              Whether this improves on usual FE remains to be seen -- as we say, an "empirical question."

              Four lags seems a bit high with T = 19. What happens if you let the routine choose the lags?
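
              If I recall correctly, when the lag() option is omitted, xtscc picks the lag length with the Newey-West plug-in rule m(T) = floor[4(T/100)^(2/9)], which gives m = 2 at T = 19. A sketch (y and x are placeholder variable names):

              Code:
              * Driscoll-Kraay standard errors with the automatic lag choice
              xtscc y x, fe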

              Comment


              • #8
                Thanks Jeff! I changed the lag to 1. How could I let the routine choose the lags?
                Last edited by Nicole Peredo; 21 Jan 2021, 07:45.

                Comment
