  • Robust standard errors in small samples for fixed and random effects xtreg

    Dear Statalist,

    I tried to find an answer to my problem on your site, but so far I have not found anything that quite fits. Also, I am new to Statalist and Stata, so please be patient with me.

    My data (10 years, 12 countries) is heteroscedastic but not serially / cross-sectionally correlated. Thus, when running my random and fixed effects models I want to correct for heteroscedasticity only. I then plan to compare the fixed effects model with a random effects model using a Hausman test (with xtoverid), where both models have previously been corrected for heteroscedasticity.

    I read in other Statalist entries that correcting for heteroscedasticity is generally possible when using xtreg with the robust option. However, when used with xtreg the robust option performs clustering and therefore corrects for more than heteroscedasticity alone. I am doubtful that clustering is appropriate for a panel as small as mine, where Stata forms only 12 clusters (one for each country). Would this approach nevertheless be valid in my case?

    As an alternative I considered the areg command with the robust option. There, Stata does not seem to cluster but only corrects for heteroscedasticity. However, areg has no random effects counterpart, so I cannot compare the two models in the first place.
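
    For concreteness, the two approaches I am comparing look roughly like this (y, x1, x2, and country stand in for my actual variables):

        xtset country year
        xtreg y x1 x2, fe vce(robust)               // for xtreg, vce(robust) is clustered on country
        areg y x1 x2, absorb(country) vce(robust)   // heteroskedasticity-robust only, no clustering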

    Thank you very much for your ideas and input!

    All the best,
    Leon


  • #2
    Leon:
    welcome to this forum.
    It's true that with a small number of clusters, clustered standard errors (SEs) might give problematic results.
    However, if you have detected heteroskedasticity, the only option under -xtreg- is robust/cluster SEs (both options do the same job), and they deal with both heteroskedasticity and autocorrelation.
    In sum, I would go -xtreg, re- with cluster SEs and check if -re- assumptions are valid via -xtoverid-.
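    A minimal sketch of that workflow (y, x1, x2 and country are placeholder names; -xtoverid- is a community-contributed command):
        * ssc install xtoverid              // install once from SSC if needed
        xtset country year
        xtreg y x1 x2, re vce(cluster country)
        xtoverid                            // cluster-robust test of the -re- assumptions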
    Kind regards,
    Carlo
    (Stata 19.0)


    • #3
      Dear Carlo,

      Thank you very much for your fast reply! It's good to know that I can't do much besides using clustering / robust standard errors.

      I was a bit confused because when I ran xtreg without the robust / cluster option and then ran xtoverid, it suggested using random effects. However, when doing the same but including the robust option, it suggested fixed effects.

      So do I understand your last sentence correctly that I should run xtreg with robust standard errors and then xtoverid, which in that case suggests FE?

      Thank you again very much for your help!

      All the best,
      Leon


      • #4
        Leon:
        yes, you're correct: go -xtreg, fe-.
        If you impose default standard errors, the -hausman- and -xtoverid- outcomes should go in the same direction (that is, both tests should recommend -xtreg, re-): however, the main issue rests on the fact that, if the default standard errors are inflated due to heteroskedasticity and/or autocorrelation, the -hausman- indication might be wrong.
        That said, due to the limited number of your clusters, you may want to compare the SEs with and without the cluster/robust option, to get a sense of what is going on over and beyond the test outcomes.
        Finally, you may also want to see what happens when you impose -bootstrap- standard errors.
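        A rough illustration of that comparison (again with placeholder variable names; reps(200) is just an example value):
            xtreg y x1 x2, fe
            estimates store fe_default
            xtreg y x1 x2, fe vce(cluster country)
            estimates store fe_cluster
            xtreg y x1 x2, fe vce(bootstrap, reps(200))
            estimates store fe_boot
            estimates table fe_default fe_cluster fe_boot, b se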
        Kind regards,
        Carlo
        (Stata 19.0)


        • #5
          Thank you so much Carlo! Your answers gave me good insights about the calculations and how I should proceed further.


          • #6
            Dear Carlo,

            I re-read your last answer and it prompted one more question: you mention that the problem with the Hausman test arises if the standard errors in the default model are inflated due to heteroskedasticity / autocorrelation. However, in my case they are deflated (that is, the standard errors are smaller in the basic model than in the one where I use the robust option).

            I guess the Hausman test can also lead to potentially wrong indications in this situation if I do not account for heteroskedasticity?

            Thank you so much for your help!

            All the best,
            Leon


            • #7
              Leon:
              yes, I share your point.
              I would also consider that -hausman- works asymptotically (see -hausman- entry, Example 3 in Stata .pdf manual).
              Kind regards,
              Carlo
              (Stata 19.0)


              • #8
                Thank you very much for your fast help. I really appreciate the way you help me on this site!


                • #9
                  Hello to anyone who sees my comment.

                  So, what is the solution if a random effects model is recommended by both the Hausman test (random vs. fixed) and the Breusch-Pagan test (OLS vs. random effects), but there is serial correlation and possibly heteroskedasticity? When I add the robust option, vce(cluster country), vce(bootstrap), or vce(jackknife), I get inflated standard errors that ruin the significance of my explanatory variables. I noticed that the random effects model is estimated by generalized least squares (GLS). Can I assume that GLS has already corrected the heteroskedastic and serially correlated standard errors of my random effects model? I would appreciate help from anyone who sees my question.


                  • #10
                    Nariman:
                    default standard errors are to blame, because they give you a false impression of statistical significance.
                    Cluster-robust standard errors do their job and tell you the truth (which, sometimes, is a bitter pill to swallow).
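                    For instance (placeholder names), a side-by-side table makes the difference explicit:
                        quietly xtreg y x1 x2, re
                        estimates store re_gls          // conventional GLS standard errors
                        quietly xtreg y x1 x2, re vce(cluster country)
                        estimates store re_cluster      // cluster-robust standard errors
                        estimates table re_gls re_cluster, b se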
                    Kind regards,
                    Carlo
                    (Stata 19.0)
