
  • Robust Hausman Test

    Hi,

    I was wondering how I should run the robust Hausman test. Should I use the command 'xtoverid' or the command 'rhausman'?

  • #2
    Priya:
    welcome to this forum.
    If you need to run a robust Hausman test, you should consider the community-contributed command -rhausman-.
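    For reference, a minimal sketch of installing and running both commands; the variable names (y, x1, x2) are placeholders, and the -rhausman- options shown should be checked against -help rhausman-:

    Code:
    * one-time installation from the SSC archive
    ssc install rhausman
    ssc install xtoverid

    * -rhausman- compares stored FE and RE estimates via a (cluster) bootstrap
    xtreg y x1 x2, fe
    estimates store fe_est
    xtreg y x1 x2, re
    estimates store re_est
    rhausman fe_est re_est, reps(200) cluster

    -xtoverid-, by contrast, is issued right after the random-effects estimation:

    Code:
    xtreg y x1 x2, re vce(robust)
    xtoverid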
    Kind regards,
    Carlo
    (Stata 18.0 SE)



    • #3
      Hello Mr. Carlo, thank you for your response.
      I am not sure what you mean by 'community-contributed command'.
      I have attached the command I ran; would that be correct? Additionally, why can't I use the 'xtoverid' command?
      [Attached: screenshot of the -rhausman- command and its output]



      • #4
        Priya:
        community-contributed commands are created by Statalisters (-rhausman- was created by Boris Kaiser) for their research purposes and are then made available for download by other interested listers.
        Put differently, they are not Stata built-in commands.
        That said, the results of -rhausman- confirm that -re- is the way to go with your data.
        On second thought, you could also have used -xtoverid- (although it works as a test of overidentifying restrictions, whereas -rhausman- calls -bootstrap-).
        As an aside, please do not post screenshots; as per the FAQ, use CODE delimiters to share what you typed and what Stata gave you back. Thanks.
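        In case it helps, this is the usual way to locate and install a community-contributed command (using -rhausman- as the example):

        Code:
        * search the web-aware index for the command and its documentation
        search rhausman
        * install (or update) it from the SSC archive
        ssc install rhausman, replace
        * read its help file once installed
        help rhausman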
        Last edited by Carlo Lazzaro; 04 Sep 2019, 05:17.
        Kind regards,
        Carlo
        (Stata 18.0 SE)



        • #5
          Thank you for your response, Mr. Carlo. This has solved a major problem of mine.

          I have another question, though: my RE model shows a very low R-squared, a low Prob > chi2, and a significant TotalRevenue variable. Is this a problem, given that my R-squared is extremely low?

          Code:
          xtreg Roa TotalRevenue third_liquidity Age yr5-yr11, re vce(robust)
          Code:
          Random-effects GLS regression                   Number of obs     =        944
          Group variable: ID_no                           Number of groups  =        118
          
          R-sq:                                           Obs per group:
               within  = 0.0077                                         min =          8
               between = 0.0291                                         avg =        8.0
               overall = 0.0110                                         max =          8
          
                                                          Wald chi2(10)     =      33.48
          corr(u_i, X)   = 0 (assumed)                    Prob > chi2       =     0.0002
          
                                             (Std. Err. adjusted for 118 clusters in ID_no)
          ---------------------------------------------------------------------------------
                          |               Robust
                      Roa |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
          ----------------+----------------------------------------------------------------
             TotalRevenue |   3.58e-06   7.85e-07     4.56   0.000     2.04e-06    5.12e-06
          third_liquidity |   .0000751   .0000631     1.19   0.234    -.0000485    .0001987
                      Age |  -.0009474   .0016912    -0.56   0.575    -.0042621    .0023674
                      yr5 |    -.05791    .070302    -0.82   0.410    -.1956993    .0798793
                      yr6 |   -.042013   .0228653    -1.84   0.066    -.0868281     .002802
                      yr7 |  -.0032415   .0226174    -0.14   0.886    -.0475708    .0410877
                      yr8 |  -.0394132   .0216638    -1.82   0.069    -.0818734     .003047
                      yr9 |  -.0533208   .0287767    -1.85   0.064    -.1097221    .0030805
                     yr10 |   .0027281   .0509206     0.05   0.957    -.0970744    .1025306
                     yr11 |   -.166262   .1485079    -1.12   0.263    -.4573322    .1248082
                    _cons |   .0655505   .0444013     1.48   0.140    -.0214744    .1525754
          ----------------+----------------------------------------------------------------
                  sigma_u |  .11811426
                  sigma_e |   .6149103
                      rho |  .03558327   (fraction of variance due to u_i)
          ---------------------------------------------------------------------------------



          • #6
            Priya:
            I would run -xttest0- to check whether there's evidence of random effects.
            Kind regards,
            Carlo
            (Stata 18.0 SE)



            • #7
              On running the 'xttest0' as you suggested, I notice we can reject the null at the 1% level but not at the 5% level. Hence, I am assuming there is a presence of random effects, as I am using the 1% level of significance as my threshold.

              Code:
              xttest0
              Code:
              Breusch and Pagan Lagrangian multiplier test for random effects
              
                      Roa[ID_no,t] = Xb + u[ID_no] + e[ID_no,t]
              
                      Estimated results:
                                       |       Var     sd = sqrt(Var)
                              ---------+-----------------------------
                                   Roa |   .3911694       .6254354
                                     e |   .3781147       .6149103
                                     u |    .013951       .1181143
              
                      Test:   Var(u) = 0
                                           chibar2(01) =     3.40
                                        Prob > chibar2 =   0.0327



              • #8
                Priya:
                I'm not sure I got you right.
                The p-value you got from -xttest0- tells you that you can reject the null of no random effects; that said, a p-value of 0.0327 is lower than 0.05 but higher than 0.01.
                Rejecting the null at p-value = 0.01 is a stronger rejection than at p-value = 0.05.
                Kind regards,
                Carlo
                (Stata 18.0 SE)



                • #9
                  Sorry about the confusion.
                  According to my p-value of 0.0327, we reject the null of no random effects at the 5% level of significance.

                  Hence, if I take 5% as my level, there are random effects in my model, right?



                  • #10
                    Additionally, I was referring to this thread
                    https://www.researchgate.net/post/Wh...5_less_than_60

                    and they seem to say that the R-squared is not that important when estimating panel-data models, and that individual and overall significance deserve more attention. Hence, could I just ignore the fact that my R-squared is very low?



                    • #11
                      Priya:
                      #9: your conclusion is correct.
                      #10: after -xtreg, re- you should look at the between R-sq.
                      It may well be that it is low and there's nothing you can do; however, before throwing up your hands in front of the evidence, you should check whether your model (just like any regression model) is correctly specified (i.e., all necessary predictors and interactions are included in the right-hand side of your regression equation, and there is no endogeneity).
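                      As a purely illustrative sketch (whether any interaction belongs in your model is a theoretical question, and the one below is hypothetical), you can add a candidate term with factor-variable notation and test it jointly:

                      Code:
                      * hypothetical interaction between two of the existing regressors
                      xtreg Roa c.TotalRevenue##c.Age third_liquidity yr5-yr11, re vce(robust)
                      * joint test of the added interaction term
                      testparm c.TotalRevenue#c.Age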
                      Kind regards,
                      Carlo
                      (Stata 18.0 SE)



                      • #12
                        Thank you for your help, Mr. Carlo.
                        One last doubt: there is no formal test to check for endogeneity in the model, right?



                        • #13
                          Priya:
                          see: https://www.statalist.org/forums/for...est-panel-data
                          Kind regards,
                          Carlo
                          (Stata 18.0 SE)



                          • #14
                            I have gone over the thread you suggested, and it seems I should include instrumental variables to avoid the problem of endogeneity.
                            My model probably suffers from endogeneity on theoretical grounds, as most models do. But I have checked other aspects of the model specification:

                            1. I found that the between variance is higher than the within variance for all the explanatory variables.
                            2. The robust Hausman test results suggest the RE model.
                            3. The assumption of homogeneity across units made by the FE model is violated in my data.

                            Points like these give me both a non-statistical and a statistical justification for using the RE model.

                            So, would it be correct to point out at the end of my paper that using instrumental variables to deal with endogeneity is an area for improvement in my model?



                            • #15
                              Priya:
                              if you have endogenous regressors in your model, your results are biased.
                              Setting aside some self-evident instances, detecting endogeneity is a matter of (deep) knowledge of the data-generating process.
                              I would recommend discussing this issue with your teacher/supervisor/professor before your work comes to an end, to avoid bad surprises.
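                              If you do go the instrumental-variable route, a hedged sketch of what it could look like is below; the instrument z is purely hypothetical, and finding a credible instrument is the hard part:

                              Code:
                              * treat TotalRevenue as endogenous, instrumented by an
                              * (assumed) external instrument z, via G2SLS random effects
                              xtivreg Roa third_liquidity Age yr5-yr11 (TotalRevenue = z), re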
                              Kind regards,
                              Carlo
                              (Stata 18.0 SE)
