
  • #16
    you may want to take a look at
    Kind regards,
    (Stata 16.0 SE)


    • #17
Mansi: Including squares and interactions of key explanatory variables is not a bad place to start. Alternatively, you can obtain a RESET-type test:

      * Estimate the FE Poisson model and obtain the linear index
      xtpoisson y x1 x2 ... xK, fe vce(robust)
      predict xbhat, xb
      * Powers of the fitted index
      gen xbhatsq = xbhat^2
      gen xbhatcu = xbhat^3
      * Re-estimate including the powers and test them jointly
      xtpoisson y x1 x2 ... xK xbhatsq xbhatcu, fe vce(robust)
      test xbhatsq xbhatcu
      If the model is correctly specified, the two nonlinear terms should be jointly insignificant.


      • #18
        Dear Carlo Lazzaro and Jeff Wooldridge,

        Thank you very much! That was very helpful and I was able to conduct the test easily.



        • #19
          Thanks everyone for a very interesting thread.

          I have a related question: does the bootstrapping option in xtpoisson deal with overdispersion, or is vce(robust) the way to go? I have seen a few papers claiming that bootstrapping is the best option, and I am not sure what to make of them.


          • #20
            The panel bootstrap certainly allows overdispersion -- in fact, any kind of variance-mean relationship -- and it also allows for arbitrary serial correlation. But the vce(robust) option does as well. One would have to argue that the cross section dimension, N, is "small" so that the usual asymptotics works poorly whereas bootstrapping is better. However, for standard errors, there's no theory that implies that. I would just use the vce(robust) option; people might think it's fishy that you're bootstrapping when there's no need.
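            For concreteness, here is a minimal sketch of the two options being compared (the variables y, x1, x2 are placeholders, and the reps and seed values are arbitrary choices, not recommendations):

            ```stata
            * Fully robust (cluster-robust) standard errors -- usually sufficient
            xtpoisson y x1 x2, fe vce(robust)

            * Panel bootstrap: resamples whole panels, so it too allows any
            * variance-mean relationship and arbitrary serial correlation
            xtpoisson y x1 x2, fe vce(bootstrap, reps(500) seed(12345))
            ```

            With xt commands, vce(bootstrap) resamples entire panels rather than individual observations, which is why it preserves the within-panel dependence structure.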


            • #21
              Jeff - I have a similar problem and I've tried to run the RESET-style test you outlined above, but using the poi2hdfe command instead of xtpoisson. In that case the polynomial xbhat terms are dropped. Do you know why that happens?


              • #22
                Patrick: I haven't used poi2hdfe, so I don't know how it computes fitted values. You don't want to include the estimated fixed effects. However, that doesn't explain why those terms would be dropped. In fact, I can't see how that would happen unless your explanatory variables only have variation across i or across t, but never both. But then everything would have dropped out of the initial estimation.


                • #23
                  Dear Patrick:

                  If the fitted values have little variation and are far from zero, their powers will be highly correlated, and Stata may drop some of them because of apparent perfect collinearity. (This happens in linear models as well.)

                  A solution in this case is to subtract the mean from the fitted values before computing the powers. The following example using one of the datasets from Jeff Wooldridge's book illustrates this.

                  qui poisson wage educ exper tenure, r
                  predict fit, xb
                  gen fit2 = fit^2
                  gen fit3 = fit^3
                  qui poisson wage educ exper tenure fit2 fit3, r
                  test fit2 fit3
                  * One of the powers is incorrectly dropped
                  * The problem is solved by centring the fitted values
                  su fit, meanonly
                  gen c_fit2 = (fit - r(mean))^2
                  gen c_fit3 = (fit - r(mean))^3
                  qui poisson wage educ exper tenure c_fit2 c_fit3, r
                  test c_fit2 c_fit3
                  Best wishes,