  • Correcting standard errors for a fixed effects Poisson model

    Hi all,

    I'm looking at the effect of fire occurrence on the number of visits to National Forests and National Parks. I've divided these parks into spatial units, with multiple units per park. This gives me panel data with 8 years and 2,500 IDs, and I intend to use fixed effects. My dependent variable is count data and is very overdispersed. I have yet to run a formal overdispersion test after accounting for the fixed effects, but the unconditional mean is 5.06 and the standard deviation is 34.01, so I'm presuming the data will remain overdispersed even after accounting for the fixed effects (I will of course check this formally soon).
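
    (For reference, here is how I computed that informal check; "visits" is a placeholder name for my dependent variable.)

    Code:
    summarize visits
    * a variance/mean ratio far above 1 suggests overdispersion
    * (here roughly 34.01^2 / 5.06, i.e. about 229)
    display "variance/mean ratio = " %9.2f r(Var)/r(mean)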

    I've read Cameron and Trivedi's book on count data, and the default approach seems to be a Poisson fixed effects model estimated by maximum likelihood with corrected standard errors. I have a few questions about this:

    1) I'm a little unclear about how to correct the standard errors. Cameron and Trivedi indicate, at least for cross-sectional data with a Poisson regression, that there is a robust sandwich standard error correction that can easily be implemented in Stata. However, I want to correct not only for overdispersion but also for i) the fact that an individual unit's observations over time will be correlated, and ii) likely spatial correlation, meaning I should probably adjust for correlation between units in the same park. I don't know how to simultaneously correct the standard errors for all of these. I think the book mentions panel-robust standard errors that deal with the first two, but I don't know how to incorporate the third; I sketch below what I have in mind.

    2) It seems the benefit of a negative binomial model over Poisson is that both give consistent estimators, but negative binomial is more efficient if the data are overdispersed. Consequently, I was thinking of doing a fixed effects negative binomial model. I've read that the procedure implemented in Stata isn't truly doing fixed effects, and some researchers suggest just running a standard negative binomial with individual dummies for each unit. There seems to be no clear consensus on whether this creates an incidental parameters problem, with one paper (Allison and Waterman) indicating it doesn't and is superior to Poisson fixed effects. Does anyone have any take on this?
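
    For question 1, here is the kind of specification I have in mind, clustering at the coarser park level so the standard errors allow both serial correlation within units and correlation between units in the same park. All variable names are placeholders for my data, and I am not certain this is the right way to do it:

    Code:
    xtset unit_id year
    * robust SEs clustered on the spatial unit: handle overdispersion
    * and within-unit serial correlation
    xtpoisson visits fire i.year, fe vce(robust)
    * clustering on the park instead additionally allows correlation
    * between units in the same park (clusters must nest the panels)
    xtpoisson visits fire i.year, fe vce(cluster park_id)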

    Thank you so much!

  • Joao Santos Silva
    replied
    Dear alessio lombini,

    I do not have a reference for it and I do not think you need one; this is something I developed many years ago and have been teaching ever since.

    Best wishes,

    Joao

  • alessio lombini
    replied
    A very elegant workaround, Joao Santos Silva! Do you have any references for this method, or do you think that, since it is a simple demeaning, no reference is needed to justify this approach?
    Last edited by alessio lombini; 21 Jan 2023, 11:14.

  • Joao Santos Silva
    replied
    Dear Patrick:

    If the fitted values have little variation and are far from zero, their powers will be highly correlated and Stata may drop some of them because of apparent perfect collinearity. (This happens also in linear models.)

    A solution in this case is to subtract the mean from the fitted values before computing the powers. The following example using one of the datasets from Jeff Wooldridge's book illustrates this.

    Code:
    use http://fmwww.bc.edu/ec-p/data/wooldridge/wage2.dta
    qui poisson wage educ exper tenure, r
    predict fit, xb
    g fit2=(fit)^2
    g fit3=(fit)^3
    qui poisson wage educ exper tenure fit2 fit3, r
    test fit2 fit3
    * One of the powers is incorrectly dropped
    * The problem is solved by centring the fitted values
    su fit, meanonly
    g c_fit2=(fit-r(mean))^2
    g c_fit3=(fit-r(mean))^3
    qui poisson wage educ exper tenure c_fit2 c_fit3, r
    test c_fit2 c_fit3

    Best wishes,

    Joao

  • Jeff Wooldridge
    replied
    Patrick: I haven't used poi2hdfe and so I don't know how it computes fitted values. You don't want to include the estimated fixed effects. However, that doesn't explain why those terms would be dropped. In fact, I can't see how that would happen unless your explanatory variables only have variation across i or t, but never both. But then everything would've dropped out of the initial estimation.

  • Patrick Behrer
    replied
    Jeff - I have a similar problem, and I've tried to run the RESET-style test you outlined above but using the poi2hdfe command instead of xtpoisson. In that case the polynomial xbhat terms are dropped. Do you know why that happens?

  • Jeff Wooldridge
    replied
    The panel bootstrap certainly allows overdispersion -- in fact, any kind of variance-mean relationship -- and it also allows for arbitrary serial correlation. But the vce(robust) option does as well. One would have to argue that the cross section dimension, N, is "small" so that the usual asymptotics works poorly whereas bootstrapping is better. However, for standard errors, there's no theory that implies that. I would just use the vce(robust) option; people might think it's fishy that you're bootstrapping when there's no need.
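
    For concreteness, the two options look like this (y, x1, x2 are placeholder names); note that for xt commands the bootstrap resamples entire panels, which is what preserves the within-panel dependence:

    Code:
    * cluster-robust sandwich SEs: valid under any variance-mean
    * relationship and arbitrary serial correlation
    xtpoisson y x1 x2, fe vce(robust)
    * panel bootstrap: resamples whole panels with replacement
    xtpoisson y x1 x2, fe vce(bootstrap, reps(500) seed(12345))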

  • Shon Ferguson
    replied
    Thanks everyone for a very interesting thread.

    I have a related question: does the bootstrapping option in xtpoisson deal with overdispersion, or is vce(robust) the way to go? I have seen a few papers out there claiming that bootstrapping is the best option, and I am not sure what to make of it.

  • Mansi Jain
    replied
    Dear Carlo Lazzaro and Jeff Wooldridge,

    Thank you very much! That was very helpful and I was able to conduct the test easily.

    Regards,
    Mansi

  • Jeff Wooldridge
    replied
    Mansi: Including squares and interactions of key explanatory variables is not a bad place to start. Or, you can run a RESET-type test:

    Code:
    xtpoisson y x1 x2 ... xK, fe vce(robust)
    predict xbhat, xb
    * squares and cubes of the estimated linear index
    gen xbhatsq = xbhat^2
    gen xbhatcu = xbhat^3
    * re-estimate with the powers added and test them jointly
    xtpoisson y x1 x2 ... xK xbhatsq xbhatcu, fe vce(robust)
    test xbhatsq xbhatcu

    If the model is correctly specified then the two nonlinear terms should be jointly insignificant.

  • Carlo Lazzaro
    replied
    Mansi:
    you may want to take a look at https://blog.stata.com/2011/08/22/us...tell-a-friend/

  • Mansi Jain
    replied
    Dear Jeff Wooldridge,

    Actually, one last question. You mentioned in one post that the Poisson model is robust to the failure of every Poisson assumption except correct specification of the conditional mean. I'm a little confused about what correct specification of the conditional mean entails. How can I test whether that is the case?

    Thank you!
    Mansi

  • Mansi Jain
    replied
    Dear Jeff Wooldridge,

    Thank you so much for this explanation! That's really helpful -- particularly the point about how to interpret the results if linear and Poisson models give different results.

    P.S.: Your work has been really important to my learning of econometrics. Thank you for all you do!

  • Jeff Wooldridge
    replied
    Generally, I agree that estimating a linear model is a good starting point. And for cases like binary or corner solution responses, there's a clear tradeoff in assumptions because, with small T, one usually must use a correlated random effects approach or do a bias adjustment to the dummy variable approach. But with a count outcome (or nonnegative, unbounded outcomes generally), Poisson FE has significant advantages. It simply replaces the linear functional form for the mean with an exponential functional form. No other assumptions are needed, just like in the linear case. The coefficients in the exponential model are easy to interpret because they can be read as percentage effects. Estimation is straightforward because the Poisson FE quasi-log likelihood is concave.
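
    For example (y and x are placeholder names), the exact percentage effect can be recovered after estimation:

    Code:
    xtpoisson y x, fe vce(robust)
    * for small coefficients, 100*_b[x] approximates the percent
    * change in E(y); the exact semi-elasticity is
    nlcom 100*(exp(_b[x]) - 1)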

    If the linear model estimated by FE and the exponential model estimated by Poisson FE give qualitatively different estimates, I would trust the latter. I've seen cases where the linear model gives statistically significant coefficients of the opposite, counterintuitive sign. Even for program evaluation I would go with Poisson FE. I'd use Poisson in the cross-sectional case with a nonnegative outcome, too, and exploit doubly robust estimation.
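
    A minimal sketch of that comparison (placeholder names again):

    Code:
    xtreg y x1 x2, fe vce(robust)       // linear FE
    xtpoisson y x1 x2, fe vce(robust)   // exponential mean, Poisson FE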

  • Long Hong
    replied
    Hi Mansi, I see your point. My understanding from reading your research question is that it is more about program evaluation than prediction. If it were about prediction, then you would care about the underlying data generating process as well as whether the prediction gives you a negative value.

    A simple linear model could buy you many things, e.g., (1) a nice interpretation of the coefficients, (2) simple standard errors, and (3) most importantly, an answer to your research question that is not too different from a Poisson model if your sample size is big enough. I personally think a linear model is a good first step for answering your research question, and a Poisson model can be a nice robustness check. This is a tradeoff you may have to weigh because, as discussed above, a Poisson FE model does not seem to be an easy job.

    Hope it helps.
