
  • Marginal Effects for Negative Binomial Regressions

I ran a negative binomial regression and, as post-estimation, used:

    margins, dydx(*)

    The marginal effects from this are extremely large.

    However, when I use:

    margins, dydx(*) atmeans

    I get reasonable estimates.

    Is there a difference between the two commands?

  • #2
    The first command gives the average marginal effects, i.e. margins first computed the marginal effect for each observation and then averaged those. Typically, this is what you want.

    The second command gives the marginal effects at the averages, i.e. margins computed the average of each explanatory variable, plugged those in, and computed the marginal effect for that fictional person. This is what was traditionally done because it is less computational work.

    In non-linear models, like nbreg, the two are different, but normally the differences are small. If you find radical differences, that is an indication that something is wrong with your model or your data.
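
    A minimal sketch of the distinction, using hypothetical variables y, x1, x2, and a categorical group (not from the original post):

    Code:
    * negative binomial model (hypothetical outcome and covariates)
    nbreg y x1 x2 i.group
    
    * average marginal effects: effect computed for every observation, then averaged
    margins, dydx(*)
    
    * marginal effects at the means: covariates first fixed at their sample averages
    margins, dydx(*) atmeans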
    ---------------------------------
    Maarten L. Buis
    University of Konstanz
    Department of history and sociology
    box 40
    78457 Konstanz
    Germany
    http://www.maartenbuis.nl
    ---------------------------------



    • #3
      One potential cause for the difference: are nominal (and ordinal) variables correctly specified with the i. prefix (factor-variable notation)?
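
      For instance, a sketch with a hypothetical nominal variable region (not from the original post); entering it as if it were continuous versus declaring it as a factor variable can change the marginal effects considerably:

      Code:
      * region entered as if it were continuous (usually wrong for a nominal variable)
      nbreg y x1 region
      margins, dydx(*)
      
      * region declared as categorical with factor-variable notation
      nbreg y x1 i.region
      margins, dydx(*)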
      Best wishes

      Stata 18.0 MP | ORCID | Google Scholar



      • #4
        It would help a lot if, as asked in the FAQ, you showed what you typed and what Stata returned. It's very difficult to assist with an abstract question. As Maarten mentioned, it seems odd to see very different estimates, even though the two approaches do estimate two different parameters. I also agree that the average partial effect (the first) is generally preferred to the partial effect at the averages.

        If you are interested in the effects on the mean, there are clear benefits to using Poisson regression, particularly that it is fully robust and doesn't care whether there is overdispersion, underdispersion, or any variance-mean relationship. What happens if you use

        Code:
        * Poisson with robust standard errors; the estimated conditional mean is
        * consistent regardless of over- or underdispersion
        poisson y x1 x2 ... xK, vce(r)
        margins, dydx(*)            // average marginal effects
        margins, dydx(*) atmeans    // marginal effects at the means
        As Felix noted, discreteness of some elements of x could make the partial effects different.



        • #5
          This seems to work, thank you!
