  • Use -margins- for marginal effect after -xtpoisson-

I am modeling "Innovation" counts as a function of "HHI" and "squared HHI" with a fixed-effects Poisson model. (My sample is an unbalanced, small and wide panel.) I am aware that I can use Stata's -margins- command to obtain the estimated innovation count at given values of the observables. Below are my code and results:

Code:
xtpoisson innovation hhi c.hhi#c.hhi fee age c.age#c.age size c.size#c.size, fe
margins, dydx(*)
margins, at(hhi=0.0001(0.005)0.100)

The estimated coefficient for HHI is 32.80 and for squared HHI is -131.31. The "Average Marginal Effect" of HHI is 25.37. However, when I used the third line to get the "marginal effect at representative values," all predicted values are negative. I am surprised because I thought I was modeling a positive count, and I don't understand why I got negative predicted values, e.g., -3.xx innovations, evaluated at the sample means of the other variables.
My questions are threefold: 1) Do I understand the -margins- results correctly, or did I miss something? 2) Are there any coding mistakes that could have produced these results? 3) If there are no coding mistakes, does the result imply that my fixed-effects Poisson model has some problems?
A bit more information: when I plot the observed innovation counts against HHI with a quadratic fit, the curve is U-shaped and above zero. However, when I use the -marginscontplot- command to plot the predicted innovation counts against HHI, the curve is an inverted U-shape and below zero.
    Thank you for your help!
    Last edited by grace su; 08 Jul 2014, 12:59.

  • #2
By default, -margins- uses the default predict option of the estimation command, which may or may not be what you want or expect. For -xtpoisson- the default is xb, the linear prediction, and that can certainly go negative. I think -margins- notes this somewhere in its output, but it is easy to miss.

    To see what your other options are, type

    help xtpoisson postestimation##predict

For example, you may want to use nu0, the predicted number of events assuming the fixed or random effect is zero. If so, your margins command would be something like

    Code:
    margins, at(hhi=0.0001(0.005)0.100) predict(nu0)
You might also check out the expression() option of -margins-. See -help margins-.
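For instance, a sketch of the expression() route, reusing the at() list from above (this exponentiates the linear prediction by hand, which should match nu0 here):

Code:
margins, at(hhi=0.0001(0.005)0.100) expression(exp(predict(xb)))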
    -------------------------------------------
    Richard Williams, Notre Dame Dept of Sociology
    Stata Version: 17.0 MP (2 processor)

    EMAIL: [email protected]
    WWW: https://www3.nd.edu/~rwilliam



    • #3
      Dear Richard,

      In "xtpoisson", does the default linear prediction xb also assume that the fixed or random effect is zero?

It seems to me that it actually does reflect the random effect, for otherwise there would be no good reason to make xb the default prediction (in "poisson", the default prediction is n, the predicted number of events). Furthermore, the p-values I obtain using "margins, dydx(.) predict(xb)" differ substantially from the p-values obtained from "margins, dydx(.) predict(nu0)", which suggests that the former (unlike the latter) does indeed incorporate the random effect. In other words, if the difference between xb and nu0 were purely algebraic (the former computing x*b and the latter computing exp(x*b)), then the p-values should be identical, shouldn't they? But they aren't.

Yet the default prediction always produces the error message "default prediction is a function of possibly stochastic quantities other than e(b)", so I am forced to use the "force" option.
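For concreteness, the comparison I have in mind looks something like this (a sketch with hypothetical variable names Y and X and panel identifiers id and year):

Code:
xtset id year
xtpoisson Y X, re
margins, dydx(X) predict(xb)    // marginal effect on the linear-prediction scale
margins, dydx(X) predict(nu0)   // marginal effect on the count scale, with the random effect set to zero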

      Many thanks,

      Luca J. Uberti
      University of Otago, NZ



      • #4
        Dear Grace & Luca,

If you are using a fixed-effects estimator, -margins- will not give you any interesting results. Also, you cannot do predictions. The reason is simple: all of these depend on the fixed effects, and we do not have consistent estimates of them. Setting the FE at some value is meaningless because the result will depend on the way you measure your regressors. For example, if in the model in #1 we replace age with year of birth, the results of -margins- and the predictions will change.

        In short, do not do it! In the Poisson case the coefficients have a natural interpretation and that should be enough.
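To illustrate the age / year-of-birth point, here is a sketch using a simplified version of the model in #1 (the reference year 2014 is hypothetical). The two fits are equivalent, but predictions that plug in zero for the fixed effect are not invariant to the change of variable:

Code:
xtpoisson innovation hhi fee age size, fe
margins, at(hhi=0.0001(0.005)0.100) predict(nu0)

gen yob = 2014 - age    // hypothetical year of birth
xtpoisson innovation hhi fee yob size, fe
margins, at(hhi=0.0001(0.005)0.100) predict(nu0)    // equivalent fit, but the nu0 predictions shift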

        All the best,

        Joao



        • #5
Joao Santos Silva I thought that, given the way Stata typically parameterizes the FEs in non-linear models with an intercept, they will sum to zero and can be thought of as deviations from some common mean given by _cons. This means that -margins, dydx(xvar)- with a FE of zero corresponds to the marginal effect for people as if they all had the average FE. This is not entirely meaningless, though perhaps of more limited use.



          • #6
            Dear Joao,

            Many thanks.

I am actually using an RE estimator. Apologies for the confusion. My model includes interaction terms, and I would like to use the margins command to better interpret the interactions.

            So my code is something along these lines:

Code:
xtpoisson Y c.X##i.Z, re
margins, dydx(Z) at(X=(0(1)10))

            Here the default predict option is xb.

            Now, as I click this in, I get the error message: "default prediction is a function of possibly stochastic quantities other than e(b)".

The question is: does the default predict option in xtpoisson assume that the random effect is zero (like the predict(nu0) option), or does it actually recover the random effect? If the latter, I assume I could type -margins, dydx(Z) at(X=(0(1)10)) force- and safely interpret my marginal effects. Is that right?

            Many thanks in advance,

            Luca



            • #7
Dimitriy: Correct me if I am wrong, but I think that both -xtpoisson- and -xtlogit- do not report an intercept when the fe option is used. Actually, I cannot see how such an intercept could be defined. In that case, any output that involves the fixed effects cannot be computed in a meaningful way.

Luca: Using RE with -xtpoisson- requires some very strong assumptions about the distribution of the individual effects. Therefore, predictions and margins will depend a lot on the assumptions you make, and I find that uncomfortable. In any case, I do not see how it would be possible to recover the random effects, and therefore the results will always have to be computed assuming some value for the RE. Because the RE enter through a non-linear function, this approach has non-trivial consequences and I do not think it leads to a meaningful outcome. If using FE is not an option, I would just use pooled regression.
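For the pooled route, something along these lines would do (a sketch with hypothetical names Y, X, Z and panel identifier id, clustering the standard errors at the panel level):

Code:
poisson Y c.X##i.Z, vce(cluster id)
margins, dydx(Z) at(X=(0(1)10))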




              • #8
                Steve Samuels had some really interesting posts on this thread that might apply here:

                http://www.statalist.org/forums/foru...-after-xtlogit
                -------------------------------------------
                Richard Williams, Notre Dame Dept of Sociology
                Stata Version: 17.0 MP (2 processor)

                EMAIL: [email protected]
                WWW: https://www3.nd.edu/~rwilliam



                • #9
Thanks, Richard. For the FE case, I would go for Kitazawa's approach as discussed here. I now have simple code for it.



                  • #10
                    Dear Colleagues,

                    Many thanks for your clarifications.

I understand that -margins- does not integrate over the random effects, so as they stand my estimates are not meaningful.

That said, it looks like the latest version of the margins command in Stata 14 does recover the unobserved effects (if used after mepoisson, melogit, etc.). See http://www.statalist.org/forums/foru...fects-stata-14
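So, if I switch to the mixed-effects command, something like the following should give marginal effects that are also marginal with respect to the random intercept, at least as I read that thread (a sketch with hypothetical names Y, X, Z and panel identifier id):

Code:
mepoisson Y c.X##i.Z || id:
margins, dydx(Z) at(X=(0(1)10))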

                    Best,

                    Luca



                    • #11
                      Luca,

If I understood it correctly, it does not recover the unobservables but integrates them out. As I said above, the result you get will depend critically on the assumptions you make about their distribution.

                      All the best,

                      Joao



                      • #12
                        Dear Joao,

Thanks a ton! Apologies if I am not putting this correctly.

As I understand the Stata manual (version 14, p. 20, http://www.stata.com/manuals14/u.pdf), the situation is as follows:

In Stata 13, margins and predict assume that the random effect is zero when computing predictions.

In Stata 14, by contrast, margins and predict integrate over the unobserved effects. In other words, they average over the actual distribution of the random effect, based on the data. In this sense, the Stata 14 output fully incorporates or, as it were, "recovers" the random effect. (Or, as Vince Wiggins puts it in http://www.statalist.org/forums/foru...fects-stata-14, the marginal effects/predictions are also marginal with respect to the random effect.)

I understand that, for this reason, the margins results will depend on my assumptions about the functional form of the distribution. But the same goes for the results of xtpoisson, doesn't it? In my specific case, the results change very little whether I assume gamma- or normally distributed random effects (although I have reasons to prefer the former). So I guess the assumption about the distribution is not that "crucial" after all, at least in my specific case.
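The kind of comparison I mean, as a sketch with hypothetical names Y, X, Z:

Code:
xtpoisson Y c.X##i.Z, re          // gamma-distributed random effects (the default)
estimates store re_gamma
xtpoisson Y c.X##i.Z, re normal   // normally distributed random effects
estimates store re_normal
estimates table re_gamma re_normal, b se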

                        Am I getting this wrong?

                        Luca
                        University of Otago, NZ



                        • #13
                          Dear Luca,

                          I agree with what you write in the first few paragraphs, but not with what you write in the final (substantive) one.

                          Poisson regression, either pooled or with FE, is very robust to distributional assumptions: essentially, what you are doing is estimating an exponential model and the estimator is consistent under very mild assumptions. The same applies to functions of the estimates such as margins and predictions.

If you do RE the situation is rather different, because now you need assumptions about the distribution of the individual effects. You may find that the results do not change much between gamma and normal, but in both cases you are assuming that the distribution is independent of the regressors, and that is very strong. If the distribution of the individual effects depends on the regressors in complex ways (as is likely to be the case in practice), then making the wrong assumption (for instance, assuming that it is independent of the regressors) may lead to very different results. Clearly, you need an extra layer of (fragile) assumptions if you use RE in a non-linear model.
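To make the contrast concrete, a sketch with hypothetical names Y and X and panel identifier id:

Code:
poisson Y X, vce(cluster id)   // pooled: an exponential conditional mean, very mild assumptions
xtpoisson Y X, fe              // FE: the individual effects are left unrestricted
xtpoisson Y X, re              // RE: adds an assumed distribution for the effects, independent of X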

                          All the best,

                          Joao



                          • #14
                            Dear Joao,

                            Many thanks for your clarification and your comments about the use of RE in non-linear models.

                            On a different note, can I possibly bother you with a small question/clarification about your fascinating paper on the "log of gravity"?

                            You argue that estimating constant-elasticity models in log-linearized form leads to inconsistent estimates. The reason is that the log-linearized error term is correlated with the regressors by construction. However, you never seem to cast this problem explicitly as a problem of endogeneity. Is there a particular reason?

                            Many thanks,

                            Luca



                            • #15
                              Dear Luca,

                              Thank you for your interest in our work.

To me, endogeneity has an economic connotation, whereas the problem created by logging is purely mechanical: we assume that the error of the model in levels is not correlated with the regressors, but if we take logs the error may be correlated with them. That is, there is no endogeneity in the model in levels, and therefore saying that there is endogeneity in the model in logs sounds strange, because the two models describe the same phenomenon.
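To spell out the mechanics in the simplest case (a sketch): write the constant-elasticity model as y = exp(x'b)*e with E[e | x] = 1. Taking logs gives ln(y) = x'b + ln(e), and E[ln(e) | x] generally depends on x (for instance, through the conditional variance of e), so least squares on the logged model is inconsistent for b even though the model in levels satisfies the usual conditional-mean assumption.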

                              All the best,

                              Joao

