  • Comparing coefficients between count data models

    Dear Stata users,
    Thank you in advance for your kind response.

    I am looking for a way to compare the coefficients of two count data models. The count outcome of the first model is the number of plant openings in a country and the outcome of the second model is the number of closures in a country. All predictor variables are the same. The two models look like:
    Code:
    nbreg openings var1 var2 ... var6 i.year i.country   // model A
    nbreg closures var1 var2 ... var6 i.year i.country   // model B

    At the moment the only way I have found to "compare" coefficients is a Wald test with the null hypothesis H0: var1(model A) - var1(model B) = 0.

    I have seen that one way to see differences in coefficients is to use interaction terms with dummy variables on the outcome, but my understanding is that the outcomes in these cases are mutually exclusive. In my case, it is possible to have openings and closures in the same year in the same country.

    Any ideas on how to deal with this issue (if possible) are more than welcome.

    Ioannis




  • #2
    You could look into gsem—twice this week alone: maybe this should be a StataCorp FAQ—but I would be wary of including country as a fixed effect like you're doing. Think incidental parameters.

    Maybe something like
    Code:
    gsem (openings closures <- c.var? i.year M[country], family(nbinomial))
    instead.



    • #3
      Originally posted by Joseph Coveney View Post
      You could look into gsem—twice this week alone: maybe this should be a StataCorp FAQ—but I would be wary of including country as a fixed effect like you're doing. Think incidental parameters.

      Maybe something like
      Code:
      gsem (openings closures <- c.var? i.year M[country], family(nbinomial))
      instead.
      Dear Joseph,
      Thank you for your response. I believe that the syntax you kindly provided is no different than running the two models separately. The reason is that when I use it as

      gsem (openings closures <- var1 var2...var6 i.year i.country, family(nbinomial) vce(cluster country) I get exactly the same results as before (two separate models). (In my syntax I kept the i.country in order to check the similarity of the results) Then, if gsem calculates two models separately, it does not work for me. Not sure what exactly your point is on the fixed effects issue. I believe I need to have fixed effects to control for unobserved country-level heterogeneity and mitigate any omitted variables bias.



      • #4
        Originally posted by Ioannis Siskos View Post
        . . . if gsem calculates two models separately, it does not work for me.
        But it doesn't. It leaves behind matrices and vectors that allow you to
        Code:
        test [openings]var1 = [closures]var1
        that is, test for differences in parameters across equations, which is what I thought you wanted to do.

        Not sure what exactly your point is on the fixed effects issue. I believe I need to have fixed effects to control for unobserved country-level heterogeneity and mitigate any omitted variables bias.
        If, for example, long T allows for consistent estimators in this type of model, and if you've convinced yourself through simulation that your T is long enough in this case, then I suppose you're in good stead.
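        A rough, hypothetical sketch of such a simulation (none of the names below come from your data; adjust N, T, and the true coefficient to match your setting):
        Code:
        clear all
        set seed 12345
        program define sim_fe, rclass
            drop _all
            set obs 30                              // hypothetical N countries
            generate country = _n
            generate u = rnormal()                  // unobserved country effect
            expand 14                               // T years per country
            bysort country: generate year = _n
            generate var1 = rnormal()
            generate mu = exp(0.5*var1 + u)         // true coefficient on var1 is 0.5
            generate y = rnbinomial(1, 1/(1 + mu))  // NB draw with mean mu, alpha = 1
            nbreg y var1 i.country
            return scalar b1 = _b[var1]
        end
        simulate b1 = r(b1), reps(200): sim_fe
        summarize b1                                // compare mean of b1 against 0.5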



        • #5
          Originally posted by Joseph Coveney View Post
          But it doesn't. It leaves behind matrices and vectors that allow you to
          Code:
          test [openings]var1 = [closures]var1
          that is, test for differences in parameters across equations, which is what I thought you wanted to do.
          This is how I handle it at the moment. I use the Wald test to see whether any coefficients are equal, but this is all I can do. I was hoping there is a way to put everything in one model so that I can quantify the difference in coefficients. That would, I suppose, require the use of interaction terms.

          If, for example, long T allows for consistent estimators in this type of model, and if you've convinced yourself through simulation that your T is long enough in this case, then I suppose you're in good stead.
          T=14 in my case and N=420.



          • #6
            Originally posted by Ioannis Siskos View Post
            This is how I handle it at the moment. I use the Wald test to see if any coefficients are equal, but this is all I can do.
            I wasn't aware that you could do that with two separate, independent successive nbreg commands like you show above in #1. Are you using them in conjunction with suest? You don't show that.

            I was hoping there is a way to put everything in one model so that I can quantify the difference in coefficients. That would require I suppose the use of interaction terms.
            No. Using gsem is fitting one model. You don't need interaction terms in order to test for differences in parameters between the two equations.
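            For instance, after a joint fit along the lines suggested in #2, something like the following should give the difference itself with a standard error and confidence interval (a sketch; var1 stands in for any of the predictors):
            Code:
            gsem (openings closures <- c.var? i.year M[country], family(nbinomial))
            lincom [openings]var1 - [closures]var1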

            T=14 in my case and N=420.
            So, you've convinced yourself that using indicator variables for country provides for unbiased, consistent estimators under these circumstances?



            • #7
              Originally posted by Joseph Coveney View Post
              I wasn't aware that you could do that with two separate, independent successive nbreg commands like you show above in #1. Are you using them in conjunction with suest? You don't show that.
              Yes, I use suest as well. Sorry for not mentioning this.
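              For completeness, the workflow is along these lines (a sketch; the suest equation names may differ slightly, so check the suest output before writing the test):
              Code:
              nbreg openings var1 var2 var3 var4 var5 var6 i.year i.country
              estimates store A
              nbreg closures var1 var2 var3 var4 var5 var6 i.year i.country
              estimates store B
              suest A B
              test [A_openings]var1 = [B_closures]var1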

              No. Using gsem is fitting one model. You don't need interaction terms in order to test for differences in parameters between the two equations.
              So, you suggest that I could directly compare the coefficients? My worry is that the results are identical to those from the two separate models.

              So, you've convinced yourself that using indicator variables for country provides for unbiased, consistent estimators under these circumstances?
              The way you ask suggests that I should be worried in this case. I don't see a reason why not; I am probably naive in this regard. I would be open to ideas on how to increase my confidence, though.

