
  • Modifying the collapse option in the xtabond2 command

    I am implementing system GMM using the xtabond2 command. The data series is very long, and this long history generates a very large number of instrumental variables (IV), so the model is heavily over-identified. Unfortunately, when I include the collapse option, the results change from what is observed in previous literature and don’t make intuitive sense. I suspect that averaging the individual instruments across all observations discards the information needed to produce meaningful coefficients. Do you know how to modify the collapse option so that only the more distant lags of the instrument matrix are collapsed while the more recent lags are left as separate instruments? Thank you, Dan

  • #2
    You could do something like a combination of gmmstyle(var, laglimits(0 5) equation(diff)) and gmmstyle(var, laglimits(6 .) collapse equation(diff)).
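    Put together in a full command, this could look something like the sketch below. The variable names y and x and the remaining options (ivstyle(x), twostep, robust) are placeholders for illustration only, not part of the suggestion itself.
    Code:
    * Sketch: keep lags 0-5 of var as separate instruments for the difference equation,
    * and collapse only lags 6 and deeper
    xtabond2 y L.y x, gmmstyle(var, laglimits(0 5) equation(diff)) ///
        gmmstyle(var, laglimits(6 .) collapse equation(diff)) ///
        ivstyle(x) twostep robust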

    As a side note: the system GMM estimator is designed for large-N, small-T situations. In the opposite case, besides the problem of instrument proliferation, the overidentification tests in particular may perform poorly, because their distributions are derived under large-N asymptotics.
    https://twitter.com/Kripfganz



    • #3
      Thank you again for your sterling advice, Sebastian. The option
      Code:
      gmmstyle(var, laglimits(0 5) equation(diff))
      combined with
      Code:
      gmmstyle(var, laglimits(6 .) collapse equation(diff))
      is a perfect solution (and so obvious in hindsight).

      Thank you also for your thoughtful note on large-N vs small-T. Although the time series is quite long (an average of 90 observations per entity, but quite unbalanced, with a minimum of 1 and a maximum of 140), the cross-sectional dimension is very large (a total of 3000 entities initially, growing to 7000 by the end of the data series). I have two classes of interest in this analysis: one starts with a cross-sectional size of 9 but grows to 3000 by the end of the data series; the other starts at only 50 and grows a little, to just 140, by the end. I’ll need to review the issue more thoroughly; a brief look at Hayakawa (2015) suggests I’m OK as long as there is not strong persistence and the variance ratio of individual effects to disturbances is not large (I’ll therefore identify tests for both issues).
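      As a rough starting point for those checks, here is a hypothetical sketch only: id, t, and y stand in for the actual panel identifier, time variable, and outcome, and these are crude descriptive diagnostics rather than formal tests.
      Code:
      * (1) persistence: size of the coefficient on the lagged outcome in a simple within regression
      xtset id t
      xtreg y L.y, fe
      * (2) crude proxy for the ratio of individual-effect variance to disturbance variance
      xtreg y, fe
      display "variance ratio (sigma_u/sigma_e)^2 = " (e(sigma_u)/e(sigma_e))^2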

      For others who might have found this thread (e.g. by searching for collapse, xtabond2, and related terms), please note Sebastian’s great advice in a related conversation on “Including lags of exogenous variables in xtabond2” at https://www.statalist.org/forums/for...es-in-xtabond2 NB: see especially Sebastian’s insightful reference materials from the 2019 London Stata Conference in #2 of that thread.

      Ref:
      Hayakawa, K. (2015). The asymptotic properties of the system GMM estimator in dynamic panel data models when both N and T are large. Econometric Theory, 31(3), 647-667. doi:10.1017/S0266466614000449



      • #4
        Hi Sebastian, I hope a follow-up question is OK? I see that Hayakawa’s (2015) findings are in the context of modifying the weighting matrix so that the off-diagonal blocks are set to zero. Do you know if this happens to be an option in xtabond2 or xtdpdgmm? Thank you, Dan



        • #5
          Hayakawa (2015) writes that
          setting the off-diagonal blocks of the weighting matrix to zero leads to computational attractiveness at the cost of efficiency.
          I am not quite sure what these "computational attractiveness" gains really are, but you can easily implement this weighting matrix in xtabond2 with option h(1) and in xtdpdgmm with option wmatrix(separate).
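          For concreteness, a hypothetical sketch of where these options go; y, x, and the remaining specification choices are placeholders for illustration, and only h(1) and wmatrix(separate) are the point here.
          Code:
          * xtabond2: h(1) uses an identity-type H matrix, i.e. zero off-diagonal blocks
          xtabond2 y L.y x, gmmstyle(y, laglimits(2 .) collapse) ivstyle(x) h(1) twostep robust
          * xtdpdgmm: wmatrix(separate) requests the block-diagonal ("separate") initial weighting matrix
          xtdpdgmm L(0/1).y x, gmmiv(y, lag(2 .) collapse model(difference)) iv(x) wmatrix(separate) twostep vce(robust)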
          https://twitter.com/Kripfganz



          • #6
            Hi again Sebastian, I was doing a quick re-read of "Advice on posting to Statalist" at https://www.statalist.org/forums/help#closure and noticed:
            "Please note that a Like on a post is not publicly visible as coming from you and, while friendly, also does not absolve you from either expectation."
            Please therefore accept my heartfelt thanks for your guidance and the link to your comprehensive presentation on GMM in the context of Stata. Dan



            • #7
              Hi Dan. You are welcome. I am happy when I can help.
              https://twitter.com/Kripfganz
