  • Choice of Estimator for Dynamic Panel Analysis

    Hi,


    I am working on the determinants of corporate cash holdings with a panel dataset of ~700 firms across 16 years. In view of theoretical and empirical considerations, I need to apply dynamic panel analysis. However, I am unsure how to choose a suitable dynamic panel estimator. My questions are below.

    In the context of Corporate Finance, three papers (attached) compare different dynamic panel estimators and recommend the following:

    Flannery & Hankins (2013): Broadly, they recommend the Blundell-Bond system GMM estimator.

    Dang, Kim & Shin (2015): They propose their own bias-corrected global minimum variance estimator.

    Zhou, Faff & Alpert (2014): They recommend bias-corrected estimators, namely corrected LSDV (LSDVC), bootstrap bias-corrected, and indirect inference.

    Having read all three papers, I would like to know how one chooses the most appropriate estimator, among those compared in these papers, for one's own analysis.

    References:

    Dang, V., Kim, M., & Shin, Y. (2015). In search of robust methods for dynamic panel data models in empirical corporate finance. Journal of Banking & Finance, 53, 84-98. doi: 10.1016/j.jbankfin.2014.12.009

    Flannery, M., & Hankins, K. (2013). Estimating dynamic panel models in corporate finance. Journal of Corporate Finance, 19, 1-19. doi: 10.1016/j.jcorpfin.2012.09.004

    Zhou, Q., Faff, R., & Alpert, K. (2014). Bias correction in the estimation of dynamic panel models in corporate finance. Journal of Corporate Finance, 25, 494-513. doi: 10.1016/j.jcorpfin.2014.01.009


  • #2
    Bias-corrected estimators usually require that all variables besides the lagged dependent variable are strictly exogenous. In that case, they often perform better than GMM estimators. Another option in this situation would be a maximum-likelihood estimator; see for example:
    XTDPDQML: new Stata command for quasi-maximum likelihood estimation of linear dynamic panel models

    If the strict exogeneity assumption is not satisfied, then there is hardly any alternative to GMM estimators.
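    To illustrate, here is a minimal sketch of how the quasi-ML approach could be tried (assumptions: the user-written xtdpdqml command is installed from SSC, the Arellano-Bond example dataset abdata merely stands in for a real cash-holdings panel, and the exact options should be verified against help xtdpdqml):

        * minimal sketch, not a definitive recipe
        ssc install xtdpdqml            // user-written command, available from SSC
        webuse abdata, clear            // example panel dataset (placeholder only)
        xtset id year                   // declare the panel structure

        * fixed-effects quasi-ML estimation of a dynamic model; the command is
        * understood to include the first lag of the dependent variable itself
        * (check help xtdpdqml for the exact syntax and options)
        xtdpdqml n w k, fe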



    • #3
      This comes as a surprise to me, Sebastian Kripfganz. If all bias-corrected estimators (specifically, LSDVC, the bootstrap bias-corrected estimator, and indirect inference) require the independent variables to be exogenous, their application in corporate finance studies should be avoided altogether, since the financial variables used in these studies are mostly endogenous. In that case, the findings and recommendations of Dang, Kim & Shin (2015) and Zhou, Faff & Alpert (2014) would be inaccurate and, to an extent, irrelevant.

      It is hard to understand why Dang, Kim & Shin (2015) and Zhou, Faff & Alpert (2014) advocate the use of bias-corrected estimators in corporate finance studies if these estimators assume that the independent variables are exogenous.

      Is there a relevant text (book or paper) I can study to learn more about bias-corrected estimators?



      • #4
        I did not read the articles you mentioned in detail, but they seem to reference primarily Kiviet (1995) and Everaert and Pozzi (2007) for the bias-correction procedures. In those papers, it is made explicit that the independent variables need to be strictly exogenous. The logic is quite simple: if strict exogeneity does not hold, you need to make explicit assumptions about the form of the endogeneity. Otherwise, there is no way to determine how this endogeneity affects the bias of the estimator, and then you cannot correct the bias.
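        To pin down the terminology (these are the standard definitions, nothing specific to the papers above): in a model $y_{it} = \lambda y_{i,t-1} + x_{it}'\beta + \alpha_i + u_{it}$, a regressor $x_{it}$ is called
        • strictly exogenous if $E(x_{is} u_{it}) = 0$ for all $s$ and $t$;
        • predetermined if $E(x_{is} u_{it}) = 0$ for all $s \leq t$, so that feedback from past shocks to current $x$ is still allowed;
        • endogenous if only $E(x_{is} u_{it}) = 0$ for $s < t$ is maintained, so that $x_{it}$ may be correlated with the contemporaneous error.
        The bias corrections referenced above rest on the first case, whereas GMM estimators can accommodate all three by restricting which lags are used as instruments.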

        With that said, it might still be that applying these bias-corrected estimators helps to reduce the bias even in the presence of endogeneity, but any such simulation evidence only provides a partial picture given the specific data-generating process considered.

        Literature:
        • Kiviet, J. F. (1995). On bias, inconsistency, and efficiency of various estimators in dynamic panel data models. Journal of Econometrics (68), 53-78.
        • Everaert, G., and L. Pozzi (2007). Bootstrap-based bias correction for dynamic panels. Journal of Economic Dynamics & Control (31), 1160-1184.



        • #5
          Alright, Sebastian Kripfganz. Now, the issue is that despite my inclination towards GMM estimators (specifically, system GMM), I am facing a hurdle in implementing them. After some experimentation, I have observed that I can obtain different results depending on my choice of lags and on how I categorise the independent variables as exogenous, endogenous, or predetermined. This suggests there is scope for manipulating the conclusions to suit one's convenience. Further, there is little mention of, or guidance on, these choices in the extant literature: I have not come across a paper related to my topic that reports the number of lags used or how the independent variables were categorised. This raises doubts in my mind about the sanctity of the results, because it seems a researcher could prove almost anything just by playing around with these choices.

          Sincerely hoping to hear from you on this!



          • #6
            I cannot take away your doubts because I share them to some extent. The sensitivity of the GMM estimator to sometimes even small changes in the choice of instruments, in particular in small samples, can be frustrating. You should broadly follow common sense and some simple guidelines:
            • Classify variables as strictly exogenous / predetermined / endogenous based on the theory in your field of research. If there is no clear guidance, make your own justifications based on what you believe is reasonable.
            • Ensure that you do not have too many instruments. As a rule of thumb, the number of instruments should be clearly below the number of groups. A combination of a not-too-restrictive lag length with collapsing of the instruments tends to work well; see for example Kiviet et al. (2017) and the illustrative sketch below.
            • Use the familiar specification tests, such as the Arellano-Bond test for serial correlation and the Hansen overidentification test, to gain confidence in your model choice.
            • If you find that a small change in the instruments leads to substantial differences in the estimates, this might indicate a problem with your specification. If in doubt, choose the specification that requires the weaker assumptions.
            There is no one-size-fits-all solution. After all, you are the one who needs to make the final decision in good conscience. You should feel comfortable with it, and you should be able to justify your choice if someone questions it.
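            For concreteness, here is a minimal sketch of how these guidelines might translate into a system GMM specification (assumptions: the user-written xtabond2 command by Roodman is installed from SSC, the abdata example dataset stands in for the real panel, and the lag limits and the exogenous/endogenous classification are purely illustrative, not a recommendation for your cash-holdings model):

                ssc install xtabond2                 // user-written command from SSC
                webuse abdata, clear                 // illustrative example panel
                xtset id year                        // declare the panel structure

                * system GMM (xtabond2's default unless noleveleq is specified);
                * collapse keeps the instrument count well below the number of groups
                xtabond2 n L.n w k yr1980-yr1984,           ///
                    gmm(L.n, lag(1 3) collapse)             /// lagged dep. var.: lags 2-4 of n as instruments
                    gmm(w k, lag(2 4) collapse)             /// w and k treated as endogenous here
                    iv(yr1980-yr1984)                       /// year dummies treated as strictly exogenous
                    twostep robust small                    //  Windmeijer-corrected two-step SEs

                * the output reports the Arellano-Bond AR(1)/AR(2) tests, the Hansen
                * overidentification test, and the instrument count; compare the latter
                * with the number of groups, as suggested above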

            Literature:
            • Kiviet, J. F., M. Pleus, and R. W. Poldermans (2017). Accuracy and Efficiency of Various GMM Inference Techniques in Dynamic Micro Panel Data Models. Econometrics (5), 14.



            • #7
              That is an honest answer, indeed. Really appreciate your guidance in this regard. I was waiting for someone to tell me that they share these concerns. Thank you, Sebastian Kripfganz.

