
  • Hansen J statistic and Kleibergen-Paap rk LM statistic

    Dear all,
    I am trying to test the validity of my set of instruments by checking over- and underidentification, using the Hansen J statistic and the Kleibergen-Paap rk LM statistic.

    The results are:
    [Image: Kleibergen LM.PNG (Kleibergen-Paap rk LM test output)]


    So my p-value=0.33 > 0.1. Therefore, I cannot reject the null hypothesis that my model is underidentified.

    [Image: Hansen J test.PNG (Hansen J test output)]

    Here, my p-value=0.5 > 0.1. I cannot reject the null hypothesis that my overidentifying restrictions are valid.

    Isn’t this a contradiction? One result tells me that my model is underidentified, the other that it is overidentified.
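
    For reference, a minimal sketch of the kind of command that reports both statistics together, assuming the user-written ivreg2 package (installed with "ssc install ivreg2") and hypothetical variable names y, x1 (endogenous), w1 (exogenous control), and z1 z2 (excluded instruments):

        * hypothetical specification: y on endogenous x1 and exogenous w1,
        * with z1 and z2 as excluded instruments
        ivreg2 y w1 (x1 = z1 z2), robust
        * with the robust option, ivreg2 reports the Kleibergen-Paap rk LM
        * (underidentification) statistic and the Hansen J (overidentification)
        * statistic along with the estimation results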

  • #2
    Maybe I just found the answer by myself.
    With the Hansen J test of overidentification I test whether an instrument is valid, i.e. uncorrelated with the error term (Cov(z,u)=0) => exogeneity requirement.
    With the Kleibergen-Paap LM statistic of underidentification, I test whether the excluded instruments are correlated with the endogenous regressors (Cov(z,x)≠0) => relevance requirement.
    Is this correct?

    Therefore, I could say that my instruments satisfy the exogeneity condition but not the relevance condition?
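
    In compact form (standard notation, not taken from the original posts), the two null hypotheses are:

        % null hypotheses of the two tests, written for a single endogenous regressor
        \begin{align*}
        \text{Kleibergen--Paap rk LM (underidentification test):} \quad & H_0:\ \operatorname{Cov}(z, x) = 0 \quad \text{(equation is underidentified)} \\
        \text{Hansen } J \text{ (overidentification test):} \quad & H_0:\ \operatorname{Cov}(z, u) = 0 \quad \text{(overidentifying restrictions are valid)}
        \end{align*}

    Rejection of the first null supports relevance; rejection of the second casts doubt on the overidentifying restrictions.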

    Comment


    • #3
      Dear Teresa,

      I think you got the essence of it. My only remark is that the J test checks the validity of the overidentifying restrictions, not the validity of the instruments.

      Best regards,

      Joao

      Comment


      • #4
        Dear Joao,

        Thanks for your answer.
        So with the J test I check the validity of the overidentifying restrictions, that is, the fact that there are more instruments than endogenous regressors, right?

        But then I’m still confused by the Kleibergen-Paap LM statistic, where I test underidentification, that is, that there are fewer instruments than endogenous regressors. Or do I understand this wrong?
        So how can the results of both tests be true at the same time?

        Comment


        • #5
          Dear Teresa,

          There is no contradiction. The J test does not test whether there are more instruments than endogenous regressors; you do not need a test for that, you just need to count the instruments and the endogenous regressors. It tests whether the restrictions implied by the existence of more instruments than endogenous regressors are valid. For more on this please see this paper.

          The underidentification test checks whether your instruments are relevant. It is perfectly possible that your instruments are not relevant and the overidentifying restrictions are valid.

          All the best,

          Joao
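
          A minimal simulation sketch of this situation (hypothetical data and parameter values; ivreg2 assumed installed via "ssc install ivreg2"):

              * instruments z1, z2 are exogenous (independent of u) but nearly irrelevant
              clear
              set seed 12345
              set obs 500
              gen z1 = rnormal()
              gen z2 = rnormal()
              gen u  = rnormal()
              gen x  = 0.02*z1 + 0.02*z2 + 0.8*u + rnormal()   // x barely depends on z1, z2
              gen y  = 1 + 0.5*x + u                           // z1, z2 uncorrelated with u
              ivreg2 y (x = z1 z2), robust
              * expected pattern: a large Kleibergen-Paap rk LM p-value (cannot reject
              * underidentification) together with a large Hansen J p-value (cannot
              * reject the validity of the overidentifying restriction)

          The coefficients of 0.02 make the instruments deliberately close to irrelevant while keeping them independent of the error term, which is exactly the combination described above.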

          Comment


          • #6
            Dear Joao,

            Thanks for your help.
            So, with my underidentification test I check the relevance of my instruments, that is, Cov(z,x)≠0.
            I also computed a Kleibergen-Paap rk Wald F statistic. With this test I check whether my instruments are weak. Is it correct to say that my instruments can be relevant according to the underidentification test, but that this correlation is very weak according to the Kleibergen-Paap rk Wald F statistic?
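
            A short sketch of how the weak-identification diagnostics can be requested alongside the main output (the ffirst option name is taken from the ivreg2 documentation; same hypothetical variables as in the earlier sketch):

                * ffirst reports first-stage summary statistics, including the
                * Kleibergen-Paap rk Wald F statistic when robust is specified
                ivreg2 y w1 (x1 = z1 z2), robust ffirst
                * a small F statistic relative to the Stock-Yogo critical values
                * printed in the output (tabulated for the non-robust Cragg-Donald
                * case) suggests weak instruments, even when the
                * underidentification test rejects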

            Comment


            • #7
              I guess that something along these lines is possible. The instruments are of little relevance and therefore the model is identified but with weak instruments. Is that what you are saying?

              Joao

              Comment


              • #8
                There are several issues:
                1. The (excluded) instruments are not correlated with the endogenous regressor
                   - underidentification test: null is no correlation
                   - Kleibergen-Paap rk LM statistic (with robust option)
                   - Anderson canon. corr. LM statistic (without robust option)
                2. The instruments are correlated with the regressors, but weakly (null is weak correlation)
                   - Cragg-Donald Wald F statistic (without robust option)
                   - Kleibergen-Paap Wald rk F statistic (with robust option)
                3. The (excluded) instruments - when more in number than the endogenous regressors - are coherent with each other
                   - test for over-identification (null: the instruments are coherent with each other)
                4. The (excluded) instruments are not exogenous
                   - cannot be tested
                The excluded instruments may pass the over-identification test but still be endogenous; a sketch of how ivreg2 labels these statistics follows below.
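
                A sketch contrasting the two reporting cases listed above (hypothetical variables; ivreg2 assumed installed):

                    * without the robust option: Anderson canon. corr. LM statistic,
                    * Cragg-Donald Wald F statistic, and the Sargan statistic are reported
                    ivreg2 y w1 (x1 = z1 z2)

                    * with the robust option: Kleibergen-Paap rk LM statistic,
                    * Kleibergen-Paap Wald rk F statistic, and the Hansen J statistic
                    * are reported instead
                    ivreg2 y w1 (x1 = z1 z2), robust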

                Comment


                • #9
                  Teresa - something important to add about the relationship between the overid and underid tests is the following.

                  The standard Sargan-Hansen J test is a specification test - it has a particular distribution if the model as specified is correct. But one of the assumptions of the model is that it is identified, i.e., that the rank condition ("relevance of instruments") is satisfied. If it isn't, then the standard J stat isn't guaranteed to have the usual chi-squared distribution. As Joao says, it's perfectly possible that your model is underidentified and yet the orthogonality conditions are satisfied. But the standard J stat won't give you a valid test stat.

                  (It's possible to get a J stat without the rank condition if you go down the weak-identification-robust inference route, but that's something else again.)
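
                  In standard notation, the "usual chi-squared distribution" referred to here is

                      % under valid overidentifying restrictions AND the rank (identification)
                      % condition, the Hansen J statistic is asymptotically chi-squared with
                      % degrees of freedom equal to the number of overidentifying restrictions
                      J \xrightarrow{d} \chi^2(L - K), \qquad
                      L = \text{number of excluded instruments}, \quad
                      K = \text{number of endogenous regressors}

                  so the chi-squared approximation itself presupposes that the rank condition holds.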

                  Comment


                  • #10
                    Joao, this is what I tried to say.

                    Thank you to all three of you; I now understand what the tests are telling me!

                    Comment
