
  • Model specification: IV

    Dear friends,

    I am wondering: if I use an interaction term between X and a year dummy as an instrument, do I also need to include X and the year dummy individually as instruments in the first-stage regression?

    Thank you.

  • #2
    Sorry, I should add the code below as an example.

    Code:
    ivregress 2sls y z i.dummy x_controls (x_endog = c.z#i.dummy)
    Or like this?

    Code:
    ivregress 2sls y x_controls (x_endog = c.z#i.dummy z i.dummy)
    Should I put the non-interacted terms inside the instrument list of the first stage, or outside it as exogenous regressors? Thank you.



    • #3
      With respect to the set of instruments, you have a lot of liberty. As long as the instruments are valid, that is, uncorrelated with the structural error, you can use whatever instruments you want.

      I would try both and see whether there is any big difference between the estimates. Otherwise, on symmetry grounds, I find the second specification, where you also include the main effects on top of the interaction, more appealing.
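      If it helps to compare, here is a minimal sketch of trying both specifications with ivregress 2sls and factor-variable notation for the interaction (the variable names are the placeholders from #2):

      Code:
      * Specification 1: only the interaction is an excluded instrument;
      * z and the dummy enter the main equation as exogenous regressors.
      ivregress 2sls y z i.dummy x_controls (x_endog = c.z#i.dummy)
      estimates store spec1

      * Specification 2: the interaction and both main effects are all
      * excluded instruments.
      ivregress 2sls y x_controls (x_endog = c.z#i.dummy z i.dummy)
      estimates store spec2

      * Line the coefficient estimates up side by side.
      estimates table spec1 spec2, b se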



      • #4
        Thank you so much.



        • #5
          I found that the estimation failed to reject the null of the underidentification test. Could I still argue that the IVs are good, given that the weak-instrument-robust tests reject their null?


          Code:
          Underidentification test
          Ho: matrix of reduced form coefficients has rank = K1-1 (underidentified)
          Ha: matrix has rank = K1 (identified)
          Kleibergen-Paap rk LM statistic        Chi-sq(10) = 11.38   P-val = 0.3283

          Weak identification test
          Ho: equation is weakly identified
          Cragg-Donald Wald F statistic          2.63
          Kleibergen-Paap Wald rk F statistic    2.10

          Stock-Yogo weak ID test critical values for K1=3 and L1=12:
             5% maximal IV relative bias   17.80
            10% maximal IV relative bias   10.01
            20% maximal IV relative bias    5.90
            30% maximal IV relative bias    4.42
          Source: Stock-Yogo (2005). Reproduced by permission.
          NB: Critical values are for Cragg-Donald F statistic and i.i.d. errors.

          Weak-instrument-robust inference
          Tests of joint significance of endogenous regressors B1 in main equation
          Ho: B1 = 0 and orthogonality conditions are valid
          Anderson-Rubin Wald test        F(12,24) =  5.67   P-val = 0.0002
          Anderson-Rubin Wald test      Chi-sq(12) = 75.60   P-val = 0.0000
          Stock-Wright LM S statistic   Chi-sq(12) = 22.21   P-val = 0.0352
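          For reference, output in this format is printed by the user-written ivreg2 (Baum, Schaffer, and Stillman, from SSC); a hypothetical call with placeholder names, where the cluster() option is what makes ivreg2 report Kleibergen-Paap rather than only Cragg-Donald statistics:

          Code:
          * Hypothetical ivreg2 call; zXdummy and clustvar are placeholders.
          * The first option also prints the first-stage regressions.
          ssc install ivreg2
          generate zXdummy = z * dummy
          ivreg2 y x_controls (x_endog = z dummy zXdummy), cluster(clustvar) first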
          Last edited by Bright Tree; 24 Jan 2021, 22:09.



          • #6
            These tests are not tests of whether your instruments are valid, but rather of whether your instruments explain the endogenous variable.

            A test that goes some way toward testing instrument validity is possible only when you have more instruments than endogenous variables: the overidentification test.

            As for your results, I would not worry too much. The tests that are robust to weak instruments (at the bottom of the output you are showing, e.g., the Anderson-Rubin test) show that you can reject the null that the endogenous regressor has no effect.
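            A minimal sketch of that overidentification test in Stata, assuming one endogenous regressor and three excluded instruments (placeholder names):

            Code:
            * With more excluded instruments than endogenous regressors,
            * estat overid after ivregress 2sls reports the Sargan and
            * Basmann chi-squared tests of the overidentifying restrictions.
            ivregress 2sls y x_controls (x_endog = z1 z2 z3)
            estat overid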



            • #7
              Your question in #5 was whether you could proceed given that the null of under-identification is not rejected. The answer is no. What the result of the test implies is that your instruments are useless.



              • #8
                Thank you very much for these two replies.



                • #9
                  I should have added: they are useless either because they are not correlated with the endogenous regressor, or because any correlation between them and the endogenous regressor is already captured by your control variables.
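                  One way to see which of these two cases applies is to inspect the first stage directly; a sketch with placeholder names:

                  Code:
                  * First stage by hand: regress the endogenous variable on the
                  * excluded instruments plus the controls, then jointly test the
                  * excluded instruments. If the joint test cannot reject zero,
                  * the instruments add nothing beyond the controls.
                  regress x_endog z dummy zXdummy x_controls
                  test z dummy zXdummy

                  * Equivalently, after the 2SLS fit, estat firststage reports
                  * the first-stage diagnostics.
                  ivregress 2sls y x_controls (x_endog = z dummy zXdummy)
                  estat firststage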



                  • #10
                    Thank you so much, Eric de Souza. When I changed the cluster variable, the null of the under-identification test could be rejected, but the estimated coefficients were not significant.
                    Last edited by Bright Tree; 25 Jan 2021, 15:08.



                    • #11
                      Eric made it sound as if, should you proceed when "the null of under-identification is not rejected," your computer will break irreparably and you will need to buy a new one :-) .

                      After your post in #10 you may already have a feeling for what can go wrong when one places as much faith in a testing procedure as Eric does. What you have already seen is that you change the error assumption from i.i.d. to heteroskedastic, or to cluster-correlated, or you change the cluster variable, and very often what was "insignificant" becomes "significant," or the other way round. The point is that the outcome of every test is built on a tower of assumptions (such as the errors being i.i.d., say), and if some of those assumptions do not hold, your test gives nonsensical results. A further and more basic reason not to place that much faith in a test is the fundamental observation that failing to reject a null does not mean that the null is true; it just means that you have failed to reject it.

                      Certainly, the results on weak identification/no identification that you showed are reason for concern, but what this concern means is that you cannot trust the 2SLS estimates (or GMM, or LIML, or whatever structural procedure you are using).

                      These weak/no identification results do not invalidate the tests at the bottom, such as Anderson-Rubin. The tests at the bottom can be invalid only if the exclusion restriction is violated. Therefore, if you trust your exclusion restriction, the tests at the bottom are the whole story: they are valid in any circumstance, including complete lack of identification.
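                      To make the "tower of assumptions" point concrete, here is a sketch of the same model estimated under three different error assumptions (placeholder names again); the identification diagnostics, standard errors, and significance conclusions can all change across the three runs:

                      Code:
                      * Same 2SLS model, three error assumptions. ivreg2 reports
                      * Cragg-Donald statistics under i.i.d. errors and
                      * Kleibergen-Paap statistics otherwise.
                      ivreg2 y x_controls (x_endog = z dummy zXdummy)
                      ivreg2 y x_controls (x_endog = z dummy zXdummy), robust
                      ivreg2 y x_controls (x_endog = z dummy zXdummy), cluster(clustvar)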





                      • #12
                        Joro Kolev: "After your post in #10 you may already have a feeling for what can go wrong when one places as much faith in a testing procedure as Eric does."
                        Then why bother testing? Given the information provided by Bright Tree, what I said is perfectly correct. A negative test result tells you that something is wrong with your model, not what is wrong. Which is what I said. And therefore one needs to reformulate the model.



                        • #13
                          Everybody has their own answer to "why bother testing," Eric. Mine is that reality is far too complicated for the human brain, so we try to simplify it and make sense of it. The way we simplify and make sense of reality is by fitting models (which hopefully capture a key aspect of reality while disregarding features unimportant to us), and then, to aid our models, we plot pictures, estimate parameters, and test hypotheses. So I would say: to understand and simplify an overly complicated reality, we fit models; to aid and better understand our models, we test hypotheses.

                          There are simple questions like 2+2=?, which have simple and unambiguous answers, i.e., 4.

                          There are complicated questions, like "what to do when our IV model exhibits weak identification, or no identification at all." I think that you interpreted the outcome of the test in question correctly. (Yes, under-identification is a sign of trouble.)

                          From there on, I interpreted what you were saying as "Abort the mission, a terrible error has occurred, we all need to forsake what we are doing, or else the computer will break." With that I disagreed.

                          If you meant what you say below, which, put in somewhat milder terms, is that one might want "to reformulate the model," then I totally agree with you that this is excellent advice in the current situation.

                          Originally posted by Eric de Souza
                          Joro Kolev: "After your post in #10 you may already have a feeling for what can go wrong when one places as much faith in a testing procedure as Eric does."
                          Then why bother testing? Given the information provided by Bright Tree, what I said is perfectly correct. A negative test result tells you that something is wrong with your model, not what is wrong. Which is what I said. And therefore one needs to reformulate the model.



                          • #14
                            Joro: what I meant was that, given the model and the paucity of information provided, the instruments are useless and it makes no sense to proceed further in that framework, because the instruments are not serving their purpose. This result could have various causes.

