  • Using a macro with test

    Hi all,

    I'm trying to test whether the coefficient on one predictor differs across several models.
    These are IV models and I'm running them using the -reghdfe- package.

    Sample code below:

    Code:
    reghdfe y x2 x3 (x1 = iv1), vce(cluster id) absorb(year id)
    local coef1 _b[x1]
    
    reghdfe y x2 x3 (x1 = iv2), vce(cluster id) absorb(year id)
    local coef2 _b[x1]
    
    test `coef1' = `coef2'
    In this case, x1 is the predictor of interest (instrumented, in turn, by two separate IVs). The output from this -test- command is blank.

    I'm starting to think that -test- doesn't permit macros. If so, what's the best way to store coefficients and test them against those from a different model? I can't use -suest- because it isn't compatible with -reghdfe-.

    EDIT: apologies, I realize the above code is incorrect.

    I have tried the below as well:

    Code:
    reghdfe y x2 x3 (x1 = iv1), vce(cluster id) absorb(year id)
    local coef1 _b[x1]
    
    reghdfe y x2 x3 (x1 = iv2), vce(cluster id) absorb(year id)
    
    test _b[x1] = `coef1'
    Thanks!
    Last edited by Yevgeniy Feyman; 14 Sep 2017, 13:58.

  • #2
    Update: I've solved this. The solution was as follows for those who are interested:


    Code:
    reghdfe y x2 x3 (x1 = iv1), vce(cluster id) absorb(year id)
    local coef1: di _b[x1]
    
    reghdfe y x2 x3 (x1 = iv2), vce(cluster id) absorb(year id)
    test _b[x1] = `coef1'

    • #3
      -test- permits macros, but the way you are using them is wrong. In your first block of code, the line -local coef1 _b[x1]- sets local macro coef1 to the string "_b[x1]". It does not take note of the actual value of that coefficient. Analogously for -local coef2-. When you then run -test `coef1' = `coef2'-, Stata expands the macros and comes up with -test _b[x1] = _b[x1]-. But as far as -test- is concerned, the only _b[x1] is the one from the most recent regression, so you are comparing the coefficient of x1 in the second regression with itself. Similar reasoning shows that the same thing happens in your second code block.
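
      For illustration, a minimal, self-contained sketch of the distinction, using the auto dataset shipped with Stata (the models here are arbitrary):

      Code:
      sysuse auto, clear
      regress price mpg
      local coef1 _b[mpg]    // stores the literal text "_b[mpg]"
      local coef2 = _b[mpg]  // stores the numeric value of the coefficient
      
      regress price weight mpg
      display `coef1'        // re-evaluates _b[mpg] against the new model
      display `coef2'        // still the value saved from the first model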

      What I don't have is a solution to your problem. Ordinarily one would handle a situation like this with -sureg-, but I don't know whether -sureg- can be combined with -reghdfe-. If it can't, I don't know how else to handle this. Perhaps somebody else can think of a solution.

      Added: Crossed with #2. I don't think the solution in #2 is correct. By using -local coef1: di _b[x1]-, Yevgeniy is now storing the estimated value of _b[x1] from the first equation in local macro coef1. But then the test command expands to -test _b[x1] = some_number- (some_number being the estimated coefficient of x1 from the first model). The problem is that you are now testing a random variable, _b[x1] from the second model, against a constant, so the sampling variability in _b[x1] from the first model is not taken into account by this test. I would imagine that this would tend to be anti-conservative in terms of statistical significance.
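
      To see that expansion concretely, one quick check run after the first model (the displayed number will depend on your data):

      Code:
      local coef1 = _b[x1]
      display `"test _b[x1] = `coef1'"'  // prints the literal command -test- receives: a comparison against a fixed number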

      One more thought: -local coef1 = _b[x1]- would do the same thing as -local coef1: di _b[x1]- and is quicker to type.
      Last edited by Clyde Schechter; 14 Sep 2017, 14:22.

      • #4
        Clyde, thanks for the thoughtful response.

        I see your point regarding sampling variability. In that case, perhaps it would be better simply to do the comparison by hand with a t-test?

        • #5
          I'm honestly not sure whether that works. If you assume the random variables _b[x1] from model 1 and _b[x1] from model 2 are independently sampled, then, yes, you could calculate a t-test based on their standard errors. But I am not sure whether that is actually true, and my instinct is that it isn't. My ability to pierce this problem is limited by the fact that I have very little understanding of instrumental variables--they are seldom used in my line of work. I just don't know whether the joint distribution of those estimates has non-zero covariance. If it does, then without an estimate of that covariance you can't calculate the standard error of the difference between them, and finding that covariance puts us right back where we started.
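
          If one were willing to make that independence assumption, a minimal sketch of the by-hand test might look like this (variable names taken from the thread; the independence assumption is exactly the part in doubt):

          Code:
          reghdfe y x2 x3 (x1 = iv1), vce(cluster id) absorb(year id)
          local b1  = _b[x1]
          local se1 = _se[x1]
          
          reghdfe y x2 x3 (x1 = iv2), vce(cluster id) absorb(year id)
          local b2  = _b[x1]
          local se2 = _se[x1]
          
          * z-statistic for H0: b1 = b2, valid ONLY if the two estimates are independent
          local z = (`b1' - `b2') / sqrt(`se1'^2 + `se2'^2)
          display "z = " `z' "  two-sided p = " 2*normal(-abs(`z'))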

          I hope somebody else can come up with a solution for you. This is just beyond my knowledge and understanding.

          • #6
            It looks like you're trying to test whether you get different parameter estimates when you change instrumental variables. I'm not sure why you would want to do this. Generally, I would think you'll do better to include both if they both influence the endogenous variable.

            A few things to think about. A Hausman test might be an alternative, but it generally tests whether all the parameters are equal. If -suest- works with -reghdfe-, then that would be a good way to go. You could also use -xtivreg- with i.year instead of -reghdfe-; -suest- might work with -xtivreg- even if it doesn't with -reghdfe- (I'm not sure). I think you can also do 2SLS by hand with the -regress- command, which might work with -suest- even if -xtivreg- doesn't; the panel effects can be done with dummies. If all else fails, you might estimate the instrumental (first-stage) regressions separately and use the predicted values in -sureg-, as sketched below.
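
            As a rough sketch of that last suggestion (equation names eq1 and eq2 are arbitrary, the id fixed effects are omitted for brevity, and note that these second-stage standard errors lack the usual 2SLS correction for the generated regressor):

            Code:
            * manual first stages, one per instrument
            regress x1 iv1 x2 x3 i.year
            predict double x1hat1, xb
            regress x1 iv2 x2 x3 i.year
            predict double x1hat2, xb
            
            * joint estimation of the two second stages, then a cross-equation test
            sureg (eq1: y x1hat1 x2 x3 i.year) (eq2: y x1hat2 x2 x3 i.year)
            test [eq1]x1hat1 = [eq2]x1hat2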
