  • testing for equivalence of parameters across models

    Hey all,

    So I am fitting 2 logits and 2 left-censored tobits, and I aim to compare the coefficients of two covariates from each model with one another.

    I estimate my regressions and store them, then use the suest command on est1 (logit1) and est2 (logit2).

    Then I use test _b[est2_XX:Cov] = _b[est4_XX:Cov2], so my null is that the two coefficients are equal (a sketch of this workflow follows below).

    And my output is

    chi2( 1) = 2.86
    Prob > chi2 = 0.0907

    I am not 100% sure how to interpret this Wald test. Any help is welcome. (I am assuming I fail to reject the null, but that the coefficients might be different at the 10% level?)
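
    For reference, a minimal sketch of the workflow described above, with hypothetical names (y1, y2, Cov, Cov2, controls) standing in for the actual variables; the test line follows the same estname_depvar:varname pattern used in the post:

        * fit the two logits and store the results (placeholder names)
        logit y1 Cov controls
        estimates store est1
        logit y2 Cov2 controls
        estimates store est2

        * combine the stored results so cross-model tests are possible
        suest est1 est2

        * Wald test of equality of the two coefficients across models
        test _b[est1_y1:Cov] = _b[est2_y2:Cov2]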




  • #2
    Originally posted by Mike Tanner
    I am not 100% sure how to interpret this Wald test. . . . (I am assuming I fail to reject the null . . .)
    If you survey practitioners of NHST, you're liable to find that they consider a fail-to-reject result uninterpretable beyond just that: you've failed to reject the null hypothesis, end of discussion (beyond perhaps ruing an inadequate sample size).

    In any case, I don't see how it can, by itself, be interpreted in terms of the "equivalence" of your post's title. Equivalence NHST is done with a different null-alternative hypothesis pair than the one you're showing (see the sketch below).

    And when you're fitting different models, that is, different sets of predictors with perhaps only the one common to both, why wouldn't you expect its parameter estimate to be nonequivalent a priori? NHST here seems kind of straw-mannish.
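
    For what it's worth, equivalence testing is usually framed as two one-sided tests (TOST) against a pre-specified margin. A minimal sketch after suest, using the same placeholder names as above and a made-up margin delta that you would have to justify substantively:

        * difference between the two coefficients and its standard error
        lincom _b[est1_y1:Cov] - _b[est2_y2:Cov2]
        local diff = r(estimate)
        local se   = r(se)

        * equivalence margin (placeholder value, must be chosen a priori)
        local delta 0.10

        * two one-sided z tests: the null is |difference| >= delta
        local z_lo = (`diff' + `delta') / `se'
        local z_hi = (`diff' - `delta') / `se'
        display "p (diff <= -delta): " 1 - normal(`z_lo')
        display "p (diff >=  delta): " normal(`z_hi')

        * equivalence is concluded only if both one-sided p-values are small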
