
  • RE: Likelihood ratio test

    Hello!

    I ran two logit models and tried to compare them with a likelihood ratio test, to see whether the interaction model improves on the basic model. However, I got the error message “observations differ: 1670 vs. 1659 r(498);”. Why does this error occur? Please see below. Any help would be appreciated.

    logit sho i.time c.econdis c.blackhisp c.vrate i.srace i.sgender i.wscend i.contact i.injury i.commands, cformat(%9.3f) pformat(%5.3f) sformat(%8.3f) nolog

    est store m1

    logit sho i.time c.econdis c.blackhisp c.vrate i.srace i.sgender i.wscend i.contact i.injury i.commands i.time#c.econdis i.time#c.blackhisp i.time#c.vrate i.time#i.srace, cformat(%9.3f) pformat(%5.3f) sformat(%8.3f) nolog

    est store m1

    lrtest m1 m2, stats
    observations differ: 1670 vs. 1659
    r(498);


  • #2
    Stata helpfully told you what the problem is. You have differing numbers of observations used to estimate the two models, and under such conditions the likelihood ratio test is invalid. This most likely happened because of missing values in one or more of your variables.
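
    A quick way to force a common estimation sample, assuming the missing-value diagnosis is right, is to flag the complete cases once and add "if touse" to both logit commands (variable names taken from the original post):

    ```stata
    * Flag cases that are nonmissing on every variable used in either model;
    * fitting both models "if touse" makes the lrtest samples identical.
    mark touse
    markout touse sho time econdis blackhisp vrate srace sgender wscend ///
        contact injury commands
    ```

    Refitting both models with "if touse" appended should then give matching observation counts.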

    Also, you've posted here frequently. Please read the FAQ to learn how to ask effective questions here, such as showing code and model output, and note the use of the CODE tags.



    • #3
      It doesn’t look like missing data to me. More likely, some of the cases in the more general model have their outcomes perfectly predicted, so they are dropped. I would just estimate the model with interactions and use the test command on the interaction terms; that is a Wald test rather than a likelihood ratio test.
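
      If it helps, that Wald-test route could look like this with testparm, which jointly tests that all coefficients matching the listed terms are zero (model specification copied from the original post, formatting options omitted):

      ```stata
      * Fit only the interaction model, then run a joint Wald test on the
      * interaction coefficients; no nested refit or lrtest is needed.
      logit sho i.time c.econdis c.blackhisp c.vrate i.srace i.sgender ///
          i.wscend i.contact i.injury i.commands ///
          i.time#c.econdis i.time#c.blackhisp i.time#c.vrate i.time#i.srace, nolog
      testparm i.time#c.econdis i.time#c.blackhisp i.time#c.vrate i.time#i.srace
      ```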



      • #4
        Original Poster, the lrtest is comparing m1 to some other m2 that you are not showing. Note that in the code you posted, you stored the results of both logits under the same name, m1. The second set of estimation results therefore overwrites the first, so your lrtest is comparing the second logit you showed us to some model you have called m2 and have not shown us.

        In short, you first need to fix the mistakes in your code.
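
        Concretely, the corrected sequence would be (formatting options trimmed for brevity):

        ```stata
        * Main-effects model, stored as m1.
        logit sho i.time c.econdis c.blackhisp c.vrate i.srace i.sgender ///
            i.wscend i.contact i.injury i.commands, nolog
        est store m1

        * Interaction model, stored under a different name, m2.
        logit sho i.time c.econdis c.blackhisp c.vrate i.srace i.sgender ///
            i.wscend i.contact i.injury i.commands ///
            i.time#c.econdis i.time#c.blackhisp i.time#c.vrate i.time#i.srace, nolog
        est store m2

        * Now lrtest compares the two models actually shown.
        lrtest m1 m2, stats
        ```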



        • #5
          Hi There,

          Could anyone here help me with the interpretation of lrtest? Below is the output from Stata:


          . lrtest x1 x2, st di

          Likelihood-ratio test LR chi2(5) = 7248.82
          (Assumption: x1 nested in x2) Prob > chi2 = 0.0000



          What does it mean? The two models were fit with the streg command.
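
          For reference, the reported p-value is the upper tail of a chi-squared distribution with 5 degrees of freedom evaluated at the LR statistic; a value this small means the 5 extra parameters in the larger model jointly improve the fit at any conventional significance level:

          ```stata
          * Reproduce the p-value printed by lrtest: P(chi2(5) > 7248.82).
          display chi2tail(5, 7248.82)
          ```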

          Could anyone point me to a resource that would help me interpret lrtest?

          Regards
          Pavan
