  • confidence interval for chi2 statistic following lrtest

    Hi Statalist,

    I have conducted multiple regression analyses in an SEM framework (using Stata 13.0). To compare the model fit of the covariate-only model versus the full model (with the predictors of interest), I conducted a likelihood ratio test, which produced a chi-square statistic and an associated p-value. Is there a way to compute a 95% CI for the chi-square statistic from the LR test? Below is the exact syntax. Thanks!
    * full model: covariates plus the predictors of interest
    sem (RC_CNQTOTALT2 <- Female XComorbT1 aget1_centered EducationT1 ///
        MOS_StrucT1_w MOS_EPAT1 MOS_tangT1 FFMP_NT1), nocapslatent ///
        method(mlmv)

    estimates store m1

    * restricted model: predictors of interest constrained to 0
    sem (RC_CNQTOTALT2 <- Female XComorbT1 aget1_centered EducationT1 ///
        MOS_StrucT1_w@0 MOS_EPAT1@0 MOS_tangT1@0 FFMP_NT1@0), ///
        nocapslatent method(mlmv)

    estimates store m2

    * likelihood-ratio test of the restricted model against the full model
    lrtest m1 m2

  • #2
    Maybe bootstrap? I'm curious about what you're going to do with it. Something like this, or is it another one of those SEM goodness-of-fit numbers?

    • #3
      Ok, but how exactly would I implement that? I've never used bootstrap with lrtest.

      The goal is to compare the models with and without covariates and run a simultaneous test of all the covariates in the model. This is in response to a reviewer's request to report the confidence interval for the lrtest.

      Thanks for your help!
      Last edited by Andrea Niles; 18 Jan 2018, 13:18.

      • #4
        I figured out how to do the bootstrap, but I didn't get an SE or CI, and I got the message below:

        Warning: Because lrtest is not an estimation command or does not set e(sample), bootstrap has
        no way to determine which observations are used in calculating the statistics and so
        assumes that all observations are used. This means that no observations will be
        excluded from the resampling because of missing values or other reasons.

        • #5
          I wrote my own bootstrap program and this worked! Thanks so much for your help!
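
          For anyone who hits the same wall, a minimal sketch of one way such a program could be written (my guess, using the model from post #1, not the poster's actual code): wrap both fits and the lrtest in an rclass program, so that bootstrap refits the models on each resample and collects the LR chi-square:

          * sketch only: refit both models on each bootstrap sample and return the LR chi2
          capture program drop lrchi2
          program define lrchi2, rclass
              sem (RC_CNQTOTALT2 <- Female XComorbT1 aget1_centered EducationT1 ///
                  MOS_StrucT1_w MOS_EPAT1 MOS_tangT1 FFMP_NT1), nocapslatent method(mlmv)
              estimates store full
              sem (RC_CNQTOTALT2 <- Female XComorbT1 aget1_centered EducationT1 ///
                  MOS_StrucT1_w@0 MOS_EPAT1@0 MOS_tangT1@0 FFMP_NT1@0), ///
                  nocapslatent method(mlmv)
              estimates store restricted
              lrtest full restricted
              return scalar chi2 = r(chi2)
          end

          * bootstrap may still note that all observations are resampled; with method(mlmv) that is the intent
          bootstrap chi2 = r(chi2), reps(500) seed(12345): lrchi2
          estat bootstrap, percentile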

          • #6
            Glad to hear it worked out. I have never heard of anyone asking for a confidence interval around the chi-square statistic. Theory (as I understand it) tells us that the -2 log-likelihood difference between nested models has an asymptotic chi-square distribution, with degrees of freedom equal to the difference in the number of parameters. I have heard of people obtaining the distribution of the -2 log-likelihood difference via bootstrap in cases where theory tells us the asymptotic chi-square distribution does not apply (in latent class analysis, the comparison sits at the boundary of the parameter space, so people smarter than I have derived an analytical correction to the -2LL value or recommended the bootstrap LR test). Maybe that is what your reviewer was asking for?
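
            As a small illustration of that theory, here is a sketch (my own, using the stored results m1 and m2 from post #1) of how the LR statistic and its theoretical p-value come out of the two log likelihoods; the 4 degrees of freedom assume only the four @0 constraints differ between the models:

            * illustration only: rebuild the LR statistic from the stored fits in post #1
            estimates restore m1
            scalar ll_full = e(ll)
            estimates restore m2
            scalar ll_restr = e(ll)
            scalar lrstat = -2*(ll_restr - ll_full)    // the -2 log-likelihood difference
            scalar lrdf   = 4                          // four coefficients constrained to 0
            display "LR chi2(" lrdf ") = " lrstat ",  p = " chi2tail(lrdf, lrstat)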

            If it was, do you have any idea why it was necessary? When you fit the measurement model, you already estimated p-values and 95% confidence intervals for the betas that you later constrained to zero. If the 95% CI around the betas didn't include 0, then what additional information does this exercise provide? Additionally, you could have:

            1) fit your model, then conducted a joint test of whether the coefficients of all predictors of interest are equal to 0, or

            2) fit the model, then conducted score tests to see if each individual constraint should have been relaxed.

            That process was detailed in SEM example 8; a rough sketch of both approaches is below.
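
            A sketch of what those two alternatives might look like with the variable names from post #1 (the _b[] coefficient references are my guess at the equation names sem uses, so confirm them with sem, coeflegend before relying on them):

            * 1) fit the full model, then jointly test the predictors of interest
            sem (RC_CNQTOTALT2 <- Female XComorbT1 aget1_centered EducationT1 ///
                MOS_StrucT1_w MOS_EPAT1 MOS_tangT1 FFMP_NT1), nocapslatent method(mlmv)
            test (_b[RC_CNQTOTALT2:MOS_StrucT1_w] = 0) (_b[RC_CNQTOTALT2:MOS_EPAT1] = 0) ///
                 (_b[RC_CNQTOTALT2:MOS_tangT1] = 0) (_b[RC_CNQTOTALT2:FFMP_NT1] = 0)

            * 2) fit the constrained model, then ask whether each @0 constraint should be relaxed
            sem (RC_CNQTOTALT2 <- Female XComorbT1 aget1_centered EducationT1 ///
                MOS_StrucT1_w@0 MOS_EPAT1@0 MOS_tangT1@0 FFMP_NT1@0), ///
                nocapslatent method(mlmv)
            estat scoretests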

            I am not deeply familiar with SEM, so I may not know what I am talking about; if that is the case, I am genuinely curious to hear why your reviewer felt this was necessary.

            Be aware that it can be very hard to answer a question well without sample data. You can provide an excerpt with the dataex command; type help dataex at the command line.
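
            For example, something along these lines would post a reproducible excerpt of the variables in the model (dataex is available from SSC if it is not already installed):

            ssc install dataex
            dataex RC_CNQTOTALT2 Female XComorbT1 aget1_centered EducationT1 ///
                MOS_StrucT1_w MOS_EPAT1 MOS_tangT1 FFMP_NT1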

            When presenting code or results, please use code delimiters to format them. Use the # button on the formatting toolbar, between the " (double quote) and <> buttons.
