  • SEM/Testing measurement invariance when indicators are ordinal

    Greetings,

    I'm running Stata 15.1 on macOS. I'd like to test whether a mental health battery is measurement invariant across racial/ethnic groups. However, the indicators comprising this mental health index are ordinal (Likert scales), and Stata's sem command assumes that indicators are normally distributed, an assumption likely to be violated with ordinal indicators. I'm thus not sure how to proceed. Is there an estimation method that is robust to this violation? I know that R's lavaan package has one (WLS), but I'm not proficient in R, so I'm stuck using Stata. What can/should I do? Should I simply treat the indicators as if they were continuous variables, i.e., ignore the normality assumption? (A sketch of that option follows below.) Any thoughts or advice are much appreciated. Thanks!

    Best,
    Zach
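
    A minimal sketch of the treat-them-as-continuous option, assuming hypothetical item names q1-q3 and a latent factor MH: Stata 15 added Satorra-Bentler adjustments to sem, which relax the normality assumption when Likert items are modeled as continuous. Whether vce(sbentler) can be combined with group() for the invariance comparison itself is worth checking in the sem documentation.
    Code:
// q1-q3 and MH are placeholder names for the items and the latent factor
// fit a one-factor CFA treating the ordinal items as continuous, with
// Satorra-Bentler adjusted standard errors and model test (Stata 15+)
sem (q1 q2 q3 <- MH), vce(sbentler)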

  • #2
    Hi Zach, I am commenting partly because I have the exact same question, but also because one workaround I am using at the moment is to test for differential item functioning (DIF) with the community-contributed uirt command (https://www.stata-journal.com/articl...article=st0670). I am not sure of the pros and cons of framing the problem as DIF rather than as measurement invariance, but maybe this will help, or at least get someone's attention to provide a better option. Good luck!
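
    In case anyone wants to try that route, uirt is community-contributed and has to be installed first. A minimal sketch of locating it from within Stata (st0670 is the Stata Journal tag from the link above; uirt's own syntax for group/DIF analyses is documented in its help file rather than sketched here):
    Code:
// locate the Stata Journal package by its tag, then install it from the
// search results
search st0670

// once installed, see the help file for the command's DIF-related options
help uirt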



    • #3
      Originally posted by Rebecca Bucci
      I am commenting in part since I have the exact same question, but also because one workaround I am using at the moment is to test for DIF . . . I am not sure of the pros/cons of using DIF rather than framing it as measurement invariance but maybe this will help or at least get someone's attention to provide a better option.
      Couldn't you just use the ginvariant() option that's already available for the official gsem command?

      You could fit an ordered-categorical groupwise SEM using gsem with an identifying constraint (say, on the first item's first cutpoint, which plays the role of its intercept), and then use ginvariant()'s arguments, singly or in combination, to examine various types of measurement invariance: cutpoints (thresholds), factor loadings, and latent factor covariances.

      One possibility is shown below with a simple ordered-categorical confirmatory factor analysis (start at the "Begin here" comment; the top part just creates a toy dataset for illustration). It's shown as code only, for brevity (gsem . . . , group() with ordered-categorical indicator variables produces a lot of output), but I attach the log file in case you want to look at the results of running it.
      Code:
      version 17.0
      
      log using "Ordinal Measurement Invariance.smcl", nomsg name(lo)
      
      clear *
      
// seedem (seed generated with the community-contributed -seedem- utility)
set seed 213827780
      
// toy data: three correlated normal variables, each cut into five ordered categories
quietly drawnorm l1 l2 l3, double corr(1 0.7 0.7 \ 0.7 1 0.7 \ 0.7 0.7 1) n(250)
forvalues i = 1/3 {
    egen byte q`i' = cut(l`i'), group(5)
}
      
// arbitrary two-group indicator
generate byte grp = mod(_n, 2)
      
      *
      * Begin here
      *
      // Fit free model (first item's intercept constrained equal between groups for model identification)
      constraint define 1 [/q1]0bn.grp#c.cut1 = [/q1]1.grp#c.cut1
      gsem (q? <- F, oprobit), ///
          group(grp) ginvariant(none) constraints(1) ///
              nocnsreport nodvheader nolog
      estimates store Free
      
      // Then fit constrained model (cutpoints, factor loadings and latent factor variances)
      gsem (q? <- F, oprobit), ///
          group(grp) ginvariant(cons loading covex) ///
              nocnsreport nodvheader nolog
      lrtest Free
      
      quietly log close lo
      
      exit
      I've read that your DIF approach is essentially equivalent to testing for measurement invariance. I'm more inclined to eyeball the coefficient pairs and judge whether they're reasonably close, but using the official Stata commands for the purpose might have some merit if you're having to deal with a referee.
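
      For the eyeballing (or a one-pair formal test), a minimal sketch continuing from the code above; run it before the exit, while the Free estimates are still stored. The parameter names in the commented test line are illustrative placeholders, not verified gsem names; copy the actual ones from the coeflegend output.
      Code:
// redisplay the free model's group-specific estimates side by side
estimates restore Free
estimates table Free, b(%9.3f) se(%9.3f)

// replay the model with the legend to get the exact parameter names
gsem, coeflegend

// then test equality of one pair across groups, e.g. a single loading;
// substitute the actual names from the legend for the placeholders below
// test _b[q2:0bn.grp#c.F] = _b[q2:1.grp#c.F]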
