
  • Reference to interpret gsem ologit output

    Hi all

    I am trying to locate information on how to interpret the output of a gsem ologit measurement model (factor analysis). Can anyone suggest a reference that describes, in detail, how to interpret the output and identify poorly fitting items? I have a great book, ‘Discovering SEM Using Stata’, but unfortunately it doesn’t cover gsem models. Something similar would be great!

    The model has one latent variable and 5 categorical ordered endogenous variables.

    I am using Stata 16.

    Thanks in advance for your time
    Jen

  • #2
    You are literally one constraint away from item response theory (IRT). If you constrain the latent variable's variance to 1, then all of the item discrimination parameters are estimated, rather than the first one being fixed at 1. IRT folks call this parameter the discrimination; I believe SEM folks refer to it as a slope parameter, and it is equivalent to a factor loading in EFA.
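    As a sketch of the two parameterizations, using Stata's charity dataset (webuse charity); both fit the same model with the same log likelihood, only the scaling of the parameters differs:

    Code:
    * default SEM identification: latent variance free, first loading fixed at 1
    gsem (Theta -> ta1 ta2 ta3 ta4 ta5, ologit), latent(Theta)

    * IRT-style identification: latent variance fixed at 1, all loadings estimated
    gsem (Theta -> ta1 ta2 ta3 ta4 ta5, ologit), variance(Theta@1) latent(Theta)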

    In IRT, I believe we usually want the discrimination parameters to be at least 1; a bit under 1 is fine. I don't have specific sources for this rule of thumb, though.

    You could search the IRT literature, though. Note that Rasch models, one class of IRT models, approach this differently: they either estimate one model-wide discrimination parameter, or fix the discrimination at 1 for all items (I'm unclear which). You would be looking at models like the two-parameter logistic, the graded response model, the generalized partial credit model, and maybe a few others. If you see a Rasch or Rasch-type model, be aware that its single discrimination parameter applies to all the items simultaneously.
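    For example, a Rasch-style "one common discrimination" version can be sketched in gsem by constraining every loading to a common estimated value via an arbitrary symbolic label such as @a:

    Code:
    * one shared discrimination across all items, latent variance fixed at 1
    gsem (Theta -> ta1@a ta2@a ta3@a ta4@a ta5@a, ologit), variance(Theta@1) latent(Theta)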
    Be aware that it can be very hard to answer a question without sample data. You can use the dataex command for this. Type help dataex at the command line.

    When presenting code or results, please use code delimiters to format them. Use the # button on the formatting toolbar, between the " (double quote) and <> buttons.



    • #3
      I had better add something to the post above.

      The irt suite of commands actually fits the model via gsem behind the scenes. If all you are doing is a measurement model, you could just run the irt grm command. However, the two commands present the cutpoints differently (I believe SEM folks might refer to these as intercepts or thresholds); the discrimination/loading parameters are the same. To illustrate, with unnecessary output removed and focusing on the first item only:

      Code:
      webuse charity
      irt grm ta1-ta5
      
      ------------------------------------------------------------------------------
                   |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
      -------------+----------------------------------------------------------------
      ta1          |
           Discrim |    .907542   .0955772     9.50   0.000     .7202142     1.09487
              Diff |
              >=1  |  -1.540098   .1639425                     -1.861419   -1.218776
              >=2  |   1.296135   .1427535                      1.016343    1.575927
               =3  |   3.305059   .3248468                      2.668371    3.941747
      -------------+----------------------------------------------------------------
      
      
      gsem (Theta -> ta1 ta2 ta3 ta4 ta5, ologit), variance(Theta@1) latent(Theta)
      
      ------------------------------------------------------------------------------
                   |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
      -------------+----------------------------------------------------------------
      ta1          |
             Theta |    .907542   .0955772     9.50   0.000     .7202142     1.09487
      
      ...
      -------------+----------------------------------------------------------------
      /ta1         |
              cut1 |  -1.397704   .0947599                      -1.58343   -1.211978
              cut2 |   1.176297   .0904769                      .9989655    1.353628
              cut3 |    2.99948   .1552198                      2.695255    3.303705

      Again, the discrimination parameters are the same. A poorly fitting item is equivalent, I believe, to an item with low discrimination.

      In IRT, standard practice is that you want the difficulty (cutpoint) parameters to cover a good amount of the range of the latent trait, probably at least from theta = -2 to 2, preferably below and above that. In gsem, you can convert to the IRT difficulty parameterization by calculating cut1 / discrimination, cut2 / discrimination, etc.; in the output above, -1.397704 / 0.907542 ≈ -1.54, matching the first Diff parameter. This is detailed in the manual for the irt commands. (Hint: if for some reason you have Stata 13, you can fit an IRT model in gsem and use the nlcom command to make the appropriate transformation.)
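      A sketch of that nlcom transformation after the gsem fit above. The exact stored parameter names can vary across Stata versions, so replay with the coeflegend option first and substitute the names Stata reports; the ones below are my assumption:

      Code:
      gsem, coeflegend
      * IRT difficulty = cutpoint / discrimination
      nlcom _b[/ta1:cut1] / _b[ta1:Theta]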

      If you're just fitting a measurement model, then, you can just use the irt grm command. If you are doing something more than a measurement model, e.g. an explanatory IRT model (which I believe SEM users may call a MIMIC model), then you will need gsem. Some syntax is available in an earlier post of mine.
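      A rough sketch of such an explanatory (MIMIC-style) model, assuming a hypothetical covariate female that predicts the latent trait (note that once Theta has a predictor, the scale constraint goes on its error variance, e.Theta):

      Code:
      * female is a hypothetical covariate, not in the charity data
      gsem (Theta -> ta1 ta2 ta3 ta4 ta5, ologit) (female -> Theta), variance(e.Theta@1) latent(Theta)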


      • #4
        Thanks heaps Weiwen!
