  • IRT RSM with groups

    Dear colleagues,
    I am not new to Stata, but I am new to IRT analysis. I searched this listserv but could not find an answer to my exact question.

    I have a Likert-scale (5 ordinal choices) survey with three groups. The responses to the psychometric questions varied by group (understandably) such that one or more groups responded by selecting categories that other groups did not.

    How do I set up an irt rsm, group(x) analysis in this case?

    TIA,
    Rupak Mukherjee

  • #2
    Originally posted by Rupak Mukherjee:
    I have a Likert-scale (5 ordinal choices) survey with three groups. The responses to the psychometric questions varied by group (understandably) such that one or more groups responded by selecting categories that other groups did not.
    Rupak, can you clarify this statement? If one group selects categories that another group didn’t, and these are ordinal items (as opposed to nominal ones), then that could simply mean that the groups differ in their ‘ability’ (or whatever the scale is measuring).

    Regardless, if you believe there is DIF, then I believe the manual already outlines how to conduct a multiple-group, likelihood-based DIF analysis. Do you have a specific question about the coding?
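    To give a rough feel for the mechanics (this is only a sketch, not the manual's exact procedure; item1-item5 and grp are placeholder names), an omnibus model comparison might look like:

    Code:
    * placeholder variable names; fit both models on the same estimation sample.
    * note this omnibus comparison mixes item-parameter DIF with genuine group
    * differences in the latent trait; the manual's worked example constrains
    * and tests individual item parameters instead
    irt rsm item1-item5
    estimates store pooled
    irt rsm item1-item5, group(grp)
    estimates store bygroup
    lrtest pooled bygroup
    After the group model, estat report should also let you eyeball how far apart the group-specific estimates are.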
    Be aware that it can be very hard to answer a question without sample data. You can use the dataex command for this. Type help dataex at the command line.

    When presenting code or results, please use code delimiters to format them. Use the # button on the formatting toolbar, between the " (double quote) and <> buttons.



    • #3
      Weiwen,
      Thank you. I think you may have already answered my query. Yes, the groups do vary in their "ability". I will take a closer look at the manual for the multiple group likelihood-based DIF analysis.

      Rupak Mukherjee



      • #4
        Hi Weiwen and others:
        I looked up DIF analysis in the IRT manual, and both tests (diflogistic and difmh) are for dichotomously coded questions. I have questions on a Likert-scale questionnaire (with 6 possible responses) and 3 groups. The responses differed between the groups in that certain groups did not choose certain responses at all (see the tabulations below). In this case, I get the error message "items ** must have the same number of levels for each group". Any suggestions?

        . irt rsm PresSkill if(DateTime != "[not completed]" & ID != 8), group(RoleNew)
        items PresSkill must have the same number of levels for each group
        r(198);

        . tabulate RoleNew PresSkill if(DateTime != "[not completed]" & ID != 8)

         RECODE of |
              Role |
        (role_of_r |                   presentation_skill
        espondent) |  Very Impo  Important    Neutral  Less Impo     Unsure |      Total
        -----------+--------------------------------------------------------+-----------
         Attending |         23         20          0          1          0 |         44
          Resident |         13         26          1          2          0 |         42
           Multi-D |         16         24          1          2          1 |         44
        -----------+--------------------------------------------------------+-----------
             Total |         52         70          2          5          1 |        130

        . tabulate RoleNew WriteSkill if(DateTime != "[not completed]" & ID != 8)

         RECODE of |
              Role |
        (role_of_r |                     Writing Skills
        espondent) |  Very Impo  Important    Neutral  Less Impo     Unsure |      Total
        -----------+--------------------------------------------------------+-----------
         Attending |         10         24          6          4          0 |         44
          Resident |          6         23          9          3          0 |         41
           Multi-D |         14         20          4          5          1 |         44
        -----------+--------------------------------------------------------+-----------
             Total |         30         67         19         12          1 |        129




        • #5
          Rupak,

          Next time, can you use code delimiters to post the results? They will make sure that the table is properly formatted. Without the code delimiters, the columns don't line up. And unfortunately, if I copy-paste your results into code delimiters post hoc, it doesn't format correctly.

          That said, I can see that you have one response category called "unsure". I think there's a good argument to treat that as missing.

          Code:
          recode WriteSkill (5 = .), generate(WriteSkill_recode)
          Or use the replace option if you're fine replacing the variable.
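          (For the in-place version: recode simply overwrites the variable whenever generate() is omitted.)

          Code:
          * overwrites WriteSkill in place; no generate() option
          recode WriteSkill (5 = .)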

          If you don't want to treat unsure as missing, where in importance do you think it falls? As is, I think the model would assume it's got the lowest difficulty parameter. I doubt that's what you want.

          Now, you have a fundamental problem in that responses to some of the categories are sparse. I would guess that is the cause of the error: some groups are not endorsing a category at all, so the model thinks the groups have different numbers of response categories. I don't see an alternative but to collapse some of the response categories. If you treat unsure as missing (NB, you already have some respondents with missing data, as you can see from the row totals), then it looks like the attending physicians (?) were the only group with an empty category, and that's the neutral one. So, maybe collapse neutral and less important.
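          As a sketch (I'm assuming the categories are coded 1 = Very Important through 4 = Less Important with 5 = Unsure, consistent with the recode above; check yours with label list or tabulate, nolabel):

          Code:
          * assumed coding: 1=Very Important ... 4=Less Important, 5=Unsure
          * set unsure to missing and merge neutral into less important, consistently across items
          recode PresSkill WriteSkill (5 = .) (3 = 4), prefix(rc_)
          tabulate RoleNew rc_PresSkill if DateTime != "[not completed]" & ID != 8
          tabulate RoleNew rc_WriteSkill if DateTime != "[not completed]" & ID != 8
          Every group should then have a nonzero count in every remaining category before you try irt rsm, group() again.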

          I don't typically use the rating scale model, but note that rsm assumes every item shares the same rating scale, so all the items need the same set of categories; if you collapse, collapse consistently across items (pcm or grm would let items have unequal numbers of categories). Your goal is just to determine whether the difficulty parameters and/or the discrimination parameter differ among the groups, so I don't think collapsing the categories needs to affect the overall interpretation of the model.

          While we're on the subject of DIF, remember that DIF in individual questions often doesn't make that much of a difference to the whole model, and it can often be ignored. After you've enumerated which questions may have DIF, try fitting a model where all your DIF candidates have DIF, and then plot the test characteristic curve, which is the expected sum score by level of theta. You'll probably see that the curves are pretty similar between groups.
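          As a sketch of that last step, picking up the collapsed rc_* items from above (I haven't checked exactly how estat report and irtgraph tcc present the groups after a group() fit, so see the [IRT] postestimation entries):

          Code:
          * rc_* are the collapsed items from the recode sketch above; add the rest of your items
          irt rsm rc_PresSkill rc_WriteSkill if DateTime != "[not completed]" & ID != 8, group(RoleNew)
          estat report     // item parameter estimates
          irtgraph tcc     // test characteristic curve: expected sum score by theta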
          Be aware that it can be very hard to answer a question without sample data. You can use the dataex command for this. Type help dataex at the command line.

          When presenting code or results, please use code delimiters to format them. Use the # button on the formatting toolbar, between the " (double quote) and <> buttons.
