  • Interrater reliability

    Hello,

    I am working with data that includes ratings from two fidelity raters for multiple sessions of an intervention with 4 participants. I would like to calculate the raters' level of agreement, but I am not sure how to format the data in Stata to do so. I appreciate any help. Please find my data below. Please note the response options for each session/topic are 1=yes, 0=no, and 3=unable to assess. Unfortunately, I am unable to figure out how to use dataex due to the linesize limit and the format of my data. Please see a screenshot below (sorry!).
    [Screenshot of data attached: Screen Shot 2022-11-22 at 11.56.02 AM.png]
    Last edited by Katie Holzer; 22 Nov 2022, 11:03.

  • #2
    First of all, let me say that I suspect that "coding" per se is not the issue. Your coding of yes/no/unable to assess should be compatible with any of the various commands for agreement on a nominal response variable. I'd suggest you look at the -kappaetc- command (-ssc describe kappaetc-) and the examples in its help file to see if anything there helps you.
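    For what it's worth, here is a minimal sketch of what that might look like, assuming your data were arranged so that each observation is one rated item and there is one rating variable per rater. The names rating1 and rating2 are hypothetical; substitute your own.

    Code:
    * install the user-written command from SSC (once)
    ssc install kappaetc

    * one observation per rated item, one variable per rater
    * rating1/rating2 are placeholder names for the two raters' scores
    kappaetc rating1 rating2

    -kappaetc- reports kappa alongside several other chance-corrected agreement coefficients, which is convenient when the rating scale is nominal, as yours appears to be.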

    Presuming that doesn't get you far enough:

    With your screenshot display of your data, I can't see enough of the variable names to use them to help me guess what the structure of your data is and go farther with advice; perhaps other people will do better at this than me.

    I suspect you *can* get -dataex- to be helpful. You don't *have* to display *all* of your variables with -dataex-, as the help for it shows that it does accept a -varlist-. You could just display, say, 10 or so of your MO variables with something like:
    Code:
    dataex Participant Event Fidelity MO... MO... MO... MO...  // don't know your variable names
    Even if you don't list *all* the MO* names, you could list some of them and explain to us what the rest would be. I think this is necessary for someone to efficiently and effectively help you.

    While I appreciate your attempt not to overburden us with detail, if you want advice on an appropriate agreement command, we'd need a better sense of the content. For example: what does each observation represent? Which variable indicates the rater? Are the MO* variables a set of scores for each participant? What in an observation distinguishes one session from another? You might want to get a colleague to listen while you try out a sample explanation. Also, I don't find the terminology "fidelity rater" familiar (my ignorance), so if that's relevant to understanding what you are doing, I'd suggest you explain it.
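    In case it helps: if your data turn out to be one observation per rater per session, with the items stored wide as MO1, MO2, ..., a pair of -reshape- steps could put the two raters' scores side by side so an agreement command can compare them. This is only a sketch under assumed variable names (Participant, Session, and a Rater variable coded 1/2 are all guesses on my part):

    Code:
    * assumed layout: one row per rater per session, items MO1-MO10
    * step 1: go long so each row is one item for one rater
    reshape long MO, i(Participant Session Rater) j(item)

    * step 2: go wide on rater so each row pairs the two ratings
    reshape wide MO, i(Participant Session item) j(Rater)

    * MO1 and MO2 now hold rater 1's and rater 2's score for the same item
    kappaetc MO1 MO2

    But whether anything like this applies depends entirely on the structure we can't yet see, so do post the -dataex- output first.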
