  • Finding interrater reliability

    Hello! I asked a related question a few weeks ago and really appreciate the help I got; I now understand what I'm trying to do a little better and have also included example data.

    I've given the first 20 rows of my data (there are 262 rows in total) below. For each movie, a reviewer answered 8 yes/no factual questions. I want to check interrater reliability by movie_ID, i.e., agreement among the reviewers of movie 1, among the reviewers of movie 3, and so on. So perhaps some kind of loop that checks whether the movie_IDs match and, if so, runs kappa? (A rough sketch of the kind of loop I mean is below the example data.) From some examples I have read, I suspect my data may also not be laid out correctly for interrater reliability, but I'm not sure what the layout should look like.

    I'm also curious whether there's a way to store these kappa values by reviewer_ID so that I can see if there are any reviewers who have consistently low agreement with the other reviewers.

    I am very new to using Stata for interrater reliability and am not sure how to approach this. Thank you!

    Code:
    * Example generated by -dataex-. To install: ssc install dataex
    clear
    input int movie_ID long reviewer_name byte(question1 question2 question3 question4 question5 question6 question7 question8)
     1  5 1 1 1 1 1 1 1 1
     1  3 1 1 1 0 1 1 1 0
     2  3 1 1 1 1 1 1 1 1
     3  5 1 1 1 1 1 1 1 1
     3  3 1 1 1 1 1 1 1 1
     3  9 1 1 0 1 1 1 0 1
     7  4 1 0 0 0 1 0 1 0
     7 15 1 1 1 1 1 1 1 1
     8  4 1 1 0 0 0 1 1 0
     8 15 1 1 1 1 1 0 1 1
     9 15 1 1 1 1 1 0 1 1
     9  4 1 0 0 0 0 0 0 0
    10 15 1 1 1 1 1 0 1 1
    10  4 1 0 0 0 0 0 0 0
    11  4 1 1 1 1 1 1 1 1
    11 15 1 1 1 1 1 1 1 1
    12 15 1 1 1 1 1 0 1 1
    12  5 1 0 1 0 0 0 0 0
    12  7 1 1 1 0 0 1 0 0
    13 15 1 1 1 1 1 1 1 1
    end
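
    Here is a rough, untested sketch of the kind of loop I have in mind (the variable names answer, rater, qnum, and movie_kappa are just made up for illustration): it reshapes the data so that each question is an observation and each reviewer of a movie is a column, then loops over movies, runs -kap- on the first two reviewers, and stores one kappa per movie.

    Code:
    * sketch: per-movie kappa between the first two reviewers of each movie
    preserve
    reshape long question, i(movie_ID reviewer_name) j(qnum)
    rename question answer
    bysort movie_ID qnum (reviewer_name): generate rater = _n
    drop reviewer_name
    reshape wide answer, i(movie_ID qnum) j(rater)
    * answer1, answer2 (and answer3 where present) hold each reviewer's answers
    generate movie_kappa = .
    levelsof movie_ID, local(movies)
    foreach m of local movies {
        * movies with a single reviewer simply keep a missing kappa
        capture kap answer1 answer2 if movie_ID == `m'
        if _rc == 0 replace movie_kappa = r(kappa) if movie_ID == `m'
    }
    list movie_ID movie_kappa if qnum == 1, noobs
    restore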

  • #2
    Before jumping into code, you need to clarify further what exactly you want to know here. kappa (or other commands) will readily give you answers, but those answers might not mean what you think.
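
    For instance, take the first movie in your example data: the two reviewers agree on 6 of the 8 questions, i.e. 75% observed agreement. But because reviewer 5 answered "yes" to all 8 questions and reviewer 3 answered "yes" to 6 of 8, the agreement expected by chance from these marginals is also 0.75, so Cohen's kappa works out to (0.75 - 0.75)/(1 - 0.75) = 0, despite the seemingly high raw agreement.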

    In an inter-rater reliability study, we usually have two (or more) reviewers who classify two (or more) subjects into one of two (or more) categories; that process is called rating. In your setup

    Originally posted by Laila Voss
    For each movie, a reviewer answered 8 yes or no factual questions.
    the movies are probably your subjects. However, the movies are classified with respect to more than one dimension (8 questions). There is no obvious way to combine agreement on 8 different dimensions.

    Originally posted by Laila Voss
    I want to check interrater reliability by movie_ID i.e. interrater reliability of reviewers between answers to questions for movie 1, answers for movie 3, etc.
    Again, it appears that you want the extent to which reviewers agree in rating a given movie. Reviewers do not rate movies, though; they rate 8 different aspects/dimensions of a given movie.

    Originally posted by Laila Voss
    I'm also curious to see if there's a way to store these kappa values by reviewer_ID so that I can see if there any reviewers who have consistently low agreement with other reviewers.
    Agreement coefficients usually refer to subjects, not to reviewers. How do you define low agreement with other reviewers?

    Perhaps you could say more about your research question(s) and what you are interested in, substantively.
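
    For what it is worth, the usual setup described above, with the movies as subjects and one coefficient per question, might look something like the sketch below (untested, and assuming the reshaped layout from the sketch at the end of your first post, with answer1 and answer2 holding the first two reviewers' answers and qnum indexing the questions). Whether that answers your question is exactly what needs clarifying first.

    Code:
    * sketch: one kappa per question across movies, movies as subjects
    forvalues q = 1/8 {
        display as text _n "question `q'"
        capture noisily kap answer1 answer2 if qnum == `q'
    }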
