
  • Interrater agreement

    Dear all

I have a dataset titled "readings". It consists of readings of the same X-rays by 3 different interpreters.
The readings are continuous, and I am trying to assess the similarity/agreement between the 3 interpreters.

ID  Reader1  Reader2  Reader3
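To be concrete, the data are in wide format, one row per X-ray and one variable per reader; a small made-up example of that layout:

    Code:
    * hypothetical values, just to illustrate the wide layout described above
    clear
    input ID Reader1 Reader2 Reader3
    1 12.4 12.9 12.1
    2  8.7  9.2  8.5
    3 15.3 14.8 15.6
    end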

Bland-Altman and Cohen's kappa will not work, as I have 3 interpreters.
I have tried Fleiss' kappa using the kappaetc command, but this apparently is not appropriate for comparing continuous readings.

Just wondering if anybody has any suggestions for Stata. There are a number of agreement commands in Stata, but none seem to take into account continuous readings by 3 different interpreters.

    Many many thanks in advance

  • #2
Originally posted by Omar Zahaf
[...] Cohen's kappa will not work as I have 3 interpreters
    The generalized version by Conger (1980) handles three unique raters. This is implemented in kappaetc (SSC or SJ).
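For example (a minimal sketch, assuming your three readings are stored in variables named Reader1, Reader2, and Reader3):

    Code:
    * install kappaetc from SSC if not already installed, then run it on the three rating variables
    ssc install kappaetc
    kappaetc Reader1 Reader2 Reader3

    If I recall the default output correctly, this reports Conger's kappa alongside other chance-corrected agreement coefficients.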


Originally posted by Omar Zahaf
I have tried Fleiss' kappa using the kappaetc command, but this apparently is not appropriate for comparing continuous readings.
    You can use weights for (dis)agreement; this is documented in kappaetc's help file. If the rating categories are truly continuous, they are usually not pre-defined. In that case, you might want to estimate intraclass correlation coefficients. These are also implemented in kappaetc. Type

    Code:
    help kappaetc icc
    for more.
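As a rough sketch of the weighting idea (I am writing the option from memory, so verify the exact keyword in the help file):

    Code:
    * assumed syntax -- quadratic weights give partial credit for near-agreement between categories
    kappaetc Reader1 Reader2 Reader3, wgt(quadratic)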


    Conger, A. J. (1980). Integration and Generalization of Kappa for Multiple Raters. Psychological Bulletin, 88, 322--328.



    • #3
Thank you so much. So my code is:

Code:
kappaetc Reader1 Reader2 Reader3, icc(oneway)


I have read the help file but did not quite understand what you mean by using weights for (dis)agreements. The readings are all measurements from one X-ray.



      • #4
        You might be interested in Klein (2018) for some background. Weighted (dis)agreement is also covered in the examples of [R] kappa.

        If you have the same three raters (readers), you probably want the random or mixed-effects model for the ICC.
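Something along these lines (a sketch; check the help file for the exact suboptions):

        Code:
        * sketch, assuming the icc() suboptions are named as below -- see help kappaetc
        kappaetc Reader1 Reader2 Reader3, icc(random)  // the three readers are treated as a sample from a larger pool of raters
        kappaetc Reader1 Reader2 Reader3, icc(mixed)   // these three readers are the only raters of interest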



Klein, D. (2018). Implementing a General Framework for Assessing Interrater Agreement in Stata. The Stata Journal, 18(4), 871--901.

