  • Ordinal agreement with data prevalence

    My data consists of ordinal scores, 1-5, from the DISCERN questionnaire (https://www.ndph.ox.ac.uk/research/research-groups/applied-health-research-unit-ahru/discern/the-discern-questionnaire), and I wish to compare the agreement between two examiners. The problem is that two of the score values predominate.

    I was going to use Cohen's kappa, but I understand that this prevalence makes kappa unreliable: percent agreement is 70%, yet kappa is only 0.4. It has been suggested that Gwet's AC2 may be an appropriate measure, but I could not find it in Stata. I am not sure whether the AC coefficient from kappaetc (SSC) is suitable for this situation.

    How should I approach this analysis?

    This is typical data; the real dataset is much larger.


    ----------------------- copy starting from the next line -----------------------
    Code:
    * Example generated by -dataex-. For more info, type help dataex
    clear
    input byte(d1 h1)
    1 1
    3 1
    1 1
    3 3
    1 3
    1 1
    3 3
    1 1
    3 1
    3 3
    end
    ------------------ copy up to and including the previous line ------------------

    Julie
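
    The prevalence effect can be seen directly on the example data above by comparing raw agreement with Cohen's kappa, using Stata's built-in kap command (a minimal sketch; no user-written packages needed):

```stata
* Load the example data posted above
clear
input byte(d1 h1)
1 1
3 1
1 1
3 3
1 3
1 1
3 3
1 1
3 1
3 3
end

* Two-rater agreement: the raters agree on 7 of 10 cases (70%),
* but expected chance agreement is 0.5*0.6 + 0.5*0.4 = 0.50,
* so kappa = (0.70 - 0.50) / (1 - 0.50) = 0.40
kap d1 h1
```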

  • #2
    It appears Gwet's AC is useful for this situation, but there is some disagreement (as with everything).

    Comment


    • #3
      Perhaps this post might provide some directions:

      https://www.statalist.org/forums/for...pabak-in-stata
      Last edited by Tiago Pereira; 08 Sep 2025, 09:30.

      Comment


      • #4
        Both Cohen's kappa and Gwet's AC are sensitive to the prevalence of rating categories, but in different ways. With an increasing prevalence index, kappa tends to decrease while Gwet's AC tends to increase, albeit to a lesser extent. The PABAK (hinted at in #3) is not affected by the prevalence (or the marginal distribution) at all. Which one you want depends on how you think about chance agreement. Is the prevalence of rating categories informative regarding genuine agreement? If not, use PABAK. Otherwise, use kappa if you believe that a high prevalence index inflates chance agreement, or use AC if you believe that a high prevalence index reflects genuine agreement. Either way, kappaetc computes all three coefficients, optionally using weights for ordinal ratings.
        Last edited by daniel klein; 08 Sep 2025, 12:50.

        Comment


        • #5
          Thank you all very much for your advice. I think that I will follow the PABAK route.

          Julie

          Comment


          • #6
            Is anyone familiar with the method described in this article?

            Nelson KP, Edwards D. Measures of agreement between many raters for ordinal classifications. Stat Med. 2015 Oct 15;34(23):3116-32. doi: 10.1002/sim.6546. Epub 2015 Jun 21. PMID: 26095449; PMCID: PMC4560692. https://pmc.ncbi.nlm.nih.gov/articles/PMC4560692/
            --
            Bruce Weaver
            Email: [email protected]
            Version: Stata/MP 19.5 (Windows)

            Comment


            • #7
              R code for #6 is here: https://github.com/ayamitani/modelkappa/tree/master

              Comment


              • #8
                Originally posted by Bruce Weaver View Post
                Is anyone familiar with the method described in this article?
                Originally posted by George Ford View Post
                R code for #6 is here
                Does anyone have access to the Holmquist example dataset for those two citations? I don't have access to the original article (below), I don't seem to be able to download the R workspace from that GitHub site, and the closest that I can get is a crosstabulation of the ratings for only the first two of the seven pathologists.

                N. D. Holmquist, C. A. McMahan and O. D. Williams, Variability in classification of carcinoma in situ of the uterine cervix. Archives of Pathology 84:334–345 (1967)

                Comment


                • #9
                  I got this from the R repository mentioned above and converted it.
                  Attached Files

                  Comment


                  • #10
                    Originally posted by George Ford View Post
                    I got this from the R repository mentioned above and converted it.
                    Thank you, George, I appreciate your kindness and your help in this.

                    Comment
