
  • 95% CI for interrater agreement & weighted kappa

    Hello,
    I'm using Stata 13.1. Can you please give me some guidance on how to estimate 95% CIs for weighted kappa and interrater agreement among 3 observers and 6 rating categories? The ratings can be analyzed as either ordinal or nominal scores.

    My data look like this:
    id rater1 rater2 rater3
    90 0 1 0
    63 2 2 3
    85 4 5 4
    210 3 3 4
    74 4 4 4


    I've used 'kapci rater1 rater2 rater3, wgt(w)' for weighted kappa, but it gives me a different 95% CI from the one I get in other programs or online calculators. Is this the correct command for calculating weighted kappa?
    Is there a command to estimate a 95% CI for percent agreement?

    Thank you!

  • #2
    Hello Laura,

    Welcome to the Stata Forum.

    I'm sharing a benchmark article on this (http://www.stata-journal.com/sjpdf.h...iclenum=st0076), and I believe the text will interest you, at least for its approach to CIs for kappa.

    By the way, you have 3 raters, so, according to the aforementioned article:


    wgt(wgtid) specifies the type of weight that is to be used to weight disagreements. This option is ignored if there are more than two raters/replications (varlist ≥ 3).
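    Since wgt() is silently ignored with three or more raters, one workaround (outside Stata) is to compute the weighted kappa separately for each rater pair. This is only a minimal pure-Python sketch of the two-rater weighted kappa; weighted_kappa is a hypothetical helper, not a Stata or official command.

```python
# Weighted Cohen's kappa for TWO raters (pure Python).
# Hypothetical illustration -- not a Stata command.
from collections import Counter

def weighted_kappa(a, b, weights="linear"):
    """Weighted kappa between two equal-length lists of ratings."""
    cats = sorted(set(a) | set(b))
    k = len(cats)
    idx = {c: i for i, c in enumerate(cats)}
    n = len(a)

    # Observed joint proportions.
    obs = [[0.0] * k for _ in range(k)]
    for x, y in zip(a, b):
        obs[idx[x]][idx[y]] += 1.0 / n

    # Marginal frequencies for the chance-agreement term.
    pa, pb = Counter(a), Counter(b)

    # Disagreement weight: normalized category distance (linear or quadratic).
    def w(i, j):
        d = abs(i - j) / (k - 1)
        return d if weights == "linear" else d * d

    po = sum((1 - w(i, j)) * obs[i][j]
             for i in range(k) for j in range(k))
    pe = sum((1 - w(i, j)) * (pa[cats[i]] / n) * (pb[cats[j]] / n)
             for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)
```

    For three raters this would be run for each of the three pairs (rater1/rater2, rater1/rater3, rater2/rater3); whether averaging those pairwise kappas is appropriate is a separate methodological question.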


    Hopefully that helps.

    Best,

    Marcos
    Last edited by Marcos Almeida; 07 Mar 2016, 12:02.



    • #3
      Thank you Marcos!

      I've seen the article before, but thank you for pointing it out. However, I can't find a command to estimate a 95% CI for agreement (not kappa), either in this article or in any forum threads on related questions. Any suggestions are welcome.
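      For what it's worth, the quantity asked about here (observed percent agreement with a bootstrap 95% CI) is simple to sketch outside Stata. This is only an illustration under the assumption that "percent agreement" means the mean proportion of agreeing rater pairs per subject; the function names are hypothetical, and the data are the five rows from the original post.

```python
# Bootstrap 95% CI for overall percent agreement among 3 raters.
# Hypothetical sketch -- not a Stata command.
import random
from itertools import combinations

def percent_agreement(rows):
    """Mean proportion of agreeing rater pairs per subject."""
    total = 0.0
    for r in rows:
        pairs = list(combinations(r, 2))
        total += sum(x == y for x, y in pairs) / len(pairs)
    return total / len(rows)

def bootstrap_ci(rows, reps=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI, resampling subjects with replacement."""
    rng = random.Random(seed)
    stats = sorted(
        percent_agreement([rng.choice(rows) for _ in rows])
        for _ in range(reps)
    )
    lo = stats[int((alpha / 2) * reps)]
    hi = stats[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

# The five subjects from the original post:
data = [(0, 1, 0), (2, 2, 3), (4, 5, 4), (3, 3, 4), (4, 4, 4)]
pa = percent_agreement(data)
lo, hi = bootstrap_ci(data)
```

      With only five subjects the resulting interval would of course be very wide, which echoes the caution in the replies above.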

      Regards,
      Laura



      • #4
        The measurement of interrater agreement is part and parcel of the overall kappa statistics. Indeed, there we have the observed percentage of agreement as well as the "expected" percentage of agreement. Under - kapci - we "contemplate" both aspects at once, and that is something to clearly remark. What is more, we get CIs from different estimation methods, including bootstrapping.

        In short, I fear I cannot see the point of avoiding the overall test and selecting a bias-prone proportion of the observed agreement, hoping the CIs would provide an accurate perspective.

        Sometimes - well, really, oftentimes - I truly believe that, by not allowing some "customized" estimations, Stata saves us by pointing us in the right direction.
        Best regards,

        Marcos



        • #5
          Thank you Marcos,
          That's the answer I was looking for. Thanks for your time.

          Laura
