
  • Kappa p-values

    In an attempt to export the p-values from the kap command to an Excel file via postfile, I have been computing them directly (as they are not stored in r()) using:

    1-normal(abs(z))
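
    Roughly, the workflow I mean looks like this (a sketch only; rater1_*/rater2_* and the item names are placeholders rather than my actual variables):

    * sketch: post kappa, z, and a hand-computed p-value for each item, then export to Excel
    tempname results
    tempfile kapres
    postfile `results' str32 item kappa z p using `kapres', replace
    foreach v in pain mobility selfcare {            // placeholder item names
        quietly kap rater1_`v' rater2_`v'
        post `results' ("`v'") (r(kappa)) (r(z)) (1 - normal(abs(r(z))))
    }
    postclose `results'
    use `kapres', clear
    export excel using "kappa_pvalues.xlsx", firstrow(variables) replace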

    However, for some variables I am getting values that differ from what the kap command reports. From looking at kap.ado, it appears the command uses

    1-normprob(return(z))

    This is fine when the observed agreement is greater than the expected agreement, but shouldn't it be using the absolute value of z when z is negative?

    I'm happy to be corrected, but I can't see why it is not using the absolute value.

    Any thoughts on this?

    Laura


  • #2
    Laura,

    Traditionally, people only really care whether observed agreement is better than expected, not whether it is worse. If you have reason to think that agreement worse than expected is a notable finding, then I suppose there is nothing stopping you from making it a two-sided test.

    More to the point, p-values for kappa statistics are not generally considered that useful. Like correlation coefficients, a kappa doesn't have to be very high in order to be statistically significant. There have been various attempts to provide guidelines for what constitutes good agreement, but even these are subjective and require caution. Better to just report the kappa and a confidence interval and interpret them in your particular context.
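
    For example, a percentile bootstrap is one way to get a confidence interval to go with the point estimate (just a sketch; rater1 and rater2 stand in for your two rating variables):

    * sketch: bootstrap percentile CI for kappa
    bootstrap kappa = r(kappa), reps(1000) seed(12345): kap rater1 rater2
    estat bootstrap, percentile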

    Regards,
    Joe



    • #3
      Thanks, Joe. Unfortunately, the expected agreement for some of my variables is very high, so for some of them the observed agreement is lower and hence kappa is negative. I was planning to report kappa and a CI, but I usually include p-values in my own reports too.

      I'm also not convinced that kappa is a great measure when the expected agreement is high or when there are too many zero cells (e.g. a 2x2 table with two zero cells, one of them on the diagonal; see the small worked example below). I haven't been able to find any evidence to support this, so if anyone knows of any literature discussing it I'd be very grateful.
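
      To make the zero-cell point concrete, here is the sort of table I have in mind, with made-up counts (rater1 = 1 for all 50 subjects; rater2 = 1 for 45 of them and 0 for 5), worked through by hand:

      * made-up counts: a 2x2 table with two zero cells, one of them on the diagonal
      scalar p_o   = (0 + 45)/50                        // observed agreement
      scalar p_e   = (0/50)*(5/50) + (50/50)*(45/50)    // expected agreement from the margins
      scalar kappa = (p_o - p_e)/(1 - p_e)
      display "p_o = " p_o "   p_e = " p_e "   kappa = " kappa

      Observed agreement is 90%, yet kappa comes out as 0 because the expected agreement is already 90%.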

      Thanks all.

      Laura



      • #4
        Though I am not very deep into this topic, I have recently implemented Krippendorff's alpha coefficient and came across an article discussing different methods for measuring reliability (Hayes and Krippendorff, 2007). Maybe it helps you.

        Best
        Daniel


        Hayes, Andrew F., and Klaus Krippendorff (2007). Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1, 77-89. Available online: http://www.afhayes.com/public/cmm2007.pdf



        • #5
          Fabulous. This looks very helpful. Many thanks.

          Laura



          • #6
            Laura,

            I get why the kappa is negative, but that doesn't mean you should take the absolute value. A negative kappa (and the corresponding negative z value) will just give you a non-significant p-value (something above 0.5), which is what you want unless you are trying to determine whether the observed agreement is significantly worse than expected.
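
            For illustration, with a made-up z of -1.5:

            display 1 - normal(-1.5)         // as kap.ado computes it: about 0.93, nowhere near significant
            display 1 - normal(abs(-1.5))    // folding onto the absolute value: about 0.07

            Folding a negative z onto the positive tail would make agreement that is worse than expected look significant, which is why the one-sided form makes sense here.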

            I agree with you that kappa seems to unfairly penalize situations where the expected agreement is very high, but I don't know what a good alternative is. That said, perhaps a situation where the expected agreement is high and the observed agreement is substantially lower should raise some red flags.

            Regards,
            Joe
