  • #16
    Originally posted by Deepani Siriwardhana View Post
    On some occasions I get the message 'ratings do not vary' and Stata does not compute the agreement coefficients. How should I report my results in this case? Is it okay to consider all the agreement coefficients as 1?
    No, you should not report the coefficients as 1. When observed agreement is 1, Brennan and Prediger's coefficient is 1, too. Cohen's Kappa is mathematically undefined, since expected agreement also equals 1, leading to a division by zero. Krippendorff's Alpha is set to 0 in this situation by its author's definition.
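
    For reference, chance-corrected agreement coefficients share the general form (notation mine, kept deliberately generic)

    \[ \kappa \;=\; \frac{p_o - p_e}{1 - p_e} \]

    where $p_o$ denotes observed and $p_e$ expected (chance) agreement. When ratings do not vary, $p_o = p_e = 1$, so the denominator is zero and the coefficient is undefined.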

    I cannot give good advice on what to report here, but I find the information that the ratings do not vary important. Perfect agreement might not mean much when there is essentially no variance in the data.
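
    A made-up toy example of data that should reproduce the situation described above (assuming kappaetc behaves as reported in the question):

    Code:
    // both raters assign the same single category to every subject
    clear
    input rater1 rater2
    1 1
    1 1
    1 1
    end

    // expect kappaetc to report that the ratings do not vary
    kappaetc rater1 rater2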

    Best
    Daniel



    • #17
      Thank you very much for your explanation, Daniel.
      Best regards,
      Deepani



      • #18
        Dear Daniel,
        I was hoping to get help on an error message I am getting when trying to use the "loa" option for the kappaetc command:
        Code:
        kappaetc s1bmigrp4_m s1bmigrp4_sr, loa(95)
        option loa() not allowed
        r(198);
        Thanks so much for your help,
        Leah Lipsky



        • #19
          Leah,

          please note our preference for full real names (cf. FAQ #6) and act accordingly.

          Concerning the error message, I cannot replicate your problem. Please make sure you have the latest (available) version of kappaetc installed. Type

          Code:
          which kappaetc
          which should give

          Code:
          *! version 1.6.0 30jan2018 daniel klein
          If you get something else, type

          Code:
          ssc install kappaetc , replace
          If the error still persists, please report back and include a (reproducible) example with the code, exactly as typed, and the output, exactly as provided by Stata; use CODE delimiters to do this (cf. FAQ #12).

          Best
          Daniel



          • #20
            The latest version of kappaetc is now available from the SJ along with an article (Klein 2018) in which I review Gwet's (2014) theoretical framework and describe the command.

            I have also sent the respective files to Kit Baum so the updated version will be available from the SSC.

            The update includes a couple of minor bug fixes and one somewhat more serious bug fix. There are also some minor enhancements, additional returned results, and some syntax changes. These are the major points:
            • Intraclass correlation coefficients, when subjects were rated repeatedly, were incorrect when the data was not sorted on subject-ids. The problem is caused by what I consider a bug in Stata/Mata's panelsetup(). Anyway, the updated version includes an ad-hoc fix so results will now be correct irrespective of the sort order of the data.
            • Option se() has a new syntax: se(jackknife) standard errors are renamed se(conditional subjects). This change might break old code, and I apologize for any inconvenience. I feel the change was necessary for two reasons: (1) the jackknife procedures in Stata usually apply to observations, whereas kappaetc applied them to variables, which might lead to confusion; (2) I am planning on making jackknife standard errors (in the usual sense) available in kappaetc in the future.
            • Confidence intervals for standard errors conditional on subjects are now based on the t-distribution rather than the standard normal distribution; the largesample option may be used to replicate previous results.
            • Some weighting options have a new syntax; old syntax continues to work.
            • New (semi-documented) option specific estimates observed specific agreement (Dice 1945; Uebersax 1983); for more information, type help kappaetc specific. A short sketch of the new syntax follows below this list.
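
            A minimal sketch of the new syntax, using made-up variable names and assuming the renamed options are spelled exactly as described above:

            Code:
            * formerly se(jackknife)
            kappaetc rater1 rater2, se(conditional subjects)

            * observed specific agreement via the new semi-documented option
            kappaetc rater1-rater5, specific
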
            Best
            Daniel


            Dice, L. R. (1945). Measures of the amount of ecologic association between species. Ecology 26: 297-302.

            Gwet, K. L. (2014). Handbook of Inter-Rater Reliability. Gaithersburg, MD: Advanced Analytics, LLC.

            Klein, D. (2018). Implementing a general framework for assessing interrater agreement in Stata. The Stata Journal 18: 871-901.

            Uebersax, J. S. (1983). A design-independent method for measuring the reliability of psychiatric diagnosis. Journal of Psychiatric Research 17: 335-342.



            • #21
              I'm getting the output below, which doesn't make sense to me.
              I have two columns of data; each column holds one rater's values, which are continuous. The values have one decimal place, such as 34.6, 32.2, etc.
              Why is it not working? Thank you!
              I suspect there is a problem, because the output says the number of rating categories is 248, which is way off - there are no categories; these are continuous variables.

              Interrater agreement                      Number of subjects =      178
                                                       Ratings per subject =        2
                                               Number of rating categories =      248
              ------------------------------------------------------------------------------
                                   |    Coef.  Std. Err.        t    P>|t|   [95% Conf. Interval]
              ---------------------+--------------------------------------------------------
                 Percent Agreement |   0.0000     0.0000        .       .      0.0000    0.0000
              Brennan and Prediger |  -0.0040     0.0000        .       .     -0.0040   -0.0040
              Cohen/Conger's Kappa |  -0.0009     0.0003    -2.87    0.005    -0.0014   -0.0003
                  Scott/Fleiss' Pi |  -0.0052     0.0003   -15.26    0.000    -0.0059   -0.0045
                         Gwet's AC |  -0.0040     0.0000 -2926.72    0.000    -0.0040   -0.0040
              Krippendorff's Alpha |  -0.0024     0.0003    -7.00    0.000    -0.0031   -0.0017
              ------------------------------------------------------------------------------



              • #22
                If you do not have predefined categories (i.e., categories known before the rating/scoring takes place), then (chance-corrected) agreement coefficients are not an appropriate measure of inter-rater agreement/reliability. See

                Code:
                help kappaetc choosing
                for a summary of which method is (often) appropriate for which type of ratings.

                For the situation you describe, you might be better off with an intraclass correlation coefficient. See

                Code:
                help kappaetc icc
                In the future, please read the FAQs, provide the (exact) code that you are using, and use CODE delimiters to present (Stata) output.

                Best
                Daniel



                • #23
                  When I type help kappaetc icc, Stata says there is no help file associated with that command.
                  Yes, I am seeking an ICC - what command can I write that runs an ICC on the data using kappaetc?
                  Thanks!



                  • #24
                    If there is no such help file, then something is wrong with your installation of kappaetc, with your Stata setup, or with something else; it is hard to say more.

                    Try typing in Stata

                    Code:
                    ssc install kappaetc , replace
                    That should install the latest (full) version of kappaetc. Then type in Stata

                    Code:
                    discard
                    which kappaetc
                    Stata should respond with something like

                    Code:
                    . which kappaetc
                    ...\k\kappaetc.ado
                    *! version 2.0.0 28jun2018 daniel klein
                    To estimate different versions of the ICC, you need to specify the icc() option and choose the appropriate model (most likely random or mixed). The help file that I pointed to above explains the available models; a minimal sketch follows below.
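
                    A minimal sketch with made-up variable names (adjust to your data):

                    Code:
                    * two-way random-effects model
                    kappaetc rater1 rater2, icc(random)

                    * two-way mixed-effects model
                    kappaetc rater1 rater2, icc(mixed)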

                    Best
                    Daniel



                    • #25
                      Wow, it worked! Thanks so much - that was huge.
                      Really appreciate it.
                      I'm brand new to the forum because I have an old version of Stata 12 without the built-in ICC function and found this site via Google.

                      Output:
                      . kappaetc sl1 sl12, icc(random)

                      Interrater reliability                    Number of subjects =      178
                      Two-way random-effects model             Ratings per subject =        2
                      ------------------------------------------------------------------------------
                                     |    Coef.       F      df1      df2     P>F   [95% Conf. Interval]
                      ---------------+--------------------------------------------------------------
                            ICC(2,1) |   0.8156    9.85   177.00   177.00   0.000    0.7600    0.8595
                      ---------------+--------------------------------------------------------------
                             sigma_s |   7.0630
                             sigma_r |   0.0000  (replaced)
                             sigma_e |   3.3579
                      ------------------------------------------------------------------------------



                      • #26
                        Originally posted by Zach Morris View Post
                        I'm brand new to the forum
                        Welcome. Now that we have solved your first problem, please take the time to read through the FAQ to make sure you are getting the best out of Statalist (and the other way round) in the future.

                        For example,

                        Originally posted by Zach Morris View Post
                        because I have an old version of Stata 12
                        is a piece of information that you are asked to give in the initial description of the problem. The general rule is: if you do not explicitly state otherwise, we assume that you are using the most recent version of Stata (fully updated). For your problem and, more importantly, for my proposed solution, that information turned out to be immaterial, but that is not always the case.

                        Moreover, I encourage you to type in Stata

                        Code:
                        update all
                        to update your (old) Stata 12 copy to

                        Code:
                        . about
                        
                        Stata/IC 12.1 for Windows (64-bit x86-64)
                        Revision 23 Jan 2014
                        Copyright 1985-2011 StataCorp LP
                        ...
                        This particular update would install Stata's icc command, which was added in Stata 12.1. Thus, your problem

                        Originally posted by Zach Morris View Post
                        without the built-in ICC function and found this site via Google.
                        would be solved. By the way, icc is a command, not a function.

                        Best
                        Daniel



                        • #27
                          Hi all,
                          This is a general STATA question for version 12. If I have a column of binary data, such as 0s and 1s, how do I identify the rows with 1s so that I can then express the string data in another column? Thanks so much.



                          • #28
                            While any kind of question about Stata (not STATA; see FAQ 18) is naturally welcome on Statalist, the appropriate way to post general questions is to start a new thread with an informative title; see section 1.5, Starting a new thread.

                            As I have already advised in #26, please make sure to read the FAQs (at least sections 10 to 12) before you post and improve your question.

                            Best
                            Daniel



                            • #29

                              Dear Daniel, or whoever might be able to offer advice,

                              I recently began using kappaetc, and while it has been working flawlessly, I still have one question I hope you could clarify for me.
                              When using the kappaetc command, the first result reported is "Percent Agreement". Despite my best efforts, I haven't been able to find a clear definition of this term. Hence I ask: could you clarify how percent agreement is calculated and how it should be interpreted (also in the case of multiple raters)?
                              Thank you!
                              Best regards,
                              Christian Dam



                              • #30
                                While I can easily imagine that the term "percent agreement" (often also called observed agreement) does not have a single agreed-on definition, I find it rather surprising that you were not able to find out how it is calculated. The help file for kappaetc references books and articles discussing agreement coefficients, all of which are based on percent/observed agreement. StataCorp's documentation of its kappa command includes a Methods and formulas section that also explains how it is calculated.

                                Anyway, percent agreement between two raters can be calculated by counting the number of times they agree on classifying a given subject into the same (pre-)defined category and dividing that number by the number of all subjects that the raters have classified. Here is a simple example

                                Code:
                                // example data
                                webuse p615b , clear
                                list
                                
                                // calculate percent agreement for raters 1 and 2
                                quietly count if rater1 == rater2
                                local match = r(N)
                                local po = `match'/_N
                                display "percent agreement is " %4.3f `po'
                                
                                // verify
                                kappaetc rater1 rater2
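
                                In formula form, for $n$ subjects rated by raters 1 and 2 (notation mine),

                                \[ p_o \;=\; \frac{1}{n} \sum_{s=1}^{n} \mathbf{1}\{x_{s1} = x_{s2}\} \]

                                where $x_{si}$ is the category that rater $i$ assigns to subject $s$.
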
                                For more than two raters, percent agreement can be calculated for each pair of raters, summed over all pairs, and then divided by the number of possible pairs. If there are r raters, there are a total of r choose 2 = r(r-1)/2 pairs of raters. Here is an example of that

                                Code:
                                // calculate percent agreement for all 5 raters
                                local po = 0
                                forvalues i = 1/5 {
                                    local i1 = `i'+1
                                    forvalues j = `i1'/5 {
                                        quietly count if rater`i' == rater`j'
                                        local po = `po' + r(N)/_N
                                    }
                                }
                                local po = `po'/comb(5, 2)
                                display "percent agreement is " %4.3f `po'
                                
                                // verify
                                kappaetc rater1-rater5
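
                                In formula form, the loop above computes the average of the pairwise values defined earlier:

                                \[ p_o \;=\; \binom{r}{2}^{-1} \sum_{i=1}^{r-1} \sum_{j=i+1}^{r} \frac{1}{n} \sum_{s=1}^{n} \mathbf{1}\{x_{si} = x_{sj}\} \]

                                with $r = 5$ raters in this example.
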
                                There are computationally more efficient formulas for calculating these averages. Things get a bit more complex when we assign weights to account for partial agreement and when there are missing ratings. This presentation, which I gave at the Stata Users Group meeting a couple of years ago, includes the relevant formulas on slides 11 and 15. Watch out: the notation there is a bit messed up; the third sum (running index l) should be inside the parentheses of the numerator. A full discussion is in the Stata Journal article linked in #20.

                                Best
                                Daniel

