I am trying to calculate a Kappa coefficient.
Background:
There are 438 patients.
I am comparing ratio1 and ratio2.
I am trying to determine whether ratio1 identifies tall patients as well as ratio2 does.
I have separated the variables into ratio1_tall and ratio2_tall (and likewise ratio1_short and ratio2_short).
I then calculated a kappa coefficient. In plain counts, ratio1 and ratio2 agree on 3 tall patients, i.e. both ratios flag the same 3 patients as tall.
For the short patients, both ratios flag the same 5 patients as short.
With these numbers in mind, the ratios agree on only about 1.8% of patients (8 out of 438 in total).
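For clarity, the 3 and 5 above are the patients flagged by both ratios at the same time. A minimal sketch of how those counts can be obtained from the variables shown in the data excerpt below:
Code:
count if ratio1_tall == 1 & ratio2_tall == 1    // patients both ratios classify as tall
count if ratio1_short == 1 & ratio2_short == 1  // patients both ratios classify as short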
However, when the kappa value is calculated using the following code:
Code:
clear

* example data: one observation (patient) per line
input float(match match2 ratio1_tall ratio1_short ratio2_tall ratio2_short)
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 1
0 0 0 0 0 1
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
end

* cross-tabulation and kappa for the "short" classification
tab ratio1_short ratio2_short
kap ratio1_short ratio2_short

* cross-tabulation and kappa for the "tall" classification
tab ratio1_tall ratio2_tall
kap ratio1_tall ratio2_tall
I am surprised to find that the kappa coefficient for tall is only 0.08. The kappa indicates poor agreement, yet the percentage agreement reported is between 70% and 80% for both tall and short. The results seem contradictory.
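For reference, here is a minimal sketch of how I understand the agreement and kappa figures could be recomputed by hand for the tall classification, using the standard two-rater formula kappa = (observed agreement - expected agreement) / (1 - expected agreement); variable names are as in the excerpt above:
Code:
* observed agreement: share of all patients where the two ratios give the same call
gen agree_tall = (ratio1_tall == ratio2_tall)
quietly summarize agree_tall
local po = r(mean)

* chance-expected agreement from each ratio's marginal proportion of "tall"
quietly summarize ratio1_tall
local p1 = r(mean)
quietly summarize ratio2_tall
local p2 = r(mean)
local pe = `p1'*`p2' + (1 - `p1')*(1 - `p2')

* kappa = (observed - expected) / (1 - expected)
display "Observed agreement: " `po'
display "Expected agreement: " `pe'
display "Kappa:              " (`po' - `pe') / (1 - `pe')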
My question:
Can anyone explain why this is so? I would have expected both a poor kappa and a poor % agreement.
Should I group tall + short for ratio1 into one variable (ratio1_tall.short) and do the same for ratio2, i.e. combine tall + short for ratio2 into ratio2_tall.short?
Could I then compare a kappa on those combined variables?
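In case it clarifies what I mean by grouping, something like the following (a minimal sketch; ratio1_group and ratio2_group are just placeholder names, coded 0 = neither, 1 = tall, 2 = short):
Code:
* hypothetical combined coding for each ratio
gen ratio1_group = 0
replace ratio1_group = 1 if ratio1_tall == 1
replace ratio1_group = 2 if ratio1_short == 1

gen ratio2_group = 0
replace ratio2_group = 1 if ratio2_tall == 1
replace ratio2_group = 2 if ratio2_short == 1

* two-rater kappa on the combined three-category coding
tab ratio1_group ratio2_group
kap ratio1_group ratio2_group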