Hi there,
I am checking that I have correctly specified the variables for test-retest reliability.
I have participants (ID) who took a questionnaire containing three survey instruments on two separate occasions. I'm looking to compare score agreement across time for each of the three instruments.
Here's my data in long format:
[CODE]
input int ID byte(time can) float(pwi who)
211 1 7 45 12
211 2 8 48 9
212 1 7 52 18
212 2 8 56 20
213 1 7 52 17
213 2 8 57 16
214 1 7 49 8
214 2 7 52 5
215 1 7 56 19
215 2 . . .
216 1 9 59 22
216 2 8 62 20
217 1 6 46 16
217 2 7 48 13
218 1 . . .
218 2 9 67 21
219 1 7 50 10
219 2 6 57 10
220 1 7 56 20
220 2 7 56 17
222 1 7 59 8
222 2 8 58 12
223 1 . . .
223 2 9 61 24
224 1 7 45 11
224 2 . . .
225 1 7 58 16
225 2 7 55 15
226 1 8 55 16
226 2 8 58 12
227 1 6 54 20
227 2 7 58 22
228 1 5 44 13
228 2 8 47 15
229 1 7 49 13
229 2 7 51 15
230 1 9 60 18
230 2 9 61 20
231 1 8 65 20
231 2 9 58 .
232 1 8 55 17
232 2 7 60 19
233 1 . . .
233 2 7 52 15
234 1 6 41 14
234 2 5 44 13
235 1 5 46 13
235 2 6 49 13
236 1 6 50 11
236 2 7 48 12
237 1 7 55 19
237 2 7 58 18
238 1 8 57 19
238 2 9 60 18
239 1 7 55 17
239 2 6 53 16
240 1 9 67 21
240 2 10 58 25
241 1 7 48 16
241 2 8 51 17
242 1 8 66 23
242 2 9 64 22
243 1 . . .
243 2 . 57 17
244 1 6 44 14
244 2 7 52 17
245 1 7 53 13
245 2 6 47 14
end
[/CODE]
To test the reliability of one survey instrument, can, I ran:
[CODE]
icc can ID time, mixed absolute
[/CODE]
output:
[CODE]
Intraclass correlations
Two-way mixed-effects model
Absolute agreement

Random effects: ID               Number of targets =      28
Fixed effects:  time             Number of raters  =       2

(5 targets omitted from computation because not rated by all raters)

--------------------------------------------------------------
                    can |        ICC     [95% conf. interval]
------------------------+-------------------------------------
             Individual |   .5894074     .2834621     .786455
                Average |   .7416694     .4417148    .8804644
--------------------------------------------------------------
F test that ICC=0.00: F(27.0, 27.0) = 4.25   Prob > F = 0.000
[/CODE]
But this model is estimating timepoint as the fixed effect, rather than the participants retaking the same survey.
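Incidentally, while trying to interpret the Individual and Average rows, I noticed the numbers are consistent with the Spearman-Brown step-up formula for k = 2 ratings. This is my own cross-check (plain arithmetic, not Stata), and the assumption that the two rows are linked this way is mine:

```python
# Cross-check (not Stata): if the "Average" row is the Spearman-Brown
# step-up of the "Individual" row with k = 2 ratings per target, the
# two figures from the icc output above should reproduce each other.

def spearman_brown(icc_single: float, k: int) -> float:
    """Step a single-measures ICC up to a k-measures (average) ICC."""
    return k * icc_single / (1 + (k - 1) * icc_single)

individual = 0.5894074          # "Individual" row of the icc output
average = spearman_brown(individual, k=2)
print(average)                  # compare to the "Average" row (.7416694)
```

The agreement to seven decimal places suggests (to me, at least) that Average is simply the average-measures version of the same coefficient.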
Q1: In the output, do Individual and Average refer to the correlation between timepoints within an individual and between individuals within timepoints, respectively?
Q2: Is there a way to specify ID as the fixed effect using icc?
Because I've seen it recommended, I've also tried -kappaetc- with its icc() option, after reshaping the data to wide format:
[CODE]
input int ID byte(can1 can2) float(pwi1 pwi2 who1 who2)
237 7 7 55 58 19 18
230 9 9 60 61 18 20
224 7 . 45 . 11 .
216 9 8 59 62 22 20
245 7 6 53 47 13 14
211 7 8 45 48 12 9
239 7 6 55 53 17 16
234 6 5 41 44 14 13
238 8 9 57 60 19 18
241 7 8 48 51 16 17
236 6 7 50 48 11 12
232 8 7 55 60 17 19
242 8 9 66 64 23 22
219 7 6 50 57 10 10
240 9 10 67 58 21 25
212 7 8 52 56 18 20
215 7 . 56 . 19 .
231 8 9 65 58 20 .
227 6 7 54 58 20 22
213 7 8 52 57 17 16
225 7 7 58 55 16 15
233 . 7 . 52 . 15
244 6 7 44 52 14 17
214 7 7 49 52 8 5
218 . 9 . 67 . 21
223 . 9 . 61 . 24
222 7 8 59 58 8 12
229 7 7 49 51 13 15
243 . . . 57 . 17
228 5 8 44 47 13 15
217 6 7 46 48 16 13
235 5 6 46 49 13 13
226 8 8 55 58 16 12
220 7 7 56 56 20 17
end
[/CODE]
Here I can see that the interrater reliability, ICC(3,1), is very similar to what the icc command found above.
[CODE]
. kappaetc can1 can2, icc(mixed) listwise

Interrater reliability                        Number of subjects  =      28
Two-way mixed-effects model                   Ratings per subject =       2
------------------------------------------------------------------------------
               |     Coef.       F     df1     df2     P>F  [95% Conf. Interval]
---------------+--------------------------------------------------------------
      ICC(3,1) |    0.6193    4.25   27.00   27.00   0.000     0.3262    0.8037
---------------+--------------------------------------------------------------
       sigma_s |    0.8622
       sigma_e |    0.6760
------------------------------------------------------------------------------
[/CODE]
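As a sanity check on my reading of the sigma_s and sigma_e rows, I verified (plain arithmetic, not Stata) that the reported ICC(3,1) equals the between-subject share of total variance:

```python
# Cross-check (not Stata): kappaetc reports between-subject and residual
# standard deviations; ICC(3,1) should be the between-subject variance
# divided by the total variance.  Values are from the output above.

sigma_s = 0.8622   # between-subject SD
sigma_e = 0.6760   # residual SD

icc_3_1 = sigma_s**2 / (sigma_s**2 + sigma_e**2)
print(icc_3_1)     # compare to the reported ICC(3,1) of 0.6193
```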
Q3: Is ID treated as the fixed effect by default in the kappaetc, icc(mixed) estimate here?
Q4: I would like to report both the individual and average ICC coefficients (assuming I've interpreted these correctly above). Is it possible to determine the average ICC across the group using kappaetc, similar to the icc command output?
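To clarify what I mean in Q4: my working assumption (which I'd appreciate confirmation of) is that an average-measures coefficient could be recovered from kappaetc's single-measures ICC(3,1) with the Spearman-Brown formula, e.g. (plain arithmetic, not Stata):

```python
# Assumption on my part, not a documented kappaetc feature: step the
# single-measures ICC(3,1) up to the 2-measure (average) ICC(3,2)
# via the Spearman-Brown formula.

icc_3_1 = 0.6193   # from the kappaetc output above
k = 2              # ratings per subject (time 1 and time 2)

icc_3_k = k * icc_3_1 / (1 + (k - 1) * icc_3_1)
print(round(icc_3_k, 4))   # prints 0.7649
```

Note this would be a consistency-type average, so I would not expect it to equal the absolute-agreement Average of .7417 from the icc command.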