Dear Stata-list members,
I have two questions:
- What, if any, is the difference between test-retest reliability and intra-rater reliability? The terms often seem to be used interchangeably in the literature, but I have found no precise explanation of the salient differences. Is it, for example, that intra-rater reliability concerns agreement/consistency in the ratings of each rater taken separately, while test-retest reliability disregards the individual raters and examines overall agreement/consistency between two measurements made by the same set of raters?
- I have a dataset in which 3 raters have each rated the same 30 videotaped meetings on 11 dimensions, using 7-point ordinal scales, at two time points 3 months apart. The data are nested; one possible nesting is dimensions nested in raters nested in meetings nested in time points. What indicator should I use to measure test-retest/intra-rater reliability in this case, and is there a Stata command that would help me implement it?
Your help in answering these questions would be much appreciated.
Siddhartha