SORRY - THIS IS WORK IN PROGRESS ...
Intraclass Correlation Coefficients (ICC) are hard to understand for us plain commoners, especially if the focus is not primarily on "classical" reliability,
\[ \text{ICC(3,1) by Shrout \& Fleiss or ICC(C,1) by McGraw \& Wong} = {\sigma^2_\text{Subject} \over \sigma^2_\text{Subject} + \sigma^2_\text{Error}} \]
but rather on the "expected trial-to-trial noise in the data", as in a test-retest setting, i.e. on how closely the scores agree with one another when trials are repeated.
In this case it is suggested that one also include the systematic error due to trials in the denominator, so that
\[ \text{ICC(2,1) by Shrout \& Fleiss or ICC(A,1) by McGraw \& Wong} = {\sigma^2_\text{Subject} \over \sigma^2_\text{Subject} + \sigma^2_\text{Trials} + \sigma^2_\text{Error}} \]
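Stata has a built-in -icc- command that estimates both flavours directly. A minimal sketch, assuming a long-format dataset with one row per subject-trial combination and the variable names from the -dataex- example further below (the option names are as I recall them from [R] icc, so do check the help file):
Code:
* ICC(2,1) / ICC(A,1): two-way random-effects model, absolute agreement
* (the default when a rater/trial variable is supplied)
icc outcome _person_id trial_column

* ICC(3,1) / ICC(C,1): two-way mixed-effects model, consistency of agreement
icc outcome _person_id trial_column, mixed consistency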
Moreover, one quickly gets pointed towards a concept called "agreement" or "absolute reliability" (as opposed to "relative reliability"); more often than not, this concept is quantified by the "Standard Error of Measurement" (SEM; not to be confused with the Standard Error of the Mean).
We can calculate the SEM by different methods, but most involve the ICC or components thereof, such as variance components or mean squares:
- \[ \text{SEM} = \text{Pooled standard deviation} \times \sqrt{1 - \text{ICC}} \]
- \[ \text{SEM} = \sqrt{\text{MS}_\text{Error}} \] (the square root of the error mean square) or, in variance-component terms, \[ \sqrt{\sigma^2_\text{Error}} \]
Here the mean squares come from a two-way ANOVA with subjects and trials as factors, and ICC(2,1) is obtained as
\[ \text{ICC(2,1)} = {\text{MS}_\text{Subject} - \text{MS}_\text{Error} \over \text{MS}_\text{Subject} + (k - 1)\,\text{MS}_\text{Error} + {k \over n}\left(\text{MS}_\text{Trials} - \text{MS}_\text{Error}\right)} \]
where k is the number of trials and n is the number of subjects. (Dropping the trial term in the denominator gives the corresponding formula for ICC(3,1).)
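A minimal sketch of the ANOVA route, again assuming the long-format example data shown below: the mean squares printed by -anova- are the MS terms in the formula above, and e(rmse) after -anova- is the root mean squared error, i.e. exactly the square root of MS_Error.
Code:
* Two-way ANOVA with subjects and trials as (categorical) factors
anova outcome _person_id trial_column

* SEM as the square root of the error mean square
display "SEM = sqrt(MS_Error) = " e(rmse)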
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int outcome byte(person_row _person_id trial_column)
166 1 1 1
168 2 2 1
160 3 3 1
150 4 4 1
147 5 5 1
146 6 6 1
156 7 7 1
155 8 8 1
160 1 1 2
172 2 2 2
142 3 3 2
159 4 4 2
135 5 5 2
143 6 6 2
147 7 7 2
168 8 8 2
end
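With the example data in memory, the pooled-SD route can be sketched as follows. Treat it as a sketch: I am assuming that -icc- leaves the individual ICC behind in r(icc_i) (verify with -return list-) and that pooling the standard deviation over both trials via a plain -summarize- is an acceptable choice for the SD in the formula.
Code:
* ICC(2,1): two-way random effects, absolute agreement (default)
icc outcome _person_id trial_column
local icc21 = r(icc_i)        // assumed stored-result name; check with -return list-

* SD pooled over both trials (one common, but debatable, choice)
quietly summarize outcome

* SEM = pooled SD * sqrt(1 - ICC)
display "SEM = " %6.3f r(sd) * sqrt(1 - `icc21')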