### Abstract

**Objectives** There are similarities between the different forms of reliability, such as internal consistency (internal reliability) and interrater and intrarater reliability. Reliability coefficients that are based on classical test theory, such as Cronbach's alpha, can be expressed as intraclass correlation coefficients (ICCs). The Spearman–Brown prophecy formula (SB formula) is used to calculate the reliability when the number of items in a questionnaire is changed. This paper aims to increase insight into reliability studies by pointing to the assumptions of reliability coefficients, the similarities between various coefficients, and the new applications that follow from them.

**Design, Settings and Results** The origin and assumptions of Cronbach's alpha and the SB formula are discussed. Cronbach's alpha can be written as an ICC formula, using the well-known property that averaging a number of ratings increases the reliability of a measurement. We illustrate with an example that the ICC formulas for averaged measurements of multiple raters and the SB formula give similar results. This implies that the SB formula can be used to decide on the number of measurements to be averaged, and thus on the number of raters required to obtain measurements with acceptable reliability, even if the variance components of the ICC formula are not known. Using the same example, we illustrate the principle of “Cronbach's alpha if item deleted” to identify the poorest-performing raters in a set of raters.

**Conclusion** These applications rest on different assumptions: the principle of “Cronbach's alpha if item deleted” assumes a fixed set of items/raters, whereas the SB formula assumes random raters. The example also emphasizes the need for more raters in the design of a reliability study to obtain a robust estimate of reliability.
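The abstract's key quantitative relationship, Cronbach's alpha viewed as the reliability of an average of ratings and the SB formula prophesying reliability for a different number of raters, can be sketched numerically. This is a minimal illustration only; the `ratings` matrix below is hypothetical data invented for the sketch, not taken from the paper:

```python
import numpy as np

# Hypothetical example: 6 subjects each scored by k = 3 raters
# (illustrative numbers, not from the paper).
ratings = np.array([
    [4.0, 4.5, 4.2],
    [2.0, 2.5, 2.1],
    [3.5, 3.0, 3.4],
    [5.0, 4.8, 4.6],
    [1.5, 2.0, 1.8],
    [3.0, 3.2, 2.9],
])
n_subjects, k = ratings.shape

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of rater variances / variance of the sum score)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def spearman_brown(r_single, m):
    """Reliability of the average of m parallel measurements,
    given the reliability r_single of one measurement."""
    return m * r_single / (1 + (m - 1) * r_single)

alpha = cronbach_alpha(ratings)

# Invert the SB formula to get the single-rater reliability implied by alpha,
# then prophesy the reliability if 5 raters were averaged instead of 3.
r_single = alpha / (k - (k - 1) * alpha)
r_five = spearman_brown(r_single, 5)
```

Applying the SB formula back with `m = k` recovers `alpha` exactly, mirroring the paper's point that the ICC-for-averages and SB formulas coincide, and `r_five` exceeds `alpha`, showing how adding raters raises the reliability of the averaged score.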

| Original language | English |
|---|---|
| Pages (from-to) | 45-49 |
| Number of pages | 5 |
| Journal | Journal of Clinical Epidemiology |
| Volume | 85 |
| DOIs | 10.1016/j.jclinepi.2017.01.013 |
| Publication status | Published - 1 May 2017 |

### Cite this


**Spearman–Brown prophecy formula and Cronbach's alpha: different faces of reliability and opportunities for new applications.** / de Vet, Henrica C.W.; Mokkink, Lidwine B.; Mosmuller, David G.; Terwee, Caroline B.

Research output: Contribution to journal › Article › Academic › peer-review

TY - JOUR

T1 - Spearman–Brown prophecy formula and Cronbach's alpha

T2 - different faces of reliability and opportunities for new applications

AU - de Vet, Henrica C.W.

AU - Mokkink, Lidwine B.

AU - Mosmuller, David G.

AU - Terwee, Caroline B.

PY - 2017/5/1

Y1 - 2017/5/1


KW - Classical test theory

KW - Cronbach's alpha

KW - Intraclass correlation coefficient

KW - Reliability

KW - Spearman–Brown prophecy formula

UR - http://www.scopus.com/inward/record.url?scp=85017334777&partnerID=8YFLogxK

U2 - 10.1016/j.jclinepi.2017.01.013

DO - 10.1016/j.jclinepi.2017.01.013

M3 - Article

VL - 85

SP - 45

EP - 49

JO - Journal of Clinical Epidemiology

JF - Journal of Clinical Epidemiology

SN - 0895-4356

ER -