Objectives
There are similarities between the different forms of reliability, such as internal consistency (internal reliability) and interrater and intrarater reliability. Reliability coefficients based on classical test theory, such as Cronbach's alpha, can be expressed as intraclass correlation coefficients (ICCs). The Spearman–Brown prophecy formula (SB formula) is used to calculate the reliability of a questionnaire when the number of items is changed. This paper aims to increase insight into reliability studies by pointing to the assumptions underlying reliability coefficients, the similarities between various coefficients, and the new applications that follow from these similarities.

Design, Settings and Results
The origin and assumptions of Cronbach's alpha and the SB formula are discussed. Cronbach's alpha is written as an ICC formula, using the well-known property that averaging a number of ratings increases the reliability of a measurement. We illustrate with an example that the ICC formulas for average measurements of multiple raters and the SB formula give similar results. This implies that the SB formula can be used to decide on the number of measurements to be averaged, and thus on the number of raters required to obtain measurements with acceptable reliability, even when the variance components of the ICC formula are not known. Using the same example, we illustrate the principle of "Cronbach's alpha if item deleted" to identify the poorest-performing raters in a set of raters.

Conclusion
These applications rest on different assumptions: the principle of "Cronbach's alpha if item deleted" assumes a fixed set of items/raters, whereas the SB formula assumes random raters. The example also emphasizes the need for more raters in the design of a reliability study to obtain a robust estimate of reliability.
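To make the two coefficients named in the abstract concrete, the following is a minimal sketch (not taken from the paper; the ratings matrix is invented for illustration) of Cronbach's alpha for a subjects-by-raters matrix and of the SB formula for predicting the reliability of an average over k raters:

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for a subjects x items (or subjects x raters) matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(ratings[0])  # number of items/raters

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in ratings]) for j in range(k)]
    total_var = var([sum(row) for row in ratings])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)


def spearman_brown(r, k):
    """SB prophecy formula: reliability of the average of k parallel
    measurements, given single-measurement reliability r."""
    return k * r / (1 + (k - 1) * r)


# Invented example: 4 subjects rated by 2 raters on the same scale.
ratings = [[1, 2], [2, 3], [3, 4], [4, 5]]
alpha = cronbach_alpha(ratings)

# If a single rater has reliability 0.5, averaging over 3 raters gives
# 3 * 0.5 / (1 + 2 * 0.5) = 0.75 -- the kind of calculation the abstract
# suggests for choosing the number of raters.
r_avg = spearman_brown(0.5, 3)
```

Here `spearman_brown` shows how the number of raters needed for an acceptable average-measurement reliability can be found without knowing the variance components of the ICC, as the abstract notes.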