Specific agreement on dichotomous outcomes can be calculated for more than two raters

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Objective: For assessing interrater agreement, the concepts of observed agreement and specific agreement have been proposed. The situation of two raters and dichotomous outcomes has been described, whereas often multiple raters are involved. We aim to extend these concepts to more than two raters and examine how to calculate agreement estimates and 95% confidence intervals (CIs).

Study Design and Setting: As an illustration, we used a reliability study in which four plastic surgeons classified photographs of the breasts of 50 women after breast reconstruction as “satisfied” or “not satisfied.” In a simulation study, we checked the hypothesized sample size for the calculation of 95% CIs.

Results: For m raters, all m(m − 1)/2 pairwise tables were summed. The discordant cells were then averaged before observed and specific agreements were calculated. The total number (N) in the summed table is m(m − 1)/2 times the number of subjects (n); in the example, N = 300 for n = 50 subjects rated by m = 4 raters. A correction of n√(m − 1) was appropriate to obtain 95% CIs comparable to bootstrapped CIs.

Conclusion: The concepts of observed agreement and specific agreement can be extended to more than two raters with valid estimation of the 95% CIs.
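To make the procedure concrete, the sketch below implements the pairwise-summing approach from the abstract in Python. It is a minimal illustration, not the authors' code: the function names (specific_agreement, wald_ci) and the simulated scores are hypothetical, and the way the n√(m − 1) correction enters the Wald-type interval (as an effective sample size in the variance) is an assumption, since the abstract only states the correction factor.

import numpy as np

def specific_agreement(ratings):
    # ratings: (n, m) array of 0/1 scores -- n subjects, m raters.
    # Sum all m(m-1)/2 pairwise 2x2 tables, average the discordant
    # cells, then compute observed and specific agreement.
    ratings = np.asarray(ratings)
    n, m = ratings.shape
    a = b = c = d = 0  # cells of the summed 2x2 table
    for i in range(m):
        for j in range(i + 1, m):
            x, y = ratings[:, i], ratings[:, j]
            a += int(np.sum((x == 1) & (y == 1)))
            b += int(np.sum((x == 1) & (y == 0)))
            c += int(np.sum((x == 0) & (y == 1)))
            d += int(np.sum((x == 0) & (y == 0)))
    e = (b + c) / 2           # averaged discordant cell
    N = a + b + c + d         # = n * m * (m - 1) / 2
    po = (a + d) / N                   # observed agreement
    pa_pos = 2 * a / (2 * a + 2 * e)   # specific agreement, positive ("satisfied")
    pa_neg = 2 * d / (2 * d + 2 * e)   # specific agreement, negative
    return po, pa_pos, pa_neg, N

def wald_ci(p, n, m, z=1.96):
    # Wald-type 95% CI using the effective sample size n * sqrt(m - 1);
    # the placement of the correction in the variance is an assumption.
    n_eff = n * np.sqrt(m - 1)
    se = np.sqrt(p * (1 - p) / n_eff)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical data: 50 subjects scored 0/1 by 4 raters, as in the example.
rng = np.random.default_rng(seed=1)
scores = rng.integers(0, 2, size=(50, 4))
po, pa_pos, pa_neg, N = specific_agreement(scores)
print(N)                       # 300 = 50 * 4 * 3 / 2, matching the abstract
print(wald_ci(po, n=50, m=4))

With n = 50 and m = 4 this reproduces N = 300 from the abstract, while the effective sample size n√(m − 1) ≈ 87 is much smaller than N, widening the interval relative to one naively based on the summed table.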

Original language: English
Pages (from-to): 85-89
Number of pages: 5
Journal: Journal of Clinical Epidemiology
Volume: 83
DOI: 10.1016/j.jclinepi.2016.12.007
Publication status: Published - 1 Mar 2017

Cite this

@article{eff911386bdd4d4caad7f239ae5dbe87,
title = "Specific agreement on dichotomous outcomes can be calculated for more than two raters",
abstract = "Objective For assessing interrater agreement, the concepts of observed agreement and specific agreement have been proposed. The situation of two raters and dichotomous outcomes has been described, whereas often, multiple raters are involved. We aim to extend it for more than two raters and examine how to calculate agreement estimates and 95{\%} confidence intervals (CIs). Study Design and Setting As an illustration, we used a reliability study that includes the scores of four plastic surgeons classifying photographs of breasts of 50 women after breast reconstruction into “satisfied” or “not satisfied.” In a simulation study, we checked the hypothesized sample size for calculation of 95{\%} CIs. Results For m raters, all pairwise tables [ie, m (m − 1)/2] were summed. Then, the discordant cells were averaged before observed and specific agreements were calculated. The total number (N) in the summed table is m (m − 1)/2 times larger than the number of subjects (n), in the example, N = 300 compared to n = 50 subjects times m = 4 raters. A correction of n√(m − 1) was appropriate to find 95{\%} CIs comparable to bootstrapped CIs. Conclusion The concept of observed agreement and specific agreement can be extended to more than two raters with a valid estimation of the 95{\%} CIs.",
keywords = "Confidence intervals, Continuity correction, Fleiss correction, Observed agreement, Specific agreement",
author = "{de Vet}, {Henrica C.W.} and Dikmans, {Rieky E.} and Iris Eekhout",
year = "2017",
month = "3",
day = "1",
doi = "10.1016/j.jclinepi.2016.12.007",
language = "English",
volume = "83",
pages = "85--89",
journal = "Journal of Clinical Epidemiology",
issn = "0895-4356",
publisher = "Elsevier USA",

}

Specific agreement on dichotomous outcomes can be calculated for more than two raters. / de Vet, Henrica C.W.; Dikmans, Rieky E.; Eekhout, Iris.

In: Journal of Clinical Epidemiology, Vol. 83, 01.03.2017, p. 85-89.


TY - JOUR
T1 - Specific agreement on dichotomous outcomes can be calculated for more than two raters
AU - de Vet, Henrica C.W.
AU - Dikmans, Rieky E.
AU - Eekhout, Iris
PY - 2017/3/1
Y1 - 2017/3/1
N2 - Objective For assessing interrater agreement, the concepts of observed agreement and specific agreement have been proposed. The situation of two raters and dichotomous outcomes has been described, whereas often, multiple raters are involved. We aim to extend it for more than two raters and examine how to calculate agreement estimates and 95% confidence intervals (CIs). Study Design and Setting As an illustration, we used a reliability study that includes the scores of four plastic surgeons classifying photographs of breasts of 50 women after breast reconstruction into “satisfied” or “not satisfied.” In a simulation study, we checked the hypothesized sample size for calculation of 95% CIs. Results For m raters, all pairwise tables [ie, m (m − 1)/2] were summed. Then, the discordant cells were averaged before observed and specific agreements were calculated. The total number (N) in the summed table is m (m − 1)/2 times larger than the number of subjects (n), in the example, N = 300 compared to n = 50 subjects times m = 4 raters. A correction of n√(m − 1) was appropriate to find 95% CIs comparable to bootstrapped CIs. Conclusion The concept of observed agreement and specific agreement can be extended to more than two raters with a valid estimation of the 95% CIs.
AB - Objective For assessing interrater agreement, the concepts of observed agreement and specific agreement have been proposed. The situation of two raters and dichotomous outcomes has been described, whereas often, multiple raters are involved. We aim to extend it for more than two raters and examine how to calculate agreement estimates and 95% confidence intervals (CIs). Study Design and Setting As an illustration, we used a reliability study that includes the scores of four plastic surgeons classifying photographs of breasts of 50 women after breast reconstruction into “satisfied” or “not satisfied.” In a simulation study, we checked the hypothesized sample size for calculation of 95% CIs. Results For m raters, all pairwise tables [ie, m (m − 1)/2] were summed. Then, the discordant cells were averaged before observed and specific agreements were calculated. The total number (N) in the summed table is m (m − 1)/2 times larger than the number of subjects (n), in the example, N = 300 compared to n = 50 subjects times m = 4 raters. A correction of n√(m − 1) was appropriate to find 95% CIs comparable to bootstrapped CIs. Conclusion The concept of observed agreement and specific agreement can be extended to more than two raters with a valid estimation of the 95% CIs.
KW - Confidence intervals
KW - Continuity correction
KW - Fleiss correction
KW - Observed agreement
KW - Specific agreement
UR - http://www.scopus.com/inward/record.url?scp=85013054023&partnerID=8YFLogxK
U2 - 10.1016/j.jclinepi.2016.12.007
DO - 10.1016/j.jclinepi.2016.12.007
M3 - Article
VL - 83
SP - 85
EP - 89
JO - Journal of Clinical Epidemiology
JF - Journal of Clinical Epidemiology
SN - 0895-4356
ER -