Measures of Agreement with Multiple Raters: Fréchet Variances and Inference.
Keywords: AC1; Cohen kappa; agreement; inter-rater reliability
Journal
Psychometrika
ISSN: 1860-0980
Abbreviated title: Psychometrika
Country: United States
NLM ID: 0376503
Publication information
Publication date: 08 Jan 2024
History:
received: 30 Aug 2022
accepted: 06 Dec 2023
medline: 08 Jan 2024
pubmed: 08 Jan 2024
entrez: 08 Jan 2024
Status: ahead of print
Abstract
Most measures of agreement are chance-corrected. They differ along three dimensions: their definition of chance agreement, their choice of disagreement function, and how they handle multiple raters. Chance agreement is usually defined in a pairwise manner, following either Cohen's kappa or Fleiss's kappa. The disagreement function is usually a nominal, quadratic, or absolute value function. But how to handle multiple raters is contentious, with the main contenders being Fleiss's kappa, Conger's kappa, and Hubert's kappa, the variant of Fleiss's kappa in which agreement occurs only if every rater agrees. More generally, multi-rater agreement coefficients can be defined in a g-wise way, where the disagreement weighting function uses g raters instead of two. This paper makes two main contributions. (a) We propose using Fréchet variances to handle the case of multiple raters. Fréchet variances are intuitive disagreement measures and turn out to generalize the nominal, quadratic, and absolute value functions to more than two raters. (b) We derive the limit theory of g-wise weighted agreement coefficients, with chance agreement of the Cohen type or Fleiss type, for the case where every item is rated by the same number of raters. After comparing three confidence interval constructions, we recommend computing confidence intervals using the arcsine transform or the Fisher transform.
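To make the abstract's quantities concrete, here is a minimal sketch (illustrative code, not the paper's implementation; the function names frechet_variance, agreement, and fisher_ci are invented for this note). Every coefficient discussed has the chance-corrected form kappa = 1 - D_o / D_e, where D_o is observed disagreement and D_e is the disagreement expected under chance. The sketch scores each item's g ratings by a sample Fréchet variance, which reduces to the ordinary variance, the mean absolute deviation from the median, or one minus the modal share under quadratic, absolute value, and nominal distances, respectively, matching the generalization described above. Chance disagreement follows a Fleiss-type model (raters drawing independently from the pooled ratings), estimated here by Monte Carlo, and the interval uses the Fisher transform with a bootstrap standard error standing in for the paper's analytic asymptotic variance.

import numpy as np
from scipy.stats import norm

def frechet_variance(x, metric="quadratic"):
    # Minimized average distance from the g ratings of one item to a
    # central point; the three metrics recover familiar dispersions.
    x = np.asarray(x)
    if metric == "quadratic":   # variance around the mean
        return np.mean((x - x.mean()) ** 2)
    if metric == "absolute":    # mean absolute deviation from the median
        return np.mean(np.abs(x - np.median(x)))
    if metric == "nominal":     # 1 - relative frequency of the mode
        _, counts = np.unique(x, return_counts=True)
        return 1.0 - counts.max() / x.size
    raise ValueError(f"unknown metric: {metric}")

def agreement(ratings, metric="quadratic", n_sim=500, seed=0):
    # Chance-corrected coefficient 1 - D_o / D_e. D_e is estimated by
    # Monte Carlo under a Fleiss-type chance model: each simulated
    # "rater" draws independently from the pooled rating distribution.
    rng = np.random.default_rng(seed)
    ratings = np.asarray(ratings)                      # shape (n_items, g)
    d_o = np.mean([frechet_variance(r, metric) for r in ratings])
    sims = rng.choice(ratings.ravel(), size=(n_sim, ratings.shape[1]))
    d_e = np.mean([frechet_variance(r, metric) for r in sims])
    return 1.0 - d_o / d_e

def fisher_ci(ratings, metric="quadratic", level=0.95, n_boot=200, seed=0):
    # Fisher-transform interval: atanh the estimate, form a normal
    # interval on the transformed scale, map back with tanh. A bootstrap
    # over items replaces the paper's analytic variance; assumes the
    # estimate stays strictly below 1 so atanh is finite.
    rng = np.random.default_rng(seed)
    ratings = np.asarray(ratings)
    n = len(ratings)
    est = agreement(ratings, metric, seed=seed)
    boot = [agreement(ratings[rng.integers(0, n, n)], metric, seed=seed)
            for _ in range(n_boot)]
    se = np.std(np.arctanh(boot), ddof=1)
    half = norm.ppf(0.5 + level / 2) * se
    return np.tanh(np.arctanh(est) - half), np.tanh(np.arctanh(est) + half)

# Example: 40 items rated by g = 3 raters on a 5-point scale.
rng = np.random.default_rng(1)
truth = rng.integers(1, 6, size=(40, 1))
ratings = np.clip(truth + rng.integers(-1, 2, size=(40, 3)), 1, 5)
print(agreement(ratings, "quadratic"), fisher_ci(ratings, "quadratic"))

An arcsine-based interval would be built the same way, transforming a coefficient rescaled to [0, 1] with arcsin(sqrt(.)) instead of atanh; a Cohen-type chance model would instead let each simulated rater draw from that rater's own marginal distribution rather than from the pooled one.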
Identifiers
pubmed: 38190018
doi: 10.1007/s11336-023-09945-2
pii: 10.1007/s11336-023-09945-2
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Copyright information
© 2024. The Author(s).
References
Berry, K. J., Johnston, J. E., & Mielke, P. W., Jr. (2008). Weighted kappa for multiple raters. Perceptual and Motor Skills, 107(3), 837–848. https://doi.org/10.2466/pms.107.3.837-848
Berry, K. J., & Mielke, P. W. (1988). A generalization of Cohen's kappa agreement measure to interval measurement and multiple raters. Educational and Psychological Measurement, 48(4), 921–933. https://doi.org/10.1177/0013164488484007
Carrasco, J. L., & Jover, L. (2003). Estimating the generalized concordance correlation coefficient through variance components. Biometrics, 59(4), 849–858. https://doi.org/10.1111/j.0006-341x.2003.00099.x
Cicchetti, D. V., & Feinstein, A. R. (1990). High agreement but low kappa: II. Resolving the paradoxes. Journal of Clinical Epidemiology, 43(6), 551–558. https://doi.org/10.1016/0895-4356(90)90159-m
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46. https://doi.org/10.1177/001316446002000104
Cohen, J. (1968). Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4), 213–220. https://doi.org/10.1037/h0026256
Cohen, M. B., Lee, Y. T., Miller, G., Pachocki, J., & Sidford, A. (2016). Geometric median in nearly linear time. In Proceedings of the forty-eighth annual ACM symposium on theory of computing (pp. 9–21). Association for Computing Machinery. https://doi.org/10.1145/2897518.2897647
Conger, A. J. (1980). Integration and generalization of kappas for multiple raters. Psychological Bulletin, 88(2), 322–328. https://doi.org/10.1037/0033-2909.88.2.322
Cooil, B., & Rust, R. T. (1994). Reliability and expected loss: A unifying principle. Psychometrika, 59(2), 203–216. https://doi.org/10.1007/BF02295184
Drezner, Z., Klamroth, K., Schöbel, A., & Wesolowsky, G. O. (2002). The Weber problem. In Z. Drezner & H. W. Hamacher (Eds.), Facility location: Applications and theory (pp. 1–36). Springer.
Dubey, P., & Müller, H. G. (2019). Fréchet analysis of variance for random objects. Biometrika, 106(4), 803–821. https://doi.org/10.1093/biomet/asz052
Efron, B. (1987). Better bootstrap confidence intervals. Journal of the American Statistical Association, 82(397), 171–185. https://doi.org/10.2307/2289144
Fisher, R. A. (1915). Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population. Biometrika, 10(4), 507–521. https://doi.org/10.2307/2331838
Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378–382. https://doi.org/10.1037/h0031619
Fréchet, M. (1948). Les éléments aléatoires de nature quelconque dans un espace distancié. Annales de l'institut Henri Poincaré, 10(4), 215–310.
Gwet, K. L. (2008). Computing inter-rater reliability and its variance in the presence of high agreement. The British Journal of Mathematical and Statistical Psychology, 61, 29–48. https://doi.org/10.1348/000711006X126600
Gwet, K. L. (2014). Handbook of inter-rater reliability. Advanced Analytics, LLC.
Gwet, K. L. (2021). Large-sample variance of Fleiss generalized kappa. Educational and Psychological Measurement. https://doi.org/10.1177/0013164420973080
Hoeffding, W. (1992). A class of statistics with asymptotically normal distribution. In S. Kotz & N. L. Johnson (Eds.), Breakthroughs in statistics: Foundations and basic theory (pp. 308–334). Springer. https://doi.org/10.1007/978-1-4612-0919-5_20
Huber, P. J. (1964). Robust estimation of a location parameter. Annals of Mathematical Statistics, 35(1), 73–101. https://doi.org/10.1214/aoms/1177703732
Hubert, L. (1977). Kappa revisited. Psychological Bulletin, 84(2), 289–297. https://doi.org/10.1037/0033-2909.84.2.289
Janson, H., & Olsson, U. (2001). A measure of agreement for interval or nominal multivariate observations. Educational and Psychological Measurement, 61(2), 277–289. https://doi.org/10.1177/00131640121971239
Korolyuk, V. S., & Borovskich, Y. V. (2013). Theory of U-statistics (reprint of the 1994 ed.). Springer.
Krippendorff, K. (1970). Bivariate agreement coefficients for reliability of data. Sociological Methodology, 2, 139–150. https://doi.org/10.2307/270787
Krippendorff, K. (2018). Content analysis: An introduction to its methodology (4th ed.). Sage.
Lee, A. J. (2019). U-statistics: Theory and practice. Routledge.
Lehmann, E. L. (2004). Elements of large-sample theory. Springer.
Light, R. J. (1971). Measures of response agreement for qualitative data: Some generalizations and alternatives. Psychological Bulletin, 76(5), 365–377. https://doi.org/10.1037/h0031643
Lin, L. I. (1989). A concordance correlation coefficient to evaluate reproducibility. Biometrics, 45(1), 255–268. https://doi.org/10.2307/2532051
Lin, L. I. (1992). Assay validation using the concordance correlation coefficient. Biometrics, 48(2), 599–604. https://doi.org/10.2307/2532314
Martín Andrés, A., & Álvarez Hernández, M. (2020). Hubert's multi-rater kappa revisited. The British Journal of Mathematical and Statistical Psychology, 73(1), 1–22. https://doi.org/10.1111/bmsp.12167
Maxwell, A. E. (1977). Coefficients of agreement between observers and their interpretation. The British Journal of Psychiatry, 130, 79–83. https://doi.org/10.1192/bjp.130.1.79
Moss, J. (2023). Measuring agreement using guessing models and knowledge coefficients. Psychometrika. https://doi.org/10.1007/s11336-023-09919-4
pmcid: 10444669
O'Connell, D. L., & Dobson, A. J. (1984). General observer-agreement measures on individual subjects and groups of subjects. Biometrics, 40(4), 973–983. https://doi.org/10.2307/2531148
Perreault, W. D., & Leigh, L. E. (1989). Reliability of nominal data based on qualitative judgments. Journal of Marketing Research, 26(2), 135–148. https://doi.org/10.1177/002224378902600201
Sandifer, M. G., Hordern, A., Timbury, G. C., & Green, L. M. (1968). Psychiatric diagnosis: A comparative study in North Carolina, London and Glasgow. The British Journal of Psychiatry, 114(506), 1–9. https://doi.org/10.1192/bjp.114.506.1
Schouten, H. J. A. (1980). Measuring pairwise agreement among many observers. Biometrical Journal, 22(6), 497–504. https://doi.org/10.1002/bimj.4710220605
Schouten, H. J. A. (1982). Measuring pairwise agreement among many observers. II. Some improvements and additions. Biometrical Journal, 24(5), 431–435. https://doi.org/10.1002/bimj.4710240502
Schuster, C., & Smith, D. A. (2005). Dispersion-weighted kappa: An integrative framework for metric and nominal scale agreement coefficients. Psychometrika. https://doi.org/10.1007/s11336-003-1110-4
Scott, W. A. (1955). Reliability of content analysis: The case of nominal scale coding. Public Opinion Quarterly, 19(3), 321–325. https://doi.org/10.1086/266577
Serfling, R. J. (1980). Approximation theorems of mathematical statistics. Wiley.
van Oest, R. (2019). A new coefficient of interrater agreement: The challenge of highly unequal category proportions. Psychological Methods, 24(4), 439–451. https://doi.org/10.1037/met0000183
Varian, H. R. (1975). A Bayesian approach to real estate assessment. In S. E. Fienberg & A. Zellner (Eds.), Studies in Bayesian econometrics and statistics in honor of Leonard J. Savage (pp. 195–208). North-Holland.
Warrens, M. J. (2012). Equivalences of weighted kappas for multiple raters. Statistical Methodology, 9(3), 407–422. https://doi.org/10.1016/j.stamet.2011.11.001
Warton, D. I., & Hui, F. K. C. (2011). The arcsine is asinine: The analysis of proportions in ecology. Ecology, 92(1), 3–10. https://doi.org/10.1890/10-0340.1
Zapf, A., Castell, S., Morawietz, L., & Karch, A. (2016). Measuring inter-rater reliability for nominal data—Which coefficients and confidence intervals are appropriate? BMC Medical Research Methodology, 16, 93. https://doi.org/10.1186/s12874-016-0200-9
pmcid: 4974794