The Inter-Rater Reliability of Technical Skills Assessment and Retention of Rater Training.
Keywords
Inter-rater reliability
Laparoscopic skills assessment
Medical Knowledge
Rater training
Residents
Simulation
Video-based assessment
Journal
Journal of Surgical Education
ISSN: 1878-7452
Abbreviated title: J Surg Educ
Country: United States
NLM ID: 101303204
Publication information
Publication date:
History:
received: 2018-08-27
revised: 2018-12-02
accepted: 2019-01-06
pubmed: 2019-02-03
medline: 2020-07-28
entrez: 2019-02-03
Status: ppublish
Abstract
BACKGROUND
The inter-rater reliability (IRR) of laparoscopic skills assessment is usually determined among motivated raters from a single subspecialty practice group with substantial experience using similar tools. The purpose of this study was to determine the IRR among attending surgeons with differing experience and practice backgrounds, the extent of rater training necessary to achieve good IRR, and whether the effect of rater training is retained over periods of nonuse.
METHODS
In Part 1, 5 surgeons with different practice backgrounds assessed 3 laparoscopic cholecystectomy videos using the Global Operative Assessment of Laparoscopic Skills (GOALS) instrument. In Part 2, 2 of these surgeons assessed a total of 33 videos over 5 scoring sessions distributed across 6 months. They participated in 2 different training sessions, and retention was tested in the other 3 sessions. IRR for Parts 1 and 2 was calculated as an intraclass correlation coefficient (ICC) from a 2-way random-effects model.
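The abstract does not state which form of the ICC was computed; as a minimal reference point, one common choice for this design (two-way random-effects, single-rater, absolute agreement, i.e., the Shrout and Fleiss ICC(2,1)) is sketched below, assuming every video is scored by the same set of raters and raters are treated as a random sample:

\[
\mathrm{ICC}(2,1) = \frac{MS_R - MS_E}{MS_R + (k-1)\,MS_E + \frac{k}{n}\left(MS_C - MS_E\right)}
\]

where \(MS_R\) is the between-videos (rows) mean square, \(MS_C\) the between-raters (columns) mean square, \(MS_E\) the residual mean square, \(k\) the number of raters, and \(n\) the number of videos. The authors may have used a different ICC variant (e.g., an average-measures form), so this equation is illustrative only.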
RESULTS
The ICC for Part 1 was poor (ICC = 0.26). In Part 2, the ICC was highest after each training session (scoring #1 ICC = 0.76, scoring #3 ICC = 0.74). The ICC was not retained 1.5 months after the brief video-based training session (scoring #2 ICC = -0.17). The ICC was retained 2.5 months after the in-depth discussion training session (scoring #4 ICC = 0.70), but not 4.5 months later (scoring #5 ICC = 0.04).
CONCLUSIONS
Good IRR cannot be assumed among surgeons with varying backgrounds and experience. Good IRR can be achieved with different types of rater training, but the effect of that training is lost over periods of nonuse. This suggests the need for further study of the IRR of technical skills assessment when it is performed by the wide variety of surgeon raters commonly encountered in postgraduate resident assessment.
Identifiers
pubmed: 30709756
pii: S1931-7204(18)30641-X
doi: 10.1016/j.jsurg.2019.01.001
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Pagination
1088-1093
Copyright information
Copyright © 2019 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.