Methodologically rigorous risk of bias tools for nonrandomized studies had low reliability and high evaluator burden.
Evaluator burden
Interconsensus reliability
Interrater reliability
Nonrandomized studies
ROBINS-I
RoB instrument for NRS of exposures
Journal
Journal of clinical epidemiology
ISSN: 1878-5921
Abbreviated title: J Clin Epidemiol
Country: United States
NLM ID: 8801383
Publication information
Publication date: December 2020
History:
Received: 1 July 2020
Revised: 9 September 2020
Accepted: 22 September 2020
PubMed: 29 September 2020
MEDLINE: 15 April 2021
Entrez: 28 September 2020
Status: ppublish
Abstract
To assess the real-world interrater reliability (IRR), interconsensus reliability (ICR), and evaluator burden of the Risk of Bias (RoB) in Nonrandomized Studies (NRS) of Interventions (ROBINS-I) and the RoB Instrument for NRS of Exposures (ROB-NRSE) tools. A six-center cross-sectional study with seven reviewers (two reviewer pairs) assessed RoB using ROBINS-I (n = 44 NRS) or ROB-NRSE (n = 44 NRS). We used Gwet's AC statistic. For ROBINS-I, both IRR and ICR for individual domains ranged from poor to substantial agreement, while IRR and ICR on overall RoB were poor. The evaluator burden was 48.45 minutes (95% CI 45.61 to 51.29). For ROB-NRSE, IRR and ICR for the majority of domains were poor, while the rest ranged from fair to perfect agreement; IRR and ICR on overall RoB were slight and poor, respectively. The evaluator burden was 36.98 minutes (95% CI 34.80 to 39.16). We found both tools to have low reliability, although that of ROBINS-I was slightly higher. Measures to increase agreement between raters (e.g., detailed training, supportive guidance material) may improve reliability and decrease evaluator burden.
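The agreement statistic named in the abstract (Gwet's AC, in its common first-order AC1 form for categorical ratings by two raters) can be sketched as below. This is a minimal illustration, not the study's analysis code; the rating values are toy examples, and the function name is hypothetical.

```python
from collections import Counter


def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 chance-corrected agreement between two raters
    giving categorical ratings to the same items (a sketch)."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)
    # Observed proportion of items on which the two raters agree
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: pe = (1 / (q - 1)) * sum_k pi_k * (1 - pi_k),
    # where pi_k is the average prevalence of category k across both raters
    counts = Counter(ratings_a) + Counter(ratings_b)
    pe = sum(
        (counts[c] / (2 * n)) * (1 - counts[c] / (2 * n)) for c in categories
    ) / (q - 1)
    return (pa - pe) / (1 - pe)


# Toy example: two raters judge four studies as "low" or "high" RoB
print(gwet_ac1(["low", "low", "high", "high"],
               ["low", "low", "high", "low"]))
```

Unlike Cohen's kappa, AC1 remains stable when category prevalences are highly skewed, which is one reason it is often chosen for risk-of-bias agreement studies.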
Identifiers
pubmed: 32987166
pii: S0895-4356(20)31119-7
doi: 10.1016/j.jclinepi.2020.09.033
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Pagination
140-147
Copyright information
Copyright © 2020 Elsevier Inc. All rights reserved.