The performance of prognostic models depended on the choice of missing value imputation algorithm: a simulation study.
Keywords: Prediction model; aregImpute; complete case analysis; mice; missForest; missing value imputation
Journal
Journal of clinical epidemiology
ISSN: 1878-5921
Abbreviated title: J Clin Epidemiol
Country: United States
NLM ID: 8801383
Publication information
Publication date: 24 Sep 2024
History:
received: 24 May 2024
revised: 3 Sep 2024
accepted: 16 Sep 2024
medline: 27 Sep 2024
pubmed: 27 Sep 2024
entrez: 26 Sep 2024
Status: aheadofprint
Abstract
The development of clinical prediction models is often impeded by missing values in the predictors. Various methods for imputing missing values before modelling have been proposed. Some are based on variants of multiple imputation by chained equations, while others are based on single imputation. These methods may include elements of flexible modelling or machine learning algorithms, and for some of them user-friendly software packages are available. The aim of this study was to investigate by simulation whether some of these methods consistently outperform others on performance measures of clinical prediction models. We simulated development and validation cohorts by mimicking observed distributions of the predictors and outcome variable of a real data set. In the development cohorts, missing predictor values were created in 36 scenarios defined by the missingness mechanism and the proportion of non-complete cases. We applied three imputation algorithms available in R: mice, aregImpute and missForest. These algorithms differ in their use of linear or flexible models or random forests, in the way they sample from the posterior predictive distribution, and in whether they generate a single or multiple imputed data sets. For multiple imputation we also investigated the impact of the number of imputations. Logistic regression models were fitted to the simulated development cohorts before (full data analysis) and after missing value generation (complete case analysis), and to the imputed data. Prognostic model performance was measured by the scaled Brier score, c-statistic, calibration intercept and slope, and the mean absolute prediction error, all evaluated in validation cohorts without missing values. The performance of full data analysis was considered ideal. None of the imputation methods achieved the predictive accuracy that would have been obtained without missingness.
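The performance measures named above can be made concrete with a small sketch. This is pure NumPy and purely illustrative (the study itself used R); the function names and the simulated data are my own, not the authors':

```python
import numpy as np

def c_statistic(y, p):
    """Concordance (c-statistic / AUC) via the rank-sum formula.
    Assumes no ties among the predicted probabilities p."""
    ranks = np.argsort(np.argsort(p)) + 1          # ranks 1..n
    n1, n0 = y.sum(), (1 - y).sum()
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def scaled_brier(y, p):
    """Brier score scaled against a non-informative model
    that predicts the outcome prevalence for everyone."""
    brier = np.mean((p - y) ** 2)
    brier_null = np.mean((y.mean() - y) ** 2)
    return 1 - brier / brier_null

def calibration_slope_intercept(y, p, iters=25):
    """Logistic recalibration on the linear predictor logit(p):
    the slope comes from a two-parameter refit, the intercept from
    an intercept-only model with the linear predictor as offset."""
    lp = np.log(p / (1 - p))
    Z = np.column_stack([np.ones_like(lp), lp])
    beta = np.zeros(2)
    for _ in range(iters):                          # Newton-Raphson, slope model
        mu = 1 / (1 + np.exp(-Z @ beta))
        W = mu * (1 - mu)
        beta += np.linalg.solve(Z.T @ (Z * W[:, None]), Z.T @ (y - mu))
    a = 0.0
    for _ in range(iters):                          # intercept-only offset model
        mu = 1 / (1 + np.exp(-(a + lp)))
        a += (y - mu).sum() / (mu * (1 - mu)).sum()
    return beta[1], a
```

A well-calibrated model should give a calibration slope near 1 and an intercept near 0; a scaled Brier score of 0 means no improvement over predicting the prevalence.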
In general, complete case analysis yielded the worst performance, and the deviation from ideal performance increased with increasing percentage of missingness and decreasing sample size. Across all scenarios and performance measures, aregImpute and mice, both with 100 imputations, achieved the highest predictive accuracy. Surprisingly, aregImpute outperformed full data analysis in achieving calibration slopes very close to 1 across all scenarios and outcome models. The gain in mice's performance with 100 compared to 5 imputations was only marginal. The differences between the imputation methods decreased with increasing sample size and decreasing proportion of non-complete cases. In our simulation study, model calibration was more affected by the choice of imputation method than model discrimination. While the differences in model performance across imputation methods were generally small, multiple imputation methods such as mice and aregImpute that can handle linear or nonlinear associations between predictors and outcome are an attractive and reliable choice in most situations.
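The multiple-imputation workflow the abstract describes (impute m times, fit the prognostic model on each completed data set, pool the results) can be sketched minimally in NumPy. This uses simple stochastic regression imputation on toy data as a stand-in; the study's actual algorithms (mice, aregImpute, missForest) are considerably more sophisticated, and all names here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy development cohort (illustrative; not the study's data)
n = 600
X = rng.normal(size=(n, 3))
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))).astype(float)

X_obs = X.copy()
X_obs[rng.random(n) < 0.3, 2] = np.nan      # ~30% MCAR missingness in one predictor

def impute_once(X_obs, rng):
    """One stochastic regression imputation: regress the incomplete column
    on the complete ones, then draw imputed values with residual noise."""
    Xi = X_obs.copy()
    obs = ~np.isnan(Xi[:, 2])
    A = np.column_stack([np.ones(len(Xi)), Xi[:, 0], Xi[:, 1]])
    beta, *_ = np.linalg.lstsq(A[obs], Xi[obs, 2], rcond=None)
    sigma = (Xi[obs, 2] - A[obs] @ beta).std()
    Xi[~obs, 2] = A[~obs] @ beta + rng.normal(scale=sigma, size=(~obs).sum())
    return Xi

def fit_logistic(X, y, iters=25):
    """Logistic regression by Newton-Raphson; returns intercept + coefficients."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Z.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-Z @ beta))
        W = mu * (1 - mu)
        beta += np.linalg.solve(Z.T @ (Z * W[:, None]), Z.T @ (y - mu))
    return beta

# multiple imputation: m completed data sets, one model each, pooled predictions
# (a simplification: Rubin's rules properly pool coefficients and their variances)
m = 5
preds = []
for _ in range(m):
    Xi = impute_once(X_obs, rng)
    beta = fit_logistic(Xi, y)
    preds.append(1 / (1 + np.exp(-(beta[0] + Xi @ beta[1:]))))
pooled = np.mean(preds, axis=0)
```

Increasing m (the abstract compares 5 vs 100 imputations) reduces the Monte Carlo noise of the imputation step at proportional computational cost.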
Identifiers
pubmed: 39326470
pii: S0895-4356(24)00295-6
doi: 10.1016/j.jclinepi.2024.111539
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Pagination
111539
Copyright information
Copyright © 2024 The Author(s). Published by Elsevier Inc. All rights reserved.