Reinforcement learning application in diabetes blood glucose control: A systematic review.
Artificial pancreas
Blood glucose control
Closed-loop
Insulin infusion
Reinforcement learning
Journal
Artificial intelligence in medicine
ISSN: 1873-2860
Abbreviated title: Artif Intell Med
Country: Netherlands
NLM ID: 8915031
Publication information
Publication date: April 2020
History:
Received: 30 July 2018
Revised: 3 August 2019
Accepted: 19 February 2020
Entrez: 6 June 2020
PubMed: 6 June 2020
MEDLINE: 19 August 2021
Status:
ppublish
Abstract
BACKGROUND
Reinforcement learning (RL) is a computational approach to understanding and automating goal-directed learning and decision-making. It is designed for problems in which a learning agent interacts with its environment to achieve a goal, for example, blood glucose (BG) control in diabetes mellitus (DM), where the learning agent is the controller and the environment is the patient's body. RL algorithms could be used to design a fully closed-loop controller, providing a truly personalized insulin dosage regimen based exclusively on the patient's own data.
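To make the agent-environment formulation concrete, the following is a minimal tabular Q-learning sketch in Python; the GlucoseSimulator class, its toy dynamics, the discretized BG states, and the insulin-rate actions are illustrative assumptions only and do not correspond to any controller or simulator evaluated in this review.

    import random
    from collections import defaultdict

    # Hypothetical glucose environment: state is a discretized BG range,
    # action is a discretized basal insulin rate, reward penalizes
    # deviation from an illustrative euglycemic target.
    class GlucoseSimulator:
        TARGET = 100  # mg/dL, illustrative target

        def reset(self):
            self.bg = 180.0  # start hyperglycemic
            return self._state()

        def step(self, insulin_rate):
            # Toy dynamics: insulin lowers BG, a small drift raises it.
            self.bg += 5.0 - 10.0 * insulin_rate + random.gauss(0, 2)
            reward = -abs(self.bg - self.TARGET)   # closer to target is better
            return self._state(), reward

        def _state(self):
            return int(self.bg // 20)              # discretize BG into bins

    ACTIONS = [0.0, 0.5, 1.0, 1.5]                 # basal rates (U/h), illustrative
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
    q = defaultdict(float)                         # Q[(state, action)] -> value

    env = GlucoseSimulator()
    for episode in range(200):
        s = env.reset()
        for t in range(288):                       # 24 h at 5-minute steps
            # epsilon-greedy action selection
            if random.random() < EPSILON:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r = env.step(a)
            # Q-learning update
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2

In this sketch the agent plays the role of the controller and the simulator stands in for the patient's body; the reviewed studies replace the toy dynamics with physiological simulators or patient data.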
OBJECTIVE
In this review, we aim to evaluate state-of-the-art RL approaches to designing BG control algorithms for DM patients, reporting successfully implemented RL algorithms for closed-loop control, insulin infusion, decision support, and personalized feedback in the context of DM.
METHODS
An exhaustive literature search was performed using several online databases, covering the literature from 1990 to 2019. In the first stage, a set of selection criteria was established to select the most relevant papers according to title, keywords, and abstract. In the second stage, research questions were established and answered using the information extracted from the articles retained in the preliminary selection.
RESULTS
The initial search using title, keywords, and abstract returned a total of 404 articles. After removal of duplicates, 347 articles remained. Independent analysis and screening of the records against the inclusion and exclusion criteria defined in the Methods section resulted in the removal of 296 articles, leaving 51 relevant articles. A full-text assessment of these articles yielded 29 relevant articles that were critically analyzed. Inter-rater agreement was measured using Cohen's kappa, and disagreements were resolved through discussion.
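For reference, Cohen's kappa corrects the observed agreement p_o between two raters for the agreement p_e expected by chance, kappa = (p_o - p_e) / (1 - p_e). A minimal sketch of computing it with scikit-learn is shown below; the include/exclude decisions are purely illustrative and are not the screening data of this review.

    from sklearn.metrics import cohen_kappa_score

    # Illustrative include/exclude decisions by two independent reviewers.
    reviewer_1 = ["include", "exclude", "include", "exclude", "exclude", "include"]
    reviewer_2 = ["include", "exclude", "exclude", "exclude", "exclude", "include"]

    kappa = cohen_kappa_score(reviewer_1, reviewer_2)
    print(f"Cohen's kappa: {kappa:.2f}")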
CONCLUSIONS
Advances in health technologies and mobile devices have facilitated the implementation of RL algorithms for optimal glycemic regulation in diabetes. However, few articles in the literature focus on the application of these algorithms to the BG regulation problem. Moreover, such algorithms are designed for control tasks such as BG adjustment, and their use in diabetes research has increased recently, so we foresee RL algorithms being applied more frequently to BG control in the coming years. Furthermore, the literature pays little attention to factors that influence BG levels, such as meal intake and physical activity (PA), which should be included in the control problem. Finally, clinical validation of these algorithms is still needed.
Identifiers
pubmed: 32499004
pii: S0933-3657(18)30454-8
doi: 10.1016/j.artmed.2020.101836
Chemical substances
Blood Glucose (0)
Insulin (0)
Publication types
Journal Article
Research Support, Non-U.S. Gov't
Review
Systematic Review
Languages
eng
Citation subsets
IM
Pagination
101836
Copyright information
Copyright © 2020 Elsevier B.V. All rights reserved.
Conflict of interest statement
Declaration of Competing Interest None.