Deep Neural Networks Can Accurately Detect Blood Loss and Hemorrhage Control Task Success From Video.
Journal
Neurosurgery
ISSN: 1524-4040
Abbreviated title: Neurosurgery
Country: United States
NLM ID: 7802914
Publication information
Publication date: 01 Jun 2022
History:
received: 30 Jun 2021
accepted: 24 Nov 2021
pubmed: 24 Mar 2022
medline: 18 May 2022
entrez: 23 Mar 2022
Status: ppublish
Abstract
BACKGROUND
Deep neural networks (DNNs) have not been proven to detect blood loss (BL) or predict surgeon performance from video.
OBJECTIVE
To train a DNN using video from cadaveric training exercises of surgeons controlling simulated internal carotid hemorrhage to predict clinically relevant outcomes.
METHODS
Video was input as a series of images. Deep learning networks were developed that predicted BL and task success from images alone (automated model) and from images plus human-labeled instrument annotations (semiautomated model). These models were compared against 2 reference models: control 1 used the average BL across all trials as its prediction, and control 2 was a linear regression with time to hemostasis (a metric with a known association with BL) as input. The root-mean-square error (RMSE) and correlation coefficients were used to compare the models; lower RMSE indicates superior performance.
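The two reference models are simple enough to sketch in a few lines. The trial values below are invented for illustration only (the study's underlying data are not given in the abstract); the comparison logic — a mean-prediction baseline versus an ordinary least-squares fit on time to hemostasis, scored by RMSE — follows the description above.

```python
import numpy as np

# Hypothetical data: measured blood loss (mL) and time to hemostasis (s)
# for a handful of trials -- illustrative values, not the study's data.
blood_loss = np.array([250.0, 900.0, 400.0, 1200.0, 650.0, 300.0])
time_to_hemostasis = np.array([60.0, 240.0, 110.0, 300.0, 180.0, 90.0])

def rmse(y_true, y_pred):
    """Root-mean-square error; lower values indicate better predictions."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Control 1: predict the average BL across all trials for every trial.
control1_pred = np.full_like(blood_loss, blood_loss.mean())

# Control 2: least-squares linear regression of BL on time to hemostasis.
slope, intercept = np.polyfit(time_to_hemostasis, blood_loss, 1)
control2_pred = slope * time_to_hemostasis + intercept

print(f"control 1 RMSE: {rmse(blood_loss, control1_pred):.1f} mL")
print(f"control 2 RMSE: {rmse(blood_loss, control2_pred):.1f} mL")
```

Because the constant predictor is a special case of the linear model, control 2's RMSE can never exceed control 1's on the data it was fit to, which is why time to hemostasis is a meaningful stronger baseline.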
RESULTS
One hundred forty-three trials were used (123 for training and 20 for testing). Deep learning models outperformed controls (control 1: RMSE 489 mL; control 2: RMSE 431 mL, R² = 0.35) at BL prediction. The automated model predicted BL with an RMSE of 358 mL (R² = 0.4) and correctly classified outcome in 85% of trials. The RMSE and classification performance of the semiautomated model improved to 260 mL and 90%, respectively.
CONCLUSION
BL and task outcome classification are important components of an automated assessment of surgical performance. DNNs can predict BL and outcome of hemorrhage control from video alone; their performance is improved with surgical instrument presence data. The generalizability of DNNs trained on hemorrhage control tasks should be investigated.
Identifiers
pubmed: 35319539
doi: 10.1227/neu.0000000000001906
pii: 00006123-202206000-00022
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Pagination
823-829
Comments and corrections
Type: CommentIn
Type: CommentIn
Copyright information
Copyright © Congress of Neurological Surgeons 2022. All rights reserved.
References
Hashimoto DA, Rosman G, Witkowski ER, et al. Computer vision analysis of intraoperative video: automated recognition of operative steps in laparoscopic sleeve gastrectomy. Ann Surg. 2019;270(3):414-421.
Esteva A, Chou K, Yeung S, et al. Deep learning-enabled medical computer vision. NPJ Digital Med. 2021;4(1):1-9.
Loukas C. Video content analysis of surgical procedures. Surg Endosc. 2018;32(2):553-568.
Makary MA. The power of video recording: taking quality to the next level. JAMA. 2013;309(15):1591-1592.
Makary MA, Xu T, Pawlik TM. Can video recording revolutionise medical quality? BMJ. 2015;351:h5169.
Payer C, Štern D, Bischof H, Urschler M. Regressing heatmaps for multiple landmark localization using CNNs. In: Ourselin S, Joskowicz L, Sabuncu MR, Unal G, Wells W, eds. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016. Vol 9901, Lecture Notes in Computer Science. Springer International Publishing; 2016:230-238.
Kitajima M, Hirai T, Katsuragawa S, et al. Differentiation of common large sellar-suprasellar masses: effect of artificial neural network on radiologists' diagnosis performance. Acad Radiol. 2009;16(3):313-320.
Volkov M, Hashimoto DA, Rosman G, Meireles OR, Rus D. Machine learning and coresets for automated real-time video segmentation of laparoscopic and robot-assisted surgery. Paper presented at: 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE; 2017:754-759.
Pangal DJ, Kugener G, Shahrestani S, Attenello F, Zada G, Donoho DA. A guide to annotation of neurosurgical intraoperative video for machine learning analysis and computer vision. World Neurosurg. 2021;150:26-30.
Donoho DA, Pangal DJ, Kugener G, et al. Improved surgeon performance following cadaveric simulation of internal carotid artery injury during endoscopic endonasal surgery: training outcomes of a nationwide prospective educational intervention. J Neurosurg. 2021;135(5):1347-1355.
Shen J, Hur K, Zhang Z, et al. Objective validation of perfusion-based human cadaveric simulation training model for management of internal carotid artery injury in endoscopic endonasal sinus and skull base surgery. Oper Neurosurg. 2018;15(2):231-238.
Pham M, Kale A, Marquez Y, et al. A perfusion-based human cadaveric model for management of carotid artery injury during endoscopic endonasal skull base surgery. J Neurol Surg B Skull Base. 2014;75(5):309-313.
Donoho DA, Johnson CE, Hur KT, et al. Costs and training results of an objectively validated cadaveric perfusion-based internal carotid artery injury simulation during endoscopic skull base surgery. Int Forum Allergy Rhinol. 2019;9(7):787-794.
Lopez-Picado A, Albinarrate A, Barrachina B. Determination of perioperative blood loss: accuracy or approximation? Anesth Analg. 2017;125(1):280-286.
Thomas S, Ghee L, Sill AM, Patel ST, Kowdley GC, Cunningham SC. Measured versus estimated blood loss: interim analysis of a prospective quality improvement study. Am Surg. 2020;86(3):228-231.
Mirchi N, Bissonnette V, Yilmaz R, Ledwos N, Winkler-Schwartz A, Del Maestro RF. The virtual operative assistant: an explainable artificial intelligence tool for simulation-based training in surgery and medicine. PLoS One. 2020;15(2):e0229596.
Kugener G, Pangal DJ, Cardinal T, et al. Utility of the simulated outcomes following carotid artery laceration video data set for machine learning applications. JAMA Network Open. 2022;5(3):e223177.
He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs]; 2015. Accessed May 24, 2021. http://arxiv.org/abs/1512.03385 .
Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L. ImageNet: a large-scale hierarchical image database. Paper presented at: 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2009:248-255.
Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9(8):1735-1780.
Khalid S, Goldenberg M, Taati B, Rudzicz F. Evaluation of deep learning models for identifying surgical actions and measuring performance. JAMA Netw Open. 2020;3(3):e201664.
Yengera G, Mutter D, Marescaux J, Padoy N. Less is More: Surgical Phase Recognition With Less Annotations Through Self-Supervised Pre-training of CNN-LSTM Networks. arXiv:1805.08569 [cs]; 2018. Accessed January 20, 2021. http://arxiv.org/abs/1805.08569 .
Serapio ET, Pearlson GA, Drey EA, Kerns JL. Estimated versus measured blood loss during dilation and evacuation: an observational study. Contraception. 2018;97(5):451-455.
Saoud F, Stone A, Nutter A, Hankins GD, Saade GR, Saad AF. Validation of a new method to assess estimated blood loss in the obstetric population undergoing cesarean delivery. Am J Obstet Gynecol. 2019;221(3):267.e1-267.e6.
Rubenstein AF, Zamudio S, Douglas C, Sledge S, Thurer RL. Automated quantification of blood loss versus visual estimation in 274 vaginal deliveries. Am J Perinatol. 2020;38(10):1031-1035.
Konig G, Holmes AA, Garcia R, et al. In vitro evaluation of a novel system for monitoring surgical hemoglobin loss. Anesth Analg. 2014;119(3):595-600.
Suzuki T, Sakurai Y, Yoshimitsu K, Nambu K, Muragaki Y, Iseki H. Intraoperative multichannel audio-visual information recording and automatic surgical phase and incident detection. Annu Int Conf IEEE Eng Med Biol Soc. 2010;2010:1190-1193.
Jones KI, Amawi F, Bhalla A, Peacock O, Williams JP, Lund JN. Assessing surgeon stress when operating using heart rate variability and the state trait anxiety inventory: will surgery be the death of us? Colorectal Dis. 2015;17(4):335-341.
Heemskerk J, Zandbergen HR, Keet SW, et al. Relax, it's just laparoscopy! A prospective randomized trial on heart rate variability of the surgeon in robot-assisted versus conventional laparoscopic cholecystectomy. Dig Surg. 2014;31(3):225-232.
Luong M-T, Pham H, Manning CD. Effective Approaches to Attention-Based Neural Machine Translation. arXiv:1508.04025 [cs]; 2015. Accessed September 29, 2021. http://arxiv.org/abs/1508.04025 .
Simonyan K, Vedaldi A, Zisserman A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv:1312.6034 [cs]; 2014. Accessed September 29, 2021. http://arxiv.org/abs/1312.6034 .