Efficient EndoNeRF reconstruction and its application for data-driven surgical simulation.

Keywords: 3D reconstruction; NeRF; Robotic surgery; Surgery simulation

Journal

International Journal of Computer Assisted Radiology and Surgery
ISSN: 1861-6429
Abbreviated title: Int J Comput Assist Radiol Surg
Country: Germany
NLM ID: 101499225

Publication information

Publication date:
24 Apr 2024
History:
received: 14 Jun 2023
accepted: 13 Mar 2024
medline: 25 Apr 2024
pubmed: 25 Apr 2024
entrez: 24 Apr 2024
Status: ahead of print

Abstract

The healthcare industry has a growing need for realistic modeling and efficient simulation of surgical scenes. With effective models of deformable surgical scenes, clinicians can conduct surgical planning and surgery training on scenarios close to real-world cases. However, a significant obstacle to this goal is the scarcity of high-quality soft tissue models with accurate shapes and textures. To address this gap, we present a data-driven framework that leverages emerging neural radiance field (NeRF) technology to enable high-quality surgical reconstruction, and we explore its application to surgical simulation. We first develop a fast NeRF-based approach to 3D reconstruction of surgical scenes that achieves state-of-the-art performance, significantly outperforming traditional 3D reconstruction methods, which fail to capture large deformations or produce fine-grained shapes and textures. We then propose an automated pipeline for creating interactive surgical simulation environments via a closed mesh extraction algorithm. Our experiments validate the superior performance and efficiency of the proposed approach for surgical scene 3D reconstruction. We further use the reconstructed soft tissues to conduct finite element method (FEM) and material point method (MPM) simulations, showcasing the practical application of our method in data-driven surgical simulation. In summary, we propose a novel NeRF-based reconstruction framework designed with simulation in mind, which facilitates the efficient creation of high-quality 3D models of surgical soft tissue. Through multiple soft tissue simulations, we show that our work has the potential to benefit downstream clinical tasks, such as surgical education.

Identifiers

pubmed: 38658450
doi: 10.1007/s11548-024-03114-1
pii: 10.1007/s11548-024-03114-1

Publication types

Journal Article

Languages

English (eng)

Citation subsets

IM

Grants

Agency: Multi-Scale Medical Robotics Centre InnoHK
ID: N/A
Agency: Research Grants Council of the Hong Kong Special Administrative Region, China
ID: T45-401/22-N
Agency: Shenzhen-Hong Kong Collaborative Development Zone
ID: N/A

Copyright information

© 2024. The Author(s).


Authors

Yuehao Wang (Y)

Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.

Bingchen Gong (B)

Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.

Yonghao Long (Y)

Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.

Siu Hin Fan (SH)

Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong, China.

Qi Dou (Q)

Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China. qidou@cuhk.edu.hk.

MeSH classifications