Paper
16 March 2020
Rigid and deformable corrections in real-time using deep learning for prostate fusion biopsy
Author Affiliations +
Abstract
Fusion biopsy reduces false negative rates in prostate cancer detection compared to systematic biopsy. However, the accuracy of biopsy sampling depends on the quality of alignment between the pre-operative 3D MR and the intra-operative 2D US. During a live biopsy, the US-MR alignment may be disturbed by rigid motion of the prostate or the patient. Further, the prostate gland deforms under probe pressure, which adds error to biopsy sampling. In this paper, we describe a deep-learning-based method for real-time 2D-3D multimodal registration that corrects for both rigid and deformable errors. Our method does not require an intermediate 3D US and runs in real time, with an average runtime of 112 ms for both rigid and deformable corrections. On data from 12 patients, our method reduces the mean target registration error (TRE) from 8.890±5.106 mm to 2.988±1.513 mm, comparable in accuracy to other state-of-the-art methods.
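The TRE figures above are the standard way to quantify registration accuracy: the Euclidean distance between corresponding anatomical landmarks after registration, averaged over landmarks and patients. As a minimal illustrative sketch (not the authors' evaluation code; the landmark coordinates below are made up), TRE can be computed like this:

```python
import numpy as np

def target_registration_error(fixed_pts, registered_pts):
    """Mean and std (in mm, assuming mm inputs) of the Euclidean
    distances between corresponding landmarks in the fixed (MR)
    space and the registered (US-mapped) space."""
    fixed = np.asarray(fixed_pts, dtype=float)
    moved = np.asarray(registered_pts, dtype=float)
    # One distance per corresponding landmark pair.
    dists = np.linalg.norm(fixed - moved, axis=1)
    return dists.mean(), dists.std()

# Toy example: three 3D landmarks with small residual misalignment.
mean_tre, std_tre = target_registration_error(
    [[0, 0, 0], [10, 0, 0], [0, 10, 0]],
    [[1, 0, 0], [10, 2, 0], [0, 10, 2]],
)
```

A lower mean TRE after the rigid and deformable corrections indicates that biopsy needle targets defined on MR map more accurately onto the live US view.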
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Aditya Bhardwaj, Jun-Sung Park, Soumik Mukhopadhyay, Sikander Sharda, Yuri Son, Bhavya Ajani, and Srinivas Rao Kudavelly "Rigid and deformable corrections in real-time using deep learning for prostate fusion biopsy", Proc. SPIE 11315, Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling, 113151W (16 March 2020); https://doi.org/10.1117/12.2548589
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Biopsy
Magnetic resonance imaging
Prostate
Image registration
Image segmentation
Image fusion
Ultrasonography