Radiofrequency ablation (RFA) with continuous ultrasonography (US) monitoring is a non-surgical alternative to traditional thyroid surgery for treating benign symptomatic thyroid nodules. Nodules are monitored over time with US imaging to determine procedural success, indicated primarily by measured volume reduction. These images also capture rich clinical characteristics that we believed could be systematically interrogated across patients to better understand and stratify nodule response to RFA. We performed radiomic texture analysis on 56 preoperative and postoperative US thyroid nodule images from patients treated with RFA, generating 767 radiomic feature (RF) measurements. Using dimensionality reduction and clustering of thyroid nodules by their US image texture features, we discovered distinct populations of nodules, suggesting that these methods, combined with radiomic texture analysis, provide a useful system for stratifying thyroid nodules. Additionally, individual texture features differed between nodules with successful and unsuccessful outcomes, further supporting radiomic features as potential biomarkers for RFA-treated thyroid nodules.
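The abstract does not name the specific dimensionality-reduction or clustering algorithms used; a minimal sketch of the general workflow, assuming PCA and k-means on a hypothetical 56 × 767 nodule-by-feature matrix (random stand-in data, not the study's measurements):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical stand-in: 56 nodule images x 767 radiomic texture features
X = rng.normal(size=(56, 767))

# Standardize so no single texture feature dominates the embedding
X_std = StandardScaler().fit_transform(X)

# Reduce the feature space to a low-dimensional embedding before clustering
embedding = PCA(n_components=2, random_state=0).fit_transform(X_std)

# Group nodules into candidate response strata (cluster count is illustrative)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
print(embedding.shape, np.unique(labels))
```

Standardization before PCA matters here because radiomic features span very different numeric ranges; without it, high-variance features would dominate the principal components.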
KEYWORDS: Video, Mixed reality, Ultrasonography, Real time imaging, Video processing, Injuries, Imaging systems, Displays, Diseases and disorders, Augmented reality, Holographic displays, Ultrasound real time imaging
Ergonomics for image-guided procedures can be improved by using mixed reality headsets. Such headsets offer the ability to position holographic monitors that display information, such as an ultrasound stream, within the operator’s field of view during procedures. However, one of the barriers to clinical adoption of mixed reality headsets is the high latency of information projected onto the headset. The video cards that capture procedural imaging for wireless streaming can account for upwards of 40% of the overall latency of the entire system. Video card costs vary widely, from as low as $20 to several hundred dollars. We evaluated the latencies of four separate video cards spanning this cost range. Based on these results, we propose an ideal tradeoff between latency and cost for the clinical use of wirelessly mirroring procedural imaging into mixed reality headsets.
Augmented reality (AR) can enable physicians to “see” inside of patients by projecting cross-sectional imaging directly onto the patient during procedures. To maintain workflow, imaging must be quickly and accurately registered to the patient. We describe a method for automatically registering a CT image set projected from an augmented reality headset to a set of points in the real world as a first step towards real-time registration of medical images to patients. Sterile, radiopaque fiducial markers with unique optical identifiers were placed on a patient prior to acquiring a CT scan of the abdomen. For testing purposes, the same fiducial markers were then placed on a tabletop as a representation of the patient. Our algorithm then automatically located the fiducial markers in the CT image set, optically identified the fiducial markers on the tabletop, registered the markers in the CT image set with the optically detected markers, and finally projected the registered CT image set onto the real-world markers using the augmented reality headset. The registration time for aligning the image set using 3 markers was 0.9 ± 0.2 seconds with an accuracy of 5 ± 2 mm. These findings demonstrate the feasibility of fast and accurate registration using unique radiopaque markers for aligning patient imaging onto patients for procedural planning and guidance.
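The abstract does not specify how the matched CT-space and optically detected marker positions are aligned; a common choice for this step, sketched here under that assumption, is a least-squares rigid transform (Kabsch algorithm) between the two corresponded point sets (marker coordinates below are hypothetical):

```python
import numpy as np

def rigid_register(ct_pts, world_pts):
    """Least-squares rigid transform (Kabsch) mapping CT-space fiducial
    positions onto optically detected world-space positions, with the
    correspondence assumed known from the markers' unique identifiers."""
    ct_c = ct_pts - ct_pts.mean(axis=0)        # center both point sets
    w_c = world_pts - world_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(ct_c.T @ w_c)     # SVD of the cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = world_pts.mean(axis=0) - R @ ct_pts.mean(axis=0)
    return R, t

# Hypothetical coordinates (mm) for 3 matched fiducial markers
ct = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 100.0, 0.0]])
theta = np.pi / 2                              # synthetic ground truth: 90° about z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
world = (Rz @ ct.T).T + np.array([5.0, -3.0, 10.0])

R, t = rigid_register(ct, world)
err = np.linalg.norm((ct @ R.T + t) - world, axis=1).max()
print(err < 1e-6)  # → True
```

Three non-collinear markers are the minimum needed to fix a rigid transform in 3D, which matches the 3-marker configuration timed in the abstract; the determinant correction matters precisely in this coplanar case, where the SVD could otherwise return a reflection.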
Augmented reality (AR) can be used to visualize virtual 3D models of medical imaging in actual 3D physical space. Accurate registration of these models onto patients will be essential for AR-assisted image-guided interventions. In this study, registration methods were developed, and registration times for aligning a virtual 3D anatomic model of patient imaging onto a CT grid commonly used in CT-guided interventions were compared. The described methodology enabled automated and accurate registration within seconds using computer vision detection of the CT grid as compared to minutes using user-interactive registration methods. Simple, accurate, and near instantaneous registration of virtual 3D models onto CT grids will facilitate the use of AR for real-time procedural guidance and combined virtual/actual 3D navigation during image-guided interventions.
Dynamic contrast enhanced (DCE) MRI has emerged as a reliable and diagnostically useful functional imaging technique. A DCE protocol typically lasts 3-15 minutes and results in a time series of N volumes. For automated analysis, it is important that volumes acquired at different times be spatially coregistered. We have recently introduced a novel 4D, or volume time series, coregistration tool based on a user-specified target volume of interest (VOI). However, the relationship between coregistration accuracy and target VOI size has not been investigated. In this study, coregistration accuracy was quantitatively measured using various sized target VOIs. Coregistration of 10 DCE-MRI mouse head image sets was performed with various sized VOIs targeting the mouse brain. Accuracy was quantified by measures based on the union and standard deviation of the coregistered volume time series. Coregistration accuracy improved rapidly as the size of the VOI increased and approached the approximate volume of the target (mouse brain). Further inflation of the VOI beyond the volume of the target only marginally improved coregistration accuracy. The CPU time needed to accomplish coregistration is a linear function of N and varies gradually with VOI size. From the results of this study, we recommend a VOI slightly overinclusive of the target, by approximately 5 voxels, for computationally efficient and accurate coregistration.
KEYWORDS: Kidney, Image segmentation, Magnetic resonance imaging, Image processing, Ear, Radiology, Medicine, Computer aided diagnosis and therapy, Signal to noise ratio, Software development
The precision, accuracy, and efficiency of a novel semi-automated segmentation technique for VIBE MRI sequences were analyzed using clinical datasets. Two observers performed whole-kidney segmentation using EdgeWave software based on constrained morphological growth, with average inter-observer disagreement of 2.7% for whole kidney volume, 2.1% for cortex, and 4.1% for medulla. Ground truths were prepared by constructing regions of interest (ROIs) on individual slices, revealing errors of 2.8%, 3.1%, and 3.6%, respectively. One segmentation took approximately 7 minutes to perform. These improvements over our existing graph-cuts segmentation technique make kidney volumetry a reality in many clinical applications.