We propose a Markov Random Field (MRF) formulation for the intensity-based N-view 2D-3D registration problem. The
transformation aligning the 3D volume to the 2D views is estimated by iterative updates obtained by discrete optimization
of the proposed MRF model. We employ a pairwise MRF model with a fully connected graph in which the nodes represent
the parameter updates and the edges encode the image similarity costs resulting from variations of the values of adjacent
nodes. A label space refinement strategy is employed to achieve sub-millimeter accuracy. The evaluation on real and
synthetic data and the comparison to a state-of-the-art method demonstrate the potential of our approach.
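A minimal sketch of how such a pairwise model over parameter updates could be set up and optimized, assuming a small set of candidate increments per parameter and a hypothetical `dissimilarity` function standing in for the image-similarity evaluation; iterated conditional modes is used here as a simple stand-in for the discrete optimizer of the paper:

```python
# Sketch of a fully connected pairwise MRF over transformation-parameter updates.
# Node i = one parameter; labels = candidate increments; the pairwise cost of
# labels (a, b) on nodes (i, j) is the image dissimilarity obtained when only
# those two parameters are perturbed. `dissimilarity` is hypothetical.
import itertools
import numpy as np

def pairwise_costs(params, labels, dissimilarity):
    """Precompute cost tables C[(i, j)][a, b] for all node pairs (fully connected)."""
    n = len(params)
    C = {}
    for i, j in itertools.combinations(range(n), 2):
        table = np.zeros((len(labels), len(labels)))
        for a, da in enumerate(labels):
            for b, db in enumerate(labels):
                trial = np.array(params, dtype=float)
                trial[i] += da
                trial[j] += db
                table[a, b] = dissimilarity(trial)
        C[(i, j)] = table
    return C

def icm(C, n_nodes, n_labels, n_sweeps=10):
    """Simple iterated conditional modes over the fully connected pairwise MRF."""
    x = np.zeros(n_nodes, dtype=int)
    for _ in range(n_sweeps):
        for i in range(n_nodes):
            costs = np.zeros(n_labels)
            for j in range(n_nodes):
                if j == i:
                    continue
                key = (min(i, j), max(i, j))
                table = C[key]
                costs += table[:, x[j]] if i < j else table[x[j], :]
            x[i] = int(np.argmin(costs))
    return x  # params[i] would then be updated by labels[x[i]]
```

The label space refinement of the paper would correspond to shrinking the candidate increments around the current estimate and repeating the optimization until the desired sub-millimeter accuracy is reached.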
Digital tomosynthesis (DTS) often suffers from slow reconstruction speed due to the high complexity of the
computation, particularly when iterative methods are employed. To fulfill clinical performance constraints, graphics
cards (GPUs) were investigated and proved to be an efficient platform for accelerating tomographic reconstruction.
However, hardware programming constraints often led to complicated memory management and resulted in reduced
accuracy or compromised performance. In this paper we propose a new GPU-based reconstruction framework targeting
tomosynthesis applications. Our framework benefits from the latest GPU functionalities and improves on the design of
previous implementations. A high-quality ray-driven forward projection helps simplify the data flow when arbitrary
acquisition matrices are provided. Our results show that the new framework achieves near-interactive reconstruction
speed with no loss of accuracy.
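As an illustration of the ray-driven forward projection mentioned above, the following CPU sketch computes line integrals by sampling the volume along source-to-detector rays with trilinear interpolation; the explicit source and detector-point geometry and the uniform sampling are simplifying assumptions, not the framework's GPU implementation:

```python
# Minimal ray-driven forward projector: accumulate trilinearly interpolated
# samples of the volume along each source-to-detector ray.
import numpy as np
from scipy.ndimage import map_coordinates

def forward_project(volume, source, detector_points, n_samples=256):
    """Approximate line integrals through `volume` (indexed z, y, x).

    source          : (3,) source position in voxel coordinates
    detector_points : (N, 3) detector pixel positions in voxel coordinates
    returns         : (N,) line integrals
    """
    t = np.linspace(0.0, 1.0, n_samples)                   # parameter along each ray
    rays = source[None, None, :] + t[None, :, None] * (detector_points[:, None, :] - source[None, None, :])
    coords = rays.reshape(-1, 3).T                          # (3, N * n_samples)
    samples = map_coordinates(volume, coords, order=1, mode='constant', cval=0.0)
    step = np.linalg.norm(detector_points - source, axis=1) / (n_samples - 1)
    return samples.reshape(len(detector_points), n_samples).sum(axis=1) * step
```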
Tracer kinetic modeling with dynamic contrast-enhanced MRI (DCE-MRI) and the quantification of the kinetic parameters are active fields of research that have the potential to improve the measurement of renal function. However, the strong coronal motion of the kidney in the time series inhibits an accurate assessment of the kinetic parameters. Automatic motion correction is challenging due to the large movement of the kidney and the strong intensity changes caused by the injected bolus. In this work, we improve the quantification results with a template-matching motion correction method using a gradient-based similarity measure. Thus, a tedious manual motion correction is replaced by an automatic procedure. The only remaining user interaction is the selection of a reference slice and a coarse manual segmentation of the kidney in this slice. These steps add no overhead to the interaction already needed for the assessment of the kinetic parameters. In order to achieve reliable and fast results, we constrain the degrees of freedom of the correction method as far as possible. Furthermore, we compare our method to deformable registration using the same similarity measure. In all our tests, the presented template-matching correction was superior to the deformable approach in terms of reliability, leading to more accurate parameter quantification. The evaluation on 10 patient data series with 180-230 images each demonstrates that the quantitative analysis by a two-compartment model can be improved by our method.
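A minimal sketch of the constrained, template-matching correction described above, assuming the gradient-based similarity is a normalized correlation of gradient magnitudes and that motion is restricted to integer in-plane translations (both assumptions for illustration):

```python
# Template matching of a kidney ROI against each time frame: both template and
# frame are reduced to gradient magnitudes, and only translations in a small
# search window are evaluated.
import numpy as np
from scipy.ndimage import sobel

def grad_mag(img):
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def match_translation(frame, template, top_left, search=20):
    """Return the (dy, dx) shift of `template` (originally at `top_left`) in `frame`."""
    gf, gt = grad_mag(frame), grad_mag(template)
    gt = (gt - gt.mean()) / (gt.std() + 1e-8)
    h, w = template.shape
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top_left[0] + dy, top_left[1] + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            patch = gf[y:y + h, x:x + w]
            patch = (patch - patch.mean()) / (patch.std() + 1e-8)
            score = float((patch * gt).mean())
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```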
Using 2D-3D registration it is possible to extract the body transformation between the coordinate systems of
X-ray and volumetric CT images. Our initial motivation is to improve the accuracy of external beam
radiation therapy, an effective method for treating cancer in which CT data play a central role in radiation
treatment planning. A rigid body transformation is used to compute the correct patient setup. The drawback
of such approaches is that the rigidity assumption on the imaged object is not valid for most patient
cases, mainly due to respiratory motion. In the present work, we address this limitation by proposing a flexible
framework for deformable 2D-3D registration consisting of a learning phase incorporating 4D CT data sets and
hardware-accelerated free-form DRR generation, 2D motion computation, and 2D-3D back projection.
Alignment of angiographic preoperative 3D scans to intraoperative 2D projections is an important issue for 3D
depth perception and navigation during interventions. Currently, in a setting where only one 2D projection is
available, methods employing a rigid transformation model present the state of the art for this problem. In
this work, we introduce a method capable of deformably registering 3D vessel structures to a single
projection of the scene. Our approach addresses the inherent ill-posedness of the problem by incorporating a
priori knowledge about the vessel structures into the formulation. We minimize the distance between the 2D
points and corresponding projected 3D points together with regularization terms encoding the properties of
length preservation of vessel structures and smoothness of deformation. We demonstrate the performance and
accuracy of the proposed method by quantitative tests on synthetic examples as well as real angiographic scenes.
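The abstract does not state the exact objective; one plausible instantiation of the described data term and regularizers, with 2D points $q_i$, 3D centerline points $x_i$, per-point displacements $d_i$, projection operator $P$, centerline edges $\mathcal{E}$, and weights $\lambda_L$, $\lambda_S$ introduced here for illustration, is:

```latex
E(d) = \sum_i \bigl\| q_i - P(x_i + d_i) \bigr\|^2
     + \lambda_L \sum_{(i,j) \in \mathcal{E}} \Bigl( \| (x_i + d_i) - (x_j + d_j) \| - \| x_i - x_j \| \Bigr)^2
     + \lambda_S \sum_{(i,j) \in \mathcal{E}} \| d_i - d_j \|^2
```

The first term penalizes the 2D distance between corresponding projected points, the second penalizes changes in segment length along the vessel (length preservation), and the third encourages a smooth deformation.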
KEYWORDS: Calibration, Ultrasonography, Technetium, Image sensors, 3D image reconstruction, In vivo imaging, Transducers, 3D acquisition, Liver, Reconstruction algorithms
For freehand ultrasound systems, a calibration method is necessary to locate the position and orientation of a
2D B-mode ultrasound image plane with respect to a position sensor attached to the transducer. In addition,
the acquisition time discrepancy between the position measurements and the image frames has to be computed.
We developed a new method that addresses both of these problems, based on the fact that a freehand ultrasound
system establishes consistent 3D data of an arbitrary object. Two angulated sweeps of any object containing
clearly visible structures are recorded, each at a different orientation. A non-linear optimization strategy maximizes
the similarity of 2D ultrasound images from one sweep to reconstructions computed from the other sweep. No
designated phantom is required for this calibration. The process can be performed in vivo on the patient. We
evaluated our method using freehand acquisitions on both a phantom and the human liver. The accuracy of the
approach was validated using a 3D ultrasound probe as a known reference geometry.
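A minimal sketch of the consistency-based calibration idea, assuming tracked sensor poses and image frames for both sweeps (`poses_a`, `frames_a`, `poses_b`, `frames_b`), a simple Euler-angle parameterization of the image-to-sensor transform, and nearest-voxel compounding; the temporal-offset estimation described in the abstract is omitted for brevity:

```python
# A candidate image-to-sensor transform maps every pixel into world space via
# the tracked poses; sweep B is compounded into a coarse volume, sweep A is
# resampled from it, and the negative correlation is minimized.
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def calibration_matrix(p):
    """6-vector (rx, ry, rz in degrees, tx, ty, tz in mm) -> homogeneous 4x4."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler('xyz', p[:3], degrees=True).as_matrix()
    T[:3, 3] = p[3:]
    return T

def pixel_world_positions(frame_shape, spacing, pose, calib):
    """World coordinates (N, 3) of all pixels of one frame."""
    v, u = np.mgrid[0:frame_shape[0], 0:frame_shape[1]]
    pts = np.stack([u.ravel() * spacing, v.ravel() * spacing,
                    np.zeros(u.size), np.ones(u.size)])
    return (pose @ calib @ pts)[:3].T

def consistency_cost(p, frames_a, poses_a, frames_b, poses_b, spacing, voxel=1.0):
    calib = calibration_matrix(p)
    # Compound sweep B into a coarse volume by nearest-voxel insertion.
    pts_b = np.vstack([pixel_world_positions(f.shape, spacing, T, calib)
                       for f, T in zip(frames_b, poses_b)])
    vals_b = np.concatenate([f.ravel() for f in frames_b])
    origin = pts_b.min(axis=0)
    idx = np.round((pts_b - origin) / voxel).astype(int)
    vol = np.zeros(idx.max(axis=0) + 1)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = vals_b
    # Resample sweep A from that volume and correlate with the recorded frames.
    cost = 0.0
    for f, T in zip(frames_a, poses_a):
        coords = ((pixel_world_positions(f.shape, spacing, T, calib) - origin) / voxel).T
        recon = map_coordinates(vol, coords, order=1, mode='constant', cval=0.0)
        cost -= np.corrcoef(recon, f.ravel())[0, 1]
    return cost

# calib_hat = minimize(consistency_cost, x0=np.zeros(6),
#                      args=(frames_a, poses_a, frames_b, poses_b, 0.3),
#                      method='Powell').x
```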
We present a novel methodology for combining breast image data obtained at different times, in different geometries, and by different techniques. We combine data based on diffuse optical tomography (DOT) and magnetic resonance imaging (MRI). The software platform integrates advanced multimodal registration and segmentation algorithms, requires minimal user experience, and employs computationally efficient techniques. The resulting superposed 3-D tomographs facilitate tissue analyses based on structural and functional data derived from both modalities, and readily permit enhancement of DOT data reconstruction using MRI-derived a priori structural information. We demonstrate the multimodal registration method using a simulated phantom, and we present initial patient studies that confirm that tumorous regions in a patient breast found by both imaging modalities exhibit significantly higher total hemoglobin concentration (THC) than surrounding normal tissues. The average THC in the tumorous regions is one to three standard deviations larger than the overall breast average THC for all patients.
KEYWORDS: Image registration, Image segmentation, 3D image processing, Blood vessels, Image processing algorithms and systems, 3D acquisition, Error analysis, X-rays, Magnetic resonance imaging, Medical imaging
We propose a novel and fast way to perform 2D-3D registration between intra-operative 2D images and pre-operative 3D images in order to provide better image guidance. The current work is a feature-based registration algorithm that allows the similarity to be evaluated much more efficiently than in intensity-based approaches. The approach is focused on solving the problem for neuro-interventional applications, and we therefore use blood vessels, specifically their centerlines, as the features for registration. The blood vessels are segmented from the 3D datasets and their centerlines are extracted using a sequential topological thinning algorithm. Segmentation of the 3D datasets is straightforward because of the injected contrast agent. For the 2D image, segmentation of the blood vessels is performed by subtracting the image with no contrast (native) from the one with a contrast injection (fill). Following this, we compute a modified version of the 2D distance transform, in which the distance is zero on the centerline and increases as we move away from it. This gives us a smooth metric that is minimal at the centerline and large far from the vessel. It is a one-time computation and need not be re-evaluated during the iterations. Moreover, we simply sum over all points rather than evaluating distances over all point pairs, as would be done in comparable Iterative Closest Point (ICP) based approaches. We estimate the three rotational and three translational parameters by minimizing this cost over all points of the 3D centerline. The speed improvement allows us to perform the registration in under a second on current workstations and therefore provides interactive registration for the interventionalist.
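A condensed sketch of the cost described above, where the 2D distance map is zero on the segmented centerline and grows away from it, and the cost is the sum of the map values at the projections of the 3D centerline points; the simple pinhole projection and Euler-angle parameterization are assumptions standing in for the calibrated acquisition geometry:

```python
# Distance-transform-based 2D-3D vessel registration cost over 6 rigid parameters.
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def centerline_distance_map(centerline_mask_2d):
    """Euclidean distance to the nearest centerline pixel (boolean mask; zero on the centerline)."""
    return distance_transform_edt(~centerline_mask_2d)

def project(points_3d, params, focal, principal_point):
    """Rigidly transform and pinhole-project 3D points to 2D pixel coordinates."""
    R = Rotation.from_euler('xyz', params[:3]).as_matrix()
    p = points_3d @ R.T + params[3:]
    return focal * p[:, :2] / p[:, 2:3] + principal_point

def cost(params, points_3d, dist_map, focal, principal_point):
    uv = project(points_3d, params, focal, principal_point)
    # Sample the distance map at the projected locations (row = y, column = x).
    return map_coordinates(dist_map, [uv[:, 1], uv[:, 0]], order=1,
                           mode='constant', cval=dist_map.max()).sum()

# params_hat = minimize(cost, x0=np.zeros(6),
#                       args=(centerline_3d, dist_map, focal, pp),
#                       method='Powell').x
```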
We are developing an augmented reality (AR) image guidance system in which information derived from medical images is overlaid onto a video view of the patient. The centerpiece of the system is a head-mounted display custom fitted with two miniature color video cameras that capture the stereo view of the scene. Medical graphics is overlaid onto the video view and appears firmly anchored in the scene, without perceivable time lag or jitter. We have been testing the system for different clinical applications. In this paper we discuss minimally invasive thoracoscopic spine surgery as a promising new orthopedic application. In the standard approach, the thoracoscope - a rigid endoscope - provides visual feedback for the minimally invasive procedure of removing a damaged disc and fusing the two neighboring vertebrae. The navigation challenges are twofold. From a global perspective, the correct vertebrae on the spine have to be located with the inserted instruments. From a local perspective, the actual spine procedure has to be performed precisely. Visual feedback from the thoracoscope provides only limited support for both of these tasks. In the augmented reality approach, we give the surgeon additional anatomical context for the navigation. Before the surgery, we derive a model of the patient's anatomy from a CT scan, and during surgery we track the location of the surgical instruments in relation to the patient and the model. With this information, we can help the surgeon in both the global and local navigation, providing a global map and 3D information beyond the local 2D view of the thoracoscope. Augmented reality visualization is a particularly intuitive method of displaying this information to the surgeon. To adapt our augmented reality system to this application, we had to add an external optical tracking system, which now works in combination with our head-mounted tracking camera. The surgeon's feedback from the initial phantom experiments is very positive.
We have developed a software platform for multimodal integration and visualization of diffuse optical tomography and magnetic resonance imaging. Novel registration and segmentation algorithms have been integrated into the platform. The multimodal registration technique enables the alignment of non-concurrently acquired MR and DOT breast data. The non-rigid registration algorithm uses two-dimensional signatures (2D digitally reconstructed radiographs) of the reference and moving volumes in order to register them. Multiple two-dimensional signatures can robustly represent the volume, depending on how the signatures are generated. An easy way to conceptualize the idea is to imagine following the motion of an object by tracking three perpendicular shadows of it. The breast MR image segmentation
technique enables a priori structural information derived from MRI to be incorporated into the reconstruction of DOT data. The segmentation algorithm is based on the random walker approach. Both the registration and segmentation algorithms were tested and have shown promising results. The average target registration error (TRE) for phantom models simulating large breast compression differences was always below 5%. Tests on patient datasets also showed satisfactory visual results. Several tests were also conducted to assess the segmentation, and the results have shown high-quality MR breast image segmentation.
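A minimal sketch of the 2D-signature idea, reducing a volume to parallel-projection "shadows" along the three coordinate axes and comparing two volumes through those shadows; the axis-aligned projections and the SSD comparison are illustrative assumptions, not the platform's actual signature generation or similarity measure:

```python
# Summarize a volume by three orthogonal projection "shadows" and compare two
# volumes through their signatures rather than voxel by voxel.
import numpy as np

def signatures(volume):
    """Three orthogonal parallel-projection shadows of a 3D volume."""
    return [volume.sum(axis=a) for a in range(3)]

def signature_dissimilarity(vol_ref, vol_mov):
    """Sum of squared differences between corresponding signatures."""
    return sum(float(((a - b) ** 2).sum())
               for a, b in zip(signatures(vol_ref), signatures(vol_mov)))
```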
We have developed a software platform for multimodal integration and visualization of diffuse optical tomography (DOT) and magnetic resonance imaging (MRI) of breast cancer. The platform allows multimodality 3D image visualization and manipulation of datasets, offering a variety of 3D rendering techniques and the ability to simultaneously control multiple fields of view. This platform enables quantitative and qualitative analysis of structural and functional diagnostic data, using both conventional and molecular imaging. The functional parameters, together with morphological parameters from MR, can be suitably combined and correlated with the definitive diagnosis from histopathology. Fusion of the multimodal datasets will eventually lead to a significant improvement in the sensitivity and specificity of breast cancer detection. Fusion may also allow a priori structural information derived from MRI to be incorporated into the reconstruction of diffuse optical tomography images. We present early results of image visualization and registration on multimodal breast cancer data, DOT and MRI.
KEYWORDS: Visualization, Head-mounted displays, 3D displays, 3D acquisition, Navigation systems, Camera shutters, Glasses, 3D visualizations, 3D image processing, Stereoscopy
We present a user performance analysis of four navigation systems based on different visualization schemes (2D, 3D,
stereoscopy on a monitor, and a stereo head mounted display (HMD)). We developed a well-defined user workflow,
which starts with the selection of a safe and efficient needle path, followed by the placement, insertion and removal of
the needle. We performed the needle procedure on a foam-based phantom, targeting a virtual lesion while avoiding
virtual critical structures. The positions and orientations of the phantom and needle were optically tracked in real time.
Twenty-eight users each performed a total of 20 needle placements on five phantom configurations using the four
visualization schemes. Based on digital measurements and on qualitative user surveys, we computed the following
parameters: accuracy and duration of the procedure, user progress, efficiency, confidence, and judgment. The results
show that all systems are about equivalent when it comes to reaching the center of the target. However, the HMD- and
2D-based systems performed better in avoiding the surrounding structures. The needle procedures were performed in a
shorter amount of time using the HMD- and 3D-based systems. With appropriate user training, procedure time for the
2D-based system decreased significantly.
We developed an augmented reality navigation system for MR-guided interventions. A head-mounted display provides a real-time stereoscopic video view of the patient, which is augmented with three-dimensional medical information to perform MR-guided needle placement procedures. In addition to the MR image information, we augment the scene with 3D graphics representing a forward extension of the needle and the needle itself. During insertion, the needle can be observed virtually at its actual location in real time, supporting the interventional procedure in an efficient and intuitive way. In this paper we report on quantitative results of AR-guided needle placement procedures on gel phantoms with embedded targets of 12 mm and 6 mm diameter; we furthermore evaluate our first animal experiment involving needle insertion into deep-lying anatomical structures of a pig.
We are developing an augmented reality (AR) image guidance system, in which information derived from medical images is overlaid onto a video view of the patient. The interventionalist wears a head-mounted display (HMD) that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture the stereo view of the scene. A third video camera, operating in the near IR, is also attached to the HMD and is used for head tracking. The system achieves real-time performance of 30 frames per second. The graphics appears firmly anchored in the scene, without any noticeable swimming, jitter, or time lag. For the application of CT-guided interventions, we extended our original prototype system to include tracking of a biopsy needle to which we attached a set of optical markers. The AR visualization provides very intuitive guidance for planning and placement of the needle and reduces radiation to the patient and radiologist. We used an interventional abdominal phantom with simulated liver lesions to perform an initial set of experiments. The users were consistently able to locate the target lesion with the first needle pass. These results provide encouragement to move the system towards clinical trials.
We developed an augmented reality system targeting image guidance for surgical procedures. The surgeon wears a video see-through head-mounted display that provides him with a stereo video view of the patient. The live video images are augmented with graphical representations of anatomical structures that are segmented from medical image data. The surgeon can see, e.g., a tumor in its actual location inside the patient. This in situ visualization, where the computer maps the image information onto the patient, promises the most direct, intuitive guidance for surgical procedures. In this paper, we describe technical details of the system and its installation in UCLA's iMRI operating room. We added instrument tracking to the capabilities of our system to prepare it for minimally invasive procedures. We discuss several pre-clinical phantom experiments that support the potential clinical usefulness of augmented reality guidance.