Fluoroscopy-guided endovascular interventions are being performed for increasingly complex cases with longer screening times. However, X-ray imaging visualizes interventional devices and dense structures far better than vasculature. To visualize vasculature, angiography screening is essential, but it requires iodinated contrast medium (ICM), which is nephrotoxic; acute kidney injury is the main life-threatening complication of ICM. Digital subtraction angiography (DSA) is also often a major contributor to overall patient radiation dose (a contribution of 81% has been reported). Furthermore, a DSA image is only valid for the current interventional view, not for the new view once the C-arm is moved. In this paper, we propose the use of 2D-3D image registration between intraoperative images and the preoperative CT volume to facilitate DSA remapping using a standard fluoroscopy system. This allows repeated ICM-free DSA and has the potential to reduce both ICM usage and radiation dose. Experiments were carried out using 9 clinical datasets, and 41 DSA images were remapped in total. For each dataset, the maximum and average remapping errors were calculated. Numerical results showed an overall average error of 2.50 mm, with average errors below 3 mm for 7 patients and below 6 mm for the remaining 2.
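As an illustrative sketch of the remapping step: once the 2D-3D registration has recovered the pose of the preoperative CT relative to the current C-arm view, CT-derived vessel points can be projected into that view without further contrast injection. The pinhole geometry and all parameter names below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def project_points(points_3d, pose, focal_mm, pixel_mm, principal_point):
    """Pinhole projection of CT vessel points (N x 3, mm) into detector pixels."""
    # Map CT coordinates into the C-arm (camera) frame with the registered pose.
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    cam = (pose @ homog.T).T[:, :3]
    # Perspective divide, then convert millimetres on the detector to pixels.
    u = focal_mm * cam[:, 0] / cam[:, 2] / pixel_mm + principal_point[0]
    v = focal_mm * cam[:, 1] / cam[:, 2] / pixel_mm + principal_point[1]
    return np.stack([u, v], axis=1)

# Example: remap vessel centreline points after the C-arm has moved
# (dummy points placed in front of the source; identity pose).
vessel_pts = np.random.rand(100, 3) * 50 + [0.0, 0.0, 700.0]
pose = np.eye(4)  # registered CT-to-C-arm pose for the new view
pixels = project_points(vessel_pts, pose, focal_mm=1000.0,
                        pixel_mm=0.2, principal_point=(512, 512))
```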
We present novel methodologies for compounding large numbers of 3D echocardiography volumes. Our aim is to investigate the effect of using an increased number of images, and to compare the performance of different compounding methods on image quality. Three sets of 3D echocardiography images were acquired from three volunteers. Each set of data (containing 10+ images) was registered using external tracking followed by state-of-the-art image registration. Four compounding methods were investigated: mean, maximum, and two methods derived from phase-based compounding. The compounded images were compared by calculating signal-to-noise ratios and contrast at manually identified anatomical positions within the images, and by visual inspection by experienced echocardiographers. Our results indicate that signal-to-noise ratio and contrast can be improved using an increased number of images, and that a coherent compounded image can be produced using large (10+) numbers of 3D volumes.
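A minimal sketch of two of the compounding rules investigated (mean and maximum), applied voxel-wise to a stack of already-registered volumes, together with the signal-to-noise ratio used in the evaluation; the phase-based variants are not reproduced here.

```python
import numpy as np

def compound(volumes, method="mean"):
    """Voxel-wise compounding of co-registered 3D volumes of identical shape."""
    stack = np.stack(list(volumes), axis=0)
    if method == "mean":
        return stack.mean(axis=0)   # averaging suppresses uncorrelated speckle
    if method == "max":
        return stack.max(axis=0)    # maximum preserves strong specular echoes
    raise ValueError(f"unknown method: {method}")

def snr(region):
    """Signal-to-noise ratio of a manually identified region of interest."""
    return region.mean() / region.std()
```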
We present a novel method to register three-dimensional echocardiography (echo) images with magnetic resonance
images (MRI) based on anatomical features, which could be used in the registration pipeline for overlaying MRI-derived
roadmaps onto two-dimensional live X-ray images in electrophysiology (EP) procedures. The features used in image
registration are the surface of the left ventricle and a manually defined centerline of the descending aorta. The MR-derived
surface is generated using a fully automated algorithm, and the echo-derived surface is produced using a semi-automatic
process. We tested our method on six volunteers and three patients, and validated registration accuracy using two
methods. The first calculated a root mean square distance error using anatomical landmarks. The second used
catheters as landmarks in one clinical EP procedure. Results show a mean error of 4.24 mm, which is acceptable for our
clinical application, and no failed registrations were observed. In addition, our algorithm works on clinical data, is fast,
and requires only a small amount of manual input, making it suitable for use during EP procedures.
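The first validation measure is straightforward to state in code: the root mean square (RMS) distance between corresponding anatomical landmarks after registration. A minimal sketch, assuming N x 3 landmark arrays in millimetres:

```python
import numpy as np

def rms_error(landmarks_target, landmarks_registered):
    """RMS distance between corresponding landmarks (N x 3 arrays, mm)."""
    d = np.linalg.norm(landmarks_target - landmarks_registered, axis=1)
    return float(np.sqrt((d ** 2).mean()))
```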
Knee arthroscopy is a minimally invasive procedure that is routinely carried out for the diagnosis and treatment of
pathologies of the knee joint. A high level of expertise is required to carry out this procedure and therefore the clinical
training is extensive. There are several reasons for this, including the small field of view seen by the arthroscope and
the high degree of distortion in the video images. Several virtual arthroscopy simulators have been proposed to augment
the learning process. One of the limitations of these simulators is the generic models that are used. We propose to
develop a new virtual arthroscopy simulator that will allow the use of pathology-specific models with an increased level
of photo-realism. In order to generate these models we propose to use registered magnetic resonance images (MRI) and
arthroscopic video images collected from patients with a variety of knee pathologies. We present a method to perform
this registration based on the use of a combined X-ray and MR imaging system (XMR). In order to validate our
technique we carried out MR imaging and arthroscopy of a custom-made acrylic phantom in the XMR environment. The
registration between the two modalities was computed using a combination of XMR and camera calibration, and optical
tracking. Both two-dimensional (2D) and three-dimensional (3D) registration errors were computed and shown to be
approximately 0.8 mm and 3 mm, respectively. Further to this, we qualitatively tested our approach using a more realistic
plastic knee model used for arthroscopy training.
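The registration described above amounts to composing a chain of rigid transforms from MR coordinates, through the XMR registration and the optical-tracking measurements, to the calibrated arthroscope camera. A schematic sketch, in which all matrix names are illustrative assumptions (4 x 4 homogeneous transforms):

```python
import numpy as np

def mr_to_camera(T_cam_from_marker, T_marker_from_world, T_world_from_mr):
    """Compose camera calibration, optical tracking, and XMR registration
    to map MR coordinates into the arthroscope camera frame."""
    return T_cam_from_marker @ T_marker_from_world @ T_world_from_mr

# Example with identity placeholders standing in for measured transforms:
T = mr_to_camera(np.eye(4), np.eye(4), np.eye(4))
```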
We present a novel method to calibrate a 3D ultrasound probe which has a 2D transducer array. By optically tracking a calibrated 3D probe we are able to produce extended field-of-view 3D ultrasound images. Tracking also enables us to register our ultrasound images to other tracked and calibrated surgical instruments or imaging devices. Our method applies rigid intensity-based image registration to three or more ultrasound images. These images can either be of a simple phantom or, potentially, of the patient. In the latter case we would have an automated calibration system that requires no phantom and no image segmentation, and is optimized to the patient's ultrasound characteristics, i.e. speed of sound. We have carried out experiments using a simple calibration phantom and with ultrasound images of a volunteer's liver. Results were compared to an independent gold standard and showed our method to be accurate to 1.43 mm using the phantom images and 1.56 mm using the liver data, slightly better than the traditional point-based calibration method (1.7 mm in our experiments).
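A simplified sketch of the underlying idea: for a candidate image-to-sensor calibration X, each tracked volume maps into a common world frame through its tracking matrix T_i, so a correct X makes overlapping volumes agree. Here a candidate X is scored by resampling one volume into another's grid and computing normalised cross-correlation; the data layout and transform conventions are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ncc(a, b):
    """Normalised cross-correlation between two equally sized arrays."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def calibration_cost(X, volumes, trackers):
    """Negative mean pairwise NCC over all tracked volumes (lower is better).
    X: candidate 4x4 transform from voxel coordinates to the tracked sensor
    frame; trackers[i]: 4x4 sensor-to-world matrix for scan i."""
    cost, n = 0.0, 0
    for i in range(len(volumes)):
        for j in range(i + 1, len(volumes)):
            # Voxel grid of volume j -> world -> voxel coordinates of volume i.
            M = np.linalg.inv(trackers[i] @ X) @ (trackers[j] @ X)
            idx = np.indices(volumes[j].shape).reshape(3, -1)
            coords = (M @ np.vstack([idx, np.ones(idx.shape[1])]))[:3]
            resampled = map_coordinates(volumes[i], coords, order=1)
            cost -= ncc(resampled, volumes[j].ravel())
            n += 1
    return cost / max(n, 1)
```

Minimising this cost over the parameters of X would yield the phantom-free, patient-specific calibration suggested above.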
We propose a novel system for image guidance in totally endoscopic coronary artery bypass (TECAB). A key requirement
is the availability of 2D-3D registration techniques that can deal with non-rigid motion and deformation. Image guidance
for TECAB is mainly required before the mechanical stabilization of the heart; thus the dominant source of non-rigid
deformation is the motion of the beating heart.
To augment the images in the endoscope of the da Vinci robot, we have to find the transformation from the coordinate
system of the preoperative imaging modality to that of the endoscopic cameras.
In the first step, we build a 4D motion model of the beating heart. Intraoperatively, we can use the ECG or video processing
to determine the phase of the cardiac cycle. We can then take the heart surface from the motion model and register it to
the stereo-endoscopic images of the da Vinci robot using 2D-3D registration methods. We are investigating robust feature
tracking and intensity-based methods for this purpose.
Images of the vessels available in the preoperative coordinate system can then be transformed to the camera system and
projected into the calibrated endoscope view using two video mixers with chroma keying. It is hoped that the augmented
view can improve the efficiency of TECAB surgery and reduce the conversion rate to more conventional procedures.
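As a small illustration of the gating step, the intraoperative cardiac phase (from the ECG or video processing) can be used to pick the nearest surface from the 4D motion model before the 2D-3D registration is run. A sketch under assumed data structures:

```python
import numpy as np

def select_heart_surface(surfaces, phases, current_phase):
    """surfaces: one heart mesh per phase of the 4D motion model; phases and
    current_phase are cardiac phases in [0, 1)."""
    # Cardiac phase is cyclic, so measure distance on the unit circle.
    diff = np.angle(np.exp(2j * np.pi * (np.asarray(phases) - current_phase)))
    return surfaces[int(np.argmin(np.abs(diff)))]
```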
We present a segmentation algorithm using a statistical deformation model constructed from CT data of adult male pelves coupled to MRI appearance data. The algorithm allows the semi-automatic segmentation of bone for a limited population of MRI data sets. Our application is pelvic bone delineation from pre-operative MRI for image-guided pelvic surgery; specifically, we are developing image guidance for prostatectomies using the da Vinci telemanipulator, hence the use of male pelves only. The algorithm takes advantage of the high contrast of bone in CT data, allowing a robust shape model to be constructed relatively easily. This shape model can then be applied to a population of MRI data sets using a single data set that contains both CT and MRI data. The model is constructed automatically using fluid-based non-rigid registration between a set of CT training images, followed by principal component analysis. MRI appearance data is imported using CT and MRI data from the same patient. Registration optimisation is performed using differential evolution. Based on our limited validation to date, the algorithm may outperform segmentation using non-rigid registration between MRI images without the use of shape data. The mean surface registration error achieved was 1.74 mm. The algorithm shows promise for use in segmentation of pelvic bone from MRI, though further refinement and validation is required. We envisage that the algorithm presented could be extended to allow the rapid creation of application-specific models in various imaging modalities using a shape model based on CT data.
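A minimal sketch of how such a statistical deformation model can be built: the displacement fields produced by the fluid registrations are vectorised and analysed with principal component analysis, and new instances are synthesised as the mean plus a weighted sum of modes. Array shapes and names are illustrative assumptions.

```python
import numpy as np

def build_sdm(displacement_fields):
    """displacement_fields: list of arrays of identical shape (e.g. Z x Y x X x 3),
    one per training subject, from the fluid inter-subject registrations."""
    data = np.stack([f.ravel() for f in displacement_fields])  # subjects x features
    mean = data.mean(axis=0)
    # PCA via SVD of the centred data matrix; rows of vt are the modes.
    _, s, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt, s ** 2 / (len(data) - 1)                  # mean, modes, variances

def synthesise(mean, modes, coeffs, shape):
    """New instance = mean deformation + weighted sum of the leading modes."""
    return (mean + np.asarray(coeffs) @ modes[: len(coeffs)]).reshape(shape)
```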
We present a novel framework for describing intensity-based multi-modal similarity measures. Our framework is
based around a concept of internal, or self, similarity. Firstly the locations of multiple regions or patches which
are "similar" to each other are identified within a single image. The term "similar" is used here to represent
a generic intra-modal similarity measure. Then, if a second image is registered to the first, examining the same
locations in that image should reveal patches which are also "similar" to each other, even though the actual features
in the patches could be very different between the two images. We propose that a measure based on this principle
could be used as an inter-modal similarity measure because, as the two images become increasingly misregistered,
the patches within the second image become increasingly dissimilar. Our framework therefore yields an inter-modal
similarity measure by applying two intra-modal similarity measures separately, one within each image.
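A toy sketch of this principle, using normalised cross-correlation as the generic intra-modal measure: patch-location pairs that are "similar" within image A are identified, and the patches at the same locations are then compared within image B. This 2D, exhaustive-search version is purely illustrative and is not the measure developed in the paper; patch centres are assumed to lie at least half a patch from the image border.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def self_similarity_measure(img_a, img_b, centres, size=7, threshold=0.8):
    """Score image B at the patch-location pairs that are self-similar in A."""
    half = size // 2
    patch = lambda img, c: img[c[0] - half:c[0] + half + 1,
                               c[1] - half:c[1] + half + 1]
    score, n = 0.0, 0
    for i in range(len(centres)):
        for j in range(i + 1, len(centres)):
            # Step 1: is this pair of locations "similar" within image A?
            if ncc(patch(img_a, centres[i]), patch(img_a, centres[j])) > threshold:
                # Step 2: if the images are registered, the same locations
                # in image B should also be similar to each other.
                score += ncc(patch(img_b, centres[i]), patch(img_b, centres[j]))
                n += 1
    return score / max(n, 1)
```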
In this paper we show how popular multi-modal similarity measures, such as mutual information, can be expressed
within this framework. In addition, the framework has the potential to allow the formation of novel similarity
measures which register using regional information rather than individual pixel/voxel intensities. An example
similarity measure is produced and its ability to guide a registration algorithm is investigated. Registration
experiments are carried out using three datasets. The image pairs to be registered were specifically chosen because they were expected to challenge (i.e. cause misregistrations with) standard intensity-based measures such as mutual information. The images include synthetic, cadaver, and clinical data, and cover a range of modalities. Our experiments show that the proposed measure achieves accurate registrations where standard intensity-based measures, such as mutual information, fail.
KEYWORDS: Magnetic resonance imaging, Image registration, Brain, Image fusion, 3D image processing, In vitro testing, In vivo imaging, Spatial resolution, Medical imaging, Image resolution
Introduction - Fusion of histology and MRI is frequently required in biomedical research to study in vitro
tissue properties in an in vivo reference space. Distortions and artifacts caused by cutting and staining of
histological slices, as well as differences in spatial resolution, make even the rigid fusion a difficult task. State-of-
the-art methods start with a mono-modal restacking yielding a histological pseudo-3D volume; the 3D
information of the MRI reference is considered subsequently. However, consistency within the histology volume
and consistency with the corresponding MRI seem to be diametrically opposed goals. Therefore, we propose a
novel fusion framework that optimizes histology/histology and histology/MRI consistency at the same time,
finding a balance between the two goals.
Method - Direct slice-to-slice correspondence, even in irregularly spaced cutting sequences, is achieved by
registration-based interpolation of the MRI. By introducing a weighted multi-image mutual information metric
(WI), adjacent histology slices and the corresponding MRI are taken into account at the same time. The
reconstruction of the histological volume and the fusion with the MRI are therefore performed in a single step.
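A sketch of a weighted two-term metric in the spirit of WI: for histology slice i, an intra-modality mutual information (MI) term with the adjacent slice is combined with an inter-modality MI term against the corresponding interpolated MRI slice, balanced by a weight w. The histogram-based MI estimator is standard; the combination rule is our reading of the abstract, not the authors' exact formulation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI estimate between two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of a
    py = p.sum(axis=0, keepdims=True)   # marginal of b
    nz = p > 0
    return (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()

def weighted_metric(hist_i, hist_adjacent, mri_i, w=0.5):
    """w steers the balance between intra- and inter-modality consistency."""
    return w * mutual_information(hist_i, hist_adjacent) \
        + (1 - w) * mutual_information(hist_i, mri_i)
```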
Results - Based on two data sets with more than 110 single registrations in total, the results are evaluated
quantitatively using Tanimoto overlap measures and qualitatively by showing the fused volumes. In comparison
to other multi-image metrics, the reconstruction based on WI is significantly improved. We evaluated different
parameter settings, with emphasis on the weighting term steering the balance between intra- and inter-modality
consistency.
KEYWORDS: Image registration, Liver, Ultrasonography, Motion models, Magnetic resonance imaging, Data modeling, 3D image processing, Modeling, 3D modeling, Visualization
We present a method for non-rigid registration of preoperative magnetic resonance (MR) images and an interventional plan to sparse intraoperative ultrasound (US) of the liver. Our clinical motivation is to enable the accurate transfer of information from preoperative imaging modalities to intraoperative ultrasound to aid needle placement for thermal ablation of liver metastases. An initial rigid registration to intraoperative coordinates is obtained using a set of ultrasound images acquired at maximum exhalation. A pre-processing step is applied to both the MR and US images. The preoperative image and plan are then aligned to a single ultrasound slice acquired at an unknown point in the breathing cycle, where the liver is likely to have moved and deformed relative to the preoperative image. Alignment is constrained using a patient-specific model of breathing motion and deformation. Target registration error is estimated by carrying out simulation experiments using sparsely re-sliced MR volumes in place of real ultrasound and comparing the registration results to a gold-standard registration performed on the full MR volume. Experiments using real ultrasound are then carried out and verified using visual inspection.
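A sketch of what constraining the alignment to a motion model can look like: if the admissible liver motion is, say, a mean displacement plus one principal mode, a single coefficient can be optimised so that the model-deformed preoperative positions best match US-derived targets. The model form and all names are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def deform(points, mean_disp, mode, b):
    """Apply the breathing model at coefficient b to preoperative points."""
    return points + mean_disp + b * mode

def fit_breathing_coefficient(points, mean_disp, mode, targets):
    """Find the model coefficient minimising the mean squared distance to
    US-derived target positions (all arrays N x 3, in millimetres)."""
    cost = lambda b: np.mean(np.sum(
        (deform(points, mean_disp, mode, b) - targets) ** 2, axis=1))
    return minimize_scalar(cost, bounds=(-3.0, 3.0), method="bounded").x
```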
The purpose of the proposed template propagation method is to support the comparative analysis of image pairs even when large deformations (e.g. from movement) are present. Starting from a position where valid starting estimates are known, small sub-volumes (templates) are registered rigidly. Propagating registration results to neighboring templates, the algorithm proceeds layer by layer until corresponding points for the whole volume are available. Template classification is important for defining the templates to be registered, for propagating registration results and for selecting successfully registered templates which finally represent the motion vector field. This contribution discusses a template selection and classification strategy based on the analysis of the similarity measure in the vicinity of the optimum. For testing the template propagation and classification methods, deformation fields of four volume pairs exhibiting considerable deformations have been estimated and the results have been compared to corresponding points picked by an expert. In all four cases, the proposed classification scheme was successful. Based on homologous points resulting from template propagation, an elastic transformation was performed.
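A schematic sketch of the propagation loop: templates with valid starting estimates are registered first, and each result then seeds the neighbouring templates, layer by layer, until the whole volume is covered. The rigid registration itself and the classification of successfully registered templates are represented by a placeholder here.

```python
from collections import deque

def register_template(fixed, moving, centre, start_estimate):
    """Placeholder for the rigid sub-volume registration and the template
    classification step; here it simply returns the starting estimate."""
    return start_estimate

def propagate(fixed, moving, centres, neighbours, seeds):
    """Layer-by-layer propagation of template registrations.
    centres: template centre coordinates; neighbours: adjacency lists;
    seeds: dict mapping template index -> valid starting transform."""
    results = dict(seeds)
    queue = deque(seeds)                     # start from the seeded templates
    while queue:
        i = queue.popleft()
        results[i] = register_template(fixed, moving, centres[i], results[i])
        for j in neighbours[i]:
            if j not in results:
                results[j] = results[i]      # refined result seeds the neighbour
                queue.append(j)
    return results
```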
Soft-tissue deformation can be a problem if a pre-operative modality is used to help guide a surgical or an interventional procedure. We present a method which can warp a pre-operative CT image to represent the intra-operative scene shown by an interventional fluoroscopy image. The method is a novel combination of a 2D-3D image registration algorithm and a deformation algorithm which allows rigid bodies to be incorporated into a non-linear deformation based on a radial basis function. The 2D-3D registration algorithm is used to obtain information on relative vertebral movements between pre-operative and intra-operative images. The deformation algorithm uses this information to warp the pre-operative image to represent the intra-operative scene more accurately. Images from an aortic stenting procedure were used. The observed deformation was 5° of flexion and 5 mm of lengthening of the lumbar spine. The vertebral positions in the warped CT volume represent the intra-operative scene more accurately than those in the pre-operative CT volume. Although we had no gold standard with which to assess the registration accuracy of soft-tissue structures, the positions of soft-tissue structures within the warped CT volume appeared visually realistic.
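A sketch of the interpolation idea: displacements known at vertebral landmarks (from the per-vertebra 2D-3D registrations) are interpolated across the volume with a radial basis function, so soft tissue deforms smoothly between the vertebrae. This simplified version omits the paper's explicit rigid-body constraint and is purely illustrative.

```python
from scipy.interpolate import RBFInterpolator

def build_warp(landmarks_pre, landmarks_intra):
    """Return a function mapping preoperative points (N x 3, mm) into the
    intra-operative space by interpolating the landmark displacements."""
    rbf = RBFInterpolator(landmarks_pre, landmarks_intra - landmarks_pre,
                          kernel="thin_plate_spline")
    return lambda pts: pts + rbf(pts)
```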
2D/3D registration makes it possible to use pre-operative CT scans for navigation during X-ray fluoroscopy guided interventions. We present a fast voxel-based method for this registration task, which uses a recently introduced similarity measure (pattern intensity). This measure is especially suitable for 2D/3D registration because it is robust with respect to structures, such as a stent, visible in the X-ray fluoroscopy image but not in the CT scan. The method uses only a part of the CT scan for the generation of digitally reconstructed radiographs (DRRs) to accelerate their computation. Nevertheless, computation time is crucial for intra-operative application and a further speed-up is required, because numerous DRRs must be computed. For that reason, the suitability of different volume rendering methods for 2D/3D registration has been investigated. A method based on the shear-warp factorization of the viewing transformation turned out to be especially suitable and forms the basis of the registration algorithm. The algorithm has been applied to images of a spine phantom and to clinical images. For comparison, registration results have been calculated using ray-casting. The shear-warp factorization based rendering method accelerates registration by a factor of up to seven compared to ray-casting, without degrading registration accuracy. Using a vertebra as the feature for registration, computation time is in the range of 3–4 s (Sun UltraSparc, 300 MHz), which is acceptable for intra-operative application.
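The pattern intensity measure can be sketched compactly: it operates on the difference between the fluoroscopy image and a scaled DRR, and rewards difference images that are locally flat, which is what makes it tolerant of structures (such as a stent) present in only one image. The radius and sigma values below are common choices rather than necessarily those used here, and the wrap-around boundary handling via np.roll is a simplification.

```python
import numpy as np

def pattern_intensity(fluoro, drr, scale=1.0, radius=3, sigma=10.0):
    """Pattern intensity of the difference image: high when the difference
    is locally flat within a disc of the given radius around each pixel."""
    diff = fluoro.astype(float) - scale * drr.astype(float)
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (dy == 0 and dx == 0) or dy * dy + dx * dx > radius * radius:
                continue
            shifted = np.roll(np.roll(diff, dy, axis=0), dx, axis=1)
            total += (sigma ** 2 / (sigma ** 2 + (diff - shifted) ** 2)).sum()
    return total
```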