Recent advances in minimally invasive vascular disease treatments have led to the use of interventional tools like guidewires and stents,1 guided by fluoroscopy with high temporal resolution but limited depth information. To address this limitation, there is growing interest in 3D image guidance, or 4D interventional guidance, which involves displaying a series of 3D images during procedures. However, implementing X-ray-based 4D interventional guidance requires a high-temporal-resolution reconstruction algorithm with minimal dose per 3D reconstruction. Vöth et al.2 proposed, building on prior work by Eulig et al.,3,4 an algorithm for the 3D reconstruction of interventional material from only two newly acquired X-ray images. Their pipeline uses the deep tool extraction (DTE) algorithm to compute interventional material images, which are then back-projected into a volume. A 3D U-Net5 called the deep tool reconstruction (DTR) transforms these backprojections into 3D reconstructions of the interventional material. While the pipeline shows impressive 3D reconstruction quality, it occasionally outputs false positives or false negatives. In this work, we improve the use of temporal information by feeding the reconstructions of previous time steps as additional inputs to the DTR, improving the Dice coefficient from 71.21% to 76.84% on a simulated guidewire dataset.
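The Dice coefficient used above to quantify reconstruction quality can be computed directly from two binary voxel masks. A minimal sketch in plain Python (flat 0/1 lists stand in for 3D volumes; all names are illustrative, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice coefficient of two binary voxel masks, given as flat 0/1 lists."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total

# Toy example: 8-voxel masks agreeing on 3 of the foreground voxels.
pred  = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 1, 1, 0, 0, 0, 0]
score = dice_coefficient(pred, truth)  # 2*3 / (4+4) = 0.75
```

A false positive (voxel 6 in `pred`) and a false negative (voxel 3) each shrink the numerator relative to the denominator, which is exactly why the reconstruction errors mentioned above lower the score.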
Today, the percutaneous, minimally invasive procedures performed in interventional radiology are usually guided by 2D X-ray fluoroscopy, in which a series of 2D X-ray images is displayed. For challenging procedures, however, 3D X-ray fluoroscopy would be advantageous: a series of 3D images, each reconstructed from a series of 2D X-ray images, is displayed. Because the number of images used for guiding an intervention is very high, little dose can be spent per 3D reconstruction of a 3D fluoroscopy. To save dose and to minimize motion artifacts, a reconstruction algorithm that requires very few X-ray projections is desirable. Earlier work showed that guidewires, stents, and coils, which are commonly used in interventions, can be reconstructed using only four synthetic X-ray projections; the reconstruction from two or three X-ray projections was only studied briefly. In this work, we improve the method by using a more suitable neural network architecture and a multi-channel backprojection instead of a single-channel backprojection. We then apply the improved method to more realistic data measured in an anthropomorphic phantom. The results show that the method produces 3D reconstructions of stents and guidewires with submillimeter accuracy using only three measured X-ray projections.
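The multi-channel backprojection mentioned above keeps each view in its own channel instead of summing all views into one volume, so the network can tell which projection contributed what. A toy 2D sketch under strong simplifying assumptions (parallel beams at 0° and 90°, binary tool segmentations; function and variable names are illustrative only):

```python
def backproject(profile_x, profile_y, n):
    """Smear a 0-deg and a 90-deg 1D binary profile across an n-by-n grid.

    Returns the two per-view channels plus their single-channel sum,
    illustrating the information the summed version discards.
    """
    chan_x = [[profile_x[col] for col in range(n)] for row in range(n)]  # vertical rays
    chan_y = [[profile_y[row] for col in range(n)] for row in range(n)]  # horizontal rays
    summed = [[chan_x[r][c] + chan_y[r][c] for c in range(n)] for r in range(n)]
    return chan_x, chan_y, summed

# A point-like tool at grid position (row=1, col=2) seen from both directions:
cx, cy, s = backproject([0, 0, 1, 0], [0, 1, 0, 0], 4)
# Only at (1, 2) do both channels overlap, so only there is the sum 2.
```

In the single-channel sum, a network must infer view agreement from voxel magnitudes; with separate channels, that agreement is explicit in the input.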
Today, 2D+T fluoroscopy is usually used for image guidance in interventional radiology. For challenging procedures, 4D (3D+T) image guidance would be advantageous. The difficulty in realizing X-ray-based 4D interventional guidance lies in the development of a very dose-efficient reconstruction algorithm. To this end, we improve on a previously presented algorithm for the reconstruction of interventional tools. By incorporating temporal information into a 3D convolutional neural network, we reduce the number of X-ray projections that need to be acquired for the 3D reconstruction of guidewires from four to two, thereby halving the dose and decreasing the demands placed on imaging devices implementing the algorithm. In experiments with two moving guidewires in an anthropomorphic phantom, we observe little deviation of our 3D reconstructions from the ground truth.
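Incorporating temporal information as described above amounts to a recursive loop over time steps: each new reconstruction is fed back as a prior for the next one. A minimal control-flow sketch (the `reconstruct` callable stands in for the trained network, which is not reproduced here; names are illustrative):

```python
def run_fluoroscopy(projection_pairs, reconstruct, empty_volume):
    """Sequentially reconstruct a 3D+T series from pairs of projections,
    feeding each result back in as the prior for the next time step."""
    previous = empty_volume
    series = []
    for pair in projection_pairs:
        previous = reconstruct(pair, previous)
        series.append(previous)
    return series

# Toy stand-in: "volumes" are numbers and reconstruction accumulates
# the projection information onto the prior.
toy_reconstruct = lambda pair, prev: prev + sum(pair)
series = run_fluoroscopy([(1, 2), (3, 4)], toy_reconstruct, 0)  # [3, 10]
```

The first time step receives an empty prior, which matches the fact that no earlier reconstruction exists when the sequence starts.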
Ring artifacts are a well-known problem in computed tomography (CT), and in particular in cone-beam CT (CBCT). This work addresses the reduction of ring artifacts in CT acquisitions using a data-driven approach. Deep convolutional neural networks (CNNs) of different dimensionalities are trained to estimate the ring artifacts directly from an uncorrected volume. This approach has the advantage that neither raw data has to be available nor any kind of resampling of the data is necessary. In addition to full ring artifacts, our networks are also trained to correct partial ring artifacts as they may occur in spiral CT or CBCT. This study shows that ring artifacts can be reduced in the image domain by these neural networks. Our results suggest that a three-dimensional network is most suitable for this task.
Cross-scatter is often the dominant scatter mode in modern dual-source CT (DSCT). Like forward scatter (intra-source-detector-pair scatter), which is present in all CT systems, cross-scatter (inter-source-detector-pair scatter) leads to streak and cupping artifacts. Having recently developed the deep scatter estimation (DSE) to estimate forward scatter in single-source CT, we now tested the performance of DSE in such a cross-scatter-dominated DSCT. Given only the total intensity in a projection as input, we trained a deep convolutional neural network to estimate the scatter distribution, which was then subtracted from the total intensity to obtain scatter-corrected data. The projections used for training and testing were simulated using Monte Carlo methods. Our method estimates cross- and forward scatter simultaneously and in real time, with a mean error of only 1.7%. The error of the CT values is reduced from hundreds of HU to a few dozen HU. Our method can compete with a measurement-based approach but does not require any additional hardware.
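Once a scatter estimate is available, the correction step described above is a per-detector-pixel subtraction from the total measured intensity, clipped to a small positive floor so the subsequent logarithm in CT preprocessing stays defined. A sketch (the `scatter` values here are given by hand and merely stand in for the trained DSE network's output; the floor value is an assumption):

```python
def correct_projection(total_intensity, scatter_estimate, floor=1e-6):
    """Subtract the estimated scatter from the total intensity per pixel,
    clipping at a small positive floor to keep the primary intensity positive."""
    return [max(t - s, floor) for t, s in zip(total_intensity, scatter_estimate)]

# Toy 1D projection profile; in the paper the network predicts `scatter`.
total   = [1.0, 0.8, 0.5, 0.8, 1.0]
scatter = [0.2, 0.2, 0.2, 0.2, 0.2]
primary = correct_projection(total, scatter)  # approximately [0.8, 0.6, 0.3, 0.6, 0.8]
```

Subtracting in intensity domain rather than after the log step is what makes a single additive scatter estimate sufficient.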