KEYWORDS: Digital breast tomosynthesis, Mammography, Breast, X-rays, Modulation transfer functions, Imaging systems, Tomosynthesis, Spatial resolution, Breast cancer
Digital breast tomosynthesis (DBT) enables significantly higher cancer detection rates than full-field digital mammography (FFDM) without compromising the recall rate. However, regarding microcalcification assessment, established tomosynthesis system concepts still tend to be inferior to FFDM. To further boost the clinical role of DBT in breast cancer screening and diagnosis, a system concept was developed that enables fast wide-angle DBT with the unique in-plane resolution capabilities known from FFDM. The concept comprises a novel x-ray tube concept that incorporates an adaptive focal spot position, fast flat-panel detector technology, and innovative algorithmic concepts for image reconstruction. We have built a DBT system that provides tomosynthesis image stacks and synthetic mammograms from 50° tomosynthesis scans completed in less than five seconds. In this contribution, we motivate the design of the system concept, present a physics characterization of its imaging performance, and outline the algorithmic concepts used for image processing. We conclude by illustrating the potential clinical impact with clinical case examples from first evaluations in Europe.
Wide-angle digital breast tomosynthesis (DBT) is well known to offer benefits in mass perceptibility compared to narrow-angle DBT due to reduced anatomical overlap. Regarding the perceptibility of microcalcifications, the situation is reversed. On the one hand, this can be related to effects during data acquisition and their impact on the system MTF. On the other hand, calcifications are spread more widely in the depth direction in narrow-angle DBT, which distributes them over more slices; this is equivalent to an inherently thicker slice at high spatial frequencies. In this work, we assume equivalent raw-data quality and focus only on the effects of different acquisition angles in the reconstruction. We propose an algorithm that creates so-called hybrid thick DBT slices and optimizes the visualization of calcifications while preserving the high mass perceptibility of thin wide-angle DBT slices. The algorithm is based purely on filtered backprojection (FBP) and can be implemented efficiently. For validation, simulation studies using the VICTRE (FDA) pipeline are performed. Our results indicate that hybrid thick slices in wide-angle DBT make it possible to solve the conflicting imaging tasks of high mass and high calcification perceptibility within one imaging setup.
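A minimal numpy sketch of how such a hybrid thick slice could be assembled from thin FBP slices; the split into a low-frequency slab average (mass contrast) and a high-frequency maximum-intensity component (calcifications) is an illustrative assumption, not the authors' exact filter design:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_thick_slices(volume, slab=10, sigma=2.0):
    """Combine thin wide-angle DBT slices into hybrid thick slices.

    volume: reconstructed stack, shape (n_slices, ny, nx).
    slab:   number of thin slices merged into one hybrid slice.
    sigma:  frequency split between low and high bands (pixels).
    Low frequencies (mass contrast) come from slab averaging; high
    frequencies (calcifications) from a maximum intensity projection
    of the high-pass residual over the slab.
    """
    n = volume.shape[0]
    out = []
    for k in range(0, n - slab + 1, slab):
        s = volume[k:k + slab]
        low = gaussian_filter(s.mean(axis=0), sigma)       # mass contrast
        high = s - gaussian_filter(s, (0, sigma, sigma))   # calcification detail
        out.append(low + high.max(axis=0))                 # hybrid slice
    return np.stack(out)
```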
Recently introduced multi-layer flat panel detectors (FPDs) enable single-acquisition spectral radiography. We perform an in-depth simulation study to investigate different physics-based material decomposition algorithms for the task of bone removal under the influence of adipose tissue and scattered radiation. We examine a matrix-based material decomposition (MBMD) under the assumption of monoenergetic X-ray spectra (equivalent to weighted logarithmic subtraction (WLS)), a matrix-based material decomposition with polynomial beam-hardening pre-correction (MBMD-PBC), and a projection domain decomposition (PDD). The simulated setup corresponds to an intensive care unit (ICU) anterior-posterior (AP) bedside chest examination (contact scan). The limitations of the three algorithms are evaluated using a high-fidelity X-ray simulator with five phantom realizations that differ in terms of added adipose tissue. For each simulated phantom realization, different amounts of scatter correction are considered, ranging from no correction at all to an ideal scatter correction. Unless quantitative imaging is required, all three algorithms are capable of removing bone structures when adipose tissue is present. Bone removal using a multi-layer FPD in an ICU setup is feasible. However, uncorrected scatter can lead to bone structures becoming visible in the soft-tissue image. This indicates the need for accurate scatter estimation and correction algorithms, especially when using quantitative algorithms such as PDD.
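As a point of reference for the monoenergetic case, MBMD reduces to weighted logarithmic subtraction. A minimal sketch of WLS-style bone suppression for dual-layer data; the weight w and the normalization intensities are assumed calibration inputs, not values from the study:

```python
import numpy as np

def weighted_log_subtraction(I_top, I_bottom, w, I0_top=1.0, I0_bottom=1.0):
    """Bone-suppressed soft-tissue image from a dual-layer FPD.

    I_top / I_bottom: low- and high-energy layer intensities.
    w: bone-cancellation weight, chosen (e.g., from calibration)
       so that bone contrast vanishes in the combined image.
    Returns the soft-tissue line-integral image.
    """
    L_le = -np.log(I_top / I0_top)        # low-energy log attenuation
    L_he = -np.log(I_bottom / I0_bottom)  # high-energy log attenuation
    return L_he - w * L_le                # bone cancels for the right w
```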
Denoising algorithms are sensitive to the noise level and noise power spectrum of the input image, and differ in their ability to adapt to them. In the worst case, image structures can be accidentally removed or even added. This holds for analytical image filters, and even more so for deep learning-based denoising algorithms due to their large parameter space and data-driven nature. We propose to use knowledge about the noise distribution of the image at hand to limit the influence of denoising algorithms to a known and plausible range. Specifically, we can use the physical knowledge of X-ray radiography by considering the Poisson noise distribution and the noise power spectrum of the detector. Through this approach, we can limit the change of the acquired signal by the denoising algorithm to the expected noise range, and therefore prevent the removal or hallucination of small relevant structures. The presented method makes it possible to use denoising algorithms, and especially deep learning-based methods, in a controlled and safe fashion in medical X-ray imaging.
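A minimal sketch of the idea, assuming the image is proportional to Poisson-distributed photon counts with a known gain, and ignoring the detector noise power spectrum for brevity (it could additionally shape the per-pixel bound):

```python
import numpy as np

def clamp_denoised(noisy, denoised, gain=1.0, k=3.0):
    """Limit a denoiser's change of each pixel to the plausible noise range.

    noisy:    measured image, assumed proportional to Poisson photon counts.
    denoised: output of an arbitrary (e.g., deep learning) denoiser.
    gain:     pixel-value-to-photon-count conversion (assumed known).
    k:        allowed deviation in units of the per-pixel noise std.
    For Poisson counts, std(counts) = sqrt(counts); in pixel units this
    gives sigma = sqrt(gain * pixel_value).
    """
    sigma = np.sqrt(np.maximum(gain * noisy, 0.0))
    return np.clip(denoised, noisy - k * sigma, noisy + k * sigma)
```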
Contrast-enhanced mammography (CEM) offers a promising alternative to address the limitations of digital mammography, particularly in cases of dense breast tissue, which compromises the performance of non-contrast x-ray imaging modalities. CEM uses iodinated contrast material to enhance cancer detection in denser breasts and provides critical functional information about suspicious findings. However, the process of combining images acquired with different x-ray energy spectra in CEM can introduce artifacts, challenging interpretation and confidence in CEM images. This study presents novel approaches to improve CEM image quality. First, deep learning (DL)-based algorithms for scatter correction in both low-energy and high-energy images are proposed to enhance contrast-enhancement patterns and iodine quantification. Additionally, a dedicated deep learning network is introduced to predict the compressed breast thickness pixel by pixel, enabling the use of local thickness-based image subtraction-weighting maps throughout the breast area. Results in phantom cases demonstrate the effectiveness of the scatter correction models in predicting the scatter signal, even in cases with the anti-scatter grid present. The thickness map model accurately estimates the local thickness, particularly in the constant-thickness area of the breast. Comparison with clinical data revealed good agreement between estimated thickness maps and ground truth, with minor discrepancies attributed to alignment issues. Furthermore, the study explored the combined use of scatter correction and thickness-based weighting maps in creating recombined CEM images. This approach showed a marginal positive impact due to scatter correction, with larger improvements observed in the signal intensity homogeneity at the border of the breast. These advancements aim to enhance CEM diagnostic accuracy, making it a valuable tool for breast cancer detection and evaluation, especially in cases with dense breast tissue.
KEYWORDS: Denoising, Breast, Education and training, Digital breast tomosynthesis, Tomosynthesis, Computer simulations, Deep learning, X-rays, Breast density, Photons
Purpose: High noise levels due to low X-ray dose are a challenge in digital breast tomosynthesis (DBT) reconstruction. Deep learning algorithms show promise in reducing this noise. However, these algorithms can be complex and biased toward certain patient groups if the training data are not representative. It is important to thoroughly evaluate deep learning-based denoising algorithms before they are applied in the medical field to ensure their effectiveness and fairness. In this work, we present a deep learning-based denoising algorithm and examine potential biases with respect to breast density, thickness, and noise level. Approach: We use physics-driven data augmentation to generate low-dose images from full-field digital mammography and train an encoder-decoder network. The rectified linear unit (ReLU)-loss, specifically designed for mammographic denoising, is utilized as the objective function. To evaluate our algorithm for potential biases, we tested it on both clinical and simulated data generated with the virtual imaging clinical trial for regulatory evaluation pipeline. Simulated data allowed us to generate X-ray dose distributions not present in clinical data, enabling us to separate the influence of breast types and X-ray dose on the denoising performance. Results: Our results show that the denoising performance is proportional to the noise level. We found a bias toward certain breast groups on simulated data; however, on clinical data, our algorithm denoises different breast types equally well with respect to the structural similarity index. Conclusions: We propose a robust deep learning-based denoising algorithm that reduces DBT projection noise levels and subject it to an extensive test that provides information about its strengths and weaknesses.
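One common construction for physics-driven low-dose augmentation is binomial thinning of the photon counts; a minimal sketch (the gain and the rescaling convention are assumptions, not the authors' exact implementation):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_low_dose(image, dose_factor, gain=1.0):
    """Simulate a reduced-dose projection from a full-dose FFDM image.

    image:       projection, assumed proportional to photon counts.
    dose_factor: fraction of the original dose, e.g. 0.5.
    gain:        pixel-value-to-photon-count conversion (assumed known).
    Binomial thinning of Poisson counts yields statistically consistent
    Poisson counts at the lower dose; rescaling restores the signal
    level while keeping the elevated relative noise.
    """
    counts = np.maximum(image / gain, 0.0).astype(np.int64)
    thinned = rng.binomial(counts, dose_factor)
    return thinned * gain / dose_factor
```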
KEYWORDS: Breast, Digital breast tomosynthesis, 3D modeling, Systems modeling, 3D scanning, 3D acquisition, Image compression, Data modeling, Scanners, Optical scanning systems
Improving the modeling of breast shapes during mechanical compression in both cranio-caudal (CC) and medio-lateral oblique (MLO) views can enhance the development of image processing and dosimetric estimates in digital mammography and digital breast tomosynthesis (DBT). In previous work, a CC model was created using a pair of optical structured light scanning systems, but acquiring similar data for an MLO view model during clinical practice proved impractical with these devices. The present work instead uses two smartphone infrared cameras with 3D-printed holders to obtain surface scans for the MLO view during a DBT acquisition. The study compared the average distance between the MLO breast shape recorded by the smartphone-based scans and the corresponding DBT exam-based surface scan for 20 patient breasts. Results showed close overlap between the smartphone-based scanned surfaces of the breast and the corresponding DBT images. The agreement between the breast shapes represented by these surfaces depended on the smartphone-based scanner precision, the segmentation procedure used to obtain the DBT surface, and the manual alignment of the smartphone-based left- and right-side views of the breast. The agreement was, however, of sufficiently good quality for the data to be used for the development of an MLO breast shape model.
KEYWORDS: Medical image reconstruction, Bone, X-ray computed tomography, Sensors, X-rays, Medical imaging, Aluminum, Physics, Photons, Signal attenuation, Monte Carlo methods
We investigate the feasibility of bone marrow edema (BME) detection using a kV-switching dual-energy (DE) cone-beam CT (CBCT) protocol. This task is challenging due to unmatched X-ray paths in the low-energy (LE) and high-energy (HE) spectral channels, CBCT non-idealities such as X-ray scatter, and the narrow spectral separation between fat (bone marrow) and water (BME). We propose a comprehensive DE decomposition framework consisting of projection interpolation onto matching LE and HE view angles, fast Monte Carlo scatter correction with a low number of tracked photons and Gaussian denoising, and a two-stage three-material decomposition involving a two-material (fat-aluminum) projection-domain decomposition (PDD) followed by an image-domain three-material (fat-water-bone) base change. Performance in BME detection was evaluated in simulations and experiments emulating a kV-switching CBCT wrist imaging protocol on a robotic x-ray system with a 60 kV LE beam, a 120 kV HE beam, and a 0.5° angular shift between the LE and HE views. Cubic B-spline interpolation was found to be adequate to resample HE and LE projections of a wrist onto the common view angles required by PDD. The DE decomposition maintained acceptable BME detection specificity (⪅0.2 mL erroneously detected BME volume compared to 0.85 mL true BME volume) over a ±10% range of scatter magnitude errors, as long as the scatter shape was estimated without major distortions. Physical test bench experiments demonstrated successful discrimination of ~20% changes in fat concentration in trabecular bone-mimicking solutions of varying water and fat content.
PURPOSE: To investigate differences in microcalcification detection performance for different acquisition set-ups in digital breast tomosynthesis (DBT), a convex dose distribution and a sparser set of projections were evaluated against the standard set-up via a virtual clinical trial (VCT). METHODS AND MATERIALS: Following Institutional Review Board (IRB) approval and patient consent, mediolateral oblique (MLO) DBT views were acquired at twice the automatic exposure controlled (AEC) dose level; omitting the craniocaudal (CC) view limited the total examination dose. Microcalcification clusters were simulated into the DBT projections and noise was added to simulate lower dose levels. Three set-ups were evaluated: (1) 25 DBT projections acquired with a fixed dose/projection at the clinically used AEC dose level, (2) 25 DBT projections with dose/projection following a convex dose distribution along the scan arc, and (3) 13 DBT projections at higher dose with the total scan dose equal to the AEC dose level and preserving the angular range of 50° (sparse). For the convex set-up, dose/projection started at 0.035 mGy at the extremes and increased to 0.163 mGy for the central projection. A Siemens prototype algorithm was used for reconstruction. An alternative free-response receiver operating characteristic (AFROC) study was conducted with 6 readers to compare microcalcification detection between the acquisition set-ups. Sixty cropped VOIs of 50×50×(breast thickness) mm³ per set-up were included, of which 50% contained a microcalcification cluster. In addition to localizing the cluster, the readers were asked to count the individual calcifications. The area under the AFROC curve was used to compare the different acquisition set-ups and a paired t-test was used to test significance. RESULTS: The AUCs for the standard, convex and sparse set-ups were 0.97±0.01, 0.95±0.02 and 0.89±0.03, respectively, indicating no significant difference between the standard and convex set-ups (p=0.309), but a significant decrease in detectability for the sparse set-up (p=0.001). The number of detected calcifications per cluster was not significantly different between the standard and convex set-ups (p=0.049), with 42%±9% and 40%±8%, respectively. The sparse set-up scored lower, with a relative number of detected microcalcifications of 34%±11%, but this decrease was not significant (p=0.031). CONCLUSION: A convex dose distribution that increased dose along the scan arc towards the central projections did not increase the detectability of microcalcifications in the DBT planes compared to the current AEC set-up. Conversely, a sparse set of projections acquired over the total scan arc decreased microcalcification detectability compared to the variable-dose and current clinical set-ups.
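For illustration, a convex per-projection dose profile matching the quoted endpoint values can be generated as follows; the raised-cosine shape is an assumption, since the abstract specifies only the edge and central doses:

```python
import numpy as np

def convex_dose_profile(n_proj=25, d_edge=0.035, d_center=0.163):
    """Per-projection dose (mGy) rising from the scan-arc extremes to the center.

    The convex shape here is a raised cosine; only the edge (0.035 mGy)
    and central (0.163 mGy) values are taken from the study.
    """
    t = np.linspace(-np.pi / 2, np.pi / 2, n_proj)
    return d_edge + (d_center - d_edge) * np.cos(t) ** 2

doses = convex_dose_profile()
print(doses[0], doses[12], doses.sum())  # edge dose, central dose, total scan dose
```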
Purpose: We investigated the feasibility of detection and quantification of bone marrow edema (BME) using dual-energy (DE) cone-beam CT (CBCT) with a dual-layer flat panel detector (FPD) and three-material decomposition. Methods: A realistic CBCT system simulator was applied to study the impact of detector quantization, scatter, and spectral calibration errors on the accuracy of fat-water-bone decompositions of dual-layer projections. The CBCT system featured a 975 mm source-axis distance, a 1,362 mm source-detector distance, and a 430 × 430 mm2 dual-layer FPD (top layer: 0.20 mm CsI:Tl, bottom layer: 0.55 mm CsI:Tl; a 1 mm Cu filter between the layers to improve spectral separation). Tube settings were 120 kV (+2 mm Al, +0.2 mm Cu) and 10 mAs per exposure. The digital phantom consisted of a 160 mm water cylinder with inserts containing mixtures of water (volume fraction 0.18 to 0.46), fat (0.5 to 0.7), and Ca (0.04 to 0.12); decreasing fractions of fat indicated increasing degrees of BME. A two-stage three-material DE decomposition was applied to the DE CBCT projections: first, a projection-domain decomposition (PDD) into a fat-aluminum basis, followed by CBCT reconstruction of intermediate base images and an image-domain change of basis into fat, water, and bone. Sensitivity to scatter was evaluated by (i) adjusting source collimation (12 to 400 mm width) and (ii) subtracting various fractions of the true scatter from the projections at 400 mm collimation. The impact of spectral calibration was studied by shifting the effective beam energy (±2 keV) when creating the PDD lookup table. We further simulated a realistic BME imaging framework, in which the scatter was estimated using a fast Monte Carlo (MC) simulation from a preliminary decomposition of the object; the object was a realistic wrist phantom with a 0.85 mL BME stimulus in the radius. Results: The decomposition is sensitive to scatter: approximately <20 mm collimation width, or <10% scatter correction error in a full field-of-view setting, is needed to resolve BME. A mismatch in the PDD calibration of ±1 keV results in ~25% error in fat fraction estimates. In the wrist phantom study with MC scatter corrections, we achieved ~0.79 mL true positive and ~0.06 mL false positive BME detection (compared to 0.85 mL true BME volume). Conclusions: Detection of BME using DE CBCT with a dual-layer FPD is feasible, but requires scatter mitigation, accurate scatter estimation, and robust spectral calibration.
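The image-domain change of basis in the second stage amounts to a small linear system per voxel. A minimal numpy sketch under a volume-conservation assumption (the 2×3 calibration matrix B is a hypothetical input, not values from the paper):

```python
import numpy as np

def base_change(c_fat_al, B):
    """Image-domain change of basis: (fat, Al) -> (fat, water, bone) fractions.

    c_fat_al: array of shape (..., 2) with reconstructed fat/aluminum
              base images (from the projection-domain decomposition).
    B:        2x3 calibration matrix; column j holds the (fat, Al)
              equivalent of pure fat, water, or bone.
    Per voxel we solve B @ v = c together with sum(v) = 1, a 3x3 system.
    """
    A = np.vstack([B, np.ones((1, 3))])              # 3x3 system, shared by all voxels
    ones = np.ones(c_fat_al.shape[:-1] + (1,))
    rhs = np.concatenate([c_fat_al, ones], axis=-1)  # (..., 3) right-hand sides
    v = np.linalg.solve(A, rhs.reshape(-1, 3).T).T   # solve for all voxels at once
    return v.reshape(rhs.shape)                      # (..., 3) fat/water/bone fractions
```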
Using anti-scatter grids in digital breast tomosynthesis (DBT) is challenging due to the relative motion between source and detector. Therefore, algorithmic scatter correction methods could be beneficial to compensate for the image quality loss due to scattered radiation. In this work, we present a deep learning model capable of predicting the scatter fraction (SF) image for simulations of realistically shaped homogeneous phantoms. The model was trained, validated, and tested on a dataset of 600 homogeneous phantoms representing the compressed breast, with thicknesses ranging from 30 mm to 89 mm and randomly generated breast shapes. Monte Carlo simulations were performed for all cases and at different projection angles to obtain estimates of the DBT primary and scatter projections. The same procedure was performed for patient-based phantoms with realistic internal glandular and adipose texture to evaluate the generalizability of our model. The median and interquartile range (IQR) of the mean relative difference (MRD) and mean absolute error (MAE) between Monte Carlo SF ground truth and model predictions were approximately 0.49% (IQR, 0.26−0.76%) and 2.06% (IQR, 1.83−2.26%), respectively, for the homogeneous phantoms, while the patient-based phantoms yielded an MRD of -1.39% (IQR, -4.13−1.60%) and an MAE of 3.97% (IQR, 3.38−4.71%). This indicates that the model trained on the homogeneous phantoms captures the average representation inside the homogeneous breast, with reasonable accuracy in breasts with texture variations. Therefore, the model has the potential to be used as an algorithmic scatter correction method in the future.
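Applying a predicted scatter-fraction image as a correction is straightforward; a minimal sketch, assuming SF is defined as the per-pixel scatter-to-total ratio:

```python
import numpy as np

def scatter_correct(projection, scatter_fraction):
    """Estimate the primary signal from a DBT projection and a predicted SF.

    scatter_fraction: model output SF = S / (P + S), per pixel.
    The primary signal is then P = (1 - SF) * measured.
    """
    sf = np.clip(scatter_fraction, 0.0, 0.99)  # guard against SF -> 1
    return projection * (1.0 - sf)
```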
KEYWORDS: Denoising, X-rays, Digital breast tomosynthesis, X-ray imaging, Photons, Mammography, Sensors, Physics, Signal to noise ratio, Interference (communication)
Digital Breast Tomosynthesis (DBT) is becoming increasingly popular for breast cancer screening because of its high depth resolution. It uses a set of low-dose x-ray images called raw projections to reconstruct an arbitrary number of planes. These are typically used in further processing steps like backprojection to generate DBT slices or synthetic mammography images. Because of their low x-ray dose, a high amount of noise is present in the projections. In this study, the possibility of using deep learning for the removal of noise in raw projections is investigated. The impact of loss functions on detail preservation is analyzed in particular. For that purpose, training data are augmented following the physics-driven approach of Eckert et al. [1], in which an x-ray dose reduction is simulated. First, pixel intensities are converted to the number of photons at the detector. Second, Poisson noise is enhanced in the x-ray image by simulating a decrease in the mean photon arrival rate. The Anscombe transformation [2] is then applied to obtain approximately signal-independent white Gaussian noise. The augmented data are then used to train a neural network to estimate the noise. For training, several loss functions are considered, including the mean square error (MSE), the structural similarity index (SSIM) [3], and the perceptual loss [4]. Furthermore, the ReLU-loss [1] is investigated, which is especially designed for mammogram denoising and prevents the network from overestimating the noise. The denoising performance is then compared with respect to the preservation of small microcalcifications. Based on our current measurements, we demonstrate that the ReLU-loss in combination with SSIM improves the denoising results.
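The Anscombe transformation referenced above maps Poisson counts to approximately unit-variance Gaussian data; a minimal sketch:

```python
import numpy as np

def anscombe(counts):
    """Variance-stabilize Poisson counts: output noise std is ~1."""
    return 2.0 * np.sqrt(np.maximum(counts, 0.0) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Algebraic inverse (an unbiased inverse would refine this)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0
```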
We investigate an image-based strategy to compensate for cardiac motion-induced artifacts in Digital Chest Tomosynthesis (DCT). We apply the compensation to the conventional unidirectional vertical "↕" scan DCT and to a multidirectional circular trajectory "O" providing improved depth resolution. Propagation of heart motion into the lungs was simulated as a dynamic deformation. The studies investigated a range of motion propagation distances and scan times. Projection-domain retrospective gating was used to detect heart phases. Sparsely sampled reconstructions of each phase were deformably aligned to yield a motion-compensated image with reduced sampling artifacts. The proposed motion compensation mitigates artifacts and blurring in DCT images for both "↕" and "O" scan trajectories. Overall, the "O" orbit achieved the same or better nodule structural similarity index than the conventional "↕" orbit. Increasing the scan time improved the sampling of individual phase reconstructions.
In a clinical pilot study, we evaluated the impact of a radiolucent, inflatable air cushion during tomosynthesis breast imaging. 101 patients were included to quantify the degree of reduction in discomfort as well as the impact on image quality, patient positioning, and applied compression force. All underwent tomosynthesis examination in two different settings, routine compression and compression including the cushion, without being exposed to additional acquisitions. The cushion had the same size for all breasts and was placed directly on the patient support table of the mammography unit. In the study, the cushion was inflated with air after the standard compression to a breast-individual level. Due to inflation of the cushion, the contact area between breast and compression paddle increased and additional force was therefore added. We expected a decrease in the peak pressure and, due to the increased contact area, an increase in the desirable spreading of the breast tissue. After examination, patients were asked to complete a questionnaire to rate the tolerability of compression with and without the cushion. Deployment of the cushion decreased the negative perception significantly, lowering it by 18.4%, with only 2.0% (p < 0.001, α = 0.05) of patients still experiencing discomfort during compression. When comparing the two compression settings, the increase in comfort did not have a negative impact on image quality, positioning, or the ability to detect all pertinent anatomy. Design and usability of the cushion as well as more sophisticated compression routines will be further investigated, analyzed, and discussed.
Purpose: We compare the effects of scatter on the accuracy of areal bone mineral density (BMD) measurements obtained using two flat-panel detector (FPD) dual-energy (DE) imaging configurations: a dual-kV acquisition and a dual-layer detector. Methods: Simulations of DE projection imaging were performed with realistic models of x-ray spectra, scatter, and detector response for the dual-kV and dual-layer configurations. A digital body phantom with 4 cm Ca inserts in place of vertebrae (concentrations 50 - 400 mg/mL) was used. The dual-kV configuration involved an 80 kV low-energy (LE) and a 120 kV high-energy (HE) beam and a single-layer, 43×43 cm FPD with a 650 μm cesium iodide (CsI) scintillator. The dual-layer configuration involved a 120 kV beam and an FPD consisting of a 200 μm CsI layer (LE data), followed by a 1 mm Cu filter, and a 550 μm CsI layer (HE data). We investigated the effects of an anti-scatter grid (13:1 ratio) and scatter correction. For the correction, the sensitivity to scatter estimation error (varied ±10% of the true scatter distribution) was evaluated. Areal BMD was estimated from projection-domain DE decomposition. Results: In the gridless dual-kV setup, the scatter-to-primary ratio (SPR) was similar for the LE and HE projections, whereas in the gridless dual-layer setup, the SPR was ~26% higher in the LE channel (top CsI layer) than in the HE channel (bottom layer). Because of the resulting bias in LE measurements, the conventional projection-domain DE decomposition could not be directly applied to the dual-layer data; this challenge persisted even in the presence of a grid. In contrast, DE decomposition of dual-kV data was possible both without and with the grid; the BMD error of the 400 mg/mL insert was -0.4 g/cm2 without the grid and +0.3 g/cm2 with the grid. The dual-layer FPD configuration required accurate scatter correction for DE decomposition: a -5% scatter estimation error resulted in a -0.1 g/cm2 BMD error for the 50 mg/mL insert and a -0.5 g/cm2 BMD error for the 400 mg/mL insert with a grid, compared to <0.1 g/cm2 for all inserts in a dual-kV setup with the same scatter estimation error. Conclusion: This comparative study of the quantitative performance of dual-layer and dual-kV FPD-based DE imaging indicates the need for accurate scatter correction in the dual-layer setup due to its increased susceptibility to scatter errors in the LE channel.
Purpose: We investigate the feasibility of slot-scan dual-energy x-ray absorptiometry (DXA) on robotic x-ray platforms capable of synchronized source and detector translation. This novel approach will enhance the capabilities of such platforms to include quantitative assessment of bone quality using areal bone mineral density (aBMD), normally obtained only with a dedicated DXA scanner. Methods: We performed simulation studies of a robotized x-ray platform that enables fast linear translation of the x-ray source and flat-panel detector (FPD) to execute slot-scan dual-energy (DE) imaging of the entire spine. Two consecutive translations are performed to acquire the low-energy (LE, 80 kVp) and high-energy (HE, 120 kVp) data in <15 sec total time. The slot views are corrected with convolution-based scatter estimation and backprojected to yield tiled long-length LE and HE radiographs. Projection-based DE decomposition is applied to the tiled radiographs to yield (i) aBMD measurements in bone, and (ii) adipose content measurement in bone-free regions. The feasibility of achieving accurate aBMD estimates was assessed using a high-fidelity simulation framework with a digital body phantom and a realistic bone model covering a clinically relevant range of mineral densities. Experiments examined the effects of slot size (1 – 20 cm), scatter correction, and patient size/adipose content (waist circumference: 77 – 95 cm) on the accuracy and reproducibility of aBMD. Results: The proposed combination of backprojection-based tiling of the slot views and DE decomposition yielded bone density maps of the spine that were free of any apparent distortions. The x-ray scatter increased with slot width, leading to aBMD errors ranging from 0.2 g/cm2 for a 5 cm slot to 0.7 g/cm2 for a 20 cm slot when no scatter correction was applied. The convolution-based correction reduced the aBMD error to within 0.02 g/cm2 for slot widths <10 cm. Reproducible aBMD measurements across a range of body sizes (aBMD variability <0.1 g/cm2) were achieved by applying a calibration based on DE adipose thickness estimates from peripheral body sites. Conclusion: The feasibility of accurate and reproducible aBMD measurements on an FPD-based x-ray platform was demonstrated using DE slot scan trajectories, backprojection-domain decomposition, scatter correction, and adipose precorrection.
Purpose: We investigate cone-beam CT (CBCT) imaging protocols and scan orbits for 3D cervical spine imaging on a twin-robotic x-ray imaging system (Multitom Rax). Tilted circular scan orbits are studied to assess potential benefits in visualization of the lower cervical vertebrae, in particular in low-dose imaging scenarios. Methods: The Multitom Rax system enables flexible scan orbit design by using two robotic arms to independently move the x-ray source and detector. We investigated horizontal and tilted circular scan orbits (up to 45° tilt) for 3D imaging of the cervical spine. The studies were performed using an advanced CBCT simulation framework involving GPU-accelerated x-ray scatter estimation and accurate modeling of the x-ray source, detector, and noise. For each orbit, the x-ray scatter and scatter-to-primary ratio (SPR) were evaluated; cervical spine image quality was characterized by analyzing the contrast-to-noise ratio (CNR) for each vertebra. Performance evaluation was performed for a range of scan exposures (263 mAs/scan – 2.63 mAs/scan) and for standard and dedicated low-dose reconstruction protocols. Results: The tilted orbit reduces scatter and increases primary detector signal for the lower cervical vertebrae because it avoids ray paths crossing through both shoulders. An orbit tilt angle of 35° was found to achieve a balanced performance in visualization of the upper and lower cervical spine. Compared with a flat orbit, the optimized 35° tilted orbit reduces lateral projection SPR at the C7 vertebra by >40%, and increases CNR by 220% for C6 and 76% for C7. Adequate visualization of the vertebrae with CNR >1 was achieved for scan exposures as low as 13.2 mAs/scan, corresponding to ~3 mGy absorbed spine dose. Conclusion: Optimized tilted scan orbits are advantageous for CBCT imaging of the cervical spine. The simulation studies presented here indicate that CBCT image quality sufficient for evaluation of spine alignment and intervertebral joint spaces might be achievable at spine doses below 5 mGy.
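A minimal sketch of the CNR figure of merit used per vertebra; the ROI and background masks are assumed to be defined elsewhere:

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio between a vertebra ROI and background.

    roi_mask / bg_mask: boolean masks over the reconstructed image.
    CNR = |mean(ROI) - mean(background)| / std(background).
    """
    contrast = abs(image[roi_mask].mean() - image[bg_mask].mean())
    return contrast / image[bg_mask].std()
```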
Mammographic breast density is an important risk marker in breast cancer screening. The ACR BI-RADS guidelines (5th ed.) define four breast density categories that can be dichotomized into the two super-classes "dense" and "not dense". Due to the qualitative description of the categories, density assessment by radiologists is characterized by a high inter-observer variability. To quantify this variability, we compute the overall percentage agreement (OPA) and Cohen's kappa of 32 radiologists against the panel majority vote based on the two super-classes. Further, we analyze the OPA between individual radiologists and compare the performances to an automated assessment via a convolutional neural network (CNN). The data used for evaluation contain 600 breast cancer screening examinations with four views each. The CNN was designed to take all views of an examination as input and trained on a dataset with 7186 cases to output one of the two super-classes. The highest agreement with the panel majority vote (PMV) achieved by a single radiologist is 99%, the lowest is 71%, with a mean of 89%. The OPA between two individual radiologists ranges from a maximum of 97.5% to a minimum of 50.5%, with a mean of 83%. Cohen's kappa values of radiologists against the PMV range from 0.97 to 0.47, with a mean of 0.77. The presented algorithm reaches an OPA to all 32 radiologists of 88% and a kappa of 0.75. Our results show that inter-observer variability for breast density assessment is high even if the problem is reduced to two categories, and that our convolutional neural network can provide labelling comparable to an average radiologist. We also discuss how to deal with automated classification methods for subjective tasks.
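Both agreement metrics are simple to compute for the binary super-classes; a minimal sketch:

```python
import numpy as np

def opa_and_kappa(a, b):
    """Overall percentage agreement and Cohen's kappa for two raters.

    a, b: arrays of binary labels (0 = "not dense", 1 = "dense").
    """
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                        # observed agreement (OPA)
    pe = (np.mean(a == 1) * np.mean(b == 1)     # chance agreement
          + np.mean(a == 0) * np.mean(b == 0))
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa
```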
Purpose: We optimize scan orbits and acquisition protocols for 3D imaging of the weight-bearing spine on a twin-robotic x-ray system (Multitom Rax). An advanced cone-beam CT (CBCT) simulation framework is used for systematic optimization and evaluation of protocols in terms of scatter, noise, imaging dose, and task-based performance in 3D image reconstructions. Methods: The x-ray system uses two robotic arms to move an x-ray source and a 43×43 cm2 flat-panel detector around an upright patient. We investigate two classes of candidate scan orbits, both with the same source-axis distance of 690 mm: circular scans with variable axis-detector distance (ADD, air gap) ranging from 400 to 800 mm, and elliptical scans, where the ADD smoothly changes between the anterior-posterior view (ADD_AP) and the lateral view (ADD_LAT). The study involved elliptical orbits with a fixed ADD_AP of 400 mm and a variable ADD_LAT ranging from 400 to 800 mm. Scans of a human lumbar spine were simulated using a framework that included accelerated Monte Carlo scatter estimation and realistic models of the x-ray source and detector. In the current work, x-ray fluence was held constant across all imaging configurations, corresponding to 0.5 mAs/frame. Performance of circular and elliptical orbits was compared in terms of scatter and scatter-to-primary ratio (SPR) in projections, and contrast, noise, contrast-to-noise ratio (CNR), and truncation (field of view, FOV) in 3D image reconstructions. Results: The highest mean SPR was found in lateral views, ranging from ~5 at an ADD of 300 mm to ~1.2 at an ADD of 800 mm. Elliptical scans enabled image acquisition with reduced lateral SPR and almost constant SPR across projection angles. The improvement in contrast across the investigated range of air gaps (due to reduction in scatter) was ~2.3x for circular orbits and ~1.9x for elliptical orbits. The increase in noise associated with increased ADD was more pronounced for circular scans (~2x) than for elliptical scans (~1.5x). The circular orbit with the best CNR performance (ADD = 600 mm) yielded ~10% better CNR than the best elliptical orbit (ADD_LAT = 600 mm); however, the elliptical orbit increased the FOV by ~16%. Conclusion: The flexible imaging geometry of the robotic x-ray system enables the development of highly optimized scan orbits. Imaging of the weight-bearing spine could benefit from elliptical detector trajectories to achieve improved tradeoffs in scatter reduction, noise, and truncation.
Measurements of skeletal geometries are a crucial tool for the assessment of pathologies in orthopedics. Usually, these measurements are performed on conventional 2-D X-ray images. Due to the cone-beam geometry of most commercially available X-ray systems, effects like magnification and distortion are inevitable and may impede the precision of orthopedic measurements. In particular, measurements of angles, axes, and lengths in spine or limb acquisitions would benefit from a true 1-to-1 mapping without any distortion or magnification.
In this work, we developed a model to quantify these effects for realistic patient sizes and clinically relevant acquisition procedures. Moreover, we compared the current state-of-the-art technique for imaging length-extended radiographs, e.g. for spine or leg acquisitions (i.e., the source-tilt technique), with a slot-scanning method. To validate our model, we conducted several experiments with physical as well as anthropomorphic phantoms, whose results turned out to be in good agreement with our model. We found that images acquired with the reconstruction-based slot-scanning technique exhibit no magnification or distortion. This would allow precise measurements directly on the images without the need for calibration objects, which might be beneficial for the quality and workflow efficiency of orthopedic applications.
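The magnification effect quantified by such a model follows directly from the cone-beam geometry; a short worked sketch with illustrative distances (not values from the paper):

```python
def magnification(sdd_mm, object_to_detector_mm):
    """Cone-beam magnification M = SDD / SOD for a structure at a given
    height above the detector (SOD = SDD - object-detector distance).
    A true 1-to-1 mapping corresponds to M = 1."""
    sod = sdd_mm - object_to_detector_mm
    return sdd_mm / sod

# e.g. a structure 150 mm above the detector at 1150 mm SDD (illustrative)
print(magnification(1150.0, 150.0))  # 1.15, i.e. ~15% magnification
```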
The acquisition time of cone-beam CT (CBCT) systems is limited by different technical constraints. One important factor is the mechanical stability of the system components, especially when using C-arm or robotic systems. As a result, today's acquisition protocols are performed at a system speed at which geometrical reproducibility can be guaranteed. However, from an application point of view, faster acquisition times are useful, since the time spent breath-holding or being restrained in a static position has a direct impact on patient comfort and image quality. Moreover, for certain applications, like imaging of extremities, a higher resolution might offer additional diagnostic value. In this work, we show that it is possible to intentionally exceed the conventional acquisition limits by accepting geometrical inaccuracies. To compensate for deviations from the assumed scanning trajectory, a marker-free auto-focus method based on the gray-level histogram entropy was developed and evaluated. First experiments on a modified twin-robotic X-ray system (Multitom Rax, Siemens Healthcare GmbH, Erlangen, Germany) show that the acquisition time could be reduced from 14 s down to 9 s while maintaining the same high image quality. In addition, optimized acquisition protocols make ultra-high-resolution imaging techniques accessible.
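A minimal sketch of the gray-level histogram entropy criterion; the search over geometry parameters that minimizes it (the auto-focus loop) is omitted:

```python
import numpy as np

def histogram_entropy(image, bins=256):
    """Shannon entropy of the gray-level histogram.

    Sharper (well-focused) reconstructions concentrate gray values, so
    selecting the geometry update that minimizes this entropy can serve
    as a marker-free auto-focus criterion.
    """
    hist, _ = np.histogram(image, bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```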
KEYWORDS: Breast, Digital breast tomosynthesis, Tissues, Visualization, Mammography, Breast cancer, Medicine, Magnetic resonance imaging, X-ray imaging, X-rays
Assessment of breast density at the point of mammographic examination could lead to optimized breast cancer screening pathways. The onsite breast density information may offer guidance of when to recommend supplemental imaging for women in a screening program. A software application (Insight BD, Siemens Healthcare GmbH) for fast onsite quantification of volumetric breast density is evaluated. The accuracy of the method is assessed using breast tissue equivalent phantom experiments resulting in a mean absolute error of 3.84%. Reproducibility of measurement results is analyzed using 8427 exams in total, comparing for each exam (if available) the densities determined from left and right views, from cranio-caudal and medio-lateral oblique views, from full-field digital mammograms (FFDM) and digital breast tomosynthesis (DBT) data and from two subsequent exams of the same breast. Pearson correlation coefficients of 0.937, 0.926, 0.950, and 0.995 are obtained. Consistency of the results is demonstrated by evaluating the dependency of the breast density on women’s age. Furthermore, the agreement between breast density categories computed by the software with those determined visually by 32 radiologists is shown by an overall percentage agreement of 69.5% for FFDM and by 64.6% for DBT data. These results demonstrate that the software delivers accurate, reproducible, and consistent measurements that agree well with the visual assessment of breast density by radiologists.
We developed and reported an analytical model (version 2.1) of inter-pixel cross-talk of energy-sensitive photon counting detectors (PCDs) in 2016 [1]. Since that time, we have identified four problems that are inherent to the design of model version 2.1. In this study, we have developed a new model (version 3.2; "PcTK" for Photon Counting Toolkit) based on a completely different design concept. Comparison with the previous model version 2.1 and a Monte Carlo (MC) simulation showed that the new model version 3.2 addresses the four problems successfully. A workflow script for computed tomography (CT) image quality assessment demonstrated the utility of the model and its potential value to the CT community. The software package, including the workflow script and built using Matlab 2016a, has been made available to academic researchers free of charge (PcTK; https://pctk.jhu.edu).
Smaller pixel sizes of x-ray photon counting detectors (PCDs) have two conflicting effects. On one hand, smaller pixels improve the ability to handle high count rates of x-rays (i.e., pulse pileup), because the incident rate onto each PCD pixel decreases with decreasing pixel size. On the other hand, smaller pixels increase the chance of crosstalk and double-counting (or n-tuple-counting in general) between neighboring pixels because, while the size of the electron charge cloud generated by a photon is independent of the pixel size, the cloud size relative to the pixel size increases as pixels shrink. In addition, actual PCD computed tomography systems involve two practical configurations: N×N-pixel binning and anti-scatter grids. When n-tuple-counting occurs and those data are binned/added during the post-acquisition process, the variance of the data will be larger than their mean. Anti-scatter grids may eliminate or decrease crosstalk and n-tuple-counting by blocking primary x-rays near pixel boundaries or over the entire width of one pixel. In this study, we investigated the effects of PCD pixel size, N×N-pixel binning, and pixel masking on soft tissue contrast visibility using the newly developed Photon Counting Toolkit (PcTK version 3.2; https://pctk.jhu.edu).
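The over-dispersion introduced by double-counting and binning can be reproduced with a toy Monte Carlo; the numbers below are illustrative, not PcTK results:

```python
import numpy as np

rng = np.random.default_rng(1)

def binned_counts(mean_per_pixel=100.0, p_double=0.2, n_trials=100_000):
    """Toy model of charge sharing: a fraction p_double of photons is
    counted by BOTH of two neighboring pixels. After 2-pixel binning,
    shared photons contribute 2 counts each, so the binned data are
    over-dispersed (variance > mean), unlike pure Poisson data."""
    photons = rng.poisson(mean_per_pixel, n_trials)
    shared = rng.binomial(photons, p_double)   # double-counted photons
    binned = photons + shared                  # binning adds the extra counts
    return binned.mean(), binned.var()

m, v = binned_counts()
print(f"mean={m:.1f}  variance={v:.1f}  variance/mean={v/m:.2f} (> 1)")
```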
An ultra-high resolution (UHR) mode, with a detector pixel size of 0.25 mm×0.25 mm relative to isocenter, has been implemented on a whole body research photon-counting detector (PCD) computed tomography (CT) system. Twenty synthetic lung nodules were scanned using UHR and conventional resolution (macro) modes and reconstructed with medium and very sharp kernels. Linear regression was used to compare measured nodule volumes from CT images to reference volumes. The full-width-at-half-maximum of the calculated curvature histogram for each nodule was used as a shape index, and receiver operating characteristic analysis was performed to differentiate sphere- and star-shaped nodules. Results showed a strong linear relationship between measured nodule volumes and reference volumes for both modes. The overall volume estimation was more accurate using UHR mode and the very sharp kernel, having 4.8% error compared with 10.5% to 12.6% error in the macro mode. The improvement in volume measurements using the UHR mode was more evident for small nodule sizes or star-shaped nodules. Images from the UHR mode with the very sharp kernel consistently demonstrated the best performance [AUC=(0.839,0.867)] for separating star- from sphere-shaped nodules, showing advantages of UHR mode on a PCD CT scanner for lung nodule characterization.
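A minimal sketch of the shape index, i.e., the FWHM of a curvature histogram; the bin count and the curvature values themselves are assumed inputs:

```python
import numpy as np

def histogram_fwhm(values, bins=64):
    """Full width at half maximum of a value histogram, used as a shape
    index (broader curvature distributions suggest star-shaped nodules)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    half = hist.max() / 2.0
    above = centers[hist >= half]
    return above.max() - above.min()
```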
Photon-counting detector CT has a large number of acquisition parameters that require optimization, particularly the energy threshold configurations. Fast and accurate estimation of both signal and noise in photon-counting CT (PCCT) images can facilitate such optimization. Using the detector response function of a research PCCT system, we derived mathematical models for both signal and noise estimation, taking into account beam spectrum and filtration, object attenuation, water beam hardening, detector response, radiation dose, energy thresholds, and the propagation of noise. To determine the absolute noise value, a noise lookup table (LUT) for all available energy thresholds was acquired using a number of calibration scans. The noise estimation algorithm then used the noise LUT to estimate noise for scans with a variety of combinations of energy thresholds, dose levels, and object attenuations. Validation of the estimation algorithms was performed on a whole-body research PCCT system using semianthropomorphic water phantoms and solutions of calcium and iodine. Clinical feasibility of noise estimation was assessed with scans of a cadaver head and a living swine. The algorithms achieved accurate estimation of both signal and noise for a variety of scanning parameter combinations. Maximum discrepancies were below 15%, while most errors were below 5%.
This study evaluates the capabilities of a whole-body photon counting CT system to differentiate between four common kidney stone materials, namely uric acid (UA), calcium oxalate monohydrate (COM), cystine (CYS), and apatite (APA), ex vivo. Two different x-ray spectra (120 kV and 140 kV) were applied and two acquisition modes were investigated. The macro-mode generates two energy-threshold-based image volumes and two energy-bin-based image volumes. In the chesspattern-mode, four energy thresholds are applied. A virtual low-energy image, as well as a virtual high-energy image, are derived from the initial threshold-based images while considering their statistically correlated nature. The energy-bin-based images of the macro-mode, as well as the virtual low- and high-energy images of the chesspattern-mode, serve as input for our dual-energy evaluation. The dual-energy ratios of the individually segmented kidney stones were used to quantify the discriminability of the different materials. The dual-energy ratios of the two acquisition modes showed high correlation for both applied spectra. Wilcoxon rank-sum tests and the evaluation of the areas under the receiver operating characteristic curves suggest that the UA kidney stones are best differentiable from all other materials (AUC = 1.0), followed by CYS (AUC ≈ 0.9 compared against COM and APA). COM and APA, however, are hardly distinguishable (AUC between 0.63 and 0.76). The results hold true for the measurements of both spectra and both acquisition modes.
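A minimal sketch of the dual-energy-ratio comparison; the per-stone ratios below are illustrative placeholders, not measured values:

```python
import numpy as np
from scipy.stats import ranksums

def dual_energy_ratio(ct_low, ct_high, stone_mask):
    """Mean low/high-energy CT-number ratio inside a segmented stone."""
    return ct_low[stone_mask].mean() / ct_high[stone_mask].mean()

# Per-stone ratios for two materials (illustrative placeholder values)
ratios_ua = np.array([0.62, 0.60, 0.64, 0.61])
ratios_com = np.array([1.45, 1.50, 1.42, 1.47])
stat, p = ranksums(ratios_ua, ratios_com)
print(f"Wilcoxon rank-sum p = {p:.3f}")
```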
In addition to the standard-resolution (SR) acquisition mode, a high-resolution (HR) mode is available on a research photon-counting-detector (PCD) whole-body CT system. In the HR mode, each detector pixel consists of a 2×2 array of 0.225 mm × 0.225 mm subpixel elements, in contrast to the SR mode, in which each pixel consists of a 4×4 array of the same subelements; this results in 0.25 mm isotropic resolution at iso-center for the HR mode. In this study, we quantified ex vivo the capabilities of the HR mode to characterize renal stones in terms of morphology and mineral composition. Forty pure stones - 10 uric acid (UA), 10 cystine (CYS), 10 calcium oxalate monohydrate (COM) and 10 apatite (APA) - and 14 mixed stones were placed in a 20 cm water phantom and scanned in HR mode at a radiation dose matched to that of routine dual-energy stone exams. Data from micro CT provided a reference for the quantification of morphology and mineral composition of the mixed stones. The area under the ROC curve was 1.0 for discriminating UA from CYS, 0.89 for CYS vs COM, and 0.84 for COM vs APA. The root mean square error (RMSE) of the percent UA in mixed stones was 11.0% with a medium-sharp kernel and 15.6% with the sharpest kernel. The HR mode showed qualitatively accurate characterization of stone morphology relative to micro CT.
Photon-counting detectors in computed tomography (CT) allow for measuring the energy of incident x-ray photons within certain energy windows. This information can be used to enhance contrast or to reconstruct CT images of different material bases. Compared to energy-integrating CT detectors, pixel dimensions have to be smaller to limit the negative effect of pulse pile-up at high X-ray fluxes. Unfortunately, reducing the pixel size leads to increased K-escape and charge sharing effects. As a consequence, an incident X-ray may generate more than one detector signal, with deteriorated energy information. Earlier simulation studies have shown that these limitations can be mitigated by optimizing the X-ray spectrum using K-edge pre-filtration. In the current study, we used a whole-body research CT scanner with a high-flux-capable photon-counting detector in which, for the first time, a pre-patient hafnium filter was installed. Our measurement results demonstrate substantial improvement of the material decomposition capability at comparable dose levels. The results are in agreement with the predictions provided by the simulations.
Photon counting detectors (PCDs) provide spectral information for estimating basis line-integrals; however, the recorded spectrum is distorted by the spectral response effect (SRE). One conventional approach to compensate for the SRE is to incorporate an SRE model in the forward imaging process. For this purpose, we recently developed a three-step algorithm as a fast (~1,500×) alternative to the maximum likelihood (ML) estimator, based on modeling the x-ray transmittance, exp(−∫ μ_a(r, E) dr), with low-order polynomials. However, it is limited to cases without a K-edge, due to the smoothness of low-order polynomials. In this paper, we propose a dictionary learning-based x-ray transmittance model to address this limitation. More specifically, we design a dictionary consisting of several energy-dependent bases to model an unknown x-ray transmittance, training the dictionary on various known x-ray transmittances as training data. We show that the number of bases in the dictionary can be as large as the number of energy bins and that the modeling error is relatively small for a practical number of energy bins. Once the dictionary is trained, the three-step algorithm proceeds as follows: estimating the unknown coefficients of the dictionary, estimating basis line-integrals, and then correcting for a bias. We validate the proposed method with various simulation studies for K-edge imaging with a gadolinium contrast agent, and show that both bias and computational time are substantially reduced compared to those of the ML estimator.
This study concerns how to model the x-ray transmittance, exp(−∫ μ_a(r, E) dr), of an object using a small number of energy-dependent bases, which plays an important role in estimating basis line-integrals in photon counting detector (PCD)-based computed tomography (CT). Recently, we found that low-order polynomials can model smooth x-ray transmittance, i.e., objects without contrast agents, with sufficient accuracy, and developed a computationally efficient three-step estimator. The algorithm estimates the polynomial coefficients in the first step, estimates the basis line-integrals in the second step, and corrects for bias in the third step. We showed that the three-step estimator was ~1,500 times faster than the conventional maximum likelihood (ML) estimator while providing comparable bias and noise. The three-step estimator was derived from the modeling of x-ray transmittance; thus, accurate modeling of the x-ray transmittance is an important issue. For this purpose, we introduce a model of the x-ray transmittance based on a dictionary learning approach. We show that the relative modeling error of the dictionary learning-based approach is smaller than that of the low-order polynomials.
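A minimal sketch of one instance of such dictionary-based transmittance modeling, using an SVD to learn the energy-dependent bases and least squares to fit an unknown transmittance; the paper's actual training procedure may differ:

```python
import numpy as np

def train_dictionary(T_train, n_bases):
    """Learn energy-dependent bases from known transmittances.

    T_train: (n_samples, n_energies) matrix of known x-ray
             transmittances exp(-line integral of mu_a).
    Returns the top n_bases right-singular vectors as the dictionary.
    """
    _, _, Vt = np.linalg.svd(T_train, full_matrices=False)
    return Vt[:n_bases]                        # (n_bases, n_energies)

def fit_transmittance(D, t):
    """Least-squares coefficients of an unknown transmittance t in D."""
    coef, *_ = np.linalg.lstsq(D.T, t, rcond=None)
    return coef, D.T @ coef                    # coefficients, modeled transmittance
```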
Two ultra-high-resolution (UHR) imaging modes, each with two energy thresholds, were implemented on a research, whole-body photon-counting-detector (PCD) CT scanner, referred to as sharp and UHR, respectively. The UHR mode has a pixel size of 0.25 mm at iso-center for both energy thresholds, with a collimation of 32 × 0.25 mm. The sharp mode has a 0.25 mm pixel for the low-energy threshold and 0.5 mm for the high-energy threshold, with a collimation of 48 × 0.25 mm. Kidney stones with mixed mineral composition and lung nodules with different shapes were scanned using both modes, and with the standard imaging mode, referred to as macro mode (0.5 mm pixel and 32 × 0.5 mm collimation). Evaluation and comparison of the three modes focused on the ability to accurately delineate anatomic structures using the high-spatial resolution capability and the ability to quantify stone composition using the multi-energy capability. The low-energy threshold images of the sharp and UHR modes showed better shape and texture information due to the achieved higher spatial resolution, although noise was also higher. No noticeable benefit was shown in multi-energy analysis using UHR compared to standard resolution (macro mode) when standard doses were used. This was due to excessive noise in the higher resolution images. However, UHR scans at higher dose showed improvement in multi-energy analysis over macro mode with regular dose. To fully take advantage of the higher spatial resolution in multi-energy analysis, either increased radiation dose, or application of noise reduction techniques, is needed.
A new ultra-high-resolution (UHR) mode has been implemented on a whole-body photon-counting-detector (PCD) CT system. The UHR mode has a pixel size of 0.25 mm by 0.25 mm at the iso-center, while the conventional (macro) mode is limited to 0.5 mm by 0.5 mm. A set of synthetic lung nodules (two shapes, five sizes, and two radio-densities) was scanned using both the UHR and macro modes and reconstructed with two reconstruction kernels (four sets of images in total). Linear regression analysis was performed to compare measured nodule volumes from CT images to reference volumes. Surface curvature was calculated for each nodule, and the full width at half maximum (FWHM) of the curvature histogram was used as a shape index to differentiate sphere- and star-shaped nodules. Receiver operating characteristic (ROC) analysis was performed and the area under the ROC curve (AUC) was used as a figure of merit for the differentiation task. Results showed a strong linear relationship between measured nodule volumes and the reference standard for both UHR and macro modes. For all nodules, volume estimation was more accurate using the UHR mode with the sharp kernel (S80f), with a lower mean absolute percent error (MAPE) of 6.5% compared with the macro mode (11.1% to 12.9%). The improvement in volume measurement from the UHR mode was particularly evident for small nodule sizes (3 mm, 5 mm) or star-shaped nodules. Images from the UHR mode with the sharp kernel (S80f) consistently demonstrated the best performance (AUC = 0.85) in separating star- from sphere-shaped nodules among all acquisition and reconstruction modes. Our results showed the advantages of the UHR mode on a PCD CT scanner for lung nodule characterization. Various clinical applications, including quantitative imaging, can benefit substantially from this high-resolution mode.
An ultrahigh-resolution (UHR) data collection mode was enabled on a whole-body, research photon counting detector (PCD) computed tomography system. In this mode, 64 rows of 0.45 mm×0.45 mm detector pixels were used, which corresponded to a pixel size of 0.25 mm×0.25 mm at the isocenter. Spatial resolution and image noise were quantitatively assessed for the UHR PCD scan mode, as well as for a commercially available UHR scan mode that uses an energy-integrating detector (EID) and a set of comb filters to decrease the effective detector size. Images of an anthropomorphic lung phantom, cadaveric swine lung, swine heart specimen, and cadaveric human temporal bone were qualitatively assessed. Nearly equivalent spatial resolution was demonstrated by the modulation transfer function measurements: 15.3 and 20.3 lp/cm spatial frequencies were achieved at 10% and 2% modulation, respectively, for the PCD system and 14.2 and 18.6 lp/cm for the EID system. Noise was 29% lower in the PCD UHR images compared to the EID UHR images, representing a potential dose savings of 50% for equivalent image noise. PCD UHR images from the anthropomorphic phantom and cadaveric specimens showed clear delineation of small structures.
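The quoted modulation points can be read off a measured MTF curve by interpolation; a minimal sketch assuming a monotonically decreasing MTF:

```python
import numpy as np

def freq_at_modulation(freqs, mtf, level):
    """Spatial frequency (lp/cm) at which the MTF drops to `level`.

    freqs: ascending frequency samples; mtf: measured MTF values.
    Interpolates on the reversed arrays, which are ascending when the
    MTF decreases monotonically.
    """
    return np.interp(level, mtf[::-1], freqs[::-1])

# e.g. the PCD UHR mode reached 15.3 lp/cm at 10% and 20.3 lp/cm at 2% modulation
```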
Photon counting detector (PCD)-based computed tomography (CT) is an emerging imaging technique. Compared to conventional energy integrating detector (EID)-based CT, PCD-CT is able to exclude electronic noise that may severely impair image quality at low photon counts. This work focused on comparing the noise performance at low doses between the PCD and EID subsystems of a whole-body research PCD-CT scanner, both qualitatively and quantitatively. An anthropomorphic thorax phantom was scanned, and images of the shoulder portion were reconstructed. The images were visually and quantitatively compared between the two subsystems in terms of streak artifacts, an indicator of the impact of electronic noise. Furthermore, a torso-shaped water phantom was scanned using a range of tube currents. The product of the noise and the square root of the tube current was calculated, normalized, and compared between the EID and PCD subsystems. Visual assessment of the thorax phantom showed that electronic noise had a noticeably stronger degrading impact in the EID images than in the PCD images. The quantitative results indicated that in low-dose situations, electronic noise had a noticeable impact (up to a 5.8% increase in magnitude relative to quantum noise) on the EID images, but negligible impact on the PCD images.
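A minimal sketch of the noise normalization used for the comparison; for pure quantum noise the normalized curve is flat across tube currents, and an upturn at low current indicates an electronic-noise contribution:

```python
import numpy as np

def normalized_noise(noise_std, tube_current_ma):
    """Noise times sqrt(tube current), normalized to its high-dose value.

    For pure quantum noise, std scales as 1/sqrt(mA), so this product
    is constant; electronic noise makes it rise at low tube current.
    """
    prod = np.asarray(noise_std) * np.sqrt(np.asarray(tube_current_ma))
    return prod / prod[np.argmax(tube_current_ma)]
```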
A semi-analytical model describing spectral distortions in photon-counting detectors (PCDs) for clinical computed tomography was evaluated using simulated data. The distortions were due to count-rate-independent spectral response effects and count-rate-dependent pulse-pileup effects, and the model predicted both the mean count rates and the spectral shape. The model parameters were calculated using calibration data. The model was evaluated by comparing the predicted x-ray spectra to Monte Carlo simulations of a PCD at various count rates. The data-model agreement, expressed as a weighted coefficient of variation (COV_W), was better than COV_W = 2.0% for dead-time losses up to 28% and COV_W = 20% or smaller for dead-time losses up to 69%. The accuracy of the model was also tested for the purpose of material decomposition by estimating material thicknesses from simulated projection data. The estimated attenuator thicknesses generally agreed with the true values within one standard deviation of the statistical uncertainty obtained from multiple noise realizations.
Photon-counting CT (PCCT) is an emerging technique that may bring new possibilities to clinical practice. Compared to
conventional CT, PCCT is able to exclude electronic noise that may severely impair image quality at low photon counts.
This work focused on assessing the low-dose performance of a whole-body research PCCT scanner consisting of two
subsystems, one equipped with an energy-integrating detector, and the other with a photon-counting detector. Evaluation
of the low-dose performance of the research PCCT scanner was achieved by comparing the noise performance of the
two subsystems, with an emphasis on examining the impact of electronic noise on image quality in low-dose situations.
A high-resolution (HR) data collection mode has been introduced to a whole-body, research photon-counting-detector
CT system installed in our laboratory. In this mode, 64 rows of 0.45 mm x 0.45 mm detector pixels were used, which
corresponded to a pixel size of 0.25 mm x 0.25 mm at the iso-center. Spatial resolution of this HR mode was quantified
by measuring the MTF from a scan of a 50 micron wire phantom. An anthropomorphic lung phantom, cadaveric swine
lung, temporal bone and heart specimens were scanned using the HR mode, and image quality was subjectively assessed
by two experienced radiologists. High spatial resolution of the HR mode was evidenced by the MTF measurement, with 15 lp/cm and 20 lp/cm at 10% and 2% modulation, respectively. Images from the anthropomorphic phantom and cadaveric specimens
showed clear delineation of small structures, such as lung vessels, lung nodules, temporal bone structures, and coronary
arteries. Temporal bone images showed critical anatomy (e.g., the stapes superstructure) that was clearly visible in the PCD
system. These results demonstrated the potential application of this imaging mode in lung, temporal bone, and vascular
imaging. Other clinical applications that require high spatial resolution, such as musculoskeletal imaging, may also
benefit from this high resolution mode.
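How an MTF could be obtained from such a wire scan can be sketched as follows; this simplified Python version assumes an oversampled 1-D profile through the wire image and ignores the finite wire diameter and noise handling:

```python
import numpy as np

def mtf_from_wire_profile(psf_line, pixel_mm):
    """MTF from a 1-D profile through the image of a thin wire.

    The wire acts as an approximate line source, so the normalized FFT
    magnitude of its profile estimates the in-plane MTF.
    """
    psf = np.asarray(psf_line, dtype=float)
    psf -= psf.min()                      # crude background removal
    mtf = np.abs(np.fft.rfft(psf))
    mtf /= mtf[0]                         # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(psf.size, d=pixel_mm) * 10  # lp/mm -> lp/cm
    return freqs, mtf

def freq_at_modulation(freqs, mtf, level):
    """Interpolate the spatial frequency where the MTF crosses `level`."""
    idx = np.argmax(mtf < level)          # first bin below the level
    f0, f1, m0, m1 = freqs[idx - 1], freqs[idx], mtf[idx - 1], mtf[idx]
    return f0 + (m0 - level) * (f1 - f0) / (m0 - m1)
```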
An x-ray photon interacts with photon counting detectors (PCDs) and generates an electron charge cloud or multiple
clouds. The clouds (thus, the photon energy) may be split between two adjacent PCD pixels when the interaction occurs
near pixel boundaries, producing a count in both pixels. This is called double-counting with charge sharing.
The output of an individual PCD pixel is Poisson-distributed integer counts; however, the outputs of adjacent pixels are correlated due to double-counting. Major problems are the lack of a detector noise model for the spatio-energetic crosstalk and the lack of an efficient simulation tool. Monte Carlo simulation can accurately simulate these phenomena and
produce noisy data; however, it is not computationally efficient.
In this study, we developed a new detector model and implemented it in an efficient software simulator which uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency and incomplete charge collection; (2) photoelectric effect with total absorption; (3) photoelectric effect with fluorescence x-ray emission and re-absorption; (4) photoelectric effect with fluorescence x-ray emission which leaves the PCD completely; and (5) electronic noise.
The model produced a total detector spectrum similar to previous MC simulation data. The model can be used to predict the spectrum and correlation under various settings. The simulated noisy data demonstrated the expected performance: (a) data were integers; (b) the mean and covariance matrix were close to the target values; (c) noisy data generation was very efficient.
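A minimal sketch of how correlated Poisson integer counts with double-counting can be generated efficiently, assuming one-dimensional sharing with a single neighbour; the sharing probability and geometry are illustrative, not the paper's full five-effect model:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_counts(lam, p_share, shape=(64, 64)):
    """Generate integer PCD counts with double-counting correlation.

    Each pixel's events split into a 'private' part and a part shared
    with its right-hand neighbour (interaction near the boundary).
    Shared events are counted by BOTH pixels, so adjacent outputs are
    positively correlated while each marginal stays Poisson. Interior
    pixels have mean lam*(1 + p_share) because shared events count twice.
    """
    private = rng.poisson(lam * (1.0 - p_share), size=shape)
    shared = rng.poisson(lam * p_share, size=shape)   # events at right edge
    counts = private + shared                         # own boundary events
    counts[:, 1:] += shared[:, :-1]                   # neighbour re-counts them
    return counts
```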
Photon counting detectors in computed tomography facilitate measurements of the spectral distribution of detected X-ray quanta in discrete energy bins. Together with the dependence of the mass attenuation coefficient on photon energy and atomic number, this information allows reconstruction of CT images of different material bases. Decomposition into two materials is considered standard in today's dual-energy techniques. With photon-counting detectors the decomposition of more than two materials becomes achievable. Efficient detection of CT-typical X-ray spectra is a hard requirement in a clinical environment, fulfilled by only a few sensor materials such as CdTe or CdZnTe. In contrast to energy-integrating CT detectors, the pixel dimensions must be reduced to avoid pulse pile-up problems at clinically relevant count rates. However, reducing pixel sizes leads to increased K-escape and charge sharing effects. As a consequence, the correlation between incident and detected X-ray energy is reduced. This degradation is quantified by the detector response function. The goal of this study is to improve the achievable material decomposition by adapting the incident X-ray spectrum to the properties (i.e., the detector response function) of a photon counting detector. A significant improvement of a material decomposition equivalent metric is achievable when using specific materials as X-ray pre-filtration (K-edge filtering) while maintaining the applied patient dose and image quality.
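The spectrum-shaping step at the heart of this approach is plain Beer-Lambert filtration; a minimal sketch, assuming tabulated attenuation coefficients for the candidate K-edge filter material (e.g., from NIST tables) are supplied by the caller:

```python
import numpy as np

def filtered_spectrum(fluence, mu_filter_per_mm, thickness_mm):
    """Apply a K-edge pre-filter to an incident X-ray spectrum.

    Per-energy-bin Beer-Lambert attenuation; a filter whose K-edge lies
    inside the spectrum removes photons just above the edge, reshaping
    the spectrum seen by the detector. The arrays are placeholders for
    tabulated data, not values from the study.
    """
    transmission = np.exp(-np.asarray(mu_filter_per_mm) * thickness_mm)
    return np.asarray(fluence) * transmission
```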
The energy-resolving capabilities of Photon Counting Detectors (PCDs) in Computed Tomography (CT) facilitate energy-sensitive measurements, and the resulting image information can be processed with dual-energy and multi-energy algorithms. A research PCD-CT system allows, for the first time, acquisition of images with a close-to-clinical configuration of both the X-ray tube and the CT detector. In this study, two algorithms (material decomposition and virtual non-contrast (VNC) imaging) are applied to a data set acquired from an anesthetized rabbit scanned using the PCD-CT system. Two contrast agents (CAs) are applied: a gadolinium (Gd) based CA used to enhance contrast for vascular imaging, and xenon (Xe) mixed with air as a CA used to evaluate local ventilation of the animal's lung. Four different images are generated: a) a VNC image, suppressing any traces of the injected Gd and imitating a native scan; b) a VNC image with a Gd image as an overlay, where contrast enhancements in the vascular system are highlighted using colored labels; c) another VNC image with a Xe image as an overlay; and d) a 3D rendered image of the animal's lung, filled with Xe, indicating local ventilation characteristics. All images are generated from two images based on energy bin information. It is shown that a modified version of a commercially available dual-energy software framework is capable of providing images with diagnostic value from the research PCD-CT system.
Photon-counting CT (PCCT) may yield potential value for many clinical applications due to its relative immunity to
electronic noise, increased geometric efficiency relative to current scintillating detectors, and the ability to resolve energy
information about the detected photons. However, there are a large number of parameters that require optimization, particularly the energy threshold configuration. Fast and accurate estimation of signal and noise in PCCT can benefit
the optimization of acquisition parameters for specific diagnostic tasks. Based on the acquisition parameters and detector
response of our research PCCT system, we derived mathematical models for both signal and noise. The signal model
took the tube spectrum, beam filtration, object attenuation, water beam hardening, and detector response into account.
The noise model considered the relationship between noise and radiation dose, as well as the propagation of noise as
threshold data are subtracted to yield energy bin data. To determine the absolute noise value, a noise look-up table
(LUT) was acquired using a limited number of calibration scans. The noise estimation algorithm then used the noise
LUT to estimate noise for scans with a variety of combinations of energy thresholds, dose levels, and object attenuation. Validation of the estimation algorithms was performed on our whole-body research PCCT system using semi-anthropomorphic water phantoms and solutions of calcium and iodine. The algorithms achieved accurate estimation of
signal and noise for a variety of scanning parameter combinations. The proposed method can be used to optimize the energy threshold configuration for many clinical applications of PCCT.
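The threshold-to-bin noise propagation mentioned above can be written down directly for the idealized Poisson case (no pile-up or charge sharing); a minimal sketch:

```python
def bin_stats_from_thresholds(mean_above_low, mean_above_high):
    """Mean and variance of an energy bin formed by threshold subtraction.

    A counter with threshold T records every event above T, so the two
    threshold signals share all events above the higher threshold:
        cov(N_low, N_high) = var(N_high) = mean(N_high)   (Poisson).
    The bin count N_low - N_high is then itself Poisson:
        var(bin) = var(N_low) + var(N_high) - 2*cov
                 = mean(N_low) - mean(N_high).
    """
    mean_bin = mean_above_low - mean_above_high
    var_bin = mean_bin  # Poisson after the shared events cancel
    return mean_bin, var_bin
```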
Spectral computed tomography (CT) with photon-counting detectors (PCDs) has the potential to substantially advance diagnostic CT imaging by reducing image noise and dose to the patient, by improving contrast and tissue specificity, and by enabling molecular and functional imaging. However, the current PCD technology is limited by two main factors: imperfect energy measurement (spectral response effects, SR) and count rate non-linearity (pulse pileup effects, PP, due to detector deadtimes) resulting in image artifacts and quantitative inaccuracies for material specification. These limitations can be lifted with image reconstruction algorithms that compensate for both SR and PP. A prerequisite for this approach is an accurate model of the count losses and spectral distortions in the PCD. In earlier work we developed a cascaded SR-PP model and evaluated it using a physical PCD. In this paper we show the robustness of our approach by modifying the cascaded SR-PP model for a faster PCD with smaller pixels and a different pulse shape. We compare paralyzable and non-paralyzable detector models. First, the SR-PP model is evaluated at low and high count rates using two sets of attenuators. Then, the accuracy of the compensation is evaluated by estimating the thicknesses of three basis functions.
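For orientation, the two dead-time behaviours compared above have simple textbook closed forms; a minimal sketch (the cascaded SR-PP model itself is considerably more detailed):

```python
import numpy as np

def observed_rate(true_rate, dead_time, paralyzable=True):
    """Classic dead-time count-loss models (per channel).

    Paralyzable:      m = n * exp(-n * tau)  (each event extends the dead time)
    Non-paralyzable:  m = n / (1 + n * tau)
    These are illustrative textbook models, not the paper's full model.
    """
    n = np.asarray(true_rate, dtype=float)
    if paralyzable:
        return n * np.exp(-n * dead_time)
    return n / (1.0 + n * dead_time)
```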
The energy-selectivity of photon counting detectors provides contrast enhancement and enables new material-identification techniques for clinical Computed Tomography (CT). Patient dose considerations and the resulting requirement of efficient X-ray detection suggest the use of CdTe or CdZnTe as detector material. The finite signal pulse duration of several nanoseconds present in those detectors requires strong reduction of the pixel size to achieve feasible count rates in the high-flux regime of modern CT scanners. Residual pulse pile-up effects in scans with high X-ray fluxes can still limit two key properties of the counting detector, namely count-rate linearity and spectral linearity. We have used our research prototype scanner with a CdTe-based counting detector and 225 μm pixels to investigate these effects in CT imaging scenarios at elevated X-ray tube currents. We present measurements of CT images and provide a detailed analysis of contrast stability, image noise, and multi-energy performance achieved with different phantom sizes at various X-ray tube settings.
KEYWORDS: Signal to noise ratio, Composites, Temporal resolution, Computed tomography, Data acquisition, Liver, Modulation transfer functions, Data modeling, Image quality, Spatial resolution
In CT imaging, a variety of applications exist where reconstructions are SNR and/or resolution limited. However, if the
measured data provide redundant information, composite image data with high SNR can be computed. Generally, these
composite image volumes will compromise spectral information and/or spatial resolution and/or temporal resolution.
This brings us to the idea of transferring the high SNR of the composite image data to low SNR (but high resolution)
‘source’ image data.
It was shown that the SNR of CT image data can be improved using iterative reconstruction [1]. We present a novel
iterative reconstruction method enabling optimal dose usage of redundant CT measurements of the same body region.
The generalized update equation is formulated in image space without further referring to raw data after initial
reconstruction of source and composite image data. The update equation consists of a linear combination of the previous
update, a correction term constrained by the source data, and a regularization prior initialized by the composite data.
The efficiency of the method is demonstrated for different applications: (i) Spectral imaging: we have analysed material decomposition data from dual-energy data of our photon-counting prototype scanner; the material images can be significantly improved by transferring the good noise statistics of the 20 keV threshold image data to each of the material images. (ii) Multi-phase liver imaging: reconstructions of multi-phase liver data can be optimized by utilizing the noise statistics of combined data from all measured phases. (iii) Helical reconstruction with optimized temporal resolution: splitting up the reconstruction of redundant helical acquisition data into a short-scan reconstruction with a Tam window optimizes the temporal resolution; the reconstruction of the full helical data is then used to optimize the SNR. (iv) Cardiac imaging: the optimal phase image ('best phase') can be improved by transferring all applied radiation into that image.
In all these cases, we show that, at constant patient dose, SNR can be efficiently transferred from the composite data to the source data while maintaining the spatial, temporal, and contrast resolution properties of the source data.
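The general form of such an image-space update can be illustrated as follows; the step sizes, correction operator, and prior below are placeholders, not the paper's generalized update equation:

```python
import numpy as np

def snr_transfer(source, composite, iterations=20, alpha=0.5, beta=0.5):
    """Image-space SNR transfer, sketched from the abstract's description.

    Each step combines a data-fidelity pull toward the high-resolution
    (noisy) source image with a regularizing pull toward the high-SNR
    composite image. A minimal illustration only.
    """
    u = composite.copy()                 # prior initialized by composite data
    for _ in range(iterations):
        correction = source - u          # term constrained by the source data
        prior = composite - u            # regularization toward the composite
        u = u + alpha * correction + beta * prior
    return u
```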
We have investigated the multi-energy performance of our most recent prototype CT scanner with CdTe-based
counting detector. With its small pixel pitch of 225 μm this device is prepared for the high X-ray fluxes occurring
in clinical CT. Each of these pixels is equipped with two adjustable counters. The ASIC architecture of the
detector allows configuration of the counter thresholds in chess patterns, enabling data acquisition in up to four
energy bins. We have studied the material separation capability of counting CT with respect to potential clinical applications. To this end, we have analyzed contrast and noise properties in material-decomposed CT images using
up to four base materials. We have studied contrast agents containing iodine, gadolinium, or gold, and the
body-like materials calcium, fat, and water. We describe the mathematical framework used in this work and
demonstrate the general multi-energy capability of counting CT with simulations and experimental data from
our prototype scanner. To prove the clinical relevance of our studies we compare the results to those obtained
with well-established dual-kVp techniques recorded at the same patient dose and with identical image sharpness.
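Under a linearized forward model, multi-material decomposition from energy-bin data reduces to a per-pixel least-squares problem; a minimal sketch (the basis matrix must be calibrated per system, and the paper's exact mathematical framework is not reproduced here):

```python
import numpy as np

def decompose(bin_values, basis_matrix):
    """Least-squares multi-material decomposition for one pixel.

    bin_values:   measured values in k energy bins
    basis_matrix: k x m matrix, column j = response of basis material j
                  (iodine, gadolinium, gold, calcium, fat, water, ...)
                  in each bin. With four bins, up to four materials can
                  be solved for directly.
    """
    coeffs, *_ = np.linalg.lstsq(basis_matrix, bin_values, rcond=None)
    return coeffs
```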
Photon counting detectors are expected to bring various clinical benefits to CT imaging. Among the benefits of these detectors is their intrinsic spectral sensitivity, which allows the incident X-ray spectrum to be resolved. Their capability for multi-energy imaging enables material segmentation, but it is also possible to use the spectral information to create fused gray-scale CT images with improved imaging properties.
We have developed and investigated an optimization method that maximizes the image contrast-to-noise ratio, making use of the spectral information in data recorded with a counting detector with up to six energy thresholds.
The resulting merged gray-scale CT images exhibit significantly improved CNR² for a number of clinically established, potentially novel, and hypothetical contrast agents in the thin-absorber approximation.
In this work we motivate and describe the optimization method, provide the deduced optimal sets of threshold energies and mixing weights, and summarize the maximally achievable gain in CNR² for each contrast agent under study.
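For a fixed set of thresholds, the CNR-optimal mixing weights have a well-known closed form (a matched filter); a minimal sketch, noting that the paper's joint optimization over the threshold energies themselves is a separate search on top of this step:

```python
import numpy as np

def cnr_optimal_weights(contrast_per_bin, noise_cov):
    """Bin-mixing weights that maximize CNR^2 of the merged image.

    For a merged image I = sum_b w_b * I_b with per-bin contrast vector d
    and bin noise covariance S, CNR^2 = (w.d)^2 / (w.S.w), which is
    maximized by w proportional to S^{-1} d. A standard result used here
    for illustration.
    """
    w = np.linalg.solve(np.asarray(noise_cov), np.asarray(contrast_per_bin))
    return w / np.abs(w).sum()   # arbitrary normalization; CNR is scale-free
```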
We introduce a novel hybrid prototype scanner built to explore benefits of the quantum-counting technique in the
context of clinical CT. The scanner is equipped with two measurement systems. One is a CdTe-based counting
detector with a 22 cm field-of-view. Its revised ASIC architecture allows configuration of the counter thresholds of the 225 μm sub-pixels in chess patterns, enabling data acquisition in four energy bins or studying high-flux
scenarios with pile-up trigger. The other one is a conventional GOS-based energy-integrating detector from
a clinical CT scanner. The integration of both detection technologies in one CT scanner provides two major
advantages. It allows direct comparison of image quality and contrast reproduction as well as instantaneous
quantification of the relative dose usage and material separation performance achievable with counting techniques.
In addition, data from the conventional detector can be used as complementary information during reconstruction
of the images from the counting device. In this paper we present CT images acquired with the hybrid prototype
scanner, illustrate its underlying conceptual methods, and provide first experimental results quantifying clinical
benefits of quantum-counting CT.
In clinical computed tomography (CT), images from patient examinations taken with conventional scanners exhibit noise characteristics governed by electronic noise when scanning strongly attenuating obese patients or when using an ultra-low X-ray dose. Unlike CT systems based on energy integrating detectors, a system with a
quantum counting detector does not suffer from this drawback. Instead, the noise from the electronics mainly
affects the spectral resolution of these detectors. Therefore, it does not contribute to the image noise in spectrally
non-resolved CT images. This promises improved image quality due to image noise reduction in clinical CT examinations with the lowest X-ray tube currents or of obese patients. To quantify the benefits of
quantum counting detectors in clinical CT we have carried out an extensive simulation study of the complete
scanning and reconstruction process for both kinds of detectors. The simulation chain encompasses modeling
of the X-ray source, beam attenuation in the patient, and calculation of the detector response. Moreover,
in each case the subsequent image preprocessing and reconstruction is modeled as well. The simulation-based,
theoretical evaluation is validated by experiments with a novel prototype quantum counting system and a Siemens
Definition Flash scanner with a conventional energy integrating CT detector. We demonstrate and quantify the
improvement from image noise reduction achievable with quantum counting techniques in CT examinations with
ultra-low X-ray dose and strong attenuation.
The application of quantum-counting detectors in clinical Computed Tomography (CT) is challenged by the very large X-ray photon fluxes present in modern systems. Situations with sub-optimal patient positioning or scanning of small
objects can cause unattenuated exposure of parts of the detector. Typical pulse durations in CdTe/CdZnTe sensors range on the order of several nanoseconds, even if the detector design is optimized for high-rate applications by using high sensor depletion voltages and small pixel sizes. This can lead to severe pile-up of the pulses, resulting in count
efficiency degradation or even ambiguous detector signals. The recently introduced pile-up trigger method solves this
problem by combining the signal of a photon-counting channel with a signal indicative of the level of pile-up. The latter is obtained with a photon-counting channel operated at threshold energies beyond the maximum energy of the incident photon spectrum, so that its signal arises purely from pulse pile-up. We present an experimental evaluation of the pile-up
trigger method in a revised quantum-counting CT detector and compare our results to simulations of the method with
idealized detector properties.
The application of quantum-counting detectors in clinical Computed Tomography (CT) is challenged by extreme
X-ray fluxes provided by modern high-power X-ray tubes. Scanning of small objects or sub-optimal patient
positioning may lead to situations where those fluxes impinge on the detector without attenuation. Even in
operation modes optimized for high-rate applications, with small pixels and high bias voltage, CdTe/CdZnTe
detectors deliver pulses in the range of several nanoseconds. This can result in severe pulse pile-up causing
detector paralysis and ambiguous detector signals. To overcome this problem we introduce the pile-up trigger,
a novel method that provides unambiguous detector signals in rate regimes where classical rising-edge counters
run into count-rate paralysis. We present detailed CT image simulations assuming ideal sensor material not
suffering from polarization effects at high X-ray fluxes. This way we demonstrate the general feasibility of the
pile-up trigger method and quantify resulting imaging properties such as contrasts, image noise and dual-energy
performance in the high-flux regime of clinical CT devices.
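The ambiguity the pile-up trigger resolves can be made concrete with the paralyzable model: one observed rate corresponds to two candidate true rates, and a signal that grows monotonically with pile-up selects the branch. A purely conceptual sketch, with the trigger electronics idealized as a simple threshold test:

```python
import numpy as np
from scipy.optimize import brentq

def resolve_rate(m_obs, pileup_signal, dead_time, pileup_threshold):
    """Disambiguate a paralyzable counter using a pile-up indicator.

    m = n * exp(-n * tau) is non-monotonic, so an observed rate m below
    the curve's maximum (exp(-1)/tau) maps to a low-rate and a high-rate
    solution. A monotone pile-up signal picks the correct branch.
    """
    tau = dead_time
    peak = 1.0 / tau                       # true rate where m(n) is maximal
    f = lambda n: n * np.exp(-n * tau) - m_obs
    n_low = brentq(f, 1e-12, peak)         # rising branch of the curve
    n_high = brentq(f, peak, 100.0 / tau)  # falling (paralyzed) branch
    return n_high if pileup_signal > pileup_threshold else n_low
```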
The spectral sensitivity of quantum-counting detectors promises increased contrast-to-noise ratios and dual-energy capabilities for Computed Tomography (CT). In this article we quantify the benefits as well as the
conceptual limitations of this technology under realistic clinical conditions. We present detailed simulations of
a CT system with CdZnTe-based quantum-counting detector and compare to a conventional energy-integrating
detector with Gd2O2S scintillator. Detector geometries and pixel layouts are adapted to specific requirements
of clinical CT and its high-flux environment. The counting detector is realized as a two-threshold counter. An
image-based method is used to adapt thresholds and data weights optimizing contrasts and image noise with
respect to the typical spectra provided by modern high-power tungsten anode X-ray tubes. We consider the case
of moderate X-ray fluxes and compare contrasts and image noise at the same patient dose and image sharpness. We find that the spectral sensitivity of such a CT system offers dose reduction potentials of 31.5% (9.2%) while maintaining Iodine-water contrast-to-noise ratios at 120 kVp (80 kVp). The improved contrast-to-noise ratios result mainly
from improved contrasts and not from reduced image noise. The presence of fluorescence effects in the sensor
material is the reason why image noise levels are not significantly reduced in comparison to energy-integrating
systems. Dual-energy performance of quantum-counting single-source CT in terms of bone-Iodine separation
is found to be somewhat below the level of today's dual-source CT devices with optimized pre-filtration of the
X-ray beams.
Recent publications emphasize the benefits of quantum-counting applied to the field of Computed Tomography
(CT). We present a research prototype scanner with a CdTe-based quantum-counting detector and 20 cm
field-of-view (FOV). As of today there is no direct converter material on the market able to operate reliably in
the harsh high-flux regime of clinical CT scanners. Nevertheless, we investigate the CT imaging performance that could be expected with a high-flux-capable material. We therefore chose pixel sizes of 0.05 mm², a good
compromise between high-flux counting ability and energy resolution. Every pixel is equipped with two energy
threshold counters, enabling contrast-optimization and dual-energy scans. We present a first quantitative analysis
of contrast measurements, in which we limit ourselves to a low-flux scenario. Using an Iodine-based contrast
agent, we find 17% contrast enhancement at 120 kVp, compared to energy-integrating CT. In addition, the
general dual-energy capability was confirmed in first measurements. We conclude our work by demonstrating
good agreement of measurement results and detailed CT-system simulations.
Stimulated by the introduction of clinical dual source CT, the interest in dual energy methods has been increasing in the
past years. Whereas the potential of material decomposition by dual energy methods has been known since the early 1980s,
the realization of dual energy methods is a wide field of today's research. Energy separation can be achieved with energy
selective detectors or by varying X-ray source spectra. This paper focuses on dual energy techniques with varying X-ray
spectra. These can be provided by dual source CT devices, operated with different kVp settings on each tube. Excellent
spectral separation is the key property for use in clinical routine. The drawback of higher cost for two tubes and two
detectors leads to an alternative realization, where a single source CT yields different spectra by fast kVp switching from
reading to reading. This provides access to dual-energy methods in single source CT. However, this technique comes
with some intrinsic limitations. The maximum X-ray flux is reduced in comparison to the dual source system. The kVp
rise and fall time between readings reduces the spectral separation. In comparison to dual source CT, for a constant number of projections per energy spectrum the temporal resolution is reduced; a reasonable trade-off between a reduced number of projections and limited temporal resolution has to be found. The overall dual energy performance is the
guiding line for our investigations. We present simulations and measurements which benchmark both solutions in terms
of spectral behavior, especially of spectral separation.
Recent publications in the field of Computed Tomography (CT) demonstrate the rising interest in applying dual-energy methods for material classification during clinical routine examinations. Based on today's standard of technology, dual-energy CT can be realized by either scanning with different X-ray spectra or by deployment
of energy selective detector technologies. The list of so-called dual-kVp methods contains sequential scans, fast kVp-switching, and dual-source CT. Examples of energy selective detectors are scintillator-based energy-integrating dual-layer devices or direct converters with quantum counting electronics. The general difference between the approaches lies in the shape of the effectively detected X-ray energy spectra and in the presence of cross-scatter radiation in the case of dual-source devices. This leads to different material classification capabilities for the various techniques. In this work, we present detector response simulations of realistic CT scans with subsequent CT image reconstruction. Analysis of the image data allows direct and objective comparison of the
dual-kVp, dual-layer, and quantum counting CT system concepts. The dual-energy performance is benchmarked in terms of image noise and Iodine-bone separation power at given image sharpness and dose exposure. For the case of dual-source devices the effect of cross-scatter radiation, as well as the benefit of additional filtering are
taken into account.