Data-driven approaches to the quantification problem in photoacoustic imaging have shown great potential in silico, but the inherent lack of labelled ground truth data in vivo currently restricts their application and translation into the clinic. In this study, we leverage Fourier Neural Operator networks as surrogate models to synthesize multispectral photoacoustic human forearm images, replacing time-consuming and inherently non-differentiable state-of-the-art Monte Carlo and k-Wave simulations. We investigate the accuracy and efficiency of these surrogate models for both the optical and the acoustic simulation step.
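The central building block of a Fourier Neural Operator is a spectral convolution: the input field is transformed to Fourier space, a learned weight is applied to the lowest-frequency modes, and the result is transformed back. The following is a minimal, simplified 1-D sketch of that operation using NumPy; the function name, weight shapes, and the random input are illustrative assumptions, not the architecture used in the study.

```python
import numpy as np

def spectral_conv_1d(x, weights, n_modes):
    """Simplified core of a Fourier Neural Operator layer (illustrative):
    go to Fourier space, mix the lowest `n_modes` frequencies with
    learned complex weights, and return to physical space."""
    x_ft = np.fft.rfft(x)                         # forward real FFT
    out_ft = np.zeros_like(x_ft)
    out_ft[:n_modes] = x_ft[:n_modes] * weights   # keep & mix low modes only
    return np.fft.irfft(out_ft, n=x.shape[0])     # inverse FFT, same length

rng = np.random.default_rng(0)
x = rng.standard_normal(64)        # e.g. a 1-D slice of a fluence map (hypothetical)
w = rng.standard_normal(16) + 1j * rng.standard_normal(16)
y = spectral_conv_1d(x, w, n_modes=16)
print(y.shape)
```

Truncating to the lowest modes is what makes the operator resolution-independent, and it is one reason FNO surrogates evaluate orders of magnitude faster than Monte Carlo or k-Wave forward simulations.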
The generation of realistically simulated photoacoustic (PA) images with ground truth labels for optical and acoustic properties has become a critical method for training and validating neural networks for PA imaging. As state-of-the-art model-based simulations often suffer from various inaccuracies, unsupervised domain transfer methods have recently been proposed to enhance the quality of model-based simulations. The validation of these methods, however, is challenging, as there are no reliable labels for absorption or oxygen saturation in vivo. In this work, we examine various domain shifts between simulations and real images, such as an incorrect noise model, inaccuracies in modeling the digital device twin, or erroneous assumptions about tissue composition. We show in silico how a CycleGAN, an unsupervised image-to-image translation network (UNIT), and a conditional invertible neural network handle these domain shifts and what their consequences are for blood oxygen saturation estimation.
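One of the domain shifts examined here, an incorrect noise model, can be illustrated in a few lines: if the simulation assumes additive Gaussian noise but the real device exhibits signal-dependent noise, simple statistics of the two distributions diverge, and a network can latch onto that gap. The noise parameters below are arbitrary illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
clean = np.full(1000, 100.0)   # idealized PA signal amplitudes (hypothetical)

# Assumed simulation noise model: purely additive Gaussian
sim = clean + rng.normal(0.0, 5.0, clean.shape)

# Hypothetical "real" device noise: signal-dependent (multiplicative)
# plus a smaller additive floor
real = clean * rng.normal(1.0, 0.05, clean.shape) \
       + rng.normal(0.0, 2.0, clean.shape)

# The mismatch is visible even in first- and second-order statistics
print(f"sim  std: {sim.std():.2f}")
print(f"real std: {real.std():.2f}")
```

A domain transfer network trained to map simulated onto real images must correct exactly such distributional mismatches; the danger is that it may also alter the underlying spectral information that oxygen saturation estimation relies on.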
This study delves into the largely uncharted domain of biases in photoacoustic imaging, spotlighting potential shortcut learning as a key issue in reliable machine learning. Our focus is on hardware variation biases. We identify device-specific traits that create detectable fingerprints in photoacoustic images, demonstrate machine learning's capability to use these discrepancies to determine the device that acquired the image, and highlight their potential impact on machine learning model predictions in downstream tasks, such as disease classification.
Optical and acoustic imaging techniques enable noninvasive visualization of structural and functional tissue properties. Data-driven approaches for the quantification of these properties are promising, but they rely on highly accurate simulations due to the lack of ground truth knowledge. We recently introduced the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit, which has quickly been adopted by the community in the context of the IPASC consortium for standardized reconstruction. We present new developments in the toolkit, including improved tissue and device modeling, and provide an outlook on future directions aimed at improving the realism of simulations.
Significance: Optical and acoustic imaging techniques enable noninvasive visualisation of structural and functional properties of tissue. The quantification of measurements, however, remains challenging due to the inverse problems that must be solved. Emerging data-driven approaches are promising, but they rely heavily on the presence of high-quality simulations across a range of wavelengths due to the lack of ground truth knowledge of tissue acoustical and optical properties in realistic settings.
Aim: To facilitate this process, we present the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit. SIMPA is being developed according to modern software design standards.
Approach: SIMPA enables the use of computational forward models, data processing algorithms, and digital device twins to simulate realistic images within a single pipeline. SIMPA's module implementations can be seamlessly exchanged, as SIMPA abstracts away the concrete implementation of each forward model and builds the simulation pipeline in a modular fashion. Furthermore, SIMPA provides comprehensive libraries of biological structures, such as vessels, as well as optical and acoustic properties and other functionalities for the generation of realistic tissue models.
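The modular design described above can be sketched as a pipeline of exchangeable stages that all operate on a shared data container. The following is a hypothetical, simplified illustration of that design pattern; the function and key names are invented for this sketch and are not the actual SIMPA API.

```python
from typing import Callable, Dict, List

# Each pipeline stage takes and returns a shared data dictionary, so a
# forward-model implementation can be swapped without touching the rest
# of the pipeline (illustrative sketch, not the real SIMPA interface).
Stage = Callable[[Dict], Dict]

def optical_forward_model(data: Dict) -> Dict:
    data["fluence"] = "simulated light transport"     # placeholder result
    return data

def acoustic_forward_model(data: Dict) -> Dict:
    data["pressure"] = "simulated wave propagation"   # placeholder result
    return data

def run_pipeline(stages: List[Stage], data: Dict) -> Dict:
    for stage in stages:
        data = stage(data)
    return data

result = run_pipeline(
    [optical_forward_model, acoustic_forward_model],
    {"tissue_model": "forearm"},   # hypothetical tissue model descriptor
)
print(sorted(result))
```

Because each stage only depends on the shared container, a Monte Carlo optical model could be replaced by a learned surrogate, or a k-Wave acoustic model by an analytical one, without modifying the surrounding pipeline.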
Results: To showcase the capabilities of SIMPA, we show examples in the context of photoacoustic imaging: the diversity of creatable tissue models, the customisability of a simulation pipeline, and the degree of realism of the simulations.
Conclusions: SIMPA is an open-source toolkit that can be used to simulate optical and acoustic imaging modalities. The code is available at: https://github.com/IMSY-DKFZ/simpa, and all of the examples and experiments in this paper can be reproduced using the code available at: https://github.com/IMSY-DKFZ/simpa_paper_experiments.
KEYWORDS: Data modeling, Monte Carlo methods, Scattering, Photoacoustic imaging, Optical simulations, Neural networks, Machine learning, Light scattering, In vivo imaging, Imaging spectroscopy