|
1. Introduction

Photoacoustic tomography (PAT) is a noninvasive imaging modality combining the benefits of optical contrast and ultrasonic resolution.1–6 This modality uses a short-pulsed laser as the radiation source to irradiate the biological tissue. The tissue absorbs the optical energy, resulting in a small temperature rise, which in turn generates pressure waves through photon absorption and thermoelastic expansion. These pressure (acoustic) waves are detected by ultrasound transducers placed at the boundary of the imaging domain. The major strength of photoacoustic (PA) imaging is that the detected acoustic waves undergo less scattering and attenuation in biological tissues, allowing them to propagate through thick tissue easily. The detected PA waves on the boundary are typically utilized to reconstruct the initial pressure distribution inside the tissue, which is proportional to the product of the optical fluence and the absorption coefficient. The optical absorption provides information about tissue components, such as melanin, bilirubin, lipids, water, hemoglobin, and oxyhemoglobin.7,8 These biochromophores are helpful in assessing the pathophysiological condition of the tissue. The applications of PA imaging include monitoring tissue health in the fields of cardiology,9 ophthalmology,10 oncology,11,12 dermatology,13,14 and neuroscience.15 The important step in PA tomographic imaging is the image reconstruction, which is an initial value problem: the aim is to estimate the initial pressure at time t = 0, given the boundary pressure data at times t > 0. Several reconstruction methods are available for estimating the initial pressure. The standard reconstruction algorithms include analytical algorithms [backprojection (BP) and delay-and-sum] and time-reversal (TR) algorithms.16–19 These are computationally efficient compared to model-based reconstruction algorithms but require a large amount of data to obtain meaningful results.
The requirement of a large amount of data results in either increased scan time or expensive instrumentation setups. Moreover, typical setups used for PA tomographic measurements record the ultrasound waves over an aperture that may not fully enclose the object,20–22 resulting in only limited data. Reconstruction of PA images in these limited data cases using either analytical or TR methods results in poor quality images, often lacking the much required contrast. Model-based image reconstruction algorithms have been proposed in the literature for these limited data cases, improving the quantitative accuracy of the reconstructed images.23–26 These algorithms, especially the iterative ones, also provide robustness to noise in the boundary data,16,27 making them attractive for real-time applications. These model-based iterative reconstruction algorithms tend to be computationally complex compared to analytical algorithms like linear backprojection (LBP) and require regularization to constrain the solution space. Often, the reconstructed image characteristics obtained by analytical algorithms are complementary to those of the model-based reconstruction algorithms.28–30 To improve PA imaging performance, earlier attempts included applying signal enhancement methods to the collected PA data.31–35 These methods typically apply deconvolution to the raw data to improve the recorded acoustic signal.31–35 Such approaches have limited utility (as will also be shown with an example using the method attempted in Ref. 33) compared to postprocessing of reconstructed PA images. In addition, applying an enhancement method to the recorded PA data may be computationally expensive, especially in cases where large amounts of data are collected.36 It would be ideal to develop a method that works independently of the amount of boundary data collected.
Postprocessing of reconstructed medical images is part of standard clinical diagnosis, which typically involves enhancing the contrast of images.37 In this step, the reconstructed images are typically either normalized in terms of the histogram or filtered to enhance the diagnostic accuracy. Specific to PA imaging, there were earlier attempts to perform blind deconvolution to improve the resolution in optical-resolution PA microscopy (OR-PAM).38 This blind deconvolution involves estimation of the blur kernel, or point spread function, using a Wiener filter type estimator, which needs prior information about the signal-to-noise ratio (SNR) of the input image.38 In continuation of these efforts, assuming that the regularization in model-based image reconstruction can be used to provide the blurring matrix, the image reconstruction was considered as a two-step process.28,39 The first step is model-based image reconstruction and the second is deblurring using the estimated model-resolution matrix via sparse recovery-based methods that utilize ℓ1-based regularization.28 Often, these postprocessing methods, especially deblurring/deconvolution steps, are computationally demanding, requiring computational times on par with the initial model-based reconstruction step.28,39 Recent advances in image fusion in computer vision have enabled researchers to effectively combine the best features from an input and a guiding image to provide an output image that is superior to both of them.40 Especially in areas such as multicamera and multifocus fusion, the results have been encouraging with guided filtering, which can be seen as a state-of-the-art image fusion method with little to no computational complexity.41 In this work, the same guided filtering42 based approach has been used to improve the reconstruction results obtained from various reconstruction schemes that are typically used in PA image reconstruction.
The proposed approach can also be seen as a two-step process: initially, one performs reconstruction using two reconstruction methods [examples could be LBP28,30,43 and total variation (TV) based regularization],44,45 and then the reconstruction results are combined using guided filtering to provide a reconstruction result superior to both of them. It is also demonstrated that standard denoising methods, such as wavelet thresholding and nonlocal means (NLM), which are typically applied in standard postprocessing of reconstructed images, provide very limited utility in the cases discussed here. It is shown using numerical and experimental phantoms along with in-vivo results that the guided filtering approach provides superior results compared to the established basis pursuit deconvolution (BPD) method.28,39 More importantly, these results also establish the utility of image fusion methods to improve PA image reconstruction. All discussion in this work is limited to two-dimensional imaging, as the aim is to show the utility of the proposed guided filtering approach in PA imaging.

2. Photoacoustic Image Reconstruction

The forward problem in PAT involves collecting the pressure data by ultrasound detectors on the boundary of the imaging domain, given an initial pressure distribution. The PA wave equation can be written as follows:6

∇²p(r, t) − (1/c²) ∂²p(r, t)/∂t² = −(β/C_p) ∂H(r, t)/∂t,   (1)

where p(r, t) is the pressure at position r and time t, c is the speed of sound in the medium, β is the thermal expansion coefficient, C_p is the specific heat capacity, and H represents the energy deposited per unit time per unit volume. The reconstruction (inverse) problem involves estimating the initial pressure [p_0(r) = p(r, t = 0)] inside the imaging domain, given the boundary measurements at times t > 0, making the reconstruction (inverse) problem equivalent to an initial value problem.

2.1. System Matrix Building

The numerical experimental setup carried out in this work is the same as in Refs. 29, 30, and 46. For completeness, it is briefly reviewed here.
The imaging domain has a dimension of n × n. It is vectorized by stacking the columns into a vector, represented as x (dimension n² × 1). The system matrix A has a dimension of m × n², where m is the number of detectors multiplied by the number of time samples per detector. Each column of the system matrix represents the impulse response corresponding to an entry in the vectorized image. The columns of the data are also stacked into a long vector b (dimension m × 1).29,30 The forward model for PA imaging can be written as follows:

b = A x,   (2)

where x is a long column vector having a dimension of n² × 1 and b is the measurement vector with a dimension of m × 1. The LBP image can be obtained as follows:28,43

x_LBP = A^T b,   (3)

where A^T indicates the transpose of the matrix A. BP methods are noniterative in nature, making them computationally efficient. The drawback of BP methods is that they provide only qualitative results, especially in limited data cases, and hence are avoided for quantitative imaging.27 This BP image (x_LBP) is typically used as an initial guess for iterative model-based image reconstruction methods.26,30 In this work, x_LBP was used as the guiding image to improve the reconstruction results obtained from model-based reconstruction methods.

2.2. k-Wave Time Reversal Method

The TR is a standard one-step image reconstruction method that can be performed using the open-source k-wave toolbox.47 In this method, the solution of the wave equation vanishes beyond the time point T (the longest time for which the PA wave travels inside the domain).48 Zero initial conditions are imposed and the model is solved backward in time, thus giving the solution [p_0(r)] at t = 0.
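For concreteness, the forward model of Eq. (2) and the LBP step of Eq. (3) can be sketched in a few lines of NumPy. Note that the system matrix here is a random stand-in (an assumption for illustration only); in practice each column of A holds the simulated impulse response of one image pixel.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                  # image is n x n pixels
m = 120                                # detectors x time samples
A = rng.standard_normal((m, n * n))    # stand-in system matrix (m x n^2)

x_true = np.zeros(n * n)
x_true[27] = 1.0                       # single-pixel initial pressure
b = A @ x_true                         # forward model, Eq. (2): b = A x

x_lbp = A.T @ b                        # linear backprojection, Eq. (3)
img_lbp = x_lbp.reshape(n, n)          # back to the image grid (column-stacking is a convention)
print(img_lbp.shape)
```

Because x_LBP = A^T A x_true, the backprojected image peaks at the true pixel but is smeared by the off-diagonal structure of A^T A, which is why LBP is treated as qualitative and used as a guiding image rather than a quantitative result.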
Even though TR provides a model-based solution, often the reconstruction results depend on the amount of available data as well as the bandwidth of the acoustic detectors.18,48 As the boundary data considered here are limited, interpolated data were used to estimate the initial pressure inside the imaging domain.47 This method was utilized to reconstruct the initial pressure distribution using both limited bandwidth and full bandwidth data. Note that, as all experimental acoustic detectors are band-limited, the full bandwidth results are restricted to the numerical phantom cases considered here.

2.3. Lanczos Tikhonov Regularization Method

In cases of limited data, the PA tomographic image reconstruction problem becomes ill-conditioned. The Tikhonov regularization method is the most commonly used regularization method for solving these ill-conditioned problems. It works on the principle of minimizing the residual of the linear equations along with a smooth regularization term. The objective function for Tikhonov regularization can be written as follows:

Ω = ||A x − b||_2² + λ ||x||_2²,   (4)

where λ is the regularization parameter, which provides a balance between the residual and the expected initial pressure distribution, Ω is the function to be minimized, and || · ||_2 represents the ℓ2-norm. Minimizing Eq. (4) with respect to x results in

x_Tikh = (A^T A + λ I)^(−1) A^T b.   (5)

The regularization parameter controls the characteristics of the reconstructed initial pressure distribution. Higher regularization tends to oversmooth the image, while a lower value of the regularization parameter amplifies the noise in the image. Solving this linear system of equations [Eq. (5)] directly requires on the order of (n²)³ operations for the problem at hand, making it computationally demanding. Lanczos bidiagonalization49 provides dimensionality reduction of the problem, making the reconstruction problem more tractable in terms of the required number of operations as well as storage. This method was previously described in Ref.
29 for the estimation of the optimal regularization parameter (λ) in the Tikhonov framework utilizing the QR decomposition. The dimensionality reduction is performed on the system matrix A via Lanczos bidiagonalization, as given in Ref. 49. The left and right Lanczos matrices and the bidiagonal matrix are related to the system matrix as follows:29,46

U_(k+1) (β_1 e_1) = b,   (6)
A V_k = U_(k+1) B_k,   (7)
A^T U_(k+1) = V_k B_k^T + α_(k+1) v_(k+1) e_(k+1)^T,   (8)

where B_k represents the lower bidiagonal matrix, U_(k+1) and V_k represent the left and right orthogonal Lanczos matrices, and β_1 is the ℓ2-norm of b. The dimensions of the left and right orthogonal Lanczos matrices are m × (k + 1) and n² × k, respectively, with k representing the number of iterations for which bidiagonalization was performed. The unit vector of dimension (k + 1) is represented as e_1 (= 1 at the first entry and 0 otherwise). Equations (6)–(8) can be utilized to rewrite x as follows:

x = V_k y,   (9)

where y represents the dimensionality reduced version of x, with dimension k × 1. Substituting Eq. (9) in Eq. (4) and using the orthogonality of the Lanczos matrices, the cost function can be rewritten as follows:29,46

Ω = ||B_k y − β_1 e_1||_2² + λ ||y||_2².   (10)

Considering the first-order condition, the solution becomes

y_λ = (B_k^T B_k + λ I)^(−1) B_k^T (β_1 e_1),   (11)

where y_λ is the estimate of y for a fixed k and a given regularization parameter λ.50 Note that V_k y_λ equals x_Tikh only if the Krylov subspace has captured the full solution for the given value of λ; in other cases, V_k y_λ is an approximation of x_Tikh. There exists an optimal k, which reduces the error between V_k y_λ and x_Tikh to within the precision limits.46,49 Since the dimensionality reduction is obtained using the Lanczos algorithm, this method offers computationally efficient solutions compared to the other methods, including the solution given in Eq. (5). The performance of the model-based reconstruction scheme depends on the choice of reconstruction parameters, such as λ and k.51,52 Error estimate methods have been utilized for finding the number of iterations (k) for the Lanczos method51–53 along with λ (readers are encouraged to refer to Bhatt et al.53 for more details; the approach is briefly reviewed here). The residual of the system is defined as follows:

r = b − A x.   (12)

The norm of the error estimate can be written in terms of the QR factors of the bidiagonal system [Eq. (13)];53 the explicit expressions for the intermediate quantities are given in Ref. 53.
The error estimate for x then follows in closed form [Eq. (14)].53 The regularization parameter is found by minimizing Eq. (14):

λ_opt = arg min_λ ||e_λ||_2,   (15)

and for the minimizing value of λ, the corresponding number of Lanczos steps k is calculated. This error estimate approach53 was utilized to obtain the optimal values of λ and k in this work, and the corresponding reconstructed image is represented by x_opt. For the case of λ and k chosen heuristically, the reconstructed image is represented by x_heu.

2.4. Basis Pursuit Deconvolution Method

In Ref. 28, BPD was utilized to deblur the solution obtained using the Lanczos-Tikhonov regularization method (discussed in Sec. 2.3). As regularization blurs the solution, the effect of regularization can be overcome by the BPD method, so it is typically used for deblurring x_Tikh. It utilizes the split augmented Lagrangian shrinkage algorithm (SALSA)54 to minimize the following objective function, which uses ℓ1-type regularization to promote sharp features:

Ω_BPD = ||x_Tikh − M x||_2² + λ_1 ||x||_1,   (16)

where M is the model resolution matrix, which represents the blur matrix, defined as

M = (A^T A + λ I)^(−1) A^T A.   (17)

The solution obtained by minimizing Eq. (16) with respect to x (solved by utilizing Algorithm 1) is denoted as x_BPD, which becomes the final deblurred solution [Eq. (18)].

Algorithm 1: SALSA.
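As a concrete illustration of the reduction in Eqs. (6)–(11), the following is a minimal NumPy sketch of Golub-Kahan (Lanczos) bidiagonalization with full reorthogonalization, followed by the reduced-space Tikhonov solve. It is a simplified reading of the scheme of Refs. 29 and 49, not the authors' implementation; for k equal to the full dimension it coincides with the direct solution of Eq. (5).

```python
import numpy as np

def lanczos_tikhonov(A, b, lam, k):
    """Reduced-space Tikhonov solve: bidiagonalize A [Eqs. (6)-(8)],
    solve Eq. (11) in the k-dimensional space, map back via Eq. (9)."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))            # lower bidiagonal B_k
    beta1 = np.linalg.norm(b)
    U[:, 0] = b / beta1                 # Eq. (6): beta_1 u_1 = b
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        v -= V[:, :j] @ (V[:, :j].T @ v)          # full reorthogonalization
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)  # reorthogonalize u as well
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
        B[j + 1, j] = beta
    # Eq. (11): y = (B^T B + lam I)^{-1} B^T (beta_1 e_1)
    rhs = B.T @ (beta1 * np.eye(k + 1)[:, 0])
    y = np.linalg.solve(B.T @ B + lam * np.eye(k), rhs)
    return V @ y                                   # Eq. (9): x = V_k y
```

The computational gain comes from solving a k × k system instead of an n² × n² one; in practice k is far smaller than the image dimension.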
2.5. Total Variation Regularization Method

The Tikhonov regularization [Eq. (4) or Eq. (10)] promotes a smooth solution due to the quadratic nature of the regularization. The TV regularization method solves the ill-posed problem by introducing a regularization term that minimizes the total variation in the solution (x). The cost function can be written as follows:44,45

Ω_TV = ||A x − b||_2² + λ_TV TV(x),   (19)

where λ_TV is the regularization parameter and TV(x) denotes the isotropic total variation, utilizing the form in Ref. 55. For TV regularization, the alternating direction method of multipliers framework given in Ref. 56 was utilized, and the algorithm known as SALSA54 (given in Algorithm 1) was deployed in this work. The resulting reconstructed image is denoted by x_TV.

2.6. Denoising Using Wavelet Thresholding (Hard and Soft)

One of the standard techniques used for denoising of images is wavelet thresholding. Its applications range from noise reduction through signal and image compression up to signal recognition.57 Thresholding works by setting the high frequency sub-band coefficients that are less than a threshold to zero.57 Both hard thresholding (HT) and soft thresholding (ST) have been used in this work.58 The advantage of this method is that the denoising approach is model-free and can be applied as a postprocessing step. As the aim of this work was to show that guided filtering performs better than the standard denoising methods, here, the input image (denoted x_rec) was chosen as the reconstruction with the best image quality among the reconstructed results. The input image is transformed into the wavelet domain:

ω = DWT(x_rec),   (20)

where ω represents the image in the wavelet domain and DWT stands for the discrete wavelet transform (here, Haar wavelets were utilized).58,59 After thresholding is performed on all wavelet coefficients, the coefficients are represented as ω̃. The image is then transformed back from the wavelet domain, represented as

x_thr = IDWT(ω̃),   (21)

with IDWT representing the inverse DWT.
As noise is expected to reside in the higher order wavelet coefficients, this thresholding eliminates the noise.

2.6.1. Hard thresholding

HT works on the rule that if a wavelet coefficient value is less than a threshold, it is set to zero:57

ω̃ = ω if |ω| ≥ τ;  ω̃ = 0 otherwise,   (22)

where τ is the threshold value, ω is the input wavelet coefficient, and ω̃ is the output wavelet coefficient. A large value of the threshold leads to a large bias error, while a smaller value increases the variance.

2.6.2. Soft thresholding

ST is defined as follows:57

ω̃ = L(ω) if |ω| ≥ τ;  ω̃ = 0 otherwise,   (23)

where L(ω) is a linear function, a straight line whose slope depends on τ,57 defined as follows:

L(ω) = sign(ω)(|ω| − τ).   (24)

ST provides a better alternative to HT. In simple terms, ST can be considered a linearly transformed version of the HT result, aiming to enhance the signal content in the reconstructed image along with suppressing the noise. Note that τ was chosen in this work based on the SNR of the raw/boundary data.

2.7. Denoising Using Nonlocal Means Filter

The other standard method for denoising of images is NLM filtering.60 Similar to the wavelet denoising method, this also does not rely on any imaging model and can be applied as a postprocessing method to denoise images corrupted with Gaussian noise. It is defined as follows:

NL(x)(i) = Σ_j w(i, j) x(j),   (25)

where the weights w(i, j) depend on the similarity between pixels i and j and satisfy the conditions 0 ≤ w(i, j) ≤ 1 and Σ_j w(i, j) = 1. The similarity between two pixels i and j depends on the closeness of the intensity gray level vectors x(N_i) and x(N_j), where N_i denotes a grid of fixed size centered at pixel i. The similarity is a decreasing function of the weighted Euclidean distance ||x(N_i) − x(N_j)||²_(2,a), where a is the standard deviation of the Gaussian kernel.60 The weights in Eq. (25) are defined as follows:

w(i, j) = (1/Z(i)) exp(−||x(N_i) − x(N_j)||²_(2,a) / h²),   (26)

with Z(i) being the normalizing constant, given as

Z(i) = Σ_j exp(−||x(N_i) − x(N_j)||²_(2,a) / h²),   (27)

where h is the degree of filtering, which controls the decay of the exponential function. The filter parameters were chosen as described in Ref. 60.
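The thresholding rules of Eqs. (22)–(24) reduce to two one-line operations on the wavelet coefficients; a minimal NumPy sketch (the coefficient values below are arbitrary illustrations, and the DWT/IDWT steps of Eqs. (20)–(21) are omitted):

```python
import numpy as np

def hard_threshold(w, tau):
    # Eq. (22): keep coefficients whose magnitude is at least tau, zero the rest
    return np.where(np.abs(w) >= tau, w, 0.0)

def soft_threshold(w, tau):
    # Eqs. (23)-(24): surviving coefficients are also shrunk toward zero by tau
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

w = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])   # toy wavelet coefficients
print(hard_threshold(w, 1.0))
print(soft_threshold(w, 1.0))
```

The difference is visible on the surviving coefficients: HT leaves them untouched (−3.0 stays −3.0), while ST shrinks each by τ (−3.0 becomes −2.0), which is why ST trades a small bias for lower variance.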
The NLM-based filtering is considered a state-of-the-art denoising method that is purely image driven and has been widely applied to medical images.

2.8. Image-Guided Filtering (Proposed Method)

In many applications of computer vision, filtering is used to enhance the signal content and suppress the noise in images. Various linear translation-invariant (LTI) filters, such as Gaussian, Laplacian, and NLM (discussed above), have been used in image restoration, deblurring/sharpening of images, etc.61 These LTI filters are spatially invariant and independent of the imaging model. They are deployed depending on the application, and their performance varies depending on the extra information that can be used in the filtering process, which may be provided by a different image. For example, in anisotropic diffusion,62 gradient information has been utilized as extra information, which helps avoid smoothing across edges. The weighted least squares filter63 utilizes the filtering input itself as the guidance. The guiding/guidance image can also be another image instead of the filtering input, as in image matting,64 haze removal,65 and colorization.66 This process optimizes a quadratic function involving large sparse matrices and is computationally demanding,63–66 limiting its utility in real-time applications. Another way to make use of the guiding image is to explicitly introduce it in the filter kernels, as in the bilateral filter.67–69 Here, the output becomes a weighted average of the nearby pixels, and the weights are calculated using the intensity in the guiding image. The bilateral filter can smooth small fluctuations in the image and preserve edges, but it is often prone to gradient reversal artifacts.63 Thus, there is a need for a filter that can perform image fusion, is computationally efficient, and simultaneously enhances the information of the input image based on the guiding image.
A recently proposed guided filter produces the output by combining the content of an input (to-be-filtered) image and a guiding image, by performing a linear transform of the guiding image.42 It has better behavior compared to the bilateral filter (no gradient reversal) and it is not merely a smoothing filter.42 It can transfer structures of the guiding image to the input (to-be-filtered) image, which has found applications in filtering techniques such as dehazing and guided feathering. Also, it uses a fast linear-time algorithm, which is invariant to the kernel size and the intensity range of the image. The guided filter has become a state-of-the-art method in the computer vision and computer graphics fields, with specific applications including image fusion,41 edge-aware smoothing, structure transferring, high dynamic range compression, image feathering/matting, dehazing,70 detail enhancement,71 flash/no-flash denoising,72 and joint upsampling. These applications have encouraged applying guided filtering in the PA image reconstruction framework. In PA imaging, the reconstructed image characteristics depend on the reconstruction technique, and both analytical and model-based reconstruction methods are widely used. Moreover, each reconstruction scheme has a different computational complexity; for example, the analytical ones are fast and the model-based ones are relatively computationally complex. It would be ideal to develop a fusion method that can combine the features obtained by two different reconstruction methods to produce a superior fused/filtered image. In this work, with the help of guided filtering, the reconstructed image details were enhanced by combining results from two different reconstruction methods that provide distinct image characteristics. For example, the LBP image typically tends to be smooth but is prone to streak artifacts and also lacks sharp features, especially in limited data cases.
The TV or any other sparse reconstruction method (such as Lanczos Tikhonov) will be able to provide these sharp features, but often they also enhance the noise present in the boundary data. It would be ideal to develop an image fusion method that can effectively fuse BP (analytical type) and model-based (here, either TV or Lanczos Tikhonov) reconstructed images, which is proposed here with the utility of the image-guided filter. Joint image filtering (involving a guiding and a to-be-filtered image) frameworks, including the current one, are derived from minimizing an objective function involving a fidelity term and a regularization term. In the guided filtering case, the regularization is imposed implicitly, with the objective function containing the fidelity term only; with the addition of a smoothness term, the regularization scheme becomes explicit. There are methods that utilize explicit regularization73 in joint image filtering, but they are often computationally more expensive compared to implicit methods. In the implicit regularization case (applicable to the proposed method), the guiding image is utilized within a local window to implicitly produce a weight/filter function to be applied to the input image. This allows the structure of the guiding image to be transferred to the input image (similar to the bilateral filter67), with important features like edges as well as corners preserved while allowing noise suppression. This results in a spatially variant filtering kernel, depending on the local window, that is utilized in the guided filtering approach. Simply put, the weighted averaging regularizes the filtering process, resulting in a regularized output image. As this process is applied locally, all implicit methods are easy to implement, involving little to no computational burden. A detailed discussion of implicit and explicit image filtering approaches is presented in Ref. 73 for the interested reader.
Specifically, the input image for the image-guided filter can be the Lanczos Tikhonov image obtained either using the heuristic (x_heu) or optimal (x_opt) parameters, or the TV image (x_TV), which needs to be filtered (also known as the to-be-guided image, represented by x_TG). The guiding image, represented by x_G and obtained via Eq. (3), can be used to make x_TG more structured and less noisy, resulting in the filtered image (x_F). In this approach, the output image (x_F) is a linear transformation of x_G in a window ω_k centered at pixel k and is given as follows:41,42

x_F(i) = a_k x_G(i) + b_k,  ∀ i ∈ ω_k,   (28)

where a_k and b_k are linear coefficients that are constant in ω_k and need to be determined. The model ensures that x_F has an edge only if x_G has an edge, as ∇x_F = a_k ∇x_G. Suppose n is the amount of noise present in the image; then the reconstructed image can be represented as follows:

x_TG = x_F + n.   (29)

The determination of the coefficients (a_k and b_k) is achieved by minimizing the cost function obtained after combining Eqs. (28) and (29), written as follows:41

E(a_k, b_k) = Σ_(i ∈ ω_k) [(a_k x_G(i) + b_k − x_TG(i))² + ε a_k²],   (30)

where ε is the regularization parameter penalizing larger values of the coefficient a_k. The coefficients obtained after minimizing Eq. (30) are given as follows:41

a_k = [(1/|ω|) Σ_(i ∈ ω_k) x_G(i) x_TG(i) − μ_k x̄_TG,k] / (σ_k² + ε),   (31)
b_k = x̄_TG,k − a_k μ_k,   (32)

with μ_k and σ_k² being the mean and variance of x_G, respectively, in ω_k, |ω| the number of pixels in ω_k, and x̄_TG,k the mean of x_TG in ω_k. For the modified filter, Eqs. (31) and (32) are changed as in Ref. 74. Since a pixel i is involved in all the overlapping windows ω_k that contain it, x_F(i) is computed using an average over all these windows, and the output thus becomes41

x_F(i) = (1/|ω|) Σ_(k: i ∈ ω_k) (a_k x_G(i) + b_k).   (33)

Due to the symmetry of the averaging windows, Eq. (33) can be written as follows:

x_F(i) = ā_i x_G(i) + b̄_i,   (34)

where ā_i and b̄_i are the averages of a_k and b_k over all windows containing pixel i. Algorithm 2 lists all the important steps, including the variations for the original guided filter. Note that in Algorithm 2, α acts as the control parameter and β is the parameter helping to make a sharper transition from the low-pass to the high-pass filtering mode.

Algorithm 2: Modified guided filter (x_TG, x_G, r, ε, α, β), with f_a(.) representing performing operation “a” on the arguments.
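For reference, the original guided filter of Eqs. (28)–(34) can be sketched as below; this is a minimal reading of Ref. 42, with the α/β modification of Algorithm 2 (Ref. 74) omitted. Box filtering via an integral image keeps the cost linear in the number of pixels, independent of the window radius r.

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window (clipped at borders) via an integral image."""
    h, w = img.shape
    S = np.zeros((h + 1, w + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    out = np.zeros((h, w))
    for i in range(h):
        y0, y1 = max(i - r, 0), min(i + r + 1, h)
        for j in range(w):
            x0, x1 = max(j - r, 0), min(j + r + 1, w)
            s = S[y1, x1] - S[y0, x1] - S[y1, x0] + S[y0, x0]
            out[i, j] = s / ((y1 - y0) * (x1 - x0))
    return out

def guided_filter(p, I, r, eps):
    """Original guided filter: p is the input (x_TG), I the guide (x_G)."""
    mean_I, mean_p = box_filter(I, r), box_filter(p, r)
    cov_Ip = box_filter(I * p, r) - mean_I * mean_p
    var_I = box_filter(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)                 # Eq. (31)
    b = mean_p - a * mean_I                    # Eq. (32)
    # Eq. (34): average the coefficients over all windows containing each pixel
    return box_filter(a, r) * I + box_filter(b, r)
```

With self-guidance (p = I) and a small ε, the filter essentially reproduces the input, which is the edge-preserving behavior discussed below; increasing ε pushes a toward 0 and the output toward a local average of p.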
The guided filter has the distinct advantage of improving image quality, as it is an edge preserving, gradient preserving, and structure transferring filter.41,42 This method is applied as a postprocessing step after reconstruction, and its performance is compared against the wavelet-based ST and HT techniques and the NLM filtering based denoising method. The edge preserving property can be explained by assuming that the guiding image is the same as the input (to-be-guided) image. In this case, the guided filter coefficients can be written as a_k = σ_k²/(σ_k² + ε) and b_k = (1 − a_k) μ_k. When ε > 0, two cases arise: in a high variance region (σ_k² ≫ ε), a_k ≈ 1 and b_k ≈ 0, so the pixel values, and hence the edges, are preserved; in a flat region (σ_k² ≪ ε), a_k ≈ 0 and b_k ≈ μ_k, so the output becomes a local average and the noise is smoothed.
The guided filter performs gradient preserving since it uses a patchwise model. For the self-guiding case, ā_i and b̄_i are locally near constant; utilizing Eq. (34), one can write ∇x_F ≈ ā ∇x_TG with 0 ≤ ā ≤ 1, which implies that ∇x_F and ∇x_TG are always in the same direction; thus, the gradient is always preserved.

2.9. Automated Wavelet Denoising of Recorded Photoacoustic Data

One of the standard denoising methods in the data domain is wavelet denoising,33 using the maximal overlap DWT (MODWT),75 which is a nonorthogonal transform. The benefit of using the MODWT is that it is applicable for any sample size (not restricted to powers of 2); hence zero padding is not necessary and arbitrary truncation is eliminated. The MODWT, in contrast to the DWT, forms a zero-phase filter, resulting in features lining up with the original signal. For a complete discussion of the MODWT, please see Ref. 75; with respect to its application to PA imaging, the reader is encouraged to see Ref. 33. In this work, we have utilized the MODWT to implement wavelet smoothing on the noisy PA signals, which automatically sets a denoising threshold using the universal threshold criterion.76 Earlier attempts using this method have been shown to improve the image quality by 22%.33

3. Figures of Merit

To quantitatively assess the reconstructed image quality using the methods discussed so far, image metrics such as root mean square error (RMSE), contrast-to-noise ratio (CNR), and SNR were utilized in this study. The first two were utilized in the cases of numerical phantoms, where the target/expected initial pressure distribution is available. The latter was applied in the experimental phantom cases, where the ground truth is not available.
3.1. Root Mean Square Error

The RMSE is a common metric for assessing the performance of reconstructed images in PAT23,77–79 and is defined as follows:

RMSE = sqrt[ Σ_i (x_exp(i) − x_rec(i))² / N ],   (36)

where N is the total number of pixels in the imaging domain, x_exp is the expected initial pressure distribution, and x_rec is the reconstructed (or filtered) initial pressure distribution. A lower RMSE value indicates closer agreement between the reconstructed/denoised image and the expected image.

3.2. Contrast-to-Noise Ratio

The CNR is defined as follows:36,80,81

CNR = (μ_roi − μ_back) / (σ_roi² a_roi + σ_back² a_back)^(1/2),   (37)

where μ and σ² represent the mean and the variance corresponding to the region of interest (roi) and the background (back) in the reconstructed/filtered initial pressure. The roi refers to the region of the phantom where the initial pressure values are greater than zero, and the background refers to the region of the phantom where it is 0. The terms a_roi = A_roi/A_total and a_back = A_back/A_total represent the area ratios,80 where A_roi is the area of the region of interest, A_back is the area of the background, and A_total is the sum of the two; these area ratios act as the noise weights for the region of interest and the background, respectively. A higher value of CNR represents better contrast recovery by the reconstruction/filtering method.81

3.3. Signal-to-Noise Ratio

The SNR is expressed as follows:82

SNR (dB) = 20 log10(p_peak / σ_total),   (38)

where p_peak denotes the peak initial pressure value and σ_total corresponds to the standard deviation of the whole image. This is a standard metric for quantifying image quality.61 A higher SNR represents less noise in the reconstructed image and thus better performance of the reconstruction/filtering method. To differentiate it from the SNR of the boundary data, this figure of merit for the reconstructed image is denoted by SNR_recon.

4. Numerical and Experimental Studies

Measuring the actual initial pressure distribution in real experiments is challenging. Comparing the performance of different reconstruction algorithms in these cases is therefore difficult, so, typically, numerical phantoms are deployed for effective comparison.
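The three figures of merit of Sec. 3 can be written compactly as below; the area-ratio noise weights in the CNR and the 20 log10 form of the SNR follow our reading of the definitions above (Refs. 36, 80–82), and the toy arrays used in any checks are illustrative only.

```python
import numpy as np

def rmse(x_exp, x_rec):
    # Eq. (36): root mean square error over all N pixels
    return np.sqrt(np.mean((x_exp - x_rec) ** 2))

def cnr(x_rec, roi_mask):
    # Eq. (37): contrast between roi and background, normalized by
    # area-ratio-weighted variances
    back_mask = ~roi_mask
    mu_roi, mu_back = x_rec[roi_mask].mean(), x_rec[back_mask].mean()
    a_roi = roi_mask.sum() / roi_mask.size
    a_back = back_mask.sum() / roi_mask.size
    noise = np.sqrt(x_rec[roi_mask].var() * a_roi + x_rec[back_mask].var() * a_back)
    return (mu_roi - mu_back) / noise

def snr_db(x_rec):
    # Eq. (38): peak value over the standard deviation of the whole image
    return 20.0 * np.log10(x_rec.max() / x_rec.std())
```

These functions take the reconstructed (or filtered) image directly, so the same code evaluates every method discussed in Sec. 2 on a common footing.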
In this work, four such numerical phantoms were considered. The detectors were placed on a circle of radius 22 mm around the computational imaging grid. The schematic diagram showing the data acquisition setup along with the acoustic detector placement is given in Fig. 1. The data were generated on a finer grid and the reconstruction was performed on a coarser grid to mimic a real experimental scenario. The numerical phantoms span the imaging domain. The data generated using the finer grid were contaminated with white Gaussian noise to result in SNR levels ranging from 20 to 60 dB. The open-source k-wave toolbox47 in MATLAB was used for generating the data. The sampling frequency used for the data collection was 20 MHz, and the number of time samples for each detector was 500. One hundred acoustic detectors placed equidistantly on the circle (as depicted in Fig. 1), with a center frequency of 2.25 MHz and 70% bandwidth, collected the boundary data. The system matrix (A) thus formed has m = 100 × 500 = 50,000 rows. The medium was assumed to be homogeneous, with a constant speed of sound and no dispersion or absorption of sound. Initially, three numerical phantoms with a maximum initial pressure of 1 kPa [blood vessel (BV), modified Derenzo, and PAT, as given in Figs. 3(a)–3(c), respectively] were used to show the effectiveness of the proposed method. The numerical BV phantom [Fig. 3(a)] consists of thick and thin vessels mimicking typical BV structures. The modified Derenzo phantom [Fig. 3(b)], which consists of circular objects of varying diameter grouped according to size, was used to assess the performance of the reconstruction methods in terms of target size. Another phantom, consisting of the letters “PAT” [Fig. 3(c)], was used to determine the ability of the reconstruction methods to recover sharp edges.
The fourth numerical phantom mimics a single slice of a real breast, created using contrast-enhanced imaging data.83,84 In this phantom, the initial pressure was varied from 0 to 5 and the expected initial pressure distribution is given in Fig. 3(d). Experiments were also performed using 50 detectors with the BV phantom to assess the ability to recover the structures with sparse data (here, the dimension of and is and , respectively). The schematic of the experimental setup for the collection of the PA data is shown in Fig. 2(a); it is similar to Fig. 1(e) of Ref. 85 (as well as Fig. 2 in Ref. 30). A Q-switched Nd:YAG laser (Continuum, Surelite Ex) was used to deliver 5-ns laser pulses at a 10-Hz repetition rate at a wavelength of 532 nm. Four right-angle uncoated prisms (PS911, Thorlabs) and one uncoated planoconcave lens L1 (LC1715, Thorlabs) were utilized to deliver the laser pulses to the sample. The laser energy density on the phantom was , which is within the safety limit (: ANSI safety limit86). A triangular-shaped horse-hair phantom [photograph shown in Fig. 2(c)], having a side length of 10 mm and a hair diameter of 0.15 mm, was utilized. The horse-hair phantom was attached to pipette tips adhered to an acrylic slab.87 The PA data were collected continuously around the horse-hair phantom over a full 360 deg using a 2.25-MHz flat ultrasound transducer (Olympus NDT, V306-SU) with a 13-mm-diameter active area and 70% nominal bandwidth. Another experimental phantom, consisting of circular tubes, was made using low-density polyethylene tubes (5-mm inner diameter) filled with black Indian ink [photograph shown in Fig. 2(b)]. The tubes were placed at 0 and 15 mm from the scanning center and affixed to the bottom of the acrylic slab. The reconstructed slice in these cases is the middle slice of the imaging phantom. Using the same setup as in Fig.
2, in vivo rat brain imaging (of a healthy animal) was carried out at 1064-nm wavelength, as the light penetration is deeper in the near-infrared region. The laser energy density on the animal head (skin) was , which was well below the ANSI safety limit of at 1064 nm. Healthy female rats weighing were procured from InVivos Pte Ltd. to conduct this in vivo animal experiment. All animal experiments were carried out according to the guidelines and regulations approved by the Institutional Animal Care and Use Committee of Nanyang Technological University, Singapore (Animal Protocol Number ARF-SBS/NIE-A0263). Before the experiments, the rats were anesthetized using a mixture of ketamine () and xylazine () injected intraperitoneally at a dosage of . The animals were placed in the PA scanner after trimming the hair on the head and epilating using hair removal cream. The animal was maintained under anesthesia during the experiments using 0.75% isoflurane gas (Medical Plus Pte Ltd., Singapore) along with oxygen (). The anesthesia mixture was delivered through a nose cone with a breathing mask covering the nose and mouth of the animal. A custom-made animal holder was used to place the animal in a sitting position on its abdomen, and surgical tapes were used to hold the animal in this position. The animal, along with the holder, was then mounted on a translational stage during the experiment; the height of the animal was adjusted using this stage to set the scanning plane of the rat brain. Photographs of the lateral plane with and without the skin layer are shown in Figs. 2(d) and 2(e), respectively. Animals were euthanized after the experiment by injecting an overdose of Valabarb (sodium pentobarbitone) intraperitoneally.
The acquired PA signals were first amplified and filtered using a pulse amplifier (Olympus-NDT, 5072PR) and then recorded using a data acquisition (DAQ) card (GaGe, CompuScope 4227) on a single channel with a sampling frequency of 25 MHz inside a desktop computer (Intel Xeon 3.7 GHz 64-bit processor, 16 GB RAM, running the Windows 10 operating system). The synchronization of data acquisition with laser illumination was achieved using a sync signal from the laser. The reconstructed PA imaging region contains and has a size of . For the hair and tube phantoms, 2400 A-lines were collected continuously over a full 360 deg (acquisition time of 240 s at a rotational speed of ) and averaged in groups of six, resulting in 400 detected signals. The data collected for the in vivo animal brain consisted of 4800 A-lines averaged over 48, resulting in 100 detected signals acquired continuously over a full 360 deg (acquisition time of 480 s at a rotational speed of ). This averaging of A-lines reduced the effect of energy fluctuations across multiple firings of the laser. The ultrasound transducer and the phantom were immersed in water to enable ultrasound coupling. The distance from the center of the system to the face of the ultrasonic transducer was 37.02 mm for the hair phantom, 38.22 mm for the tube phantom, and 50.76 mm for in vivo imaging. The speed of sound was considered to be in all cases. The PA data were acquired with a sampling frequency of 25 MHz (1024 samples) and subsampled at half the rate, 12.5 MHz, to result in 512 time samples. The system matrix had a dimension of 51,200 × 40,000 (51,200: 100 detectors, each collecting 512 time samples; 40,000: the number of pixels in the imaging domain). Determining the actual initial pressure rise (target values) in these experiments is not feasible, so the performance of each reconstruction method was quantitatively compared with SNR_r [Eq.
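The A-line averaging described above (2400 A-lines averaged in groups of six to give 400 signals, or 4800 in groups of 48 to give 100) amounts to a reshape-and-mean over consecutive acquisitions. A small sketch, assuming consecutive A-lines at nearly the same angle are the ones averaged (the function name is ours):

```python
import numpy as np

def average_alines(data, group):
    """Average consecutive A-lines in groups of `group` to suppress
    pulse-to-pulse laser energy fluctuations.

    data: (n_alines, n_time_samples) array; n_alines must divide by group.
    """
    n, t = data.shape
    assert n % group == 0, "number of A-lines must be a multiple of the group size"
    return data.reshape(n // group, group, t).mean(axis=1)
```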
(38)] as the figure of merit. Note that a Linux workstation with 32 cores of Intel Xeon processors at 3.1 GHz and 128 GB RAM was used for all computations performed in this work.

5. Results and Discussion

For completeness, in the numerical phantom cases, the TR method (based on k-Wave) was initially applied to full-bandwidth data collected from 100 detectors with an SNR of 40 dB; these results are compiled in Figs. 3(e)–3(h). As the number of detectors is still 100 (limited), the corners of these images show blurring due to the partial volume effect. The figures of merit—RMSE and CNR—for these results, along with those for data SNRs of 20 and 60 dB, are compiled in Table 2. Note that as real acoustic detectors are always band-limited, it is not possible to obtain these results in the experimental and in vivo cases. The initial pressure distribution reconstructed from limited-bandwidth data collected from 100 detectors with an SNR of 40 dB for the numerical BV phantom [Fig. 3(a)] using the k-Wave TR method is shown in Fig. 4(a), and using LBP in Fig. 4(b). The result obtained using the Lanczos Tikhonov method with the heuristic parameters ( and ), , is shown in Fig. 4(c). Note that the same and values were used for the other experiments in obtaining . The corresponding reconstruction using the Lanczos Tikhonov method with the reconstruction parameters chosen by the error estimate method, known as , is given in Fig. 4(d). One advantage of the BPD method lies in its ability to improve reconstruction results obtained with a heuristic choice of reconstruction parameters,28 so the reconstruction result presented in Fig. 4(c) was deconvolved using BPD and the output is shown in Fig. 4(e). The reconstruction result using TV regularization, , is provided in Fig. 4(f). The proposed image-guided filtering results with the input images being , , and are given correspondingly in Figs. 4(j)–4(l), with the guiding image being the LBP result [Fig. 4(b)].
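The Tikhonov-regularized reconstruction underlying the results above solves min ||Ax − b||² + λ||x||² for the system matrix A and boundary data b. A minimal dense sketch via the normal equations follows; it is a stand-in for illustration only, since the paper uses Lanczos bidiagonalization to handle the large system matrix, and the regularization parameter `lam` here is hypothetical:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Minimise ||A x - b||^2 + lam * ||x||^2 via the normal equations:
    x = (A^T A + lam I)^{-1} A^T b.

    Dense stand-in for the Lanczos-based solver used on the full-size problem.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

As `lam` shrinks toward zero, the solution approaches the (unregularized) least-squares solution, which is why parameter choice (heuristic versus error-estimate based) matters for the ill-conditioned PAT problem.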
For obtaining these results, the value of was chosen to be . The patch size was taken as one (corresponding to a neighborhood), while the values of and were 1.05. The results of the standard denoising methods applied to [Fig. 4(d)] using NLM filtering and wavelet thresholding (ST and HT) are shown in Figs. 4(g)–4(i), respectively. The computational time recorded for reconstructing these images is listed in Table 1. The third column of Table 2 provides the recorded figures of merit, RMSE and CNR, for these results. Table 1 Recorded computational time (in seconds) for the results presented in Fig. 4.
Table 2 The figures of merit—RMSE and CNR—for the methods discussed in this work with the SNR of the data being 20, 40, and 60 dB, utilizing four numerical phantoms (BV, Derenzo, PAT, and realistic breast). The results of the proposed method, the guided filter, are given in the last three rows, with the input image (to be filtered) given as the argument. The entries in bold correspond to the best RMSE and CNR observed for each column, excluding the TR method that utilizes full-bandwidth data (indicated in italics). Note that the entries in parentheses corresponding to TR indicate the bandwidth of the detected signals.
From these results, it is evident that the guided filtering approach improves the images reconstructed using either the Tikhonov or the TV regularization method without adding any significant computational burden. Even though the standard denoising methods (NLM and wavelet thresholding) were able to marginally improve these results, the computational cost of methods like NLM is far too high for real-time use. Also, the bipolar effect is more pronounced for the TR case, mainly due to the band-limited data utilized here [compare with the result shown in Fig. 3(e)]. From these results (given in Table 2), with the guided filter applied to the TV result, the improvement obtained in the RMSE is 45%, while for the CNR the improvement is 79% for the 40 dB SNR case compared to the TV method, which gives the best RMSE and CNR without filtering. The performance compared to methods like BPD is superior, improving the figures of merit by 100% [comparing the guided filter (TV) result with the BPD result in Table 2]. The automated wavelet denoising of recorded PA data using MODWT (described in Sec. 2.9) was utilized to denoise the signals recorded for the numerical BV phantom [Fig. 3(a)]; the SNR of the recorded data was 40 dB. An example of denoised and recorded noisy PA signals is given in Fig. 5. The RMSE between the noise-free PA signal and the noisy signal (dashed line) is 0.744, whereas the RMSE for the denoised PA signal (solid line) is 0.114. The computational time required for denoising the recorded signals from 100 detectors is 0.3 s. These denoised PA signals were utilized as input to the reconstruction methods discussed in this work, and the results are given in Fig. 6. The figures of merit—RMSE and CNR—are also reported in the same figure below each image. Note that the corresponding results obtained with the recorded raw data are given in Fig. 4 and the figures of merit in Table 2.
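The wavelet denoising above applies thresholding operators to MODWT detail coefficients. The thresholding operators themselves are standard and can be sketched as below; the MODWT decomposition itself (e.g., via MATLAB's `modwt` or PyWavelets) is not reproduced here, and the universal threshold rule shown is a common choice that we assume for illustration rather than the paper's exact rule:

```python
import numpy as np

def soft_threshold(c, t):
    """ST: shrink coefficient magnitudes toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def hard_threshold(c, t):
    """HT: zero out coefficients whose magnitude is below t."""
    return np.where(np.abs(c) >= t, c, 0.0)

def universal_threshold(detail):
    """Donoho's universal threshold, with the noise level sigma estimated
    from the median absolute deviation of the finest detail coefficients."""
    sigma = np.median(np.abs(detail)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(detail.size))
```

In a full denoiser, the detail coefficients at each level would be thresholded with one of these operators and the signal reconstructed with the inverse MODWT.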
In terms of the figures of merit, the improvement from applying wavelet denoising to the raw data is less than 2% in all cases in comparison to the proposed guided filter approach. Note that for data having an SNR of 20 dB, the improvement is as high as 25% (results not shown). It is also important to note that there are sophisticated methods in the literature,34,35 which could improve the reconstruction performance significantly (the latest being based on deep learning35), but the computational complexity of these methods is at least one order of magnitude higher than that of the wavelet denoising method utilized here, whose computational complexity is similar to that of the proposed guided filter approach. To understand the changes in the frequency spectrum of the reconstructed image with the utilization of the guided filtering approach proposed here, the frequency spectrum plots pertaining to these, along with the input and guiding images, are shown in Fig. 7. The center of these plots represents the low spatial frequencies and the edges correspond to the high-frequency components. One observation from these plots is that the guided filtering approach significantly alters the frequency spectra of both the input and the guiding image to give an output that is an improved version. More importantly, as the noise is known to reside in the high-frequency components, the approach is able to eliminate noise effectively. In these cases, the Pearson correlation88,89 (PC) between (a) and the others was utilized to compare these spectra; the PC is invariant to scale in the frequency spectra, providing the required normalization. The PC values given in Fig. 7 indicate that the guided filter applied to had the highest correlation, in turn giving the closest approximation to the target/expected initial pressure distribution. To study the effect of the parameters utilized in the guided filter, the patch size and epsilon () in Algorithm 2, a study was performed recording the CNR while varying them.
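The frequency-spectrum comparison above can be sketched as follows: compute a centered magnitude spectrum for each image and correlate the flattened spectra with the Pearson coefficient, which is invariant to affine scaling. The log compression of the magnitude is our presentation choice (common for displaying spectra), not necessarily the paper's:

```python
import numpy as np

def log_spectrum(img):
    """Centred log-magnitude spectrum of a 2-D image (low frequencies at the
    centre, high frequencies toward the edges)."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

def pearson(a, b):
    """Pearson correlation between two flattened spectra; invariant to
    affine rescaling, which provides the required normalisation."""
    a, b = a.ravel(), b.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```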
These results are reported in Fig. 8 for the case of the BV phantom. The patch size defines the neighborhood around the pixel used in the guided filtering. The epsilon defines the degree of smoothing (a threshold value on the variance): if a small value is specified, only neighborhoods with small variance will be smoothed and neighborhoods with large variance will not; if a large value is specified, neighborhoods with large variance will also get smoothed. The values of and in Algorithm 2 were chosen to be 1.05. From the results presented in Fig. 8, it is evident that a patch size of 1 and an of provide the optimal results; these values were utilized for obtaining the results presented in this work. The figures of merit—RMSE and CNR—for the cases of data with SNRs of 20 and 60 dB (reconstructed images not shown here) were compiled and listed in Table 2. Even for these results, it is evident that the guided filtering approach provides the best performance with the input image being either or . The computational time recorded for obtaining these results was of a similar order as given in Table 1. It is also important to note that the computational time required for obtaining is at least five times that of , with taking the least amount of computational time among the input images for the guided filter (refer to Table 1). The performance with as the input image for the guided filter is comparable to that with the other input images ( or ), with the added advantage of being efficient to compute. The reconstruction results from a similar effort utilizing data from 100 detectors with an SNR of 40 dB for the modified Derenzo phantom are provided in Fig. 9. The reconstructed initial pressure distributions using TR and LBP () are shown in Figs. 9(a) and 9(b), respectively. The reconstructions using Lanczos Tikhonov heuristic (LTH), Lanczos Tikhonov optimal (LTO), BPD, and TV are shown in Figs. 9(c)–9(f).
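The guided filter itself can be sketched as below, following the classic box-filter formulation of He et al.: local linear coefficients a and b are fit between the guiding image I and the input image p in each window, then averaged. The patch radius `r` and `eps` mirror the patch size and epsilon discussed above; the implementation details (edge padding, summed-area-table box filter) are our choices for a self-contained sketch, not necessarily those of the paper's Algorithm 2:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via a summed-area table."""
    xp = np.pad(x, r, mode='edge')
    s = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))  # zero row/column so s[i, j] = sum of xp[:i, :j]
    k = 2 * r + 1
    win = s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]
    return win / (k * k)

def guided_filter(I, p, r=1, eps=0.01):
    """Edge-preserving filter of input p steered by guiding image I.

    eps thresholds the local variance of I: windows with variance well below
    eps are smoothed strongly, windows with variance well above eps keep edges.
    """
    mI, mp = box_mean(I, r), box_mean(p, r)
    varI = box_mean(I * I, r) - mI * mI
    covIp = box_mean(I * p, r) - mI * mp
    a = covIp / (varI + eps)   # local linear coefficient
    b = mp - a * mI            # local offset
    return box_mean(a, r) * I + box_mean(b, r)
```

For instance, the reconstructed TV image would be passed as `p` with the LBP image as `I`, combining the sharp edges of the former with the smoothness of the latter.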
The guided filtering approach results for the input image being LTH, LTO, and TV are shown in Figs. 9(j)–9(l), respectively. The denoised results of LTO using NLM and wavelet (ST and HT) thresholding methods are given in Figs. 9(g)–9(i), respectively. The computational time required for obtaining these results is similar to the values given in Table 1. As for the BV phantom, the figures of merit corresponding to these results are given in Table 2, along with the results pertaining to data SNRs of 20 and 60 dB. A trend similar to the BV phantom case was observed here, with the guided filtering approach performing superior to all other approaches, including BPD. The improvement obtained with the guided filtering approach for the SNR of 40 dB with as the input image was 40% for RMSE and 83% for CNR compared to the TV case. Similarly, the improvement obtained for the SNR of 60 dB was 47% for RMSE and 131% for CNR compared to the TV method, which provides the best results without filtering. As observed in the earlier case, the guided filtering approach with as the input provides the optimal performance in terms of both the improvement in the figures of merit and the required computational time. It should also be observed that the bipolar objects (donut-shaped objects in place of filled circles) in TR [Fig. 9(a)], LBP [Fig. 9(b)], and LTH [Fig. 9(c)] are mainly due to the limited bandwidth of the boundary data; when full-bandwidth data are utilized, these bipolar objects become unipolar [see Fig. 3(f)]. Another phantom consisting of the letters “PAT” was also considered to compare the performance of the discussed methods in terms of recovering sharp features. These results, corresponding to an SNR of 40 dB (100 detectors), are given in Fig. 10. The TR and LBP reconstruction results are given in Figs. 10(a) and 10(b), respectively. The results from LTH, LTO, BPD, and TV are shown in Figs. 10(c)–10(f), respectively.
The proposed guided filter approach results with the input images being (c), (d), and (f) are presented in (j)–(l), respectively. The denoised results of LTO using NLM and wavelet (ST and HT) thresholding methods are given in Figs. 10(g)–10(i), respectively. The figures of merit, as in the earlier cases, including results from data with SNRs of 20 and 60 dB, are compiled in Table 2. The computational times observed were similar to those reported in Table 1. The improvement obtained in RMSE and CNR using the guided filter with the TV result as the input image, as compared to the TV method, was 40% and 55%, respectively, for an SNR of 40 dB. For an SNR of 60 dB, the improvement was 64% and 157% for RMSE and CNR, respectively, compared to the TV method, which gives the best result at this SNR. As observed earlier, the guided filter with as the input image provides the optimal performance in terms of both the improvement in the figures of merit and the required computational time. To assess the observed trends with sparse data, having only 50 detectors, the experiments were repeated on the BV phantom. The results pertaining to this experiment with a data SNR of 40 dB are shown in Fig. 11. These results are more prone to streak artifacts, as only half of the data are available compared to those presented in Fig. 4. The TR method in this case has more significant aliasing artifacts than the results presented in Fig. 4. Even the denoised results using the standard methods (NLM and wavelet thresholding) show significant artifacts compared to the guided filter results. As in the earlier cases, the RMSE and CNR for these results, along with those for data SNRs of 20 and 60 dB, are compiled in Table 2. The results show trends similar to those observed earlier, with the guided filtering approach providing the best results and the same choice of input image as in the earlier cases being optimal.
This experiment also showed that the guided filtering approach works well in sparse-data cases without adding any significant computational burden. The reconstruction results pertaining to the realistic breast phantom [Fig. 3(d)], corresponding to PA data with an SNR of 40 dB (100 detectors), are given in Fig. 12. The TR and LBP reconstruction results are given in Figs. 12(a) and 12(b), respectively. The results from LTH, LTO, BPD, and TV are shown in Figs. 12(c)–12(f), respectively. The proposed guided filter approach results with the input images being (c), (d), and (f) are presented in (j)–(l), respectively. The denoised results of LTO using NLM and wavelet (ST and HT) thresholding methods are given in Figs. 12(g)–12(i), respectively. The figures of merit, as in the earlier cases, including results from data with SNRs of 20 and 60 dB, were compiled in Table 2. The computational times observed were similar to those reported in Table 1. The improvement obtained in RMSE and CNR using the guided filter with the TV result as the input image, as compared to the TV method, was 26% and 201%, respectively, for an SNR of 40 dB. For an SNR of 60 dB, the improvement was 31% and 152% for RMSE and CNR, respectively, compared to the TV method, which gives the best result at this SNR. These results, obtained on a more realistic phantom with more than two grayscale values of initial pressure, show the improved performance of the guided filtering approach, indicating its utility in realistic scenarios. It should also be noted that, overall, the figures of merit are lower for this case compared to the remaining numerical phantom cases, as the structures in this case are heterogeneous and complex. The reconstructed results for the experimental horse-hair phantom are shown in Fig. 13. The LBP result is shown in Fig. 13(a), with the LTO and TV results in Figs. 13(b) and 13(c), respectively.
The guided filtering approach results using (b) and (c) as the input images are given in Figs. 13(d) and 13(e), respectively. Note that the results pertaining to LTH and BPD were not considered here, as they proved not as effective as LTO and TV in the numerical phantom studies. Similarly, the standard denoising methods were not utilized, as they proved inferior to the guided filter in the numerical phantom cases. The figure of merit, SNR_r, for these reconstruction results is given at the bottom of each result. The improvement in SNR_r with the guided filter is at least 5.5 dB, with the guided filter using LTO as the input image [Fig. 13(d)] showing the best performance with an improvement of 10.6 dB. As TV is computationally efficient, the guided filter result with TV as the input image provides optimal performance. Results from a similar effort for the experimental tube phantom are shown in Fig. 14. As observed earlier, the guided filter approach provides the best performance in terms of SNR_r of the reconstructed image. Within the guided filter approach, the result pertaining to the input image being is superior compared to the other result (SNR_r improvement of 9.38 dB). Even though the original result without the guided filter shows suboptimal performance compared to the others [comparing Fig. 14(b) with 14(a) and 14(c)], the guided filter improves Fig. 14(b) by at least 9.38 dB [Fig. 14(d)] in terms of the observed SNR_r. The results pertaining to the in vivo rat brain imaging, showing the effectiveness of the proposed method in preclinical imaging, are presented in Fig. 15. The LBP reconstruction is shown in Fig. 15(a) and the reconstructions using LTO and TV are shown in Figs. 15(b) and 15(c), respectively. Similar to the previous observations, the guided filter outperforms the others in terms of SNR_r of the reconstructed image.
Among the guided filter reconstructions, the result obtained with the input image being is superior compared to the other result (SNR_r improvement of 11.23 dB). Overall, in all numerical, experimental, and in vivo cases, the guided filter approach has been shown to provide superior performance. Specifically, the observed figures of merit for the output (filtered) image exceed those of both the input (to be filtered) and the guiding image (Table 2, Figs. 13–15). This process adds little to no computational time, as reported in Table 1, making it highly attractive. Also note that the guiding image in all cases discussed here was chosen to be the LBP image (), which is also used as an input image to the TV-regularized method; thus, the guiding image is readily available after the reconstruction process using the TV-regularized solution. It is also possible to use other images (such as ) as the guiding image rather than ; however, our experience shows that the improvement in these cases is minimal. The guided filtering approach provides optimal performance when different characteristics in the input and guiding images have to be combined to provide a filtered image. The BP image is smoother, while and promote sharp edges due to their sparse-recovery nature, and this combination leads to superior results. The frequency spectra presented in Fig. 7 give an insight into the choice of as the guiding image, as its frequency spectrum is complementary to the others, such as and . The method becomes ineffective if the frequency spectra of the input and guiding images are similar (like and ). Moreover, if the choice is not proper, like other explicit filters, the guided filter can exhibit artifacts of unwanted smoothing near edges.41 In this work, the BP image, which is typically used as an initial guess for most model-based iterative reconstruction algorithms, was utilized as the guiding image.
This may not be an optimal choice; exploration of the optimal choice of guide and/or input image, along with the development of automated methods for making this choice, will be taken up as a future extension of this work. It may be feasible to utilize a model-free solution, such as delay-and-sum,90 either as the guiding or the input image, but these tend to have arbitrary units for the initial pressure, requiring normalization for each case and making the comparison of reconstruction results among the discussed methods difficult. Also, in this work, limited data refers to the data availability in comparison to state-of-the-art clinical PAT setups. Current PAT scanners, with advances in instrumentation, are capable of acquiring boundary data with 512 detectors and 2048 time samples per detector.36 Here, the maximum number of detectors utilized was 100 (with a maximum of 512 time samples), making the boundary data available for image reconstruction limited compared to these setups. The figures of merit RMSE and CNR have been utilized extensively in the literature to assess reconstructed image quality in PA imaging.36,77,78,81 Task-based image quality metrics can better assess the reconstructed image quality,91,92 but the use of RMSE and CNR as metrics has been common practice across tomographic problems.36,77,78,81 As the main aim of this work has been to introduce an image-guided filter as a fast postprocessing step to improve the characteristics of the reconstructed image, which was clearly demonstrated through numerical and experimental phantoms as well as in vivo cases, other metrics were not considered. The figures of merit—RMSE and CNR—are also robust to different realizations of noise; for example, the results presented in Table 2 differed by less than 1% over 10 noise realizations used for obtaining the required SNR of the boundary data.
The bipolar nature of the objects (even though they are expected to be unipolar) in the reconstructed images (Figs. 4 and 9–15) can be corrected using the Hilbert transform (which essentially eliminates the negative initial pressure values).82,93,94 However, its utilization might lead to an inaccurate representation of the reconstructed images, overshadowing the performance of the postprocessing methods utilized here (including wavelet thresholding, NLM, and the proposed guided filter). As the dynamic range of the reconstructed image can be utilized as a metric for assessing the performance of the algorithms used in this work, such transforms, which alter the raw performance of these methods, were not attempted here. Postprocessing of reconstructed images to improve their utility is common in other imaging modalities, such as emission computed tomography.37 The guided filter approach, even though it is a postprocessing step, filters the input image using another image as a guide. Here, this guiding image is chosen to be the BP image, which is also the initial guess for the iterative reconstruction methods. Methods of this type are very simple to integrate into the iterative image reconstruction framework of PAT, as the computational complexity is insignificant, and they can work with any set of images in terms of improving them. This work showed that standard denoising filters, such as wavelet thresholding and NLM-based ones, have limited utility in PA imaging. The proposed guided filter can also be seen as an adaptive sharpening filter, which increases the local contrast while removing noise based on the guiding image.95,96 The reference sharpness level of the guiding image acts as an edge-preserving constraint, resulting in this adaptive sharpening (see Fig. 7). Approaches such as the proposed guided filter are of high value, as they leverage the best features of the input and guiding images to provide a high-quality filtered image.
The developed algorithms are provided as open source97 for interested users.

6. Conclusions

PA image reconstruction in the case of limited data is ill-conditioned, often necessitating the use of regularization to provide meaningful results. Smooth and nonsmooth regularization schemes are available, which promote different characteristics in the reconstructed images. A simple guided filtering approach was proposed here that combines the best features of the input and guiding images (obtained from different reconstruction methods). Utilizing both numerical and experimental data, it was shown that the guided filtering approach improves the reconstructed images substantially (by as much as 11.23 dB in terms of the SNR of the reconstructed image) with very little to no computational burden. It was also shown that the performance of the proposed guided filter approach, even with sparse data (half of the original), is superior compared to the others. As this approach can be applied as a postprocessing step after image reconstruction, it is easily integrable into existing reconstruction algorithmic frameworks.

References

L. V. Wang and S. Hu,
“Photoacoustic tomography: in vivo imaging from organelles to organs,”
Science, 335
(6075), 1458
–1462
(2012). https://doi.org/10.1126/science.1216210 SCIEAS 0036-8075 Google Scholar
M. Pramanik et al.,
“Design and evaluation of a novel breast cancer detection system combining both thermoacoustic (TA) and photoacoustic (PA) tomography,”
Med. Phys., 35
(6), 2218
–2223
(2008). https://doi.org/10.1118/1.2911157 MPHYA6 0094-2405 Google Scholar
L. V. Wang,
“Tutorial on photoacoustic microscopy and computed tomography,”
IEEE J. Sel. Top. Quantum Electron., 14
(1), 171
–179
(2008). https://doi.org/10.1109/JSTQE.2007.913398 IJSQEN 1077-260X Google Scholar
P. K. Upputuri and M. Pramanik,
“Recent advances toward preclinical and clinical translation of photoacoustic tomography: a review,”
J. Biomed. Opt., 22
(4), 041006
(2017). https://doi.org/10.1117/1.JBO.22.4.041006 JBOPFO 1083-3668 Google Scholar
L. Li et al.,
“Single-impulse panoramic photoacoustic computed tomography of small-animal whole-body dynamics at high spatiotemporal resolution,”
Nat. Biomed. Eng., 1
(5), 0071
(2017). https://doi.org/10.1038/s41551-017-0071 Google Scholar
Y. Zhou, J. Yao and L. V. Wang,
“Tutorial on photoacoustic tomography,”
J. Biomed. Opt., 21
(6), 061007
(2016). https://doi.org/10.1117/1.JBO.21.6.061007 JBOPFO 1083-3668 Google Scholar
A. E. Cerussi et al.,
“Sources of absorption and scattering contrast for near-infrared optical mammography,”
Acad. Radiol., 8
(3), 211
–218
(2001). https://doi.org/10.1016/S1076-6332(03)80529-9 Google Scholar
S. Srinivasan et al.,
“Interpreting hemoglobin and water concentration, oxygen saturation, and scattering measured in vivo by near-infrared breast tomography,”
Proc. Natl. Acad. Sci. U. S. A., 100
(21), 12349
–12354
(2003). https://doi.org/10.1073/pnas.2032822100 Google Scholar
K. Jansen et al.,
“Intravascular photoacoustic imaging of human coronary atherosclerosis,”
Opt. Lett., 36
(5), 597
–599
(2011). https://doi.org/10.1364/OL.36.000597 OPLEDP 0146-9592 Google Scholar
S. Jiao et al.,
“Photoacoustic ophthalmoscopy for in vivo retinal imaging,”
Opt. Express, 18
(4), 3967
–3972
(2010). https://doi.org/10.1364/OE.18.003967 OPEXFF 1094-4087 Google Scholar
S. A. Ermilov et al.,
“Laser optoacoustic imaging system for detection of breast cancer,”
J. Biomed. Opt., 14
(2), 024007
(2009). https://doi.org/10.1117/1.3086616 JBOPFO 1083-3668 Google Scholar
M. Heijblom, W. Steenbergen and S. Manohar,
“Clinical photoacoustic breast imaging: the Twente experience,”
IEEE Pulse, 6
(3), 42
–46
(2015). https://doi.org/10.1109/MPUL.2015.2409102 Google Scholar
S. J. Ford et al., “Structural and functional analysis of intact hair follicles and pilosebaceous units by volumetric multispectral optoacoustic tomography,” J. Invest. Dermatol. 136(4), 753–761 (2016). https://doi.org/10.1016/j.jid.2015.09.001
J.-T. Oh et al., “Three-dimensional imaging of skin melanoma in vivo by dual-wavelength photoacoustic microscopy,” J. Biomed. Opt. 11(3), 034032 (2006). https://doi.org/10.1117/1.2210907
J. Yao et al., “High-speed label-free functional photoacoustic microscopy of mouse brain in action,” Nat. Methods 12(5), 407–410 (2015). https://doi.org/10.1038/nmeth.3336
K. Wang et al., “Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography,” Phys. Med. Biol. 57(17), 5399–5423 (2012). https://doi.org/10.1088/0031-9155/57/17/5399
A. Rosenthal, V. Ntziachristos and D. Razansky, “Acoustic inversion in optoacoustic tomography: a review,” Curr. Med. Imaging Rev. 9(4), 318–336 (2013). https://doi.org/10.2174/15734056113096660006
C. Huang et al., “Full-wave iterative image reconstruction in photoacoustic tomography with acoustically inhomogeneous media,” IEEE Trans. Med. Imaging 32(6), 1097–1110 (2013). https://doi.org/10.1109/TMI.2013.2254496
K. Wang et al., “An imaging model incorporating ultrasonic transducer properties for three-dimensional optoacoustic tomography,” IEEE Trans. Med. Imaging 30(2), 203–214 (2011). https://doi.org/10.1109/TMI.2010.2072514
Y. Xu et al., “Reconstructions in limited-view thermoacoustic tomography,” Med. Phys. 31(4), 724–733 (2004). https://doi.org/10.1118/1.1644531
S. R. Arridge et al., “On the adjoint operator in photoacoustic tomography,” Inverse Probl. 32(11), 115012 (2016). https://doi.org/10.1088/0266-5611/32/11/115012
S. Arridge et al., “Accelerated high-resolution photoacoustic tomography via compressed sensing,” Phys. Med. Biol. 61(24), 8908–8940 (2016). https://doi.org/10.1088/1361-6560/61/24/8908
X. L. Dean-Ben, V. Ntziachristos and D. Razansky, “Acceleration of optoacoustic model-based reconstruction using angular image discretization,” IEEE Trans. Med. Imaging 31(5), 1154–1162 (2012). https://doi.org/10.1109/TMI.2012.2187460
X. L. Dean-Ben et al., “Accurate model-based reconstruction algorithm for three-dimensional optoacoustic tomography,” IEEE Trans. Med. Imaging 31(10), 1922–1928 (2012). https://doi.org/10.1109/TMI.2012.2208471
A. Buehler et al., “Model-based optoacoustic inversions with incomplete projection data,” Med. Phys. 38(3), 1694–1704 (2011). https://doi.org/10.1118/1.3556916
S. Gutta et al., “Modeling errors compensation with total least squares for limited data photoacoustic tomography,” IEEE J. Sel. Top. Quantum Electron. (2019). https://doi.org/10.1109/JSTQE.2017.2772886
G. Paltauf et al., “Iterative reconstruction algorithm for optoacoustic imaging,” J. Acoust. Soc. Am. 112(4), 1536–1544 (2002). https://doi.org/10.1121/1.1501898
J. Prakash et al., “Basis pursuit deconvolution for improving model-based reconstructed images in photoacoustic tomography,” Biomed. Opt. Express 5(5), 1363–1377 (2014). https://doi.org/10.1364/BOE.5.001363
C. B. Shaw et al., “Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography,” J. Biomed. Opt. 18(8), 080501 (2013). https://doi.org/10.1117/1.JBO.18.8.080501
N. Awasthi et al., “Vector extrapolation methods for accelerating iterative reconstruction methods in limited-data photoacoustic tomography,” J. Biomed. Opt. 23(7), 071204 (2018). https://doi.org/10.1117/1.JBO.23.7.071204
C. Li and L. V. Wang, “Photoacoustic tomography and sensing in biomedicine,” Phys. Med. Biol. 54(19), R59–R97 (2009). https://doi.org/10.1088/0031-9155/54/19/R01
L. Zeng et al., “High antinoise photoacoustic tomography based on a modified filtered backprojection algorithm with combination wavelet,” Med. Phys. 34(2), 556–563 (2007). https://doi.org/10.1118/1.2426406
S. H. Holan and J. A. Viator, “Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction,” Phys. Med. Biol. 53(12), N227–N236 (2008). https://doi.org/10.1088/0031-9155/53/12/N01
N. A. Rejesh, H. Pullagurla and M. Pramanik, “Deconvolution-based deblurring of reconstructed images in photoacoustic/thermoacoustic tomography,” J. Opt. Soc. Am. A 30(10), 1994–2001 (2013). https://doi.org/10.1364/JOSAA.30.001994
S. Gutta et al., “Deep neural network-based bandwidth enhancement of photoacoustic data,” J. Biomed. Opt. 22(11), 116001 (2017). https://doi.org/10.1117/1.JBO.22.11.116001
R. A. Kruger et al., “Dedicated 3D photoacoustic breast imaging,” Med. Phys. 40(11), 113301 (2013). https://doi.org/10.1118/1.4824317
R. M. Lewitt and S. Matej, “Overview of methods for image reconstruction from projections in emission computed tomography,” Proc. IEEE 91(10), 1588–1611 (2003). https://doi.org/10.1109/JPROC.2003.817882
J. Chen et al., “Blind-deconvolution optical-resolution photoacoustic microscopy in vivo,” Opt. Express 21(6), 7316–7327 (2013). https://doi.org/10.1364/OE.21.007316
J. Prakash et al., “Model-resolution-based basis pursuit deconvolution improves diffuse optical tomographic imaging,” IEEE Trans. Med. Imaging 33(4), 891–901 (2014). https://doi.org/10.1109/TMI.2013.2297691
H. B. Mitchell, Image Fusion: Theories, Techniques and Applications, Springer Science and Business Media, Heidelberg (2010).
S. Li, X. Kang and J. Hu, “Image fusion with guided filtering,” IEEE Trans. Image Process. 22(7), 2864–2875 (2013). https://doi.org/10.1109/TIP.2013.2244222
K. He, J. Sun and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013). https://doi.org/10.1109/TPAMI.2012.213
G. L. Zeng, Medical Image Reconstruction: A Conceptual Tutorial, Springer, New York (2010).
M. Tao, J. Yang and B. He, “Alternating direction algorithms for total variation deconvolution in image reconstruction,” (2009).
Y. Wang et al., “A new alternating minimization algorithm for total variation image reconstruction,” SIAM J. Imaging Sci. 1(3), 248–272 (2008). https://doi.org/10.1137/080724265
J. Prakash and P. K. Yalavarthy, “A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography,” Med. Phys. 40(3), 033101 (2013). https://doi.org/10.1118/1.4792459
B. E. Treeby and B. T. Cox, “k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. 15(2), 021314 (2010). https://doi.org/10.1117/1.3360308
Y. Hristova, P. Kuchment and L. Nguyen, “Reconstruction and time reversal in thermoacoustic tomography in acoustically homogeneous and inhomogeneous media,” Inverse Probl. 24(5), 055006 (2008). https://doi.org/10.1088/0266-5611/24/5/055006
C. C. Paige and M. A. Saunders, “LSQR: an algorithm for sparse linear equations and sparse least squares,” ACM Trans. Math. Software 8(1), 43–71 (1982). https://doi.org/10.1145/355984.355989
M. E. Kilmer and D. P. O’Leary, “Choosing regularization parameters in iterative methods for ill-posed problems,” SIAM J. Matrix Anal. Appl. 22(4), 1204–1221 (2001). https://doi.org/10.1137/S0895479899345960
C. Brezinski, G. Rodriguez and S. Seatzu, “Error estimates for the regularization of least squares problems,” Numer. Algorithms 51(1), 61–76 (2009). https://doi.org/10.1007/s11075-008-9243-2
L. Reichel, G. Rodriguez and S. Seatzu, “Error estimates for large-scale ill-posed problems,” Numer. Algorithms 51(3), 341–361 (2009). https://doi.org/10.1007/s11075-008-9244-1
M. Bhatt, A. Acharya and P. K. Yalavarthy, “Computationally efficient error estimate for evaluation of regularization in photoacoustic tomography,” J. Biomed. Opt. 21(10), 106002 (2016). https://doi.org/10.1117/1.JBO.21.10.106002
M. V. Afonso, J. M. Bioucas-Dias and M. A. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process. 19(9), 2345–2356 (2010). https://doi.org/10.1109/TIP.2010.2047910
A. Chambolle, “An algorithm for total variation minimization and applications,” J. Math. Imaging Vision 20(1), 89–97 (2004). https://doi.org/10.1023/B:JMIV.0000011325.36760.1e
J. Eckstein and D. P. Bertsekas, “On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators,” Math. Program. 55(1), 293–318 (1992). https://doi.org/10.1007/BF01581204
J. C. Goswami and A. K. Chan, Fundamentals of Wavelets: Theory, Algorithms, and Applications, John Wiley and Sons, Hoboken, New Jersey (2011).
K. R. Castleman, Digital Image Processing, Prentice-Hall, Englewood Cliffs, New Jersey (1979).
Y. Qiang, “Image denoising based on Haar wavelet transform,” in Proc. of Int. Conf. on Electronics and Optoelectronics, 129–132 (2011). https://doi.org/10.1109/ICEOE.2011.6013318
A. Buades, B. Coll and J. M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR), 60–65 (2005). https://doi.org/10.1109/CVPR.2005.38
R. C. Gonzalez, R. E. Woods and S. L. Eddins, Digital Image Processing Using MATLAB, Prentice Hall, Upper Saddle River, New Jersey (2004).
P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990). https://doi.org/10.1109/34.56205
Z. Farbman et al., “Edge-preserving decompositions for multi-scale tone and detail manipulation,” ACM Trans. Graphics 27(3), 67 (2008). https://doi.org/10.1145/1360612
A. Levin, D. Lischinski and Y. Weiss, “A closed-form solution to natural image matting,” IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 228–242 (2008). https://doi.org/10.1109/TPAMI.2007.1177
K. He, J. Sun and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011). https://doi.org/10.1109/TPAMI.2010.168
A. Levin, D. Lischinski and Y. Weiss, “Colorization using optimization,” ACM Trans. Graphics 23(3), 689–694 (2004). https://doi.org/10.1145/1015706
C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Sixth Int. Conf. on Computer Vision, 839–846 (1998). https://doi.org/10.1109/ICCV.1998.710815
V. Aurich and J. Weule, “Non-linear Gaussian filters performing edge preserving diffusion,” in Mustererkennung, 538–545, Springer, Berlin, Germany (1995).
S. M. Smith and J. M. Brady, “SUSAN: a new approach to low level image processing,” Int. J. Comput. Vision 23(1), 45–78 (1997). https://doi.org/10.1023/A:1007963824710
C. Xiao and J. Gan, “Fast image dehazing using guided joint bilateral filter,” Visual Comput. 28(6–8), 713–721 (2012). https://doi.org/10.1007/s00371-012-0679-y
J.-H. Kim et al., “Optimized contrast enhancement for real-time image and video dehazing,” J. Visual Commun. Image Represent. 24(3), 410–425 (2013). https://doi.org/10.1016/j.jvcir.2013.02.004
H.-J. Seo and P. Milanfar, “Robust flash denoising/deblurring by iterative guided filtering,” EURASIP J. Adv. Signal Process. 2012, 3 (2012). https://doi.org/10.1186/1687-6180-2012-3
B. Ham, M. Cho and J. Ponce, “Robust guided image filtering using nonconvex potentials,” IEEE Trans. Pattern Anal. Mach. Intell. 40(1), 192–207 (2018). https://doi.org/10.1109/TPAMI.2017.2669034
G. L. Zeng, “A fast method to emulate an iterative POCS image reconstruction algorithm,” Med. Phys. 44(10), e353–e359 (2017). https://doi.org/10.1002/mp.2017.44.issue-10
D. B. Percival and A. T. Walden, Wavelet Methods for Time Series Analysis, Vol. 4, Cambridge University Press, Cambridge (2006).
D. L. Donoho and J. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika 81(3), 425–455 (1994). https://doi.org/10.1093/biomet/81.3.425
N. Gandhi et al., “Photoacoustic-based approach to surgical guidance performed with and without a da Vinci robot,” J. Biomed. Opt. 22(12), 121606 (2017). https://doi.org/10.1117/1.JBO.22.12.121606
P. P. Pai, A. De and S. Banerjee, “Accuracy enhancement for noninvasive glucose estimation using dual-wavelength photoacoustic measurements and kernel-based calibration,” IEEE Trans. Instrum. Meas. 67(1), 126–136 (2018). https://doi.org/10.1109/TIM.2017.2761237
B. T. Cox and B. E. Treeby, “Artifact trapping during time reversal photoacoustic imaging for acoustically heterogeneous media,” IEEE Trans. Med. Imaging 29(2), 387–396 (2010). https://doi.org/10.1109/TMI.2009.2032358
J. Prakash et al., “Sparse recovery methods hold promise for diffuse optical tomographic image reconstruction,” IEEE J. Sel. Top. Quantum Electron. 20(2), 74–82 (2014). https://doi.org/10.1109/JSTQE.2013.2278218
X. Song et al., “Automated region detection based on the contrast-to-noise ratio in near-infrared tomography,” Appl. Opt. 43(5), 1053–1062 (2004). https://doi.org/10.1364/AO.43.001053
L. Li et al., “Multiview Hilbert transformation in full-ring transducer array-based photoacoustic computed tomography,” J. Biomed. Opt. 22(7), 076017 (2017). https://doi.org/10.1117/1.JBO.22.7.076017
Y. Lou et al., “Generation of anatomically realistic numerical phantoms for optoacoustic breast imaging,” Proc. SPIE 9708, 97084O (2016). https://doi.org/10.1117/12.2217609
Y. Lou et al., “Generation of anatomically realistic numerical phantoms for photoacoustic and ultrasonic breast imaging,” J. Biomed. Opt. 22(4), 041015 (2017). https://doi.org/10.1117/1.JBO.22.4.041015
S. K. Kalva and M. Pramanik, “Experimental validation of tangential resolution improvement in photoacoustic tomography using modified delay-and-sum reconstruction algorithm,” J. Biomed. Opt. 21(8), 086011 (2016). https://doi.org/10.1117/1.JBO.21.8.086011
“American national standard for safe use of lasers (ANSI Z136.1-2000),” (2000).
P. K. Upputuri and M. Pramanik, “Pulsed laser diode based optoacoustic imaging of biological tissues,” Biomed. Phys. Eng. Express 1(4), 045010 (2015). https://doi.org/10.1088/2057-1976/1/4/045010
Y. Matsumoto et al., “Label-free photoacoustic imaging of human palmar vessels: a structural morphological analysis,” Sci. Rep. 8(1), 786 (2018). https://doi.org/10.1038/s41598-018-19161-z
M. Xu et al., “Photoacoustic characteristics of lipid-rich plaques under ultra-low temperature and formaldehyde treatment,” Chin. Opt. Lett. 16(3), 031702 (2018). https://doi.org/10.3788/COL
S. K. Kalva and M. Pramanik, “Modified delay-and-sum reconstruction algorithm to improve tangential resolution in photoacoustic tomography,” Proc. SPIE 10064, 100643W (2017). https://doi.org/10.1117/12.2250487
A. Petschke and P. J. La Rivière, “Comparison of photoacoustic image reconstruction algorithms using the channelized Hotelling observer,” J. Biomed. Opt. 18(2), 026009 (2013). https://doi.org/10.1117/1.JBO.18.2.026009
K. Wang et al., “Discrete imaging models for three-dimensional optoacoustic tomography using radially symmetric expansion functions,” IEEE Trans. Med. Imaging 33, 1180–1193 (2014). https://doi.org/10.1109/TMI.2014.2308478
G. Li et al., “Multiview Hilbert transformation for full-view photoacoustic computed tomography using a linear array,” J. Biomed. Opt. 20(6), 066010 (2015). https://doi.org/10.1117/1.JBO.20.6.066010
Q. Sheng et al., “A constrained variable projection reconstruction method for photoacoustic computed tomography without accurate knowledge of transducer responses,” IEEE Trans. Med. Imaging 34(12), 2443–2458 (2015). https://doi.org/10.1109/TMI.2015.2437356
H. S. Kam, M. Hanmandlu and W. H. Tan, “An improved image enhancement combining smoothing and sharpening,” in Conf. on Convergent Technologies for Asia-Pacific Region, 36–40 (2003). https://doi.org/10.1109/TENCON.2003.1273209
I. V. Safonov et al., “Adaptive sharpening of photos,” Proc. SPIE 6807, 68070U (2008). https://doi.org/10.1117/12.758613
N. Awasthi et al., “Image-guided filtering for improving photoacoustic tomographic image reconstruction,” https://sites.google.com/site/sercmig/home/patguided (March 2018).
Biography

Navchetan Awasthi received his MTech degree in computational science from the Indian Institute of Science (IISc), Bangalore, in 2016, and his BTech degree in electronics and communication engineering from the National Institute of Technology (NIT), Jalandhar, in 2011. He is currently a PhD student in the Department of Computational and Data Sciences at the Indian Institute of Science, Bangalore. His research interests include inverse problems in biomedical optics and medical image reconstruction.

Sandeep Kumar Kalva received his master's degree in biomedical engineering from the Indian Institute of Technology (IIT), Hyderabad, India, in 2015. He is currently a PhD student in the School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore. His research area is functional photoacoustic imaging for various clinical applications.

Manojit Pramanik received his PhD in biomedical engineering from Washington University in St. Louis, Missouri, USA. Currently, he is an assistant professor in the School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore. His research interests include the development of photoacoustic/thermoacoustic imaging systems, image reconstruction methods, clinical applications such as breast cancer imaging, molecular imaging, contrast agent development, and Monte Carlo simulation of light propagation in biological tissue.

Phaneendra K. Yalavarthy received his MSc degree in engineering from the Indian Institute of Science, Bangalore, and his PhD degree in biomedical computation from Dartmouth College, Hanover, NH, USA, in 2007. He is an associate professor in the Department of Computational and Data Sciences at the Indian Institute of Science, Bangalore. His research interests include medical image computing, medical image analysis, and biomedical optics. He is a senior member of IEEE and serves as an associate editor of IEEE Transactions on Medical Imaging.