Open Access
Computational multifocus fluorescence microscopy for three-dimensional visualization of multicellular tumor spheroids
Julia R. Alonso, Alejandro Silva, Ariel Fernández, Miguel Arocena
Abstract

Significance: Three-dimensional (3D) visualization of multicellular tumor spheroids (MCTS) in fluorescence microscopy can rapidly provide qualitative morphological information about the architecture of these cellular aggregates, which can recapitulate key aspects of their in vivo counterpart.

Aim: The present work is aimed at overcoming the shallow depth-of-field (DoF) limitation in fluorescence microscopy while achieving 3D visualization of thick biological samples under study.

Approach: A custom-built fluorescence microscope with an electrically focus-tunable lens was developed to optically sweep in-depth the structure of MCTS. Acquired multifocus stacks were combined by means of postprocessing algorithms performed in the Fourier domain.

Results: Images with relevant characteristics, such as extended DoF, stereoscopic pairs, and reconstructed viewpoints of MCTS, were obtained without segmentation of the focused regions or estimation of the depth map. The reconstructed images allowed us to observe the 3D morphology of the cell aggregates.

Conclusions: Computational multifocus fluorescence microscopy can provide 3D visualization of MCTS. This tool is a promising development for assessing the morphological structure of different cellular aggregates while preserving a robust yet simple optical setup.

1. Introduction

Three-dimensional (3D) culture of cancer cells mimics the in vivo microenvironment more closely than two-dimensional (2D) monolayer cell culture (e.g., in a petri dish). In this regard, imaging of cell aggregates known as 3D multicellular tumor spheroids (MCTS) is of high relevance.1 MCTS recapitulate key parameters of the tumor microenvironment, such as gradients of hypoxia and extracellular pH, which makes them a more realistic model of the early tumor environment than the standard methodology of 2D cell culture.2 Since most cellular components are colorless, to observe, for example, the nuclei in MCTS, cells are usually stained with DNA-binding fluorescent probes such as 4′,6-diamidino-2-phenylindole (DAPI) and observed through a fluorescence microscope.3 Fluorescent staining of DNA by DAPI then allows visualization of cell nuclei within 3D MCTS4 and provides morphological information about the architecture of the MCTS.5

However, the limited depth of field (DoF) emerges as an optical limitation that makes all-in-focus visualization of the 3D structure of a thick sample in a single image impossible. One way to achieve 3D fluorescence imaging is by means of optical sectioning in confocal,6 structured illumination,7 or light-sheet microscopy,8 at the cost of a rather complex optical setup and calibration. Digital holography has also been proposed for fluorescence microscopy9 to retrieve 3D information by incorporating a reference beam into the setup, with consequent extra alignment in the system. The transport of intensity equation10 is an alternative, noninterferometric technique, where the phase distribution also needs to be retrieved from defocused fluorescence images to estimate, after inverse Fresnel propagation, focused images at different planes. Other methods are based on acquiring spatially multiplexed information from the sample. This is the case for light-field microscopy, where a microlens array is inserted in front of the microscope's image sensor to simultaneously capture 2D spatial and 2D angular information,11–13 integral imaging,14,15 plenoptic projection fluorescence tomography,16 or 3D autocorrelation reconstruction in combination with phase retrieval tomography.17 A diffuser in the pupil plane consisting of randomly placed microlenses with varying focal lengths has also been implemented; in this case, the random positions provide a larger field of view compared to a conventional microlens array, and the diverse focal lengths improve the axial depth range.18 Another interesting approach is Fourier ptychographic microscopy, which iteratively stitches together a number of low-resolution intensity images, acquired under illumination from different angles, in Fourier space to produce a wide-field, high-resolution complex sample image.19,20

On the other hand, multifocus (focus-stacking or z-stacking) microscopy21,22 is a simple technique where a scanning mechanism is introduced into a wide-field microscope to allow the acquisition of a set of differently focused images along the optical axis. An extended-DoF (or all-in-focus) image is usually recovered using focus-recognition algorithms and depth-map retrieval.

As a way to overcome the DoF limitation while achieving 3D visualization in fluorescence microscopy, in the present paper we propose a method based on multifocus sensing in which a custom-built fluorescence microscope incorporating an electrically focus-tunable lens (EFTL) is employed to optically sweep in-depth the structure of MCTS. The EFTL allows non-mechanical scanning that avoids lateral displacements between acquired images (neither the sample nor the optics are moved).23,24 Once the multifocus images are taken, image registration is performed to match the different fields of view. Then, a Fourier domain postprocessing approach,25,26 which requires neither depth-map estimation nor segmentation of in-focus regions, is applied, and the acquired information is reorganized through algorithms that allow DoF extension, synthesis of novel viewpoints, and reconstruction of stereoscopic pairs, which can serve as 3D visualization tools for a thick biological sample. Validation experiments corresponding to 3D visualization of MCTS are presented.

2. Methodology

2.1. Multifocus Image Acquisition and Field-of-View Correction

The scheme of the custom-built fluorescence microscope is shown in Fig. 1. The main components include a camera sensor (Thorlabs CCD 8051-C USB 3.0, 3296×2472 pixels resolution, 5.5 μm pixel pitch), an EFTL (Optotune EL-16-40-TC-VIS-5D-C), an LED with an emission peak at 385 nm and FWHM of 12 nm (Thorlabs LED M385LP1), a dichroic mirror (reflection 375 to 393 nm, transmission 414 to 450 nm), and a water-immersion microscope objective lens (Olympus UMPLFLN 20×, numerical aperture NA = 0.5, focal length fMO = 9.00 mm, working distance WD = 3.5 mm).

Fig. 1

Scheme of the multifocus custom-built fluorescence microscope. LED: light-emitting diode as excitation source with center wavelength at 385 nm; L: illumination lens system; DM: dichroic mirror; MO: microscope objective; EFTL: electrically focus-tunable lens; MCTS: multicellular tumor spheroid stained with DAPI (emission peak centered at 461 nm).


The biological sample used in this work is from the human prostatic carcinoma cell line LNCaP27 and was cultivated to form MCTS by means of the hanging-drop method, in which cells are suspended in droplets of medium where they develop into coherent 3D aggregates and are readily accessed for analysis.28 Cells were then stained with DAPI, which has a broadband excitation centered at 358 nm and emission at 461 nm. An extra filter centered at 457 nm with a bandwidth of 22 nm was placed before the sensor in order to enhance the contrast of the images.

Parts of the sample captured in-focus are those placed at the conjugate image plane of the sensor, which is shifted from the working distance plane of the microscope objective by an amount z given as

Eq. (1)

$$z=\frac{f_{MO}\,P\,(D-P^{-1})}{P^{-1}f_{MO}^{-1}+(D-P^{-1})\,f_{eq}^{-1}},$$

where $f_{eq}$ is the equivalent focal length of the combination of the microscope objective and the EFTL and verifies

Eq. (2)

$$f_{eq}^{-1}=f_{MO}^{-1}+P-f_{MO}^{-1}\,P\,d,$$

while D is the distance between the EFTL and the sensor and d is the distance between the back principal plane of the microscope objective and the EFTL. The optical power P of the EFTL can be varied between −3 and 2 diopters for currents between −270 and 230 mA. Since in our setup D ≈ 10 cm and d ≈ 5 cm, the maximum focusing (or z) range of the system is ≈210 μm.
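For a quick numerical check of Eqs. (1) and (2), the following minimal Python sketch (the sampling and function names are ours; the values are the approximate ones quoted above) sweeps the EFTL over its power range and reports the resulting focusing range:

```python
import numpy as np

# Approximate setup values from the text (lengths in mm, powers in mm^-1)
f_mo = 9.0    # objective focal length
d = 50.0      # back principal plane of the MO to the EFTL (~5 cm)
D = 100.0     # EFTL to sensor (~10 cm)

def f_eq_inv(P):
    """Equivalent power of the MO + EFTL combination, Eq. (2)."""
    return 1.0 / f_mo + P - (1.0 / f_mo) * P * d

def z_focus(P):
    """Shift z of the in-focus plane, Eq. (1)."""
    num = f_mo * P * (D - 1.0 / P)
    den = 1.0 / (P * f_mo) + (D - 1.0 / P) * f_eq_inv(P)
    return num / den

# Sweep the power range (-3 to 2 diopters; 1 diopter = 1e-3 mm^-1),
# skipping the neighborhood of P = 0 where Eq. (1) is numerically delicate
P = np.linspace(-3e-3, 2e-3, 501)
P = P[np.abs(P) > 1e-4]
z = z_focus(P)
print(f"focusing range ~ {1e3 * (z.max() - z.min()):.0f} um")  # ~200 um
```

With these nominal values the sweep spans roughly 200 μm, consistent with the ≈210 μm quoted above.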

In-focus parts of the sample are imaged onto the sensor with lateral magnification M given as

Eq. (3)

$$M=P^{-1}f_{MO}^{-1}+(D-P^{-1})\,f_{eq}^{-1},$$

which varies along the focusing range with a maximum relative change of 15%. This change is reflected, in turn, in a change of the FoV along the images obtained while the current through the EFTL varies.

The multifocus stack was acquired for a set of currents in the EFTL $j_k$, $k=1,\ldots,N$, between −265 and −125 mA in steps of 10 mA. The N = 15 image stack is shown in Fig. 2. Note that the field of view is not constant along the stack of images and needs to be corrected before the synthesis of novel viewpoints from the multifocus stack is performed.29

Fig. 2

(a)–(o) 15 multifocus images acquired (z-stack) for currents in the EFTL between −265 and −125 mA in steps of 10 mA. See Video 1 for a visualization of the stack (Video 1, MP4, 0.2 MB [URL: https://doi.org/10.1117/1.JBO.27.6.066501.1]).


In the present work, the EFTL is positioned in the setup in a way that the total intensity remains constant between the acquired images (the illumination path is not affected by the change in focus of the EFTL). This allows us to use conservation of energy (i.e., the integral of intensity values in a given image of the stack should be constant) to implement registration between the images of the stack. (Note that in other works including an EFTL,24 the illumination intensity changes between the acquired images due to the position of the EFTL in the setup, so conservation of radiant energy does not hold and registration needs to be performed following different approaches.) The energy is evaluated for a given reference image (in our case, the k = 1 image), and the rest of the images in the stack are rescaled to give the same value. Then the captured visual information is reorganized through a Fourier domain postprocessing approach that requires neither depth-map estimation nor segmentation of in-focus regions. Image reconstruction is accomplished considering only, besides an effective parameter, the current through the EFTL for each image in the stack.
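One simple reading of this rescaling step is sketched below, under the assumption that the geometric field-of-view correction has already been applied and that the stack is held as an (N, H, W) floating-point array with the reference image first; the function name is ours:

```python
import numpy as np

def match_energy(stack):
    """Rescale each slice of a multifocus stack so that its total energy
    (the integral of its intensity values) equals that of the k = 1
    reference slice, as described in the text."""
    e = stack.sum(axis=(1, 2))                # per-slice radiant energy
    return stack * (e[0] / e)[:, None, None]  # slice 0 is the reference
```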

2.2. Image Formation Model and Novel Viewpoint Synthesis

Once the multifocus image stack is acquired, postcapture processing algorithms enable the synthesis of images with novel viewpoints of the scene.25 Let $i_k$ be the intensity distribution of the k'th image of a stack of N images [for color images in RGB space, $i_k=(i_k^R,i_k^G,i_k^B)$, $k=1,\ldots,N$]. The image $i_k$, taken with current $j_k$ through the EFTL, can be described, neglecting noise and chromatic aberrations, by the following equation:

Eq. (4)

$$i_k(x,y)=f_k(x,y)+\sum_{k'\neq k}h_{kk'}(x,y)*f_{k'}(x,y),$$
where $f_k$ is the in-focus region of $i_k$. The part of the scene that is out-of-focus in $i_k$ comes from the 2D convolution between $f_{k'}$ (the in-focus part of $i_{k'}$) and the 2D intensity PSF $h_{kk'}(x,y)$ associated with the currents $j_k$ and $j_{k'}$

Eq. (5)

$$h_{kk'}(x,y)=\frac{1}{\pi r_{kk'}^{2}}\,\mathrm{circ}\!\left(\frac{\sqrt{x^{2}+y^{2}}}{r_{kk'}}\right),$$
where

Eq. (6)

$$\frac{r_{kk'}}{p}=R_{0}\,|j_k-j_{k'}|,$$
and

Eq. (7)

$$R_{0}=\frac{R\,\alpha}{p},$$
where R is the aperture of the imaging system, α is the linear coefficient of the relation between lateral magnification and current through the EFTL, and p is the pixel pitch of the camera. For the stack of images in Fig. 2, the effective parameter is $R_{0}\approx 0.67\ \mathrm{mA}^{-1}$.
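Because the PSF of Eq. (5) is a normalized disk, its Fourier transform (the entries $H_{kk'}$ used below) has the closed-form jinc profile $2J_1(x)/x$. A sketch, with spatial frequencies in cycles per pixel so that the radius of Eq. (6) is expressed directly in pixels; the function names are ours:

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

def defocus_radius_px(j_a, j_b, R0):
    """Blur radius between slices taken at currents j_a and j_b, Eq. (6),
    in pixels: r / p = R0 * |j_a - j_b|."""
    return R0 * abs(j_a - j_b)

def otf_disk(u, v, r_px):
    """Fourier transform of the normalized pillbox PSF of Eq. (5): the jinc
    function 2*J1(2*pi*r*rho)/(2*pi*r*rho), equal to 1 at rho = 0.
    u, v are arrays of spatial frequencies in cycles per pixel."""
    x = 2.0 * np.pi * r_px * np.hypot(u, v)
    out = np.ones_like(x)
    nz = x > 1e-12               # avoid 0/0 at the DC frequency
    out[nz] = 2.0 * j1(x[nz]) / x[nz]
    return out
```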

Let us consider the Fourier transform (FT) of the set of N coupled equations (4) and arrange them in vector form as

Eq. (8)

$$\mathbf{I}(u,v)=\mathbf{H}(u,v)\,\mathbf{F}(u,v),$$
where (u,v) are spatial frequencies, and the N-element column vectors $\mathbf{I}$, $\mathbf{F}$ and the N×N symmetric matrix $\mathbf{H}$ are given as

Eq. (9)

$$\mathbf{I}(u,v)=\begin{pmatrix}I_{1}(u,v)\\ I_{2}(u,v)\\ \vdots\\ I_{N}(u,v)\end{pmatrix},\quad \mathbf{F}(u,v)=\begin{pmatrix}F_{1}(u,v)\\ F_{2}(u,v)\\ \vdots\\ F_{N}(u,v)\end{pmatrix},\quad \mathbf{H}(u,v)=\begin{pmatrix}1 & H_{12}(u,v) & \cdots & H_{1N}(u,v)\\ H_{12}(u,v) & 1 & \ddots & H_{N-1\,N}(u,v)\\ \vdots & \ddots & \ddots & \vdots\\ H_{1N}(u,v) & \cdots & H_{N-1\,N}(u,v) & 1\end{pmatrix}.$$

If $\mathbf{H}(u,v)$ is invertible, then the solution to the linear system given by Eq. (8) is $\mathbf{F}(u,v)=\mathbf{H}^{-1}(u,v)\,\mathbf{I}(u,v)$; but if $\mathbf{H}(u,v)$ is not invertible (as for the DC frequency components), then a solution to the system may be found through the Moore–Penrose pseudoinverse $\mathbf{H}^{+}$.30 The Moore–Penrose pseudoinverse provides the set of vectors that minimize the Euclidean norm $\|\mathbf{H}(u,v)\,\mathbf{F}(u,v)-\mathbf{I}(u,v)\|$ in the least-squares sense. Thus, the minimal-norm vector is given as

Eq. (10)

$$\mathbf{F}(u,v)=\mathbf{H}^{+}(u,v)\,\mathbf{I}(u,v).$$
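A compact sketch of this per-frequency least-squares solve, reusing otf_disk and defocus_radius_px from above; np.linalg.pinv broadcasts over the whole (u, v) grid, which is simple to write but memory-hungry for full-resolution stacks:

```python
import numpy as np

def recover_slices(stack, currents, R0):
    """Recover the spectra F_k of the in-focus components by solving
    Eq. (8) at every spatial frequency with the Moore-Penrose
    pseudoinverse, Eq. (10). stack: (N, H, W) float array."""
    N, H, W = stack.shape
    I = np.fft.fft2(stack, axes=(1, 2))       # I_k(u, v)
    u = np.fft.fftfreq(W)[None, :]            # cycles per pixel
    v = np.fft.fftfreq(H)[:, None]
    Hm = np.empty((H, W, N, N))               # the matrix H(u, v) of Eq. (9)
    for a in range(N):
        Hm[..., a, a] = 1.0                   # in-focus slices pass unblurred
        for b in range(a + 1, N):
            r = defocus_radius_px(currents[a], currents[b], R0)
            Hm[..., a, b] = Hm[..., b, a] = otf_disk(u, v, r)  # symmetric
    # Minimal-norm least-squares solution at every (u, v), Eq. (10)
    F = np.linalg.pinv(Hm) @ np.moveaxis(I, 0, -1)[..., None]
    return np.moveaxis(F[..., 0], -1, 0)      # F_k(u, v), shape (N, H, W)
```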

The reconstruction of an arbitrary horizontal viewpoint of the scene is accomplished by simulating the displacement $b_x$ of a pinhole camera in the horizontal direction with respect to the center of the original pupil (similarly for a $b_y$ displacement in the vertical direction). The horizontal disparity $d_k$ between the images of a given point of the in-focus component $f_k$, as seen by the sensor of a centered pinhole camera and by a pinhole camera displaced to the left, is given as

Eq. (11)

$$d_k=b_x\,\alpha\,j_k,$$

(aside from a constant factor independent of k and related to the magnification at zero current). Then, in the piecewise-planar approximation of the 3D scene, to obtain a shifted viewpoint $s_{b_x}(x,y)$, each focus slice $f_k(x,y)$ should be shifted by an amount set by the disparity associated with the current $j_k$ through the EFTL and the baseline displacement ($b_x$) of the camera

Eq. (12)

$$s_{b_x}(x,y)=\sum_{k=1}^{N}f_k(x-b_x\,\alpha\,j_k,\;y).$$

In particular, $s_0(x,y)$ recovers the image as captured with a pinhole camera at the center of the original circular pupil (i.e., the extended-DoF or all-in-focus image reconstruction of the scene23).

By means of the FT shift theorem, which states that a translation in the space domain introduces a linear phase shift in the frequency domain,31 and by using Eq. (10) for $\mathbf{F}(u,v)$, we obtain the FT of Eq. (12):

Eq. (13)

$$S_{b_x}(u,v)=\sum_{k=1}^{N}e^{-j2\pi\alpha j_k(b_x u)}\left(\mathbf{H}^{+}(u,v)\,\mathbf{I}(u,v)\right)_k.$$

Let us now consider the baseline displacement $b_x$ as a fraction $\beta_x$ ($|\beta_x|\leq 1$) of the pupil R (since displacements outside of the aperture have no physical meaning)

Eq. (14)

$$b_x=\beta_x\,R,$$
so by means of Eq. (7), Eq. (13) can be rewritten as

Eq. (15)

$$S_{\beta_x}(u,v)=\sum_{k=1}^{N}e^{-j2\pi j_k\beta_x R_0(pu)}\left(\mathbf{H}^{+}(u,v)\,\mathbf{I}(u,v)\right)_k.$$

Finally, by inverse Fourier transform of Eq. (15), we obtain the new scene perspective as seen from a pinhole camera translated a fraction $\beta_x$ to the left of the center of the original circular pupil25

Eq. (16)

$$s_{\beta_x}=\mathcal{F}^{-1}\{S_{\beta_x}\}.$$

In order to achieve visualization with full parallax, it is straightforward to extend Eq. (15) to the case of vertical motion simulation

Eq. (17)

$$S_{\beta_y}(u,v)=\sum_{k=1}^{N}e^{-j2\pi j_k\beta_y R_0(pv)}\left(\mathbf{H}^{+}(u,v)\,\mathbf{I}(u,v)\right)_k,$$
and consider the synthesis of a new scene perspective, as seen from a pinhole camera translated a fraction $\beta_y$ upward of the center of the original circular pupil, by inverse Fourier transform of Eq. (17)

Eq. (18)

$$s_{\beta_y}=\mathcal{F}^{-1}\{S_{\beta_y}\}.$$
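Computationally, Eqs. (15)–(18) reduce to per-slice linear phase ramps applied to the recovered spectra and summed. A sketch combining the horizontal and vertical cases, where the frequencies returned by np.fft.fftfreq play the role of (pu) and (pv):

```python
import numpy as np

def synthesize_view(F, currents, R0, beta_x=0.0, beta_y=0.0):
    """Synthesize a novel viewpoint from the slice spectra F_k (output of
    recover_slices above) via the phase ramps of Eqs. (15) and (17),
    followed by the inverse FT of Eqs. (16)/(18)."""
    N, H, W = F.shape
    pu = np.fft.fftfreq(W)[None, :]   # (p u), cycles per pixel
    pv = np.fft.fftfreq(H)[:, None]   # (p v)
    S = np.zeros((H, W), dtype=complex)
    for k in range(N):
        ramp = np.exp(-2j * np.pi * currents[k] * R0
                      * (beta_x * pu + beta_y * pv))
        S += ramp * F[k]              # shift each focus slice by its disparity
    return np.fft.ifft2(S).real
```

Calling synthesize_view(F, currents, R0) with beta_x = beta_y = 0 yields the all-in-focus reconstruction, while sweeping both fractions over, e.g., [−0.25, 0.25] generates a grid of novel viewpoints like the one shown later in Video 2.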

The proposed method is thus able to reconstruct the extended DoF and allows visualization of the reconstructed scene from different perspectives, without prior segmentation of the focused regions from the images in the stack. However, it is possible to retrieve the depth map by combining this method with other schemes.32

3. Results

3.1. Extended Depth-of-Field and Viewpoint Synthesis

Reconstruction of the extended-DoF or all-in-focus image corresponds to $\beta_x=0$, $\beta_y=0$ (i.e., as seen with a centered pinhole camera) and is shown in Fig. 3. Note how, unlike in the original images of the stack in Fig. 2, individual cell nuclei from different depths of the aggregate can be clearly seen in a single image.

Fig. 3

Extended DoF for a virtual centered pinhole view ($\beta_x=0$, $\beta_y=0$); see Video 2 for a complete set of novel viewpoints corresponding to $-0.25\leq\beta_x\leq0.25$ and $-0.25\leq\beta_y\leq0.25$ (Video 2, MP4, 1.0 MB [URL: https://doi.org/10.1117/1.JBO.27.6.066501.2]).


If we instead consider arbitrary fractional displacements $\beta_x$, $\beta_y$, the corresponding viewpoints can be synthesized from Eqs. (16) and (18), respectively (the combination of horizontal and vertical viewpoints is straightforward). A complete set of novel viewpoints for $-0.25\leq\beta_x\leq0.25$, $-0.25\leq\beta_y\leq0.25$ is available in Video 2.

3.2. Stereoscopic Pairs for 3D Visualization

Binocular vision is based on the fact that 3D objects are perceived from two different perspectives due to the horizontal separation between our left and right eyes. As a result, the left and right images of a 3D scene in our retinas are slightly different. This retinal disparity between the images provides the observer with information about the relative distances and depth structure of 3D objects. Both perspectives of the same 3D scene are fused by the brain to give the perception of depth.33,34

In a similar way, a pair of stereoscopic images can be generated by considering a virtual stereocamera35 formed by a left pinhole camera displaced to the left of the center of the original pupil, $b_x=B/2$, and a right pinhole camera displaced to the right of the center of the original pupil, $b_x=-B/2$, where the separation B between the left and right virtual pinhole cameras is known as the baseline. Since points of view from outside of the aperture have no physical meaning, $B\leq 2R$.

Then, it is straightforward to reconstruct the left and right views according to

Eq. (19)

$$i_L(x,y)=s_{B/2R}(x,y),$$

Eq. (20)

$$i_R(x,y)=s_{-B/2R}(x,y),$$
where each right-hand side is calculated by means of Eq. (16). Once the stereoscopic pair is generated, the left and right images can be displayed in different ways.36 In Fig. 4, the cross-eye stereo pair for B = R/2 is presented. With some practice, the fused image is perceived by deliberately crossing one's eyes until the two images come together.
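A sketch of this stereo-pair rendering, Eqs. (19) and (20), reusing synthesize_view from Sec. 2.2; placing the right-eye view on the left half (and vice versa) follows the usual cross-eye display convention:

```python
import numpy as np

def cross_eye_pair(F, currents, R0, B_over_2R=0.25):
    """Render the stereo pair of Eqs. (19)-(20); B = R/2 gives
    beta_x = +/-0.25 as in Fig. 4."""
    i_left = synthesize_view(F, currents, R0, beta_x=+B_over_2R)   # Eq. (19)
    i_right = synthesize_view(F, currents, R0, beta_x=-B_over_2R)  # Eq. (20)
    return np.hstack([i_right, i_left])  # right view on the left half
```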

Fig. 4

Stereoscopic pair for cross-eyed visualization. (a) and (b) Reconstructed perspective images corresponding to a pinhole camera virtually displaced to the right and to the left, respectively.


3.3. Performance Assessment

Quantitative comparison can be performed with the help of a synthetic multifocus stack since, unlike for the real stack, a ground-truth reference for each viewpoint of interest can be constructed. Figure 5(a) shows the synthetic 3D scene representing three rings of fluorescent beads, with each ring lying on a different plane at distance $z_i$, $i=1,2,3$. Images of the stack corresponding to the system focusing on each of these planes (for currents $j_i$, $i=1,2,3$) are shown in Figs. 5(b1)–5(b3), respectively.

Fig. 5

Synthetic stack of fluorescent beads and reconstruction of viewpoints. (a) 3D scene. (b1)–(b3) Images of the stack corresponding to $R_0\approx12.7\ \mathrm{mA}^{-1}$ and the system focusing for currents through the EFTL of 50, 33.3, and 0 mA, respectively. (c1)–(c3) Ground truth for fractional displacements $\beta_x=0.5, 0, -0.5$, respectively [a vertical dashed guideline passing through a bead in the central viewpoint (b2) is added to visualize the displacements more clearly]. (d1)–(d3) Reconstructed viewpoints for fractional displacements $\beta_x=0.5, 0, -0.5$, respectively.


Figures 5(c1)–5(c3) show the ground truth for the scene as viewed from a pinhole camera displaced to the left, center, and right, for relative displacements $\beta_x=0.5, 0, -0.5$, respectively. The multifocus stack of Figs. 5(b1)–5(b3) is used to render the same viewpoints, and the results obtained by means of Eq. (16) are presented in Figs. 5(d1)–5(d3), respectively. Table 1 shows the mean square error (MSE) resulting from the comparison of luminances against the ground truth for each relative displacement, showing very good agreement between the reconstructed viewpoints and their corresponding ground truth.

Table 1

MSE values from comparison against ground-truth for different horizontal relative displacements βx.

βx      MSE
+0.5    0.4865
0       0.7065
−0.5    0.4799
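For reference, the comparison metric itself is straightforward; the Rec. 601 luminance weights below are our assumption, as the exact luminance definition is not specified in the text:

```python
import numpy as np

def luminance(rgb):
    """Rec. 601 luma (an assumed stand-in for the luminance used above)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def mse(recon, truth):
    """Mean square error between two luminance images, as in Table 1."""
    return float(np.mean((np.asarray(recon, float) - truth) ** 2))
```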

4. Conclusion

We have developed a custom-built fluorescence microscope that incorporates an electrically focus-tunable lens and allows us to acquire sets of multifocus images from thick biological samples, in particular MCTS.

Our algorithms then operated on the acquired stacks to accomplish extended DoF by multifocus image fusion, without depth-map estimation or segmentation of the in-focus regions.

Besides all-in-focus reconstruction along the optical axis, viewpoint synthesis with shifts in perspective can be performed to provide a stereoscopic pair of images of the sample, as well as 3D visualization of the structure of the cell aggregates.

Our proof-of-principle experimental results show the potential of the present approach, which could serve in a wide range of biological and biomedical applications where 3D visualization of a biological sample might be useful. As a future line of work, it might be interesting to include more fluorescent channels to assess different cellular structures.

Disclosures

The authors have no conflicts of interest to disclose.

Acknowledgments

This work was supported by Comisión Sectorial de Investigación Científica (CSIC) (grant number ID 237) and Programa de Desarrollo de las Ciencias Básicas (PEDECIBA). Significant parts of this paper were presented at Three-Dimensional Imaging, Visualization, and Display 201937 and Unconventional Optical Imaging II.38

References

1. J. Laurent et al., "Multicellular tumor spheroid models to explore cell cycle checkpoints in 3D," BMC Cancer, 13, 73 (2013). https://doi.org/10.1186/1471-2407-13-73

2. L. E. Jamieson, D. J. Harrison and C. Campbell, "Chemical analysis of multicellular tumour spheroids," Analyst, 140(12), 3910–3920 (2015). https://doi.org/10.1039/C5AN00524H

3. J. W. Lichtman and J.-A. Conchello, "Fluorescence microscopy," Nat. Methods, 2(12), 910–919 (2005). https://doi.org/10.1038/nmeth817

4. K. Olofsson et al., "Single cell organization and cell cycle characterization of DNA stained multicellular tumor spheroids," Sci. Rep., 11, 17076 (2021). https://doi.org/10.1038/s41598-021-96288-6

5. S. J. Han, S. Kwon and K. S. Kim, "Challenges of applying multicellular tumor spheroids in preclinical phase," Cancer Cell Int., 21, 152 (2021). https://doi.org/10.1186/s12935-021-01853-8

6. M. Egeblad et al., "Visualizing stromal cell dynamics in different tumor microenvironments by spinning disk confocal microscopy," Dis. Models Mech., 1(2–3), 155–167 (2008). https://doi.org/10.1242/dmm.000596

7. X. Zhou et al., "Double-exposure optical sectioning structured illumination microscopy based on Hilbert transform reconstruction," PLoS One, 10(3), e0120892 (2015). https://doi.org/10.1371/journal.pone.0120892

8. B.-C. Chen et al., "Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution," Science, 346(6208), 1257998 (2014). https://doi.org/10.1126/science.1257998

9. J. Rosen and G. Brooker, "Non-scanning motionless fluorescence three-dimensional holographic microscopy," Nat. Photonics, 2(3), 190–195 (2008). https://doi.org/10.1038/nphoton.2007.300

10. S. K. Rajput et al., "Three-dimensional fluorescence imaging using the transport of intensity equation," J. Biomed. Opt., 25(3), 032004 (2019). https://doi.org/10.1117/1.JBO.25.3.032004

11. M. Levoy et al., "Light field microscopy," in ACM SIGGRAPH 2006 Papers, 924–934 (2006).

12. R. Prevedel et al., "Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy," Nat. Methods, 11(7), 727–730 (2014). https://doi.org/10.1038/nmeth.2964

13. L. Galdón et al., "Fourier lightfield microscopy: a practical design guide," Appl. Opt., 61(10), 2558–2564 (2022). https://doi.org/10.1364/AO.453723

14. G. Scrofani et al., "FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples," Biomed. Opt. Express, 9(1), 335–346 (2018). https://doi.org/10.1364/BOE.9.000335

15. M. Martínez-Corral and B. Javidi, "Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems," Adv. Opt. Photonics, 10(3), 512–566 (2018). https://doi.org/10.1364/AOP.10.000512

16. I. Iglesias and J. Ripoll, "Plenoptic projection fluorescence tomography," Opt. Express, 22(19), 23215–23225 (2014). https://doi.org/10.1364/OE.22.023215

17. D. Ancora et al., "Phase-retrieved tomography enables mesoscopic imaging of opaque tumor spheroids," Sci. Rep., 7, 11854 (2017). https://doi.org/10.1038/s41598-017-12193-x

18. F. L. Liu et al., "Fourier DiffuserScope: single-shot 3D Fourier light field microscopy with a diffuser," Opt. Express, 28(20), 28969–28986 (2020). https://doi.org/10.1364/OE.400876

19. G. Zheng, R. Horstmeyer and C. Yang, "Wide-field, high-resolution Fourier ptychographic microscopy," Nat. Photonics, 7(9), 739–745 (2013). https://doi.org/10.1038/nphoton.2013.187

20. L. Tian et al., "Computational illumination for high-speed in vitro Fourier ptychographic microscopy," Optica, 2(10), 904–911 (2015). https://doi.org/10.1364/OPTICA.2.000904

21. S. Abrahamsson et al., "Fast multicolor 3D imaging using aberration-corrected multifocus microscopy," Nat. Methods, 10(1), 60–63 (2013). https://doi.org/10.1038/nmeth.2277

22. S. Liu and H. Hua, "Extended depth-of-field microscopic imaging with a variable focus microscope objective," Opt. Express, 19(1), 353–362 (2011). https://doi.org/10.1364/OE.19.000353

23. J. R. Alonso et al., "All-in-focus image reconstruction under severe defocus," Opt. Lett., 40(8), 1671–1674 (2015). https://doi.org/10.1364/OL.40.001671

24. J. Jiang et al., "Fast 3-D temporal focusing microscopy using an electrically tunable lens," Opt. Express, 23(19), 24362–24368 (2015). https://doi.org/10.1364/OE.23.024362

25. J. R. Alonso, A. Fernández and J. A. Ferrari, "Reconstruction of perspective shifts and refocusing of a three-dimensional scene from a multi-focus image stack," Appl. Opt., 55(9), 2380–2386 (2016). https://doi.org/10.1364/AO.55.002380

26. J. R. Alonso, "Fourier domain post-acquisition aperture reshaping from a multi-focus stack," Appl. Opt., 56(9), D60–D65 (2017). https://doi.org/10.1364/AO.56.000D60

27. M. Arocena et al., "Using a variant of coverslip hypoxia to visualize tumor cell alterations at increasing distances from an oxygen source," J. Cell. Physiol., 234, 16671–16678 (2019). https://doi.org/10.1002/jcp.28507

28. N. E. Timmins and L. K. Nielsen, "Generation of multicellular tumor spheroids by the hanging-drop method," in Tissue Engineering, 141–151, Springer (2007).

29. A. Kubota, K. Kodama and K. Aizawa, "Registration and blur estimation methods for multiple differently focused images," in Proc. Int. Conf. Image Process. (ICIP 99), 447–451 (1999). https://doi.org/10.1109/ICIP.1999.822936

30. A. Ben-Israel and T. N. Greville, Generalized Inverses: Theory and Applications, Vol. 15, Springer Science & Business Media (2003).

31. J. W. Goodman, Introduction to Fourier Optics, Roberts and Company Publishers (1996).

32. J. R. Alonso, A. Fernández and B. Javidi, "Augmented reality three-dimensional visualization with multifocus sensing," Opt. Contin., 1(2), 355–365 (2022). https://doi.org/10.1364/OPTCON.445068

33. I. P. Howard and B. J. Rogers, Binocular Vision and Stereopsis, Oxford University Press (1995).

34. M. Lambooij et al., "Visual discomfort and visual fatigue of stereoscopic displays: a review," J. Imaging Sci. Technol., 53(3), 030201 (2009). https://doi.org/10.2352/J.ImagingSci.Technol.2009.53.3.030201

35. J. Alonso, "Stereoscopic 3D scene synthesis from a monocular camera with an electrically tunable lens," Proc. SPIE, 9970, 99700J (2016). https://doi.org/10.1117/12.2237086

36. K. Iizuka, Engineering Optics, Vol. 35, Springer Science & Business Media (2013).

37. J. R. Alonso, A. Silva and M. Arocena, "3D visualization in multifocus fluorescence microscopy," Proc. SPIE, 10997, 109970Q (2019). https://doi.org/10.1117/12.2520067

38. J. R. Alonso, A. Silva and M. Arocena, "Computational multimodal and multifocus 3D microscopy," Proc. SPIE, 11351, 1135110 (2020). https://doi.org/10.1117/12.2556008

Biography

Julia R. Alonso is an assistant professor at Physics Institute of the Engineering Faculty, Universidad de la República, Uruguay. She received her MA and PhD degrees in physics from this university. Her areas of research are in applied optics, computational optical imaging, and microscopy. She is a member of SPIE.

Alejandro Silva is a teaching assistant at the Physics Institute of the Engineering Faculty, Universidad de la República, Uruguay. He is currently pursuing a PhD in physical engineering from this university.

Ariel Fernández is an assistant professor at Physics Institute of the Engineering Faculty, Universidad de la República, Uruguay. He received his MA and PhD degrees in physics from this university. His areas of research are in applied optics, image processing, and pattern recognition. He is a member of SPIE.

Miguel Arocena is an assistant professor at the Biochemistry Department of the Odontology Faculty, Universidad de la República, Uruguay. He received his MA degree from this university, and his PhD from the University of Aberdeen, UK. His areas of research are in cancer cell biology and cell culture models of the tumor microenvironment.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Julia R. Alonso, Alejandro Silva, Ariel Fernández, and Miguel Arocena "Computational multifocus fluorescence microscopy for three-dimensional visualization of multicellular tumor spheroids," Journal of Biomedical Optics 27(6), 066501 (2 June 2022). https://doi.org/10.1117/1.JBO.27.6.066501
Received: 18 October 2021; Accepted: 23 May 2022; Published: 2 June 2022
Keywords: luminescence; microscopy; visualization; 3D image processing; 3D visualizations; microscopes; coded apertures