Diffuse optical tomography (DOT) has shown promise in biomedical research, such as breast cancer diagnostics and brain imaging, by reconstructing hidden objects within scattering media. However, the conventional reconstruction framework is challenged by the highly ill-posed inverse problem of recovering optical properties. This work introduces neural field-based diffuse optical tomography (NeuDOT), a novel approach that leverages a multi-layer perceptron (MLP) to learn an implicit function mapping spatial coordinates to their corresponding optical absorption coefficients. The performance of NeuDOT has been evaluated through several phantom studies, demonstrating its potential for high-spatial-resolution DOT reconstruction.
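A minimal sketch of the coordinate-to-absorption mapping described above, written as a PyTorch-style MLP. The layer widths, activations, and output transform are illustrative assumptions, not the architecture used in the paper.

```python
# Sketch of a neural field for DOT: an MLP that maps a 3D spatial coordinate
# to an absorption coefficient mu_a. Widths and activations are assumptions.
import torch
import torch.nn as nn

class AbsorptionField(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Softplus(),              # keep mu_a non-negative
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) spatial coordinates -> (N, 1) absorption coefficients
        return self.net(xyz)

# Example query on a batch of sample points in normalized [0, 1]^3 coordinates
field = AbsorptionField()
pts = torch.rand(1024, 3)
mu_a = field(pts)
```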
The depth of field (DoF) effect is a useful tool in photography and cinematography because of its aesthetic value. However, capturing and displaying dynamic DoF effects was until recently possible only with expensive and bulky movie cameras. We propose a computational approach for generating realistic DoF effects on mobile devices such as tablets. We first calibrate the rear-facing stereo cameras and rectify the stereo image pairs through the FCam API, then generate a low-resolution disparity map using graph-cuts stereo matching and upsample it via joint bilateral upsampling. Next, we generate a synthetic light field by warping the raw color image to nearby viewpoints according to the corresponding values in the upsampled high-resolution disparity map. Finally, we render the dynamic DoF effect on the tablet screen with light field rendering. The user can capture and generate desired DoF effects with arbitrary aperture sizes and focal depths using the tablet alone, with no additional hardware or software required. Subjective evaluation tests in a variety of environments show satisfactory results.
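A rough sketch of the joint bilateral upsampling step in the pipeline above, in plain NumPy. The filter radius, the spatial and range sigmas, and the assumption of a float color image in [0, 1] are placeholders rather than the settings used in the paper; a practical implementation would be vectorized.

```python
# Joint bilateral upsampling sketch: a low-resolution disparity map is
# upsampled with spatial weights on the low-res grid and range weights
# taken from the full-resolution color image. Parameters are assumptions.
import numpy as np

def joint_bilateral_upsample(disp_lo, color_hi, scale, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    H, W = color_hi.shape[:2]
    h, w = disp_lo.shape
    disp_hi = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            ly, lx = y / scale, x / scale          # position in the low-res grid
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = int(round(ly)) + dy, int(round(lx)) + dx
                    if not (0 <= qy < h and 0 <= qx < w):
                        continue
                    # spatial weight measured on the low-resolution grid
                    ws = np.exp(-((qy - ly) ** 2 + (qx - lx) ** 2)
                                / (2 * sigma_s ** 2))
                    # range weight from the high-resolution color guide image
                    cy = min(int(qy * scale), H - 1)
                    cx = min(int(qx * scale), W - 1)
                    diff = color_hi[y, x] - color_hi[cy, cx]
                    wr = np.exp(-np.sum(diff ** 2) / (2 * sigma_r ** 2))
                    num += ws * wr * disp_lo[qy, qx]
                    den += ws * wr
            disp_hi[y, x] = num / den if den > 0 else disp_lo[int(ly), int(lx)]
    return disp_hi
```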
Multi-flash (MF) photography offers a number of advantages over regular photography, including removing the effects of illumination, color, and texture, as well as highlighting occlusion contours. Implementing MF photography on mobile devices, however, is challenging due to their restricted form factors, limited synchronization capabilities, low computational power, and limited interface connectivity. In this paper, we present a novel mobile MF technique that overcomes these limitations and achieves performance comparable to conventional MF. We first construct a mobile flash ring using four LED lights and design a special mobile flash-camera synchronization unit. The mobile device's own flash triggers the flash ring via an auxiliary photocell, and the mobile flashes are then fired consecutively in sync with the mobile camera's frame rate to guarantee that each image is captured with only one LED flash on. To process the acquired MF images, we further develop a class of fast mobile image processing techniques for image registration, depth edge extraction, and edge-preserving smoothing. We demonstrate our mobile MF on a number of mobile imaging applications, including occlusion detection, image thumbnailing, image abstraction, and object category classification.
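A simplified sketch of the depth-edge-extraction idea used in multi-flash imaging (in the spirit of Raskar et al.): shadows cast by each flash abut depth discontinuities, so negative transitions in the per-flash ratio images along the flash direction mark depth edges. The threshold, the per-flash direction vectors, and the use of plain image gradients are assumptions, not the paper's exact procedure.

```python
# Multi-flash depth edge sketch: ratio images I_k / I_max drop sharply where a
# flash casts a shadow next to a depth edge. Thresholds/directions are assumed.
import numpy as np

def depth_edges(flash_imgs, directions, thresh=0.3):
    """flash_imgs: list of float grayscale images, one per LED flash.
       directions: per-flash (dy, dx) unit vectors pointing away from the flash."""
    imax = np.maximum.reduce(flash_imgs) + 1e-6     # max-composite image
    edges = np.zeros(imax.shape, dtype=bool)
    for img, (dy, dx) in zip(flash_imgs, directions):
        ratio = img / imax                          # ~1 when lit, low in shadow
        gy, gx = np.gradient(ratio)
        # a lit-to-shadow step along the flash direction gives a strong negative slope
        slope = gy * dy + gx * dx
        edges |= slope < -thresh
    return edges
```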
Recent realizations of hand-held plenoptic cameras have given rise to previously unexplored effects in photography. Designing a mobile phone plenoptic camera is becoming feasible with the significant increase in the computing power of mobile devices and the introduction of System on a Chip. However, capturing a large number of views is still impractical due to special requirements such as an ultra-thin camera and low cost. In this paper, we analyze a mobile plenoptic camera solution with a small number of views. Such a camera can produce a refocusable high-resolution final image if a depth map is generated for every pixel in the sparse set of views. With the captured multi-view images, the main obstacle to recovering a high-resolution depth map is occlusion. To resolve occlusions robustly, we first analyze the behavior of pixels in such situations and show that, even under severe occlusion, different depth layers can still be distinguished based on statistics. We estimate the depth of each pixel by discretizing the scene space and conducting plane sweeping. Specifically, for each given depth, we gather all corresponding pixels from the other views and model the in-focus pixels as a Gaussian distribution. We show how occluded pixels can be distinguished from in-focus pixels in order to find the correct depths. Final depth maps are computed for real scenes captured by a mobile plenoptic camera.
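A bare-bones sketch of the plane-sweep idea described above, assuming small-baseline views that can be related by a simple per-depth disparity shift. The shift model, the discrete disparity labels, and the use of sample variance as the Gaussian-consistency cost are simplifying assumptions; the occlusion-aware statistics of the paper are not reproduced here.

```python
# Plane-sweep depth sketch: for each candidate disparity, gather samples from
# the other views and treat the in-focus hypothesis as a Gaussian, scoring it
# by sample variance. Shift model and labels are illustrative assumptions.
import numpy as np

def plane_sweep_depth(views, offsets, disparities):
    """views: list of float grayscale images; views[0] is the reference.
       offsets: (dy, dx) camera offsets relative to the reference, same order.
       disparities: candidate disparity values, one per swept plane."""
    ref = views[0]
    H, W = ref.shape
    best_cost = np.full((H, W), np.inf)
    depth_idx = np.zeros((H, W), dtype=np.int32)
    yy, xx = np.mgrid[0:H, 0:W]
    for k, d in enumerate(disparities):
        samples = [ref]
        for img, (oy, ox) in zip(views[1:], offsets[1:]):
            # a point at disparity d appears shifted by d * camera offset
            sy = np.clip((yy + d * oy).round().astype(int), 0, H - 1)
            sx = np.clip((xx + d * ox).round().astype(int), 0, W - 1)
            samples.append(img[sy, sx])
        stack = np.stack(samples, axis=0)
        cost = stack.var(axis=0)          # low variance = photo-consistent depth
        update = cost < best_cost
        best_cost[update] = cost[update]
        depth_idx[update] = k
    return depth_idx
```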
We present a novel stereo imaging technique called dual-focus stereo imaging, or DFSI. DFSI uses a pair of images captured from different viewpoints and at different foci, but with an identical wide aperture. Each image in a DFSI pair exhibits different defocus blur, and the two images form a defocused stereo pair. To model defocus blur, we introduce a defocus kernel map (DKM) that encodes the size of the blur disk at each pixel. We derive a novel disparity-defocus constraint for computing the DKM in DFSI, and integrate DKM estimation with disparity map estimation to simultaneously recover both maps. We show that the recovered DKMs provide useful guidance for segmenting the in-focus regions. We demonstrate DFSI in a variety of imaging applications, including low-light imaging, automatic defocus matting, and multifocus photomontage.
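A sketch of the standard thin-lens relation underlying a defocus kernel map: the blur disk diameter at a pixel as a function of its scene depth, the focus depth, the focal length, and the aperture diameter. The parameter names, units, and example values are assumptions; the paper itself couples the DKM to stereo disparity through its disparity-defocus constraint rather than to metric depth directly.

```python
# Thin-lens defocus sketch: blur disk diameter on the sensor for a point at a
# given depth when the lens is focused at focus_depth. Units are assumptions.
import numpy as np

def blur_disk_diameter(depth, focus_depth, focal_len, aperture_diam):
    """All distances in the same units (e.g. meters)."""
    v_focus = focal_len * focus_depth / (focus_depth - focal_len)  # sensor distance
    v_obj = focal_len * depth / (depth - focal_len)                # object's image distance
    return aperture_diam * np.abs(v_obj - v_focus) / v_obj

# Example: defocus kernel sizes for a few depths with a 50 mm f/2 lens (assumed)
depths = np.linspace(0.8, 5.0, 5)                                  # meters
dkm = blur_disk_diameter(depths, focus_depth=1.5,
                         focal_len=0.05, aperture_diam=0.025)
```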
Conference Committee Involvement (9)
Optoelectronic Imaging and Multimedia Technology X
15 October 2023 | Beijing, China
Optoelectronic Imaging and Multimedia Technology IX
5 December 2022 | Online Only, China
Optoelectronic Imaging and Multimedia Technology VIII
10 October 2021 | Nantong, JS, China
Optoelectronic Imaging and Multimedia Technology VII
12 October 2020 | Online Only, China
Optoelectronic Imaging and Multimedia Technology VI
21 October 2019 | Hangzhou, China
Optoelectronic Imaging and Multimedia Technology V
11 October 2018 | Beijing, China
Optoelectronic Imaging and Multimedia Technology IV
12 October 2016 | Beijing, China
Digital Photography X
3 February 2014 | San Francisco, California, United States
Mobile Computational Photography
4 February 2013 | Burlingame, California, United States