Multispectral imagers collect a hypercube of data, where the spatial image spans two dimensions and the spectral information lies along the third. Two main technologies are used for multispectral imaging: sweeping, where the hypercube is built by scanning through different wavelengths or spatial positions, and snapshot multispectral imaging, where the 3D cube of images is taken in one shot. Sweeping imaging systems tend to offer more spectral lines and better spectral resolution, whilst snapshot cameras are often used for dynamic analysis of scenes. A common method to obtain the hypercube in snapshot imagers is pixel-level filtering on the sensor chip. Pixel-level filters, placed directly on the pixels, are integrated during wafer-level processing, which makes them difficult to customize. Therefore, these sensors tend to aim for equally spaced spectral lines in order to cover many applications. This often results in an unnecessarily large data cube when only a few spectral lines are needed; moreover, the spectral lines are not adapted to the specific application. In this work we propose a multispectral camera based on plenoptic imaging, where the filtering is done in a front-end optics module. Our camera has the usual advantages of a snapshot imager, with the added advantage that the spectral lines can be both reduced in number and tailored to the specific application by customizing the filter. This procedure shrinks the hypercube whilst keeping performance by selecting only the relevant data. Moreover, the filter is interchangeable for different applications. The camera presented here is built with off-the-shelf components, provides >40 spectral channels and an image size of 260x260 pixels, and achieves pixel-limited spatial resolution. We demonstrate this technology on fruit quality control using machine learning algorithms.
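The data reduction obtained by tailoring the filter can be made concrete with a small sketch. The snippet below is a minimal illustration, not the authors' processing pipeline; the band indices, array shapes, and random data are assumptions used only to show how keeping the application-relevant bands shrinks the hypercube.

import numpy as np

# Hypothetical full hypercube: 260x260 spatial pixels, 40 spectral channels.
full_cube = np.random.rand(260, 260, 40)

# Suppose only three bands matter for a given task (indices are made up).
relevant_bands = [5, 17, 33]

# Tailored cube: same spatial content, only the bands the application needs.
tailored_cube = full_cube[:, :, relevant_bands]

print(full_cube.nbytes / tailored_cube.nbytes)  # ~13x less data to store and process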
Hyperspectral imaging allows the collection of both spectral and spatial information. This modality is naturally suited to object and material identification and detection, and has found great success in the agriculture and food industries, to name a few.
In snapshot spectral imaging, the 3D cube of images is taken in one shot, with the advantage that dynamic scenes can be analyzed. The simplest way to make a hyperspectral camera is to put an array of wavelength filters on the detector and then integrate this detector with standard camera objectives. The technical challenge is to make arrays of N wavelength filters and repeat this sequence up to 100‘000 times across the detector array, where each individual filter is matched to the pixel size and can be as small as a few microns.
In this work, we generate the same effect with just one N-wavelength filter array, which is optically replicated and imaged onto the detector to achieve the same effective filter array. This was first outlined by Levoy and Horstmeyer using microlens arrays in a light field camera (plenoptic 1.0). Instead of building our own light field camera, we used an existing commercial camera, the Lytro™, as the engine for our telecentric hyperspectral camera. In addition, we developed the tools to extract and rebuild the raw data from the Lytro™ camera.
We demonstrate reconstructed hyperspectral images with 9 spectral channels and show how this can be increased to achieve 81 spectral channels in a single snapshot.
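As a rough illustration of how such a plenoptic raw image maps to a hypercube, the sketch below reassembles spectral channels assuming each microlens forms a 3x3 sub-image in which every position sees one of 9 filters. The grid size, sensor dimensions, and layout are assumptions, not the actual Lytro™ geometry or the authors' reconstruction tools.

import numpy as np

def mosaic_to_hypercube(raw, n=3):
    """Rearrange a plenoptic raw image into a (H/n, W/n, n*n) hypercube,
    assuming each n x n microlens sub-image samples n*n spectral filters."""
    h, w = raw.shape
    cube = np.empty((h // n, w // n, n * n), dtype=raw.dtype)
    for i in range(n):
        for j in range(n):
            # Pixel (i, j) under every microlens belongs to one spectral channel.
            cube[:, :, i * n + j] = raw[i::n, j::n]
    return cube

raw = np.random.rand(780, 780)     # hypothetical raw sensor frame
cube = mosaic_to_hypercube(raw)    # -> (260, 260, 9) hypercube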
We propose two new fast algorithms for the computation of the continuous Fourier series and the continuous Haar transform of rectilinear polygons such as those of mask layouts in optical lithography. These algorithms outperform the discrete counterparts traditionally used. Not only are continuous transforms closer to the underlying continuous physical reality, but they also avoid the inherent inaccuracies introduced by the sampling or rasterization of the polygons in the discrete case. Moreover, the massive amounts of data and the intense processing methods used in lithography require efficient algorithms at every step of the process. We derive the complexity of each algorithm and compare it to that of the corresponding discrete transform. For practical very-large-scale integration (VLSI) layouts, we find a significant reduction in complexity because the number of polygon vertices is substantially smaller than the number of pixels in the corresponding discrete image. This analysis is completed by an implementation and a benchmark of the continuous algorithms and their discrete counterparts. We run extensive experiments and show that on the tested VLSI layouts the pruned continuous Haar transform is 5 to 25 times faster, while the fast continuous Fourier series is 1.5 to 3 times faster than their discrete counterparts.
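As a reminder of why such transforms can be computed directly from the polygon geometry, consider the indicator function of a single axis-aligned rectangle [a,b] x [c,d] on the period cell [0,T)^2; its continuous Fourier series coefficients have the closed form below. This is only the elementary building block, not the paper's pruned algorithms for general rectilinear polygons.

\[
c_{k,\ell}
= \frac{1}{T^2}\int_{a}^{b}\!\!\int_{c}^{d} e^{-j2\pi(kx+\ell y)/T}\,dy\,dx
= \frac{1}{T^2}\left(\int_{a}^{b} e^{-j2\pi kx/T}\,dx\right)\!\left(\int_{c}^{d} e^{-j2\pi\ell y/T}\,dy\right),
\]

so each coefficient costs O(1) per rectangle, and a rectilinear polygon decomposed into rectangles contributes a number of terms proportional to its vertex count rather than to the number of pixels in a rasterized image.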
We propose a design procedure for real, equal-norm, lapped tight frame transforms (LTFTs). These transforms have recently been proposed as both a redundant counterpart to lapped orthogonal transforms and an infinite-dimensional counterpart to harmonic tight frames. In addition, LTFTs can be efficiently implemented with filter banks. The procedure consists of two steps. First, we construct new lapped orthogonal transforms designed from submatrices of the DFT matrix. Then we specify the seeding procedure that yields real, equal-norm LTFTs. Among them we identify the subclass of LTFTs that are maximally robust to erasures.
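For intuition about the seeding idea, the sketch below builds a (complex) harmonic tight frame by keeping N rows of the M x M unitary DFT matrix; the resulting N x M frame is tight with equal-norm columns. This is only the classical harmonic-tight-frame construction, not the paper's procedure for obtaining real LTFTs or their lapped, filter-bank implementation.

import numpy as np

def harmonic_tight_frame(n, m, rows=None):
    """Seed an n x m tight frame from the unitary m x m DFT matrix
    by keeping n of its rows (the first n by default)."""
    if rows is None:
        rows = range(n)
    dft = np.fft.fft(np.eye(m)) / np.sqrt(m)   # unitary DFT matrix
    return dft[list(rows), :]                  # n x m frame with equal-norm columns

F = harmonic_tight_frame(3, 7)
print(np.allclose(F @ F.conj().T, np.eye(3)))                 # tight: F F* = I
print(np.allclose(np.linalg.norm(F, axis=0), np.sqrt(3 / 7)))  # equal-norm columns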
We survey our work on adaptive multiresolution (MR) approaches to the classification of biological and fingerprint images. The system adds an MR decomposition in front of a generic classifier consisting of feature computation and classification in each MR subspace, yielding local decisions, which are then combined into a global decision using a weighting algorithm. The system is tested on four different datasets: subcellular protein location images, Drosophila embryo images, histological images, and fingerprint images. Given the very high accuracies obtained for all four datasets, we demonstrate that the space-frequency localized information in the multiresolution subspaces adds significantly to the discriminative power of the system. Moreover, we show that a vastly reduced set of features is sufficient. Finally, we prove that frames are the class of MR techniques that performs best in this context. This leads us to consider the construction of a new family of frames for classification, which we term lapped tight frame transforms.
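The weighted combination of local subspace decisions can be illustrated with a small sketch. This is a generic weighted-voting scheme under assumed interfaces (per-subspace classifiers returning class-probability vectors), not the specific weighting algorithm of the surveyed work; the weights and probabilities below are made up.

import numpy as np

def combine_local_decisions(local_probs, weights):
    """Combine per-subspace class-probability vectors into a global decision.

    local_probs: array of shape (n_subspaces, n_classes), one row per MR subspace.
    weights:     array of shape (n_subspaces,), e.g. learned on validation data.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                   # normalize subspace weights
    global_probs = weights @ np.asarray(local_probs)    # weighted average over subspaces
    return int(np.argmax(global_probs))                 # global class decision

# Hypothetical example: 3 MR subspaces, 4 classes.
local_probs = [[0.1, 0.6, 0.2, 0.1],
               [0.3, 0.3, 0.3, 0.1],
               [0.0, 0.7, 0.2, 0.1]]
print(combine_local_decisions(local_probs, weights=[0.5, 0.2, 0.3]))  # -> 1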