Thanks to technical progress in filter array technologies, capturing multimodal data of a scene, such as polarization and/or spectral information, in a single acquisition is now possible. Nevertheless, a reconstruction procedure referred to as demosaicing is required to produce a full-definition image in each band. The computational imaging community often needs full-reference images to assess the performance of these reconstruction algorithms. However, these multidimensional data become increasingly difficult to capture as the number of channels grows, which often leads to misalignment among channels or noise introduced by imperfect optics. In this work, we study the use of such imperfect data in the context of demosaicing. The impact of misalignment is assessed on an existing Color Polarization Filter Array database, from which we demosaic the data using three types of demosaicing algorithms applied to either pre-processed or raw datasets. We found that denoising and registration do not modify the ranking of the best-performing algorithms in the case of sub-pixel shifts. We also show that visual artifacts, usually attributed to drawbacks of training-based demosaicing algorithms, may instead be due to the use of unregistered images during the training stage of the algorithms.
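To illustrate the kind of reconstruction the abstract refers to, a minimal bilinear PFA demosaicing sketch is given below. This is a generic textbook baseline, not one of the three algorithms evaluated in the paper; the 2×2 micro-polarizer layout and the function names are assumptions.

```python
import numpy as np

# Assumed 2x2 micro-polarizer layout (hypothetical; real sensors vary):
#   row 0: 90, 45
#   row 1: 135, 0
LAYOUT = {90: (0, 0), 45: (0, 1), 135: (1, 0), 0: (1, 1)}


def _conv3(img, kernel):
    """3x3 convolution with zero padding (NumPy only, no SciPy)."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * p[i:i + h, j:j + w]
    return out


def demosaic_pfa_bilinear(raw):
    """Fill each sparse polarization channel by bilinear neighbor averaging."""
    h, w = raw.shape
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    channels = {}
    for angle, (r, c) in LAYOUT.items():
        sparse = np.zeros((h, w))
        mask = np.zeros((h, w))
        sparse[r::2, c::2] = raw[r::2, c::2]
        mask[r::2, c::2] = 1.0
        # dividing by the convolved mask normalizes weights at image borders
        channels[angle] = _conv3(sparse, kernel) / _conv3(mask, kernel)
    return channels
```

With each channel sampled on a stride-2 grid, every pixel lies within one step of a same-channel sample, so the normalized 3×3 kernel reduces to ordinary bilinear interpolation in the interior.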
A few polarization image datasets depicting real-world scenes have been reported. Some of them are available on an open-data basis. Some databases contain color images, often with color bands reconstructed from a sensor equipped with a Bayer filter. Unfortunately, even if these real-world images depict a variety of objects and situations and have a good overall quality (i.e., the spectral bands and the various polarization channels are or can be registered, and noise is reduced), they often have a low definition (smaller than 1 Mp), a low bit depth, and are captured with a large lens aperture, resulting in very band-limited images. Moreover, the demosaicing procedure used to reconstruct the various color bands has a smoothing effect, reducing their resolution. This latter point proves detrimental when it comes to using these images as references for demosaicing algorithms, especially for RGB images: since each channel combining polarization direction and spectral band is very sparse in the base mosaic pattern, artifacts likely to appear are considerably underestimated with band-limited images. In this work, we review existing polarization image databases, focus on non-mosaiced datasets, and propose a technique to produce HD polarization images with superior quality.
Two typical instruments can be employed for linear polarization imaging: a rotating polarizer in front of a classical monochrome camera (division of time), or a dedicated sensor with a polarization filter array (division of focal plane). The latter method enables snapshot acquisition of the linear polarization properties of light with a compact and affordable instrument. The rotating polarizer method has until now been preferred when good polarimetric precision is required. It is still unclear how these two techniques compare in terms of polarimetric accuracy. This paper provides a practical comparison between the two methods and evaluates the effect of pre-processing applied to raw images to counterbalance the differences.
With advances in unmanned and autonomous vehicles, camera-based navigation is increasingly being used. A new low-cost navigation solution based on monochrome polarization filter array cameras is presented. For this purpose, we have developed our own acquisition pipeline and an image processing algorithm to find the relative heading of an Unmanned Ground Vehicle (UGV) from skylight polarization. The precision of the method has been quantified using a rotary stage. The system was then mounted on the UGV, and the estimated heading was compared to a reference given by a GPS/Inertial Navigation System.
Haze is an undesirable effect in images caused when atmospheric particles, such as water droplets, ice crystals, dust, or smoke, are lit directly or indirectly by the sun. This effect can be counteracted by image processing to bring back the details of a hazy image. Unfortunately, the execution time is often long, which prevents deployment in some video or real-time applications. In this paper, we propose to tune several parameters of the Dark Channel Prior (DCP) algorithm combined with the fast guided filter. We evaluate the optimization in terms of execution time and quantify the output image quality using different image quality metrics.
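The DCP pipeline mentioned above can be sketched in a few lines. This is a simplified single-scale version without the guided-filter refinement discussed in the paper; the patch size, `omega`, and `t0` correspond to the kind of tunable parameters the abstract refers to, but the function names and defaults here are assumptions.

```python
import numpy as np


def dark_channel(img, patch=15):
    """Min over color channels, then a min filter over a square patch."""
    dc = img.min(axis=2)
    h, w = dc.shape
    pad = patch // 2
    p = np.pad(dc, pad, mode='edge')
    out = np.empty_like(dc)
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + patch, j:j + patch].min()
    return out


def dehaze_dcp(img, omega=0.95, t0=0.1, patch=15):
    """Simplified DCP dehazing for an image with values in [0, 1]."""
    dc = dark_channel(img, patch)
    # atmospheric light: mean color of the brightest 0.1% dark-channel pixels
    n = max(1, dc.size // 1000)
    idx = np.argpartition(dc.ravel(), -n)[-n:]
    A = np.maximum(img.reshape(-1, 3)[idx].mean(axis=0), 1e-6)
    # transmission estimate, clamped to t0 to avoid amplifying noise
    t = np.maximum(1.0 - omega * dark_channel(img / A, patch), t0)
    return (img - A) / t[..., None] + A
```

The naive min filter here is the main bottleneck; the execution-time tuning studied in the paper targets exactly this kind of cost, typically by shrinking the patch, subsampling, or replacing the refinement step with the fast guided filter.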
Spectral and polarization imaging (SPI) is an emerging sensing method that permits the analysis of both spectral and polarization information of a scene. The existing acquisition systems are limited by several factors, such as the space requirement, the inability to quickly capture the information, and the high cost. We propose an SPI acquisition system with a high spatial resolution that combines six spectral channels and four polarization channels. The optical setup employs two color-polarization filter array cameras and a pair of bandpass filters. We define the processing pipeline, consisting of preprocessing, geometric calibration, spectral calibration, and data transformation. We show that, around specular highlights, the spectral reconstruction can be improved by filtering the polarized intensity. We provide a database of 28 spectropolarimetric scenes with different materials for future simulation and analysis by the research community.
Spectral and Polarization Imaging (SPI) is an emerging sensing method that combines the acquisition of both spectral and polarization information of a scene. It could benefit various applications such as appearance characterization from measurement, reflectance property estimation, diffuse/specular component separation, and material classification. In this paper, we present a review of recent SPI systems from the literature. We propose a description of the existing SPI systems in terms of technology employed, imaging conditions, and targeted application.
A polarization filter array (PFA) camera is an imaging device capable of analyzing the polarization state of light in a snapshot manner. These cameras exhibit spatial variations, i.e., nonuniformity, in their response due to optical imperfections introduced during the nanofabrication process. Calibration is done by computational imaging algorithms to correct the data for radiometric and polarimetric errors. We reviewed existing calibration methods and applied them using a practical optical acquisition setup and a commercially available PFA camera. The goal of the evaluation is first to compare which algorithm performs better with regard to polarization error and then to investigate both the influence of the dynamic range and number of polarization angle stimuli of the training data. To our knowledge, this has not been done in previous work.
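One common family of PFA calibration schemes fits a per-pixel analysis vector by least squares from responses to known polarization stimuli, so that nonuniform gains and polarizer orientations are corrected jointly. The sketch below is an assumed generic version of such a scheme, not one of the specific methods reviewed in the paper; all names are hypothetical.

```python
import numpy as np


def fit_analysis_vectors(intensities, stimuli):
    """Per-pixel radiometric/polarimetric calibration by least squares.

    Fits an analysis vector a per pixel such that the measured intensity
    is approximately a . S for each known linear Stokes stimulus S.

    intensities: (K, H, W) raw responses to K polarization stimuli
    stimuli:     (K, 3) known Stokes vectors (S0, S1, S2)
    returns:     (H, W, 3) fitted per-pixel analysis vectors
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                    # (K, H*W)
    A, *_ = np.linalg.lstsq(stimuli, I, rcond=None)   # (3, H*W)
    return A.T.reshape(h, w, 3)
```

Once the analysis vectors are known, the Stokes vector at each super-pixel is recovered by inverting the stacked 4×3 analysis matrix; the number and dynamic range of the training stimuli, which the paper investigates, govern how well-conditioned this fit is.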
A Polarization Filter Array (PFA) camera is an imaging device capable of analyzing the polarization state of light in a snapshot manner. These cameras exhibit spatial variations, i.e., nonuniformity, in their response due to optical imperfections introduced during the nanofabrication process. Calibration is done by computational imaging algorithms to correct the data for radiometric and polarimetric errors. In this paper, we review existing calibration procedures and show a practical implementation result of one of these methods applied to a commercially available PFA camera.
Multi-band polarization imaging, by means of analyzing spectral and polarimetric data simultaneously, is a good way to improve the quantity and quality of information recovered from a scene. It can therefore enhance computer vision algorithms, as it recovers more statistical information about a surface than color imaging. This work presents a database of polarimetric and multispectral images that combines visible and near-infrared (NIR) information. An experimental setup is built around a dual-sensor camera. Multispectral images are reconstructed using the dual-RGB method. The polarimetric feature is achieved by rotating linear polarization filters in front of the camera to four different angles (0, 45, 90, and 135 degrees). The resulting imaging system outputs six spectral/polarimetric channels. We present 10 different scenes composed of several materials, including a color checker, a highly reflective metallic object, plastic, paint, liquid, fabric, and food. Our image database is provided online as supplementary material for further simulation and data analysis. This work also discusses several issues related to the multi-band imaging technique described.
KEYWORDS: High dynamic range imaging, Image sensors, High dynamic range image sensors, Cameras, Image processing, Video, Digital cameras, Image resolution, Sensors, Raster graphics, Detection and tracking algorithms, Field programmable gate arrays, Human-machine interfaces, Range imaging
High dynamic range (HDR) image generation from a set of low dynamic range images taken at different exposure times is a low-cost and easy technique. It provides good results for static scenes. However, temporal exposure bracketing cannot be applied directly to dynamic scenes, since camera or object motion across the bracketed exposures creates ghosts in the resulting HDR image. In this paper, we describe a real-time hardware implementation of ghost removal on the high dynamic range video flow of our FPGA-based HDR smart camera, which provides a full-resolution (1280 x 1024) HDR video stream at 60 fps. We present experimental results to show the efficiency of our implemented ghost removal method.
KEYWORDS: High dynamic range imaging, Cameras, Video, Sensors, Image sensors, Video acceleration, High dynamic range image sensors, Imaging systems, Field programmable gate arrays, Image compression
Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears as one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a dedicated hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
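Step (3) above, merging bracketed exposures into an HDR radiance map, can be sketched in software as follows. This is a minimal Debevec-style weighted average assuming a linear sensor response; the full technique also recovers the camera response curve, and the paper implements a hardware version, so the function below is only an illustrative assumption.

```python
import numpy as np


def hat_weight(z):
    """Triangular weight favoring well-exposed mid-range values in [0, 1]."""
    return 1.0 - np.abs(2.0 * z - 1.0)


def merge_hdr(images, exposures, eps=1e-8):
    """Merge bracketed exposures into a radiance map.

    images:    list of arrays with pixel values normalized to [0, 1]
    exposures: matching exposure times in seconds
    Assumes a linear sensor response; each image contributes its radiance
    estimate (value / exposure) weighted by how well-exposed the pixel is.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposures):
        w = hat_weight(img)
        num += w * img / t
        den += w
    return num / np.maximum(den, eps)
```

The hat weighting is what makes three exposures complementary: near-saturated pixels of the long exposure and near-black pixels of the short exposure receive weights close to zero, so each scene region is dominated by its best-exposed capture.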
KEYWORDS: High dynamic range imaging, Video, Cameras, Sensors, Video surveillance, Image quality, Video acceleration, Image sensors, Field programmable gate arrays, Time multiplexed optical shutter
In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed-exposure ones. A real-time hardware implementation of the HDR technique that shows more details in both dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capture and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, and HDR frame generation and representation in a hardware context. Our camera achieves real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through experimental results. Applications of this HDR smart camera include the movie industry, the mass-consumer market, the military, the automotive industry, and surveillance.