We design and analyze a high-speed document sensing and misprint detection system for real-time monitoring of printed
pages. We implemented and characterized a prototype system, comprising a solid-state line sensor and a high-quality imaging
lens, that measures in real time the light reflected from a printed page. We use sensor simulation software and signal
processing methods to create an expected sensor response given the page that is being printed. The measured response is
compared with the predicted response based on a system simulation. A computational misprint detection system measures
differences between the expected and measured responses, continuously evaluating the likelihood of a misprint. We describe
several algorithms to rapidly identify any significant deviations between the expected and actual sensor responses.
The parameters of the system are determined by a cost-benefit analysis.
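The comparison step above can be sketched as a per-pixel deviation test between the simulated and measured line-sensor responses. This is a minimal illustration, not the paper's algorithm: the Gaussian noise model, the z-score threshold, and the minimum-pixel count are all illustrative assumptions.

```python
import numpy as np

def misprint_score(expected, measured, noise_sigma):
    """Per-pixel z-scores of the deviation between the predicted and
    measured line-sensor responses, under an assumed noise level."""
    return (measured - expected) / noise_sigma

def flag_misprint(expected, measured, noise_sigma, z_thresh=4.0, min_pixels=3):
    """Flag a scan line as a likely misprint when enough pixels deviate
    significantly from the simulated response.  The threshold values are
    placeholders; in practice they would come from a cost-benefit analysis."""
    z = np.abs(misprint_score(expected, measured, noise_sigma))
    return int(np.sum(z > z_thresh)) >= min_pixels
```

In a real system the threshold parameters trade false alarms against missed defects, which is exactly the cost-benefit analysis the abstract mentions.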
Most fine art reproduction workflows to date have been based on hyperspectral devices. These devices capture,
process and print more than three channels of spectral data to produce spectrally accurate reproductions. While
these workflows have unique advantages over standard three-channel workflows, such as the ability to produce
reproductions that are colorimetrically accurate across many illuminants, they usually require custom hardware.
Such hardware can be expensive, time-consuming to set up, and may require a full-time trained operator.
We describe the challenges and issues in constructing a colorimetrically accurate fine art reproduction work-
flow based on standard three-channel hardware. The workflow was designed to be as automated as possible,
simple to use, and device-independent. The heart of the workflow is a software application that takes as input
camera characterization data, reflectance statistics of the artwork, an image of the artwork, and an image of a reference
card, and it outputs a properly exposed, uniformly illuminated and colorimetrically accurate reproduction.
We describe the methods used to compute the exposure level, to compensate for illumination non-uniformities,
and to generate a per-image color correction matrix. Finally, we present reproduction results and error statistics
obtained using a workflow comprising a 4x5” Sinar camera, a Betterlight digital back, and an HP DesignJet 5500
printer.
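A standard way to generate a per-image color correction matrix of the kind described above is a least-squares fit from the camera's reference-card measurements to the card's known colorimetric values. The sketch below is a generic illustration of that step, not the workflow's actual implementation:

```python
import numpy as np

def color_correction_matrix(camera_rgb, target_xyz):
    """Solve min ||camera_rgb @ M - target_xyz||_F for a 3x3 matrix M.
    Rows of camera_rgb are the camera responses to the reference-card
    patches; rows of target_xyz are the corresponding known XYZ values."""
    M, _, _, _ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
    return M

def apply_correction(image_rgb, M):
    """Apply the 3x3 correction to an (H, W, 3) image."""
    h, w, _ = image_rgb.shape
    return (image_rgb.reshape(-1, 3) @ M).reshape(h, w, 3)
```

Because the matrix is refit per image from the reference card, it also absorbs exposure and illuminant differences between capture sessions.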
Digital imager sensor responses must be transformed to calibrated (human) color representations for display or print reproduction. Errors in these color rendering transformations can arise from a variety of sources, including (a) noise in the acquisition process (including photon noise and sensor noise) and (b) sensor spectral responsivities inconsistent with those of the human cones. These errors can be summarized by the mean deviation and variance of the reproduced values. It is desirable to select a color transformation that produces both low mean deviations and low noise variance. We show that in some conditions there is an inherent trade-off between these two measures: when selecting a color rendering transformation either the mean deviation or the variance (caused by imager noise) can be minimized. We describe this trade-off mathematically, and we describe a methodology for choosing an appropriate transformation for different applications. We illustrate the methodology by applying it to the problem of color filter selection (CMYG vs. RGGB) for digital cameras. We find that under moderate illumination conditions photon noise alone introduces an uncertainty in the estimated CIELAB coordinates on the order of 1-2 ΔE units for RGGB sensors and in certain cases even higher uncertainty levels for CMYG sensors. If we choose color transformations that equate this variance, the color rendering accuracy of the CMYG and RGGB filters are similar.
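One common way to expose the bias-variance trade-off described above is ridge regularization of the color transformation: shrinking the matrix norm reduces noise amplification (variance) at the cost of a larger mean deviation. This is an illustrative sketch of that trade-off, not the paper's methodology:

```python
import numpy as np

def regularized_color_matrix(sensor_responses, target_xyz, lam):
    """Ridge solution of sensor_responses @ M ~ target_xyz.  Larger lam
    shrinks ||M||, lowering the variance induced by sensor noise but
    increasing the mean deviation of the rendered colors."""
    A = sensor_responses
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                           A.T @ target_xyz)

def noise_gain(M):
    """Total variance amplification for independent, equal-variance
    per-channel sensor noise (squared Frobenius norm of M)."""
    return float(np.sum(M ** 2))
```

Sweeping `lam` traces out the curve between minimum mean deviation and minimum noise variance; different applications pick different operating points on it.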
KEYWORDS: RGB color model, Sensors, Cameras, Space sensors, Calibration, Visualization, Tungsten, Light sources and illumination, Digital photography, Photography
When rendering photographs, it is important to preserve the gray tones despite variations in the ambient illumination. When the illuminant is known, white balancing that preserves gray tones can be performed in many different color spaces; the choice of color space influences the renderings of other colors. In this behavioral study, we ask whether users have a preference for the color space in which white balancing is performed. Subjects compared images rendered with a white balancing transformation that preserved gray tones, but the transformation was applied in one of four color spaces: XYZ, Bradford, camera sensor RGB, and sharpened RGB. We used six scene types (four portraits, fruit, and toys) acquired under three calibrated illumination environments (fluorescent, tungsten, and flash). For all subjects, transformations applied in XYZ and sharpened RGB were preferred to those applied in Bradford and device color space.
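The transformation compared in this study is von-Kries-style diagonal scaling applied in a chosen working space: transform into the space, scale each channel so the scene white maps to the target white, and transform back. A minimal sketch (the white points in the usage test are illustrative values, not the study's data):

```python
import numpy as np

def white_balance(xyz_pixels, scene_white_xyz, target_white_xyz, M):
    """Diagonal (von-Kries-style) white balancing applied in the space
    defined by the 3x3 matrix M -- e.g. a Bradford-type matrix, a camera
    characterization matrix, or the identity for XYZ itself.
    xyz_pixels: (N, 3) array of XYZ rows."""
    src = M @ scene_white_xyz
    dst = M @ target_white_xyz
    D = np.diag(dst / src)           # per-channel gains in the working space
    T = np.linalg.inv(M) @ D @ M     # full transform back in XYZ
    return xyz_pixels @ T.T
```

By construction the scene white maps exactly to the target white for any invertible `M`; the choice of `M` only changes how non-neutral colors are rendered, which is precisely what the subjects were judging.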
With the development of high-speed CMOS imagers, it is possible to acquire and process multiple images within the imager, prior to output. We refer to an imaging architecture that acquires a collection of images and produces a single result as multiple capture single image (MCSI). In this paper we describe some applications of the MCSI architecture using a monochrome sensor and modulated light sources. By using active light sources, it is possible to measure object information in a manner that is independent of the passive illuminant. To study this architecture, we implemented a test system using a monochrome CMOS sensor and several arrays of color LEDs whose temporal modulation can be precisely controlled. First, we report experimental measurements that evaluate how well the active and passive illuminants can be separated as a function of experimental variables, including passive illuminant intensity, temporal sampling rate, and modulation amplitude. Second, we describe two applications of this technique: (a) creating a color image from a monochrome sensor, and (b) measuring the spatial distribution of the passive illuminant.
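Separating the active and passive components can be framed as a per-pixel regression: with a known LED modulation waveform m[t], each pixel's frame stack is modeled as passive + active * m[t]. The sketch below illustrates that model; the specific waveform and frame counts are assumptions for illustration, not the experimental settings:

```python
import numpy as np

def separate_illuminants(frames, m):
    """Per-pixel least-squares fit of frames[t] = passive + active * m[t].
    frames: (T, H, W) stack acquired while the LED intensity follows the
    known waveform m (length T).  Returns (active, passive) images."""
    T = len(m)
    A = np.stack([np.ones(T), np.asarray(m, dtype=float)], axis=1)  # (T, 2)
    coeffs, *_ = np.linalg.lstsq(A, frames.reshape(T, -1), rcond=None)
    passive = coeffs[0].reshape(frames.shape[1:])
    active = coeffs[1].reshape(frames.shape[1:])
    return active, passive
```

Capturing several frames per LED cycle within the imager, before readout, is what makes this practical at high passive-illuminant intensities.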
In this paper, we review several algorithms that have been proposed to transform a high dynamic range image into a reduced dynamic range image that matches the general appearance of the original. We organize these algorithms into two categories: tone reproduction curves (TRCs) and tone reproduction operators (TROs). TRCs operate pointwise on the image data, making the algorithms simple and efficient. TROs use the spatial structure of the image data and attempt to preserve local image contrast.
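A TRC in the sense above is a single monotone curve applied pointwise to every pixel. A minimal example is logarithmic compression; the `key` parameter and normalization here are illustrative choices, not any particular algorithm from the review:

```python
import numpy as np

def log_trc(hdr, key=0.5):
    """A simple global tone-reproduction curve: scale by the mean
    luminance, compress with log1p, and normalize to [0, 1].
    Pointwise, so it is cheap but cannot preserve local contrast."""
    compressed = np.log1p(key * hdr / hdr.mean())
    return compressed / compressed.max()
```

A TRO, by contrast, would make the mapping at each pixel depend on a spatial neighborhood (e.g. a local average), trading this simplicity for better preservation of local contrast.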