KEYWORDS: Computer programming, High dynamic range imaging, Quantization, Phase transfer function, Video compression, Image processing, Video, Visualization, Distortion, Human vision and color perception
Traditional Low Dynamic Range (LDR) color spaces encode a small fraction of the visible color gamut, which does not encompass the range of colors produced on upcoming High Dynamic Range (HDR) displays. Future imaging systems will require encoding a much wider color gamut and luminance range. Such a wide color gamut can be represented using floating-point HDR pixel values, but those are inefficient to encode. They also lack the perceptual uniformity of luminance and color distribution that most LDR color spaces provide approximately. There is therefore a need for an efficient, perceptually uniform, integer-valued representation of high dynamic range pixel values. In this paper we evaluate several methods for encoding color HDR pixel values, in particular for use in image and video compression. Unlike other studies, we test both luminance and color difference encoding in rigorous four-alternative forced-choice (4AFC) threshold experiments to determine the minimum bit-depth required. Results show that the Perceptual Quantizer (PQ) encoding provides the best perceptual uniformity in the considered luminance range, although the gain in bit-depth is rather modest. More significant differences can be observed between color difference encoding schemes, of which YDuDv encoding appears to be the most efficient.
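The PQ encoding mentioned above is standardized as SMPTE ST 2084; a minimal sketch of the standard transfer function and its inverse (the published curve, not code from the paper):

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32
PEAK = 10000.0  # cd/m^2

def pq_encode(L):
    """Map absolute luminance (cd/m^2) to a [0, 1] PQ code value."""
    Y = np.clip(np.asarray(L, dtype=float) / PEAK, 0.0, 1.0)
    Ym = Y ** M1
    return ((C1 + C2 * Ym) / (1.0 + C3 * Ym)) ** M2

def pq_decode(E):
    """Inverse transform: PQ code value back to absolute luminance."""
    Em = np.asarray(E, dtype=float) ** (1.0 / M2)
    Ym = np.maximum(Em - C1, 0.0) / (C2 - C3 * Em)
    return PEAK * Ym ** (1.0 / M1)
```

A fixed-bit-depth integer representation then follows by uniformly quantizing the [0, 1] code value, which is where the minimum-bit-depth question studied in the paper arises.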
KEYWORDS: High dynamic range imaging, Visualization, Calibration, Imaging systems, Distortion, Signal processing, Video processing, Image quality, Visibility, Standards development
With the emergence of high-dynamic range (HDR) imaging, the existing visual signal processing systems will need to deal with both HDR and standard dynamic range (SDR) signals. In such systems, computing the objective quality is an important aspect in various optimization processes (e.g., video encoding). To that end, we present a newly calibrated objective method that can tackle both HDR and SDR signals. As it is based on the previously proposed HDR-VDP-2 method, we refer to the newly calibrated metric as HDR-VDP-2.2. Our main contribution is toward improving the frequency-based pooling in HDR-VDP-2 to enhance its objective quality prediction accuracy. We achieve this by formulating and solving a constrained optimization problem and thereby finding the optimal pooling weights. We also carried out extensive cross-validation as well as verified the performance of the new method on independent databases. These indicate clear improvement in prediction accuracy as compared with the default pooling weights. The source codes for HDR-VDP-2.2 are publicly available online for free download and use.
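The pooling re-calibration can be illustrated in miniature. The sketch below is a hypothetical, simplified stand-in for the paper's constrained optimization: it fits non-negative per-band pooling weights to synthetic quality scores with non-negative least squares; the actual HDR-VDP-2.2 features and fitting procedure differ.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical setup: per-band distortion features F (n_images x n_bands)
# should predict subjective quality scores q via a weighted sum. The
# non-negativity constraint keeps each band's contribution meaningful.
rng = np.random.default_rng(0)
n_images, n_bands = 40, 5
w_true = np.array([0.5, 1.2, 0.8, 0.1, 0.3])   # ground-truth weights
F = rng.random((n_images, n_bands))
q = F @ w_true + 0.01 * rng.standard_normal(n_images)  # noisy scores

# Solve min ||F w - q||^2 subject to w >= 0
w_fit, residual = nnls(F, q)
```

Cross-validation, as done in the paper, would repeat this fit on held-out splits to guard against overfitting the weights to one database.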
KEYWORDS: Data modeling, Colorimetry, Visualization, Visual process modeling, Spatial frequencies, Contrast sensitivity, Modulation, Calibration, Eye models, RGB color model
Inspired by the ModelFest and ColorFest data sets, a contrast sensitivity function was measured for a wide range
of adapting luminance levels. The measurements were motivated by the need to collect visual performance data
for natural viewing of static images at a broad range of luminance levels, such as can be found in the case of high
dynamic range displays. The detection of sine-gratings with Gaussian envelope was measured for achromatic
color axis (black to white), two chromatic axes (green to red and yellow-green to violet) and two mixed chromatic
and achromatic axes (dark-green to light-pink, and dark yellow to light-blue). The background luminance varied
from 0.02 to 200 cd/m2. The spatial frequency of the gratings varied from 0.125 to 16 cycles per degree. More
than four observers participated in the experiments and they individually determined the detection threshold
for each stimulus using at least 20 trials of the QUEST method. Compared to popular CSF models, we
observed a steeper sensitivity drop at higher frequencies and significant differences in sensitivity in the luminance
range between 0.02 and 2 cd/m2. Our measurements for chromatic CSF show a significant drop in sensitivity with
luminance, but little change in the shape of the CSF. The drop in sensitivity at high frequencies is significantly
weaker than reported in other studies and assumed in most chromatic CSF models.
We explore three problems related to quality assessment in computer graphics: the design of efficient user studies;
the scene-referred metrics for comparing high-dynamic-range images; and the comparison of metric performance
for the database of computer graphics distortions. This paper summarizes the most important observations from
investigating these problems and gives a high level perspective on the problem of quality assessment in graphics.
Many visual difference predictors (VDPs) have used basic psychophysical data (such as ModelFest) to calibrate the
algorithm parameters and to validate their performance. However, basic psychophysical data sets often do not contain
a sufficient number of stimuli and stimulus variations to test the more complex components of a VDP. In this paper we calibrate the
Visual Difference Predictor for High Dynamic Range images (HDR-VDP) using radiologists' experimental data for
JPEG2000 compressed CT images which contain complex structures. Then we validate the HDR-VDP in predicting the
presence of perceptible compression artifacts. 240 CT-scan images were encoded and decoded using JPEG2000
compression at four compression ratios (CRs). Five radiologists independently judged whether each image
pair (the original and the compressed image) was distinguishable or indistinguishable. A threshold CR for each image, at which
50% of radiologists would detect compression artifacts, was estimated by fitting a psychometric function. The CT
images compressed at the threshold CRs were used to calibrate the HDR-VDP parameters and to validate its prediction
accuracy. Our results showed that the HDR-VDP calibrated for the CT image data gave much better predictions than the
HDR-VDP calibrated to the basic psychophysical data (ModelFest + contrast masking data for sine gratings).
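The threshold-CR estimation step can be sketched as follows, with hypothetical detection data and a logistic psychometric function (the paper does not state the exact functional form used, so the logistic is an illustrative choice):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: fraction of the five radiologists who could
# distinguish the compressed image at each compression ratio (CR).
cr = np.array([8.0, 16.0, 24.0, 32.0])
p_detect = np.array([0.0, 0.2, 0.8, 1.0])

def logistic(x, x50, slope):
    """Logistic psychometric function; x50 is the 50% threshold CR."""
    return 1.0 / (1.0 + np.exp(-slope * (x - x50)))

# Fit the curve; x50 is the threshold CR at which 50% of observers
# would detect compression artifacts.
(x50, slope), _ = curve_fit(logistic, cr, p_detect, p0=(20.0, 0.5))
```

Images compressed at each fitted x50 then serve as just-at-threshold stimuli for calibrating the metric, as described above.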
The overall image quality benefits substantially from good reproduction of black tones. Modern displays feature
relatively low black levels, making them capable of rendering good dark tones. However, it is not clear whether the
black level of those displays is sufficient to produce an "absolute black" color, which appears no brighter than an
arbitrarily dark surface. To find the luminance necessary to invoke the perception of absolute black,
we conduct an experiment in which we measure the highest luminance that cannot be discriminated from the
lowest luminance achievable in our laboratory conditions (0.003 cd/m2). We measure these thresholds under
varying luminance of the surround (up to 900 cd/m2), which simulates a range of ambient illumination conditions. We also analyze our results in the context of actual display devices. We conclude that the black level of an LCD display with no backlight dimming is not only insufficient for producing an absolute black color, but may also appear grayish under low ambient light levels.
Defocus imaging techniques, involving the capture and reconstruction of purposely out-of-focus images, have
recently become feasible due to advances in deconvolution methods. This paper evaluates the feasibility of
defocus imaging as a means of increasing the effective dynamic range of conventional image sensors. Blurring
operations spread the energy of each pixel over the surrounding neighborhood; bright regions transfer energy to
nearby dark regions, reducing dynamic range. However, there is a trade-off between image quality and dynamic
range inherent in all conventional sensors.
The approach involves optically blurring the captured image by turning the lens out of focus, modifying that
blurred image with a filter inserted into the optical path, then recovering the desired image by deconvolution.
We analyze the properties of the setup to determine whether any combination of filter and deconvolution method can produce a dynamic range
reduction with acceptable image quality. Our analysis considers both the properties of the filter, as a measure of local
contrast reduction, and the distribution of image intensity at different scales, as a measure of global contrast
reduction. Our results show that while combining state-of-the-art aperture filters and deconvolution methods
can reduce the dynamic range of the defocused image, providing higher image quality than previous methods,
rarely does the loss in image fidelity justify the improvements in dynamic range.
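The recovery-by-deconvolution step can be illustrated with a basic Wiener filter, a standard deconvolution baseline; the state-of-the-art methods evaluated in the paper are more sophisticated:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Recover a sharp image from a blurred one via Wiener filtering.

    `psf` is the blur kernel zero-padded to the image size (convolution
    wraps via the FFT); `nsr` is the assumed noise-to-signal power ratio.
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    # Wiener filter in the frequency domain: conj(H) / (|H|^2 + NSR)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```

The `nsr` term regularizes frequencies where the blur kernel's response is near zero; choosing it too small amplifies sensor noise, which is one source of the fidelity loss discussed above.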
Many quality metrics take as input gamma-corrected images and assume that pixel code values are perceptually
uniformly scaled. Although this is a valid assumption for darker displays operating in the luminance range
typical for CRT displays (from 0.1 to 80 cd/m2), it is no longer true for much brighter LCD displays (typically
up to 500 cd/m2), plasma displays (small regions up to 1000 cd/m2) and HDR displays (up to 3000 cd/m2).
The distortions that are barely visible on dark displays become clearly noticeable when shown on much brighter
displays. To estimate the quality of images shown on bright displays, we propose a straightforward extension to the
popular quality metrics, such as PSNR and SSIM, that makes them capable of handling all luminance levels
visible to the human eye without altering their results for typical CRT display luminance levels. Such extended
quality metrics can be used to estimate quality of high dynamic range (HDR) images as well as account for
display brightness.
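The proposed extension amounts to transforming absolute luminance through a perceptually uniform encoding before applying a standard metric. A sketch with a simple logarithmic stand-in curve (the paper derives its transform from contrast detection data, so the curve below is illustrative only):

```python
import numpy as np

def pu_stand_in(L):
    """Illustrative perceptually-uniform luminance encoding (log curve),
    scaled to a 0-255 range; a stand-in for the paper's fitted transform."""
    return 255.0 * np.log2(1.0 + L) / np.log2(1.0 + 10000.0)

def psnr(a, b, peak):
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def pu_psnr(L_ref, L_test):
    """PSNR computed on perceptually encoded absolute luminance maps."""
    return psnr(pu_stand_in(L_ref), pu_stand_in(L_test), peak=255.0)
```

Because the encoding approximates the display's gamma response in the CRT luminance range, scores for typical dark-display content remain close to those of the unmodified metric.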
The advances in high dynamic range (HDR) imaging, especially in the display and camera technology, have a significant
impact on the existing imaging systems. The assumptions of traditional low-dynamic range imaging, designed for
paper print as the major output medium, are ill-suited to the range of visual material shown on modern displays. For
example, the common assumption that the brightest color in an image is white can hardly be justified for high-contrast
LCD displays, not to mention next generation HDR displays, that can easily create bright highlights and the impression
of self-luminous colors. We argue that high dynamic range representation can encode images regardless of the technology
used to create and display them, with the accuracy that is only constrained by the limitations of the human eye and
not a particular output medium. To facilitate the research on high dynamic range imaging, we have created a software
package (http://pfstools.sourceforge.net/) capable of handling HDR data on all stages of image and video processing. The
software package is available as open source under the General Public License and includes solutions for high quality
image acquisition from multiple exposures, a range of tone mapping algorithms and a visual difference predictor for HDR
images. Examples of shell scripts demonstrate how the software can be used for processing single images as well as video
sequences.
Most common image and video formats have been designed to work with existing output devices, like LCD or CRT monitors. As the display technology makes huge progress, these formats can no longer represent the data that the new devices can display. Therefore a shift towards higher precision image and video formats seems to be imminent.
To overcome limitations of the common image and video formats, such as JPEG, PNG or MPEG, we propose a novel color space, which can accommodate an extended dynamic range and guarantees precision below the visibility threshold. The proposed color space, which is derived from contrast detection data, can represent the full range of luminance values and the complete color gamut visible to the human eye. We show that only minor changes to the existing encoding algorithms are required to accommodate the new color space and thereby greatly enhance the information content of the visual data. We demonstrate this on two compression algorithms for High Dynamic Range (HDR) visual data: one for static images and one for video. We argue that the proposed HDR representation is a simple and universal way to encode visual data independently of the display
or capture technology.
New imaging and rendering systems commonly use physically accurate
lighting information in the form of high-dynamic range (HDR) images
and video. HDR images contain actual colorimetric or physical
values, which can span 14 orders of magnitude, instead of 8-bit
renderings, found in standard images. The additional precision and
quality retained in HDR visual data is necessary to display images
on advanced HDR display devices, capable of showing contrast of
50,000:1, as compared to the contrast of 700:1 for LCD displays.
With the development of high-dynamic range visual techniques comes a
need for an automatic visual quality assessment of the resulting
images. In this paper we propose several modifications to the Visual
Difference Predictor (VDP). The modifications improve the
prediction of perceivable differences in the full visible range of
luminance and under the adaptation conditions corresponding to real
scene observation. The proposed metric takes into account the
aspects of high contrast vision, like scattering of the light in the
optics (OTF), nonlinear response to light for the full range of
luminance, and local adaptation. To calibrate our HDR VDP we perform
experiments using an advanced HDR display, capable of displaying the
range of luminance that is close to that found in real scenes.
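Of the listed components, local adaptation is the simplest to illustrate: a common simplification models the adaptation luminance at each pixel as a Gaussian-weighted average of log-luminance over a neighborhood. The sketch below follows that simplification and is not the metric's exact adaptation model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_adaptation_luminance(L, sigma_deg=1.0, ppd=30.0):
    """Estimate a local adaptation luminance map.

    Averages log-luminance with a Gaussian of width `sigma_deg` visual
    degrees, given `ppd` pixels per degree; a simplified stand-in for
    the metric's adaptation model. The floor avoids log(0).
    """
    sigma_px = sigma_deg * ppd
    return np.exp(gaussian_filter(np.log(np.maximum(L, 1e-4)), sigma_px))
```

Averaging in the log domain keeps the estimate stable across the orders of magnitude spanned by HDR scenes, which a linear-domain average would not.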