This PDF file contains the front matter associated with SPIE Proceedings Volume 7529, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Quality assessment is becoming an important issue in image processing. End-users' quality expectations have risen because high-fidelity sensors are now widely available at affordable prices, an observation that holds across application domains such as printing, compression, and transmission. It has therefore become very important for manufacturers and producers to deliver high-quality products that attract consumers, which in turn requires tools to measure quality. This work is dedicated to the comparison of subjective methodologies in the digital cinema framework. The main goal is to determine, with a group of observers, which methodology is better suited to assessing digital cinema content and what annoyance level is associated with each. Several configurations are tested: side by side, butterfly, one by one, horizontal scroll, vertical scroll, and combined horizontal and vertical scroll.
The Single Stimulus (SS) method is often chosen to collect subjective data for testing no-reference objective metrics, as it is straightforward to implement and well standardized. At the same time, it exhibits some drawbacks: the spread between different assessors is relatively large, and the measured ratings depend on the quality range spanned by the test samples, so results from different experiments cannot easily be merged. The Quality Ruler (QR) method has been proposed to overcome these inconveniences. This paper compares the performance of the SS and QR methods for pictures impaired by Gaussian blur. The research goal is, on one hand, to analyze the advantages and disadvantages of both methods for quality assessment and, on the other, to make quality data for blur-impaired images publicly available. The results show that the confidence intervals of the QR scores are narrower than those of the SS scores, indicating that the QR method enhances consistency across assessors. Moreover, QR scores exhibit a higher linear correlation with the applied distortion. In summary, for the purpose of building datasets of subjective quality, the QR approach seems promising from the viewpoint of both consistency and repeatability.
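The comparison above rests on per-image confidence intervals and on the linear correlation between scores and the applied blur. A minimal sketch of how such quantities might be computed from raw ratings is given below; the rating matrix, blur levels, and numeric constants are hypothetical, not data from the paper.

```python
import numpy as np
from scipy import stats

def mos_and_ci(ratings, confidence=0.95):
    """Mean opinion score and the half-width of its confidence interval
    for one test image rated by several assessors."""
    ratings = np.asarray(ratings, dtype=float)
    sem = stats.sem(ratings)                        # standard error of the mean
    half_width = sem * stats.t.ppf((1 + confidence) / 2, len(ratings) - 1)
    return ratings.mean(), half_width

# Hypothetical raw data: 20 blur levels, 15 assessors per image.
rng = np.random.default_rng(1)
blur_sigma = np.linspace(0.5, 5.0, 20)              # applied Gaussian-blur strength
ss_scores = 80 - 8 * blur_sigma[:, None] + rng.normal(0, 6, size=(20, 15))

mos = np.array([mos_and_ci(row)[0] for row in ss_scores])
ci = np.array([mos_and_ci(row)[1] for row in ss_scores])
r, _ = stats.pearsonr(blur_sigma, mos)               # linear correlation with distortion
print(f"mean CI half-width: {ci.mean():.2f}, Pearson r: {r:.2f}")
```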
Imaging systems in camera phones have image quality limitations attributed to optics, size, and cost constraints. These
limitations generally result in unwanted system noise. In order to minimize the image quality degradation, nonlinear
noise cleaning algorithms are often applied to the images. However, as the strength of the noise cleaning increases, this
often leads to texture degradation. The Camera Phone Image Quality (CPIQ) initiative of the International Imaging
Industry Association (I3A) has been developing metrics to quantify texture appearance in camera phone images. Initial
research established high correlation levels between the metrics and psychophysical data from sets of images that had
noise cleaning filtering applied to simulate capture in actual camera phone systems. This paper describes the subsequent
work to develop a texture-based softcopy attribute ruler in order to assess the texture appearance of eight camera phone
units from four different manufacturers and to assess the efficacy of the texture metrics. Multiple companies
participating in the initiative have been using the softcopy ruler approach in order to pool observers and increase
statistical significance. Results and conclusions based on three captured scenes and two texture metrics will be
presented.
Subjective image quality data were collected from 14 test subjects for 9 image processing pipes and 8 image contents captured with a mobile phone camera (72 natural-scene test images altogether). A triplet comparison setup and a hybrid qualitative/quantitative methodology were applied. MOS data and spontaneous, subjective image quality attributes for each test image were recorded. The use of positive and negative image quality attributes by the experimental subjects suggested a significant difference between the subjective spaces of low and high image quality. The robustness of the attribute data was shown by correlating the DMOS data of the test images against the corresponding average subjective attribute vector lengths. The findings demonstrate
the information value of spontaneous, subjective image quality attributes in evaluating image quality at variable
quality levels. We discuss the implications of these findings for the development of sensitive performance
measures and methods in profiling image processing systems and their components, especially at high image
quality levels.
We present a Videospace framework for classifying selected videos for chosen user groups, device types, or device classes. Photospace has proven effective for classifying large amounts of still images via simple technical parameters. We use the measures of subject-camera distance, scene lighting, and object motion to classify individual videos and finally represent all videos of the chosen group in a three-dimensional space. An expert-rated sample of video was collected to estimate the parameters for a chosen group of videos, and sub-groups of videos were found using the Videospace measures. The presented framework can be used to obtain information about the technical requirements of general device use and the typical shooting conditions of end users. Future measurement efficiency and precision could be improved by using computer-based algorithms or device-based measurement techniques to obtain better samples of the Videospace parameters. Videospace information could be used for finding the most meaningful benchmarking contexts or for characterizing shooting in general with chosen devices or device groups. Using information about the typical parameters of a chosen video group, algorithm and device development can be focused on typical shooting situations, even if processing power and device size are otherwise limited.
Manufacturers of commercial display devices continuously try to improve the perceived image quality of their products.
By applying post-processing techniques to the incoming image signal, they aim to enhance the quality level perceived by the viewer. Such techniques may, however, cause side effects in different portions of the processed image. In order to apply them effectively and improve the overall quality, it is vital to understand how important quality is for different parts of the image. To study this effect, a three-phase experiment was conducted in which observers were asked to score images whose salient regions had a different level of quality than the background areas. The results show that the salient area has a greater effect on the overall quality of the image than the background. This
effect increases with the increasing quality difference between the two regions. It is, therefore, important to take this
effect into consideration when trying to enhance the appearance of specific image regions.
Psychophysical image quality assessments have shown that subjective quality depends upon the pictorial content of the test images. This study is concerned with the nature of scene dependency, which causes problems in modeling and predicting image quality. This paper focuses on scene classification to address this issue and uses K-means clustering to classify test scenes. The aim was to classify thirty-two original test scenes, previously used in a psychophysical investigation conducted by the authors, according to their susceptibility to sharpness and noisiness. The objective scene
classification involved: 1) investigation of various scene descriptors, derived to describe properties that influence image
quality, and 2) investigation of the degree of correlation between scene descriptors and scene susceptibility parameters.
Scene descriptors that correlated with scene susceptibility in sharpness and in noisiness are assumed to be useful in the
objective scene classification. The work successfully derived three groups of scenes. The findings indicate that there is a
potential for tackling the problem of sharpness and noisiness scene susceptibility when modeling image quality. In
addition, more extensive investigations of scene descriptors would be required at global and local image levels in order
to achieve sufficient accuracy of objective scene classification.
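A minimal sketch of the clustering step described above is given below, assuming one row of descriptor values per test scene; the descriptor matrix, its dimensionality, and the standardization step are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical descriptor matrix: one row per test scene, one column per
# scene descriptor (e.g. busyness, edge density, mean lightness).
descriptors = np.random.default_rng(0).random((32, 5))

# Standardize the descriptors so no single one dominates the distance metric,
# then group the 32 scenes into three clusters, matching the paper's outcome.
features = StandardScaler().fit_transform(descriptors)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(labels)   # cluster index assigned to each of the 32 scenes
```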
The aim of the study was to develop a test image for print quality evaluation to improve the current state of the art in testing the quality of digital printing. The image presented by the authors at EI09 portrayed a breakfast scene, the content of which could roughly be divided into four object categories: a woman, a table with objects, a landscape picture, and a gray wall. The image was considered to have four main areas for improvement: the busyness of the image, the control of the color world, the salience of the object categories, and the naturalness of the event and the setting. To improve on the first image, another test image was developed. Although several aspects were improved, the shortcomings of the new image found by visual testing and self-report lay in the same four areas. To combine the insights of the two test images and avoid their pitfalls, a third image was developed. The goodness of the three test images was measured in subjective tests. The third test image was found to address three of the four improvement areas efficiently; only the salience of the objects left something to be desired.
Sorting and searching operations used for the selection of test images strongly affect the results of image quality investigations and require a high level of versatility. This paper describes how inherent image properties, which are known to have a visual impact on the observer, can be used to provide support and an innovative answer to image selection and classification. The selected image properties are intended to be comprehensive and to correlate with our perception. Results from this work aim to lead to the definition of a set of universal scales of perceived image properties
that are relevant to image quality assessments.
The initial prototype built towards these objectives relies on global analysis of low-level image features. A
multidimensional system is built, based upon the global image features of: lightness, contrast, colorfulness, color
contrast, dominant hue(s) and busyness. The resulting feature metric values are compared against outcomes from
relevant psychophysical investigations to evaluate the success of the employed algorithms in deriving image features that
affect the perceived impression of the images.
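The paper does not state which formulations it uses for the global features listed above; as one illustrative stand-in, the sketch below computes a widely used global colorfulness measure (Hasler & Süsstrunk, 2003) from the opponent components of an RGB image. The image array is synthetic.

```python
import numpy as np

def colorfulness(rgb):
    """One common global colorfulness measure (Hasler & Suesstrunk, 2003),
    used here only as an illustrative stand-in for the paper's colorfulness
    feature. rgb: H x W x 3 array with values in [0, 255]."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g                        # red-green opponent component
    yb = 0.5 * (r + g) - b            # yellow-blue opponent component
    std = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std + 0.3 * mean

img = np.random.default_rng(0).integers(0, 256, size=(480, 640, 3))
print(f"colorfulness: {colorfulness(img):.1f}")
```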
A crucial step in image compression is the evaluation of its performance, and more precisely of the available ways to measure the final quality of the compressed image. Usually, performance is measured by computing some measure of covariation between the subjective ratings and the degree of compression applied by the algorithm. Nevertheless, local variations are not well taken into account.
We use the recently introduced Maximum Likelihood Difference Scaling (MLDS) method to quantify suprathreshold perceptual differences between pairs of images and examine how perceived image quality, estimated through MLDS, changes as the compression rate is increased. This approach circumvents the limitations inherent in subjective rating methods.
No-reference quality metrics estimate the perceived quality by exploiting only the image itself. Typically, no-reference metrics are designed to measure specific artifacts using a distortion model. Some psycho-visual experiments have shown that the perception of distortions is influenced by the amount of detail in the image content, suggesting the need for a "content weighting factor." This dependency is coherent with known masking effects of the human visual system. In order to explore this phenomenon, we set up a series of experiments applying regression trees to the problem of no-reference quality assessment. In particular, we focused on the blocking distortion of JPEG compressed images. Experimental results show that information about the visual content of the image can be exploited to improve the estimation of the quality of JPEG compressed images.
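A minimal sketch of the regression-tree idea is shown below: a tree maps a blockiness measurement together with a simple content descriptor to a quality score. The feature names, synthetic data, and tree depth are assumptions for illustration, not the paper's actual features or training set.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

# Hypothetical training data: each row holds a no-reference blockiness
# measurement plus a content-activity descriptor for one JPEG image;
# the target is its subjective quality score.
rng = np.random.default_rng(0)
X = rng.random((200, 2))                       # [blockiness, content activity]
y = 5.0 - 3.0 * X[:, 0] + 1.0 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out images: {tree.score(X_te, y_te):.2f}")
```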
In medical networked applications, the server-generated application view, consisting of medical image content and
synthetic text/GUI elements, must be compressed and transmitted to the client. To adapt to the local content
characteristics, the application view is divided into rectangular patches, which are classified into content classes: medical
image patches, synthetic image patches consisting of text on a uniform/natural/medical image background and synthetic
image patches consisting of GUI elements on a uniform/natural/medical image background. Each patch is thereafter
compressed using a technique yielding perceptually optimal performance for the identified content class. The goal of this
paper is to identify this optimal technique, given a set of candidate schemes. For this purpose, a simulation framework is
used which simulates different types of compression and measures the perceived differences between the compressed
and original images, taking into account the display characteristics. In a first experiment, JPEG is used to code all
patches and the optimal chroma subsampling and quantization parameters are derived for different content classes. The
results show that 4:4:4 chroma subsampling is the best choice, regardless of the content type. Furthermore, frequency-dependent quantization yields better compression performance than uniform quantization, except for content containing a
significant number of very sharp edges. In a second experiment, each patch can be coded using JPEG, JPEG XR or JPEG
2000. On average, JPEG 2000 outperforms JPEG and JPEG XR for most medical images and for patches containing text.
However, for histopathology or tissue patches and for patches containing GUI elements, classical JPEG compression
outperforms the other two techniques.
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of
utmost importance since a high percentage of images are made under low lighting conditions where image quality failure
may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they
share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations
in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions
affect overall image quality.
Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target
blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial
frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for
sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image
quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The
impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels.
The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a
defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging
performance.
Several measurable image quality attributes contribute to the perceived resolution of a printing system. These
contributing attributes include addressability, sharpness, raggedness, spot size, and detail rendition capability. This
paper summarizes the development of evaluation methods that will become the basis of ISO 29112, a standard for the
objective measurement of monochrome printer resolution.
A mathematical model of the electrophotographic printing process has been developed that can be used for analysis. From this, a print simulation process has been developed to simulate the effects of the model components on toner particle placement. A wide variety of simulated prints are produced from the model's three main inputs: laser spread, the charge-to-toner proportionality factor, and toner particle size. While the exact placement of toner particles is a random process, the total effect is not. The effect of each model parameter on the ISO 13660 print quality attributes of line width, fill, raggedness, and blurriness is described.
The techniques of one-dimensional projection in the spatial domain and contrast sensitivity function (CSF) are generally
used to measure banding. Due to the complex printing process of laser printers, hardcopy prints contain other 2D nonuniformities
such as graininess and mottle besides banding. The method of 1D projection is useful for extracting banding,
but it induces the confounding effect of graininess or mottle on the measurement of perceived banding. The appearance
of banding in laser printers is more similar to the sum of various rectangular signals having different amplitudes and
frequencies. However, in many cases banding is modeled as a simple sinusoidal signal and the CSF is frequently applied.
In this paper, we propose new measurement method of banding well correlated with human perception. Two kinds of
spatial features give a good performance to banding measurement. First the correlation factor between two adjacent 1D
signals is considered to obtain banding power which reduces the confounding effect of graininess and mottle. Secondly,
a spatial smoothing filter is designed and applied to reduce the less perceptible low frequency components instead of
using the CSF. By using moving window and subtracting the local mean values, the imperceptible low frequency
components are removed while the perceptible low frequency components like the sharp edge of rectangular waves are
preserved. To validate the proposed method, psychophysical tests are performed. The results show that the correlations
between the proposed method and the perceived scales are 0.96, 0.90, and 0.95 for black, cyan, and magenta,
respectively.
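The following sketch illustrates the two spatial ideas named above on a synthetic scan of a nominally uniform print patch: a moving-window local-mean subtraction standing in for the smoothing filter, and the correlation between adjacent 1D signals as a gate against graininess and mottle. The window size, the column-wise correlation, and the way the two terms are combined are assumptions, not the paper's exact formulas.

```python
import numpy as np

def banding_index(strip, window=51):
    """Very simplified sketch of the two spatial features described above.
    strip: 2-D lightness image of a uniform print area (rows = process direction)."""
    # 1-D projection across the scan direction (banding shows up row-wise).
    profile = strip.mean(axis=1)

    # Moving-window local-mean subtraction: removes very-low-frequency content
    # while keeping sharp, perceptible band edges.
    kernel = np.ones(window) / window
    detrended = profile - np.convolve(profile, kernel, mode="same")

    # Correlation between adjacent 1-D column signals: bands are correlated
    # across columns, whereas graininess and mottle are not.
    cols = strip - strip.mean(axis=0)
    corr = np.mean([np.corrcoef(cols[:, i], cols[:, i + 1])[0, 1]
                    for i in range(strip.shape[1] - 1)])
    return detrended.std() * max(corr, 0.0)      # assumed combination into a banding power

scan = np.random.default_rng(0).normal(128, 2, size=(600, 400))
print(f"banding index: {banding_index(scan):.3f}")   # near zero for pure grain, no bands
```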
The goal of the study was to develop a method for computing the quality of digitally printed images. We wanted to use only attributes that are meaningful for the subjective visual quality experience of digitally printed images. Based on the subjective data and our assessments, the attributes chosen for the quality calculation were sharpness, graininess, and color contrast. The proposed graininess metric divides the fine-detail image into blocks and uses the low-energy blocks for the graininess calculation. The proposed color contrast metric computes the contrast of dominant colors using the coarse-scale image. The proposed sharpness metric divides the coarse-scale image into blocks and uses the high-energy blocks for the sharpness calculation. The reduced-reference features of the sharpness and graininess metrics are the numbers of high- or low-energy blocks; the reduced-reference features of the color contrast metric are the directions of the dominant colors in the reference image. The overall image quality was calculated by combining these values. The performance of the proposed application-specific image quality metric was high compared with a state-of-the-art reduced-reference application-independent image quality metric. Linear correlation coefficients between subjective and predicted MOS were 0.88 for electrophotography and 0.98 for ink-jet printed samples, for a sample set of 21 prints for electrophotography and for ink-jet, subjectively evaluated by 28 observers.
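A minimal sketch of the block-energy selection shared by the sharpness and graininess metrics is given below; the block size, the use of block variance as "energy," and the percentile thresholds are illustrative assumptions, since the paper does not specify them here.

```python
import numpy as np

def block_energies(gray, block=32):
    """Split a grayscale image into non-overlapping blocks and return the
    blocks together with their variance ("energy")."""
    h, w = (gray.shape[0] // block) * block, (gray.shape[1] // block) * block
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, block, block)
    return blocks, blocks.var(axis=(1, 2))

gray = np.random.default_rng(0).random((480, 640))
blocks, energy = block_energies(gray)
thr_low, thr_high = np.percentile(energy, [25, 75])     # assumed thresholds

smooth_blocks = blocks[energy < thr_low]     # candidates for the graininess metric
busy_blocks = blocks[energy > thr_high]      # candidates for the sharpness metric
print(len(smooth_blocks), len(busy_blocks))  # reduced-reference feature: block counts
```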
Typical document scanners are based on the principle of comparing the amount of light reflected from the point of interest with that reflected from a known surface; the reflectance of a point of interest on the document, as measured by a scanner, is therefore affected by the reflectance of its neighboring region via multiple light reflections. This effect is referred to as the "integrating cavity effect." We investigate the effect by establishing an optical model with optical ray tracing and demonstrate it by examining the illumination profile after accounting for multiple reflections off the document. The simulation shows that the impact is less significant in the slow-scan direction than in the fast-scan direction. We identified that the platen glass can contribute just as much to the effect as the illumination assembly, but its impact is much more localized. Further, we demonstrated that the impacts of the illumination assembly and the platen glass are largely independent of each other, and that the impact of the illumination assembly on the illumination profile is equivalent to a convolution of the original document content with a Gaussian kernel of considerable bandwidth.
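The Gaussian-convolution interpretation of the illumination assembly's contribution can be illustrated with a short sketch; the page reflectances and the kernel width below are arbitrary assumptions, not the measured values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustration only: the illumination assembly's contribution behaves like a
# convolution of the document reflectance with a wide Gaussian kernel.
reflectance = np.full((512, 512), 0.85)           # hypothetical bright page
reflectance[200:300, 200:300] = 0.05              # dark patch on the page

sigma_px = 40.0                                   # assumed kernel width in pixels
illumination = gaussian_filter(reflectance, sigma=sigma_px)

# The effective illumination near the dark patch drops, so neighboring bright
# areas are scanned slightly darker than their true reflectance would suggest.
print(illumination[150, 250], illumination[50, 50])
```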
Extended Depth of Focus (EDOF) technologies are well known in the literature, and in recent years they have made their way into camera phones. While the fundamental approach may have significant advantages over conventional technologies, in practice the results are often accompanied by undesired artifacts that are hard to quantify. In order to conduct an objective comparison with conventional focus technology, new methods need to be devised that can quantify not only the quality of focus but also the artifacts introduced by the use of EDOF methods. In this paper we propose a test image and a methodology to quantify focus quality and its dependence on distance. Our test image is created from a test image element that contains different shapes for measuring frequency response.
Noise reduction in the image processing pipeline of digital cameras has a huge impact on image quality. It may result in a loss of low-contrast fine details, also referred to as texture blur. Previous papers have shown that the objective measurement of the statistical parameter kurtosis in a reproduction of white Gaussian noise captured with the camera under test correlates well with the subjective perception of these effects. To get a more detailed description of the influence of noise reduction on the image, we compare the results of different approaches to measuring the spatial frequency response (SFR). Each of these methods uses a different test target; therefore, we get different results in the presence of adaptive filtering. We present a study on the possibility of deriving a detailed description of the influence of noise reduction on the different spatial frequency sub-bands, based on the differences between the SFRs measured with the various approaches. Variations in the underlying methods have a direct influence on the derived measurements; therefore, we additionally checked for differences among all of the methods used.
Evaluating the noise present in an image acquisition system, and its influence, is essential to image acquisition. However, the mean square error (MSE) has not previously been divided into two terms, i.e., the noise-independent MSE (MSEfree) and the noise-dependent MSE (MSEnoise), and discussed separately. MSEfree depends on the spectral characteristics of the set of sensors, the illuminations, and the reflectances of the imaged objects, and it arises even in the noise-free case, whereas MSEnoise originates from the noise present in the image acquisition system. One of the authors (N.S.) previously proposed a model to separate the MSE into these two factors, as well as a model to estimate the noise variance present in image acquisition systems. Using this model, we expressed MSEnoise as a function of the noise variance and showed that experimental results agreed fairly well with this expression when Wiener estimation was used for the recovery. The present paper gives the extended expression for the influence of system noise on MSEnoise, together with experimental results demonstrating the trustworthiness of the expression for the regression model, the Imai-Berns model, and the finite-dimensional linear model.
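The decomposition can be illustrated numerically: for a fixed linear recovery matrix, the total MSE splits into a noise-free term and a noise term because zero-mean sensor noise is independent of the signal. The sketch below uses random stand-ins for the sensor sensitivities and reflectances and a pseudo-inverse recovery matrix; none of these values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(6, 31))            # assumed sensor spectral sensitivities
R = rng.uniform(size=(31, 1000))        # assumed surface reflectances
sigma = 0.01                            # assumed noise standard deviation

W = R @ np.linalg.pinv(S @ R)           # simple least-squares recovery matrix
noise = sigma * rng.normal(size=(6, 1000))
R_hat = W @ (S @ R + noise)             # recovery from noisy sensor responses

mse_total = np.mean((R - R_hat) ** 2)
mse_free = np.mean((R - W @ S @ R) ** 2)    # noise-independent part (MSEfree)
mse_noise = np.mean((W @ noise) ** 2)       # noise-dependent part (MSEnoise)
print(mse_total, mse_free + mse_noise)      # the two should nearly match
```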
Automatic portrait enhancement by attenuating skin flaws (pimples, blemishes, wrinkles, etc.) has received considerable attention from digital camera manufacturers thanks to its impact on the public, and a number of algorithms have been developed to meet this need. One central aspect of developing such an algorithm is quality assessment: having a few numbers that precisely indicate the amount of beautification brought by an algorithm (as perceived by human observers) is of great help, as it circumvents time-costly human evaluation. In this paper, we propose a method to numerically evaluate the quality of a skin beautification algorithm. The most important aspects we take into account and quantify are the quality of the skin detector, the amount of smoothing performed by the method, the preservation of intrinsic skin texture, and the preservation of facial features. We combine these measures into two numbers that assess the quality of skin detection and beautification. The derived measures are highly correlated with human perception, and therefore constitute a helpful tool for tuning and comparing algorithms.
Remote sensing is widely used to assess the destruction from natural disasters and to plan relief and recovery operations. Automatically extracting useful features and segmenting objects of interest from digital images, including remote sensing imagery, is a critical task for image understanding. Unfortunately, the collection of aerial digital images is constrained by bad weather, a hazy atmosphere, and unstable cameras or camcorders. As a result, remote sensing imagery often appears low-contrast, blurred, and dark. Here, we introduce a new method that integrates local image statistics and natural image characteristics to enhance remote sensing imagery. The method applies adaptive histogram equalization to each distinct region of the input image and then redistributes the lightness values of the image. The natural characteristics of the image are used to adjust the restoration contrast. Experiments on real data show the effectiveness of the algorithm.
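As a rough illustration of the enhancement step, the sketch below uses OpenCV's CLAHE as a stand-in for region-wise adaptive histogram equalization, followed by a simple blend toward the original as a crude proxy for the natural-characteristics contrast adjustment. The clip limit, tile grid, blend factor, and synthetic input are assumptions, not the authors' values.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
aerial = (40 + 20 * rng.random((512, 512))).astype(np.uint8)   # synthetic dark, low-contrast frame

# Contrast-limited adaptive histogram equalization over a grid of tiles.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(aerial)

# Crude global adjustment: blend toward the original to limit over-enhancement.
alpha = 0.8                                                     # assumed blend factor
enhanced = cv2.addWeighted(equalized, alpha, aerial, 1 - alpha, 0)
print(aerial.std(), enhanced.std())                             # contrast before / after
```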
Pan-sharpened images can be used effectively in various remote sensing applications. In recent years a vast number of pan-sharpening algorithms have been proposed, so the evaluation of their performance has become a vital issue. The quality assessment of pan-sharpened images is complicated by the absence of reference data: the ideal image that the multispectral scanner would observe if it had a spatial resolution as high as that of the panchromatic instrument. This paper presents a novel method to evaluate the degree of local quality degradation in pan-sharpened images resulting from contrast inversion of the fused bands. The proposed method does not require a reference image. First, the algorithm identifies the areas in which contrast inversion can be confidently detected. Then, based on the detected spatial consistency violations, a quantitative degradation index is calculated for the fused product. The proposed approach was validated with very high resolution optical imagery. The experiments show that the proposed measure objectively reflects local quality deterioration in pan-sharpened images.
We consider two physical systems in which overlapped displays are employed: (1) wobulation, a single projector that rapidly shifts the entire display in time by a subpixel amount; and (2) several projector displays overlaid in space with a complex array of space-varying subpixel offsets. In this work we focus on quantifying the resolution increase of these approaches over that of a single projector. Because of the nature of overlapping projections with different degrees of perspective distortion, overlaid pixels have space-varying offsets in both dimensions. Our simulator employs the perspective transformation, or homography, associated with the particular projector geometry for each subframe. The resulting simulated displays are stunningly accurate. We use "grill" patterns that vary in period, phase, and orientation to assess resolution performance. A new Fourier-based test procedure is introduced that generates repeatable results and eliminates problems due to phase and spatial variation. We report results for 2- and 4-position wobulation, and for 1, 2, 4, and 10 overlaid projectors, using the frequency-domain contrast modulation metric. The effects of subpixel phase are illustrated for various grill periods. The results clearly show that resolution performance is indeed improved for overlapped displays.
Scanning-backlight technology can effectively reduce motion blur in LCDs, but it reintroduces large-area flicker. Perception experiments were performed to study flicker visibility in a scanning-backlight LCD system. Different operational modes of the scanning backlight were used to generate different light profiles. Five color blocks (red, green, blue, white, and yellow) were chosen as experimental stimuli to check the influence of color on flicker visibility in the strictest situation. Two natural images, each in a color version and a black-and-white version, together with a uniform white block (without any details) having the same average luminance as the natural image, were adopted to verify the influence of color and image detail on flicker visibility in a normal viewing situation. The results show that color has no statistically significant influence on flicker visibility when the luminance profiles are similar, while details in the image content can effectively decrease flicker sensitivity, possibly because details distract the viewer's attention away from flicker perception.
This paper proposes an approach to improve the performance of peak signal-to-noise ratio (PSNR) and structural
similarity (SSIM) for image quality assessment in digital cinema applications. Based on the particularities of quality
assessment in a digital cinema setup, some attributes of the human visual system (HVS) are taken into consideration,
including the foveal acuity angle and contrast sensitivity, combined with viewing conditions in the cinema, to select appropriate image blocks for calculating the perceived quality with PSNR and SSIM. Furthermore, as the HVS is not able
to perceive all the distortions because of selective sensitivities to different contrasts, and masking always exists, we
adopt a modified PSNR by considering the contrast sensitivity function and masking effects. The experimental results
demonstrate that the proposed approach can evidently improve the performance of image quality metrics in digital
cinema applications.
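A minimal sketch of the block-selection idea is shown below: block size is tied roughly to the foveal acuity angle at cinema viewing distance, only blocks exceeding a crude contrast threshold contribute, and PSNR is pooled over those blocks. All geometry values, thresholds, and the pooling rule are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def foveal_block_psnr(ref, dist, viewing_distance_m=10.0, screen_height_m=6.0,
                      rows=1080, fovea_deg=2.0, contrast_thr=10.0):
    """Size blocks to roughly match the foveal acuity angle, keep only blocks
    whose contrast exceeds a visibility threshold, and pool PSNR over them."""
    px_per_deg = rows / np.degrees(2 * np.arctan(screen_height_m / (2 * viewing_distance_m)))
    block = max(8, int(round(fovea_deg * px_per_deg)))
    scores = []
    for y in range(0, ref.shape[0] - block + 1, block):
        for x in range(0, ref.shape[1] - block + 1, block):
            r = ref[y:y + block, x:x + block]
            if r.std() > contrast_thr:                  # crude contrast/masking gate
                scores.append(psnr(r, dist[y:y + block, x:x + block]))
    return float(np.mean(scores)) if scores else float("inf")

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (1080, 2048)).astype(np.uint8)
dist = np.clip(ref + rng.normal(0, 4, ref.shape), 0, 255).astype(np.uint8)
print(f"block-pooled PSNR: {foveal_block_psnr(ref, dist):.2f} dB")
```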
In this paper we propose a methodology for TV set verification, intended for detecting picture quality degradation and
functional failures within a TV set. In the proposed approach we compare the TV picture captured from a TV set under
investigation with the reference image for the corresponding TV set in order to assess the captured picture quality and thereby the acceptability of the TV set's quality. The methodology framework comprises a logic block for designing the verification process flow, a block for TV set quality estimation (based on image quality assessment), and a block for generating the defect tracking database. The quality assessment algorithm is a full-reference intra-frame approach which aims at detecting various TV-set-specific picture degradations arising from hardware and software failures and from erroneous operational modes and settings. The proposed algorithm is a block-based scheme
which incorporates the mean square error and a local variance between the reference and the tested image. The artifact
detection algorithm is shown to be highly robust against brightness and contrast changes in TV sets. The algorithm is
evaluated by performance comparison with the other state-of-the-art image quality assessment metrics in terms of
detecting TV picture degradations, such as illumination and contrast change, compression artifacts, picture
misalignment, aliasing, blurring and other types of degradations that are due to defects within the TV set video chain.
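A minimal sketch of such a block-based check is given below: a block is flagged only if its MSE is high and its local variance differs strongly from the reference, which makes the check less sensitive to global brightness and contrast shifts. The block size and thresholds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def block_artifact_map(ref, test, block=16, mse_thr=50.0, var_ratio_thr=0.5):
    """Flag blocks with high MSE whose local variance also deviates from the
    reference; returns a boolean map over the block grid."""
    ref = ref.astype(float)
    test = test.astype(float)
    flags = []
    for y in range(0, ref.shape[0] - block + 1, block):
        row = []
        for x in range(0, ref.shape[1] - block + 1, block):
            r = ref[y:y + block, x:x + block]
            t = test[y:y + block, x:x + block]
            mse = np.mean((r - t) ** 2)
            var_ratio = (t.var() + 1e-6) / (r.var() + 1e-6)
            row.append(mse > mse_thr and not (var_ratio_thr < var_ratio < 1 / var_ratio_thr))
        flags.append(row)
    return np.array(flags)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (720, 1280)).astype(np.uint8)
test = ref.copy()
test[100:200, 300:400] = 0                      # simulated defect in the video chain
print("defective blocks:", int(block_artifact_map(ref, test).sum()))
```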
We propose a quality-aware computational optimization of inpainting based upon the intelligent application of a battery of
inpainting methods. By leveraging the Decision-Action-Reward Network (DARN) formalism and a bottom-up model of
human visual attention, methods are selected for optimal local use via an adjustable quality-time tradeoff and (empirical)
training statistics aimed at minimizing observer foveal attention to inpainted regions. Results are shown for object removal
in high-resolution consumer video, including a comparison of output quality and efficiency with homogeneous inpainting
applications.
The effect of video compression is examined using the task-based performance metrics of the new Video National
Intelligence Interpretability Rating Scale (Video NIIRS). Video NIIRS is a subjective task-criteria scale similar to the well-known Visible NIIRS used for still-image quality measurement. However, each task in the Video NIIRS includes a dynamic component that requires video of sufficient spatial and temporal resolution. The loss of Video NIIRS due to compression is experimentally measured for selected cases. The results show that an increase in compression and the associated increase in artifacts reduce task-based interpretability and lower the Video NIIRS rating of the video clips.
The extent of the effect has implications for system design.
Image compression techniques such as JPEG and MPEG induce losses of image quality, with representative artifacts including blocking, color bleeding, and smearing. These losses are usually investigated through spatial distortion measures computed on the reconstructed images, such as the MSE (mean square error) and PSNR (peak signal-to-noise ratio). However, color information is also affected by compression: the distortion appears as changes in gamut characteristics, such as gamut size, in the reconstructed images. Accordingly, this paper investigates the relationship between image compression and gamut characteristics for MPEG-2 compression. Several image quality metrics are introduced: gamut size based on unique colors, and gamut fidelity based on the CDI (color distribution index). The influence of moving objects is first observed over the time sequence; then, deterioration due to variation of the bit rate is observed using the gamut size and gamut characteristics. The results show that moving objects do not strongly influence the gamut characteristics, whereas a decrease in bit rate causes substantial deterioration of the gamut characteristics, as shown by the variation of the CDI.
The quality of IP television services is a critical issue because of the nature of the transport infrastructure: packet loss is the main cause of service degradation on such network platforms.
The use of forward error correction (FEC) techniques at the application layer (AL-FEC), between the source of the TV service (the video server) and the user terminal, seems to be an efficient strategy to counteract packet losses, either as an alternative or in addition to suitable traffic management policies (only feasible in "managed networks").
A number of AL-FEC techniques have been discussed in the literature and proposed for inclusion in international TV-over-IP standards. In this paper a performance evaluation of the AL-FEC defined in the SMPTE 2022-1 standard is presented.
Different typical events occurring in IP networks, causing different types of IP packet loss (in terms of their statistical distribution), have been studied, and the performance of AL-FEC in counteracting these kinds of losses has been evaluated. The analysis has been carried out with a view to fulfilling the QoS requirements of TV services, which are usually very demanding.
For managed networks, this paper envisages a strategy that combines the use of AL-FEC with the set-up of transport quality based on the prioritization of FEC packets. Promising results regarding this kind of strategy have been obtained.
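A minimal sketch of the XOR-parity idea behind this style of AL-FEC is given below; packet headers, interleaving, and signalling defined by SMPTE 2022-1 are omitted, and the matrix dimensions and payload size are arbitrary assumptions. Media packets are arranged in an L x D matrix and one parity packet is generated per column, so a single lost packet in any column can be rebuilt by XORing the survivors with the parity.

```python
import numpy as np

L, D, size = 5, 4, 188            # assumed matrix dimensions and payload size
rng = np.random.default_rng(0)
media = rng.integers(0, 256, size=(D, L, size), dtype=np.uint8)   # D rows, L columns

# One FEC packet per column: XOR of the D payloads in that column.
fec = np.bitwise_xor.reduce(media, axis=0)

# Simulate the loss of one packet and recover it from the column parity.
lost_row, lost_col = 2, 3
received = media.copy()
received[lost_row, lost_col] = 0                                  # packet never arrives
survivors = np.delete(received[:, lost_col], lost_row, axis=0)
recovered = np.bitwise_xor.reduce(survivors, axis=0) ^ fec[lost_col]
print(bool(np.array_equal(recovered, media[lost_row, lost_col])))  # True
```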
In this paper, video degraded by blur and noise is enhanced using an iterative algorithm. First, the clean data and the blur function are estimated using a Newton optimization method; the estimate is then refined using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and the blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by applying a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then refine the estimate using a maximum a posteriori (MAP) estimator with a local Laplace prior. This procedure (initial estimation followed by refinement through denoising) is iterated until the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment and is therefore not suitable for online applications. However, MATLAB can run functions written in C; the files holding the source for these functions are called MEX-files, and MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. To speed up our algorithm, the MATLAB code was divided into sections, the elapsed time for each section was measured, and the slow sections (which account for 60% of the total running time) were identified. These slow sections were then translated to C++ and linked to MATLAB. In fact, the high volume of image data processed in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The MATLAB code for our video deblurring algorithm contains eight "for" loops, which consume 60% of the total execution time of the entire program, so the runtime should in principle decrease by implementing these loops in C. However, after implementing the eight loops in C using the MEX library, the measured run time unexpectedly increased. According to the timing in MEX, this increase is not caused by the MATLAB-MEX interface but is related to the loops themselves: our simulations show that a loop implemented this way in C++ takes about twice as long as the same loop in MATLAB. In contrast to the plain C functions, the OpenCV functions take less time, so OpenCV, an open-source computer vision library written in C and C++ (with actively developed interfaces for MATLAB), is useful for gaining more speed. After implementing the "for" loops of our algorithm using the OpenCV library, our simulations show that the run time decreases considerably.
Active near-infrared illumination may be used in a face recognition system to achieve invariance to changes in the visible illumination. Another benefit of active near-infrared illumination is the bright-pupil effect, which can be used to assist eye detection. However, prolonged exposure to near-infrared radiation is hazardous to the eyes, so the level of illumination is limited by its potentially harmful effects. Image sensors for face recognition under active near-infrared illumination therefore have to be carefully selected to provide optimal image quality in the desired field of application. A model of the active illumination source is introduced. Safety issues with regard to near-infrared illumination are addressed using this model and a radiometric analysis. From the illumination model, requirements for suitable imaging sensors are formulated. Standard image quality metrics are used to assess the imaging device performance under conditions typical of the application. The characterization of image quality is based on measurements of the opto-electronic conversion function, the modulation transfer function, and noise. A methodology for selecting an image sensor for the desired field of application is given. Two cameras with low-cost image sensors are characterized using the key parameters that influence image quality for face recognition.
A new full-reference, Singular Value Decomposition (SVD) based image quality measurement (IQM) is proposed in this paper. Most recently developed IQMs designed for measuring universal distortion types perform worse when measuring blur-type distortions. The proposed method, A-SVD, aims at capturing the loss of structural content instead of measuring the distortion of pixel intensity values. A-SVD uses the change in the angle between the principal singular vectors as a distance between the original and distorted image blocks. Experiments were conducted using the LIVE database. The proposed algorithm was compared with another recently proposed SVD-based method, M-SVD, and with other well-established methods including SSIM, MSSIM, and VSNR. The results show that the proposed method has an advantage in discerning blur-type image distortions while providing comparable results for other distortion types. The proposed method also provides better linear correlation with human scores, a desirable attribute for an IQM to be used in other applications.
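A minimal sketch of the core measurement on a single block is shown below: the angle between the principal (first) singular vectors of the reference and distorted blocks is taken as a distance. The block size, the simple neighbor-average "blur," and the way the left and right angles are summed are illustrative assumptions; A-SVD's pooling over all blocks is not reproduced.

```python
import numpy as np

def principal_angle_distance(ref_block, dist_block):
    """Angle change between the first singular vectors of two blocks."""
    u_r, _, vt_r = np.linalg.svd(ref_block.astype(float))
    u_d, _, vt_d = np.linalg.svd(dist_block.astype(float))
    # Angles between the first left and first right singular vectors
    # (absolute dot product makes the measure sign-invariant).
    cos_u = np.abs(np.dot(u_r[:, 0], u_d[:, 0]))
    cos_v = np.abs(np.dot(vt_r[0], vt_d[0]))
    return np.degrees(np.arccos(np.clip(cos_u, -1, 1))) + \
           np.degrees(np.arccos(np.clip(cos_v, -1, 1)))

rng = np.random.default_rng(0)
ref = rng.random((8, 8))
blurred = ref.copy()
blurred[1:-1, 1:-1] = (ref[:-2, 1:-1] + ref[2:, 1:-1] +
                       ref[1:-1, :-2] + ref[1:-1, 2:]) / 4   # crude blur stand-in
print(f"principal-vector angle change: {principal_angle_distance(ref, blurred):.2f} deg")
```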
The present work concerns the development of a no-reference demosaicing quality metric. The demosaicing
operation converts a raw image acquired with a single sensor array, overlaid with a color filter array, into a
full-color image. The most prominent artifact generated by demosaicing algorithms is called zipper. In this work
we propose an algorithm to identify these patterns and measure their visibility in order to estimate the perceived
quality of rendered images. We have conducted extensive subjective experiments, and we have determined the
relationships between subjective scores and the proposed measure to obtain a reliable no-reference metric.
Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as
their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently
have large information content and lossless coding of holographic data is rather inefficient due to the speckled
nature of the interference fringes they contain.
Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy
compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used
to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly
speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be
misleading. For example, for low compression ratios, a numerically significant coding error can have visually
negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved,
while maintaining the reconstruction quality at visually lossless levels.
Using an experimental threshold estimation method, the staircase algorithm, we determined the highest
compression ratio that was not perceptible to human observers for objects compressed with Dirac and
MPEG-4 compression methods. This level of compression can be regarded as the point below which compression
is perceptually lossless, although physically the compression is lossy. It was found that compression of up to 4- to 7.5-fold can be obtained with the above methods without any perceptible change in the appearance of the video sequences.
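A minimal sketch of a staircase threshold estimation is shown below, assuming a simple 1-up/1-down rule with a fixed step and a reversal-based stopping criterion; the paper's exact rule, step sizes, and stopping criterion are not specified here, so these are illustrative choices, and the simulated observer is purely synthetic.

```python
import math
import random

def staircase_threshold(can_see_difference, start_ratio=8.0, step=0.5, n_reversals=8):
    """1-up/1-down staircase: ease the compression after a 'seen' response,
    compress harder after a 'not seen' response, and estimate the threshold
    as the mean compression ratio over the recorded reversals."""
    ratio, last_answer, reversal_ratios = start_ratio, None, []
    while len(reversal_ratios) < n_reversals:
        seen = can_see_difference(ratio)            # run one trial at this ratio
        if last_answer is not None and seen != last_answer:
            reversal_ratios.append(ratio)           # response direction changed
        ratio = max(1.0, ratio - step) if seen else ratio + step
        last_answer = seen
    return sum(reversal_ratios) / len(reversal_ratios)

# Simulated observer whose true visibility threshold lies near a 6x compression ratio.
observer = lambda r: random.random() < 1 / (1 + math.exp(-(r - 6.0)))
random.seed(0)
print(f"estimated visually lossless ratio: {staircase_threshold(observer):.2f}")
```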