The current study proposes a post-processing method for video enhancement that adopts a color-protection
technique. The color protection attenuates perceptible artifacts caused by over-enhancement in visually sensitive
image regions, such as low-chroma colors including skin tones and gray objects. In addition, it reduces the loss of color
texture caused by out-of-color-gamut signals. Consequently, the color reproducibility of video sequences can be
markedly enhanced while undesirable visual exaggerations are minimized.
KEYWORDS: Data conversion, Data modeling, Mathematical modeling, RGB color model, Image enhancement, Image processing, Chromium, Information operations, Adaptive optics, Algorithm development
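The protection weighting can be illustrated with a minimal sketch. The function below assumes a YCbCr input and a hypothetical per-pixel enhancement gain map; the sigmoid weighting, threshold, and falloff values are illustrative stand-ins rather than the parameters used in the paper.

```python
import numpy as np

def protect_low_chroma(gain, cb, cr, chroma_thresh=20.0, falloff=10.0):
    """Attenuate an enhancement gain map in low-chroma (skin/gray) regions.

    gain   : per-pixel enhancement gain (1.0 = no enhancement)
    cb, cr : chroma channels centred at 0 (Cb - 128, Cr - 128 for 8-bit video)
    The threshold and falloff are illustrative, not the paper's parameters.
    """
    chroma = np.hypot(cb, cr)                                  # YCC chroma magnitude
    # Smooth weight: ~0 for near-neutral pixels, ~1 for strongly colored pixels
    weight = 1.0 / (1.0 + np.exp(-(chroma - chroma_thresh) / falloff))
    # Blend toward unity gain where the weight is small (protected regions)
    return 1.0 + weight * (gain - 1.0)

# Example: a uniform 1.4x enhancement gain is pulled back for the near-gray pixel
print(protect_low_chroma(np.array([1.4, 1.4]), np.array([2.0, 40.0]), np.array([1.0, 30.0])))
```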
The YCbCr color space, composed of luma and chrominance components, is preferred for its ease of image processing.
However, the non-orthogonality between the YCbCr components induces unwanted perceived chroma changes when
luma values are adjusted. In this study, a new method was designed to compensate for the unwanted chroma changes
generated by luma changes. For six different YCC_hue angles, data points named 'Original data' were generated with
uniformly distributed luma, Cb, and Cr values. Weight values were then applied to the luma values of the 'Original data'
set to produce a 'Test data' set, and a new YCC_chroma was calculated for the 'Test data' set to minimize the
CIECAM02 ΔC between the original and test data. Finally, a mathematical model was developed to predict the
YCC_chroma adjustment required to compensate for the CIECAM02 chroma changes. This model was implemented in a
luma-control algorithm that maintains constant perceived chroma. The performance was tested numerically using both
data points and images. Comparing the CIECAM02 ΔC between the 'Original data' and 'Test data' sets, the result after
compensation was 51.69% better than before compensation. When the new model was applied to test images, a 32.03%
improvement was obtained.
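The compensation step can be sketched as a one-dimensional search for the (Cb, Cr) scale that preserves a perceptual chroma measure after a luma change. For brevity, a simple HSV-like saturation proxy stands in for CIECAM02 chroma below, and the BT.601 conversion and scale range are assumptions; the paper instead fits a mathematical model to the CIECAM02 ΔC minima.

```python
import numpy as np

def ycbcr_to_rgb(y, cb, cr):
    """BT.601 YCbCr (luma 0..1, chroma centred at 0) to RGB, clipped to [0, 1]."""
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip([r, g, b], 0.0, 1.0)

def saturation_proxy(rgb):
    """HSV-like saturation; a crude stand-in for CIECAM02 chroma."""
    return float((np.max(rgb) - np.min(rgb)) / (np.max(rgb) + 1e-6))

def compensate_chroma(y, cb, cr, new_y, scales=np.linspace(0.5, 2.0, 151)):
    """Search for the (Cb, Cr) scale that best preserves the chroma proxy after a luma change."""
    target = saturation_proxy(ycbcr_to_rgb(y, cb, cr))
    errors = [abs(saturation_proxy(ycbcr_to_rgb(new_y, s * cb, s * cr)) - target)
              for s in scales]
    best = scales[int(np.argmin(errors))]
    return new_y, best * cb, best * cr

# Example: brighten a pixel while holding its (proxy) chroma roughly constant
print(compensate_chroma(0.4, 0.10, -0.05, new_y=0.6))
```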
This study establishes a testing and evaluation methodology based on human visual characteristics for assessing image
restoration accuracy, and compares the subjective results with the predictions of several objective evaluation methods.
In total, six super-resolution (SR) algorithms were selected: iterative back-projection (IBP), robust SR, maximum a
posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency-domain approach.
The SR algorithms were compared in terms of restoration accuracy both subjectively and objectively. The former relies
upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image
restoration accuracy. For the latter, both conventional image quality metrics and color difference methods were
implemented. Consequently, POCS and non-uniform interpolation outperformed the others in the ideal situation, while
restoration-based methods reproduced the high-resolution (HR) image more accurately in the real-world case, where
prior information about the blur kernel remains unknown. However, none of the methods could successfully restore the
noise-added image. The latest International Commission on Illumination (CIE) standard color-difference equation,
CIEDE2000, was found to predict the subjective results accurately and outperformed conventional metrics in evaluating
the restoration accuracy of the SR algorithms.
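As an example of the objective color-difference evaluation, the sketch below computes a mean CIEDE2000 error between a reference high-resolution image and an SR result, assuming scikit-image's rgb2lab and deltaE_ciede2000 helpers; the random stand-in images are purely illustrative, not the study's test material.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_ciede2000(reference_rgb, restored_rgb):
    """Mean CIEDE2000 color difference between a reference HR image and an SR result.

    Both inputs are float RGB arrays in [0, 1] with identical shape (H, W, 3).
    A lower mean dE00 indicates higher restoration accuracy.
    """
    lab_ref = rgb2lab(reference_rgb)
    lab_res = rgb2lab(restored_rgb)
    return float(np.mean(deltaE_ciede2000(lab_ref, lab_res)))

# Illustrative usage with random stand-in images (not the study's test set)
ref = np.random.rand(64, 64, 3)
res = np.clip(ref + np.random.normal(0, 0.02, ref.shape), 0, 1)
print(mean_ciede2000(ref, res))
```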
Much research has shown that perceived image contrast increases as the surround luminance increases, but a number of recent studies reported the opposite trend at higher surround luminance levels. We measured the change in perceived image contrast under a wide range of surround luminance levels, from dark up to 2087 cd/m². A large-area illuminator, consisting of 23 dimmable fluorescent lamps and a diffuser, was used to illuminate the surround; its maximum luminance of 2087 cd/m² could be adjusted to six lower levels. A set of paired comparison experiments was conducted to compare the perception of image contrast under seven different surround luminance levels. The results showed that perceived image contrast varies with surround luminance and that the maximum perceived image contrast occurs near a surround ratio (SR) of 1. As SR increases from 0 to 1, the z-score increases, which is fully consistent with the Bartleson and Breneman effect. However, the z-score drops sharply in the region of SR > 1, so the perceived image contrast eventually decreases.
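The paired comparison data can be turned into interval z-scores with Thurstone Case V scaling, of which the sketch below is a minimal example; the win-count matrix is hypothetical and the unanimous-vote clipping at p = 0.01/0.99 is one common convention, not necessarily the one used in this study.

```python
import numpy as np
from scipy.stats import norm

def paired_comparison_zscores(win_counts):
    """Thurstone Case V scaling: convert a paired-comparison win matrix to z-scores.

    win_counts[i, j] = number of observers who judged stimulus i to have higher
    contrast than stimulus j.  Returns one interval-scale z-score per stimulus.
    """
    wins = np.asarray(win_counts, dtype=float)
    n = wins + wins.T                       # trials per pair
    with np.errstate(invalid="ignore", divide="ignore"):
        p = wins / n                        # preference proportions
    np.fill_diagonal(p, 0.5)
    p = np.clip(p, 0.01, 0.99)              # avoid infinite z for unanimous pairs
    z = norm.ppf(p)                         # standard-normal deviates
    return z.mean(axis=1)                   # row means give the scale values

# Hypothetical counts for three surround levels judged by 20 observers
counts = np.array([[ 0, 14,  5],
                   [ 6,  0,  4],
                   [15, 16,  0]])
print(paired_comparison_zscores(counts))
```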
We aim to quantify the perception of color gradation smoothness using objectively measurable properties. We propose a model to compute the smoothness of hardcopy color-to-color gradations. It is a gradient-based method determined as a function of the 95th percentile of the second derivative for the tone-jump estimator and the 5th percentile of the first derivative for the tone-clipping estimator. The performance of the model and of a previously suggested method was evaluated psychophysically, and their prediction accuracies were compared. Our model showed the stronger Pearson correlation with the corresponding visual data, with a magnitude reaching up to 0.87; its statistical significance was verified through analysis of variance. Color variations of the representative memory colors (blue sky, green grass, and Caucasian skin) were rendered as gradational scales and used as the test stimuli.
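The two estimators follow directly from the description above. The sketch below assumes the gradation has been sampled into a one-dimensional array of measured values (e.g. CIELAB lightness per step); how the two percentile estimators are combined into a single smoothness score is part of the paper's model and is not reproduced here.

```python
import numpy as np

def smoothness_estimators(ramp):
    """Gradient-based estimators for a sampled color-to-color gradation.

    ramp : 1-D array of measured values along the gradation steps.
    tone_jump     : 95th percentile of the second derivative (large local jumps)
    tone_clipping : 5th percentile of the first derivative (flat, clipped runs)
    """
    first = np.abs(np.diff(ramp, n=1))
    second = np.abs(np.diff(ramp, n=2))
    tone_jump = np.percentile(second, 95)
    tone_clipping = np.percentile(first, 5)
    return tone_jump, tone_clipping

# Hypothetical 64-step lightness ramp with a small jump and a clipped (flat) tail
ramp = np.linspace(20.0, 80.0, 64)
ramp[40:] += 2.0        # tone jump
ramp[55:] = ramp[55]    # tone clipping
print(smoothness_estimators(ramp))
```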
A series of psychophysical experiments using the paired comparison method was performed to investigate the various
visual attributes affecting the image quality of a mobile display. An image quality difference model was developed that
shows high correlation with the visual results. The results showed that Naturalness and Clearness are the most
significant of the perceptual attributes. A colour quality difference model based on image statistics was also constructed,
and colour difference and colour naturalness were found to be important attributes for predicting image colour quality
difference.
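A linear attribute-weighting model of this kind can be sketched with an ordinary least-squares fit, as below; the attribute-difference data, the choice of a purely linear form, and the resulting weights are hypothetical illustrations, not the model or data reported in the study.

```python
import numpy as np

# Hypothetical per-pair differences in naturalness, clearness and colour difference,
# with the corresponding visual image-quality difference scores.
attributes = np.array([[0.8, 0.6, 2.1],
                       [0.2, 0.1, 0.5],
                       [0.5, 0.7, 1.4],
                       [0.9, 0.8, 2.6],
                       [0.1, 0.3, 0.4]])
quality = np.array([1.9, 0.4, 1.3, 2.4, 0.5])

# Least-squares fit of a linear image-quality difference model with an intercept term
X = np.column_stack([attributes, np.ones(len(quality))])
weights, *_ = np.linalg.lstsq(X, quality, rcond=None)
predicted = X @ weights
print("weights:", weights)
print("correlation with visual data:", np.corrcoef(predicted, quality)[0, 1])
```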