For multispectral image acquisition in remote sensing, high spatial resolution requires a small instantaneous field of view (IFOV). However, the smaller the IFOV, the less light reaches the imaging sensor and the lower the signal-to-noise ratio. To overcome this weakness, we propose a new random coded exposure technique for acquiring high-resolution multispectral images without reducing the IFOV. The new image acquisition system employs a high-speed rotating mirror, controlled by a random sequence, to modulate the exposure of an ordinary imager without increasing the sampling rate. The proposed high-speed coded exposure strategy makes it possible to maintain sufficient light exposure even with a small IFOV. The randomly sampled multispectral image can be recovered at high spatial resolution by exploiting signal sparsity; the recovery algorithm is based on compressive sensing theory. Simulation results demonstrate the efficacy of the proposed technique.
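The abstract does not detail the recovery algorithm; as a rough illustration of the compressive-sensing recovery step, the sketch below reconstructs a sparse signal from random coded measurements with plain iterative soft thresholding (ISTA). The measurement matrix, sparsity level, regularization weight, and iteration count are all hypothetical, not the paper's.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Minimize ||y - A x||^2 / 2 + lam * ||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the data-fidelity term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy setup: a sparse scene sampled through a random 0/1 exposure code.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.integers(0, 2, size=(m, n)).astype(float)  # random binary exposure code
y = A @ x_true
x_hat = ista(A, y)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```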
KEYWORDS: LCDs, Light emitting diodes, Point spread functions, Image segmentation, Optimization (mathematics), Transmittance, High dynamic range imaging, Detection and tracking algorithms, LED backlight, Image resolution
Local backlight dimming in Liquid Crystal Displays (LCDs) is a technique for reducing power consumption while simultaneously increasing the contrast ratio to provide High Dynamic Range (HDR) image reproduction. Several backlight dimming algorithms exist: some focus on reducing power consumption, while others aim at enhancing contrast, with power savings as a side effect. In our earlier work, we modeled backlight dimming as a linear programming problem, where the target is to minimize a cost function measuring the distance between the ideal and actual output. In this paper, we propose a version of the abovementioned algorithm that speeds up execution by decreasing the number of input variables. This is done by using a subset of the input pixels, selected among those experiencing leakage or clipping distortions; the optimization problem is then solved on this subset. Sample reduction can also be beneficial in conjunction with other approaches, such as an algorithm based on gradient descent, also presented here. All the proposals have been compared against other known approaches on simulated edge- and direct-lit displays, and the results show that the optimal distortion level can be reached using a subset of pixels, with significantly reduced computational load compared to the optimal algorithm operating on the full image.
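The sample-reduction step can be made concrete. As a rough sketch (the leakage model, thresholds, and names here are assumptions, not the paper's exact selection rule), candidate pixels might be picked as follows:

```python
import numpy as np

def select_distortion_prone_pixels(target, backlight, leak=0.005, eps=1e-6):
    """Return a boolean mask of pixels likely to clip or leak.

    target:    desired luminance per pixel, normalized to [0, 1]
    backlight: local backlight level rendered at each pixel, in [0, 1]
    leak:      assumed fraction of backlight passing a fully 'black' pixel
    """
    # A common LCD model: output = backlight * transmittance, with the
    # transmittance confined to [leak, 1].  Pixels whose target exceeds the
    # available backlight will clip; very dark pixels sit at the leakage floor.
    clipping = target > backlight + eps            # not enough backlight
    leakage = target < leak * backlight - eps      # darker than leakage allows
    return clipping | leakage

# Only the masked pixels enter the (much smaller) optimization problem.
rng = np.random.default_rng(1)
target = rng.random((480, 640))
backlight = np.full((480, 640), 0.6)
mask = select_distortion_prone_pixels(target, backlight)
print("pixels kept:", mask.sum(), "of", mask.size)
```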
LED-backlit LCDs hold the promise of improving image quality while reducing energy consumption through signal-dependent local dimming. To fully realize this potential, we propose a novel local dimming technique that jointly optimizes the intensities of the LED backlights and the attenuations of the LCD pixels. The objective is to minimize the distortion in luminance reproduction due to LCD leakage and the coarse granularity of the LED lights. The optimization problem is formulated as one of linear programming, and both exact and approximate algorithms are proposed. Simulation results demonstrate the superior performance of the proposed algorithms over existing local dimming algorithms.
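A minimal sketch of how such a linear program might be assembled, assuming an absolute-error distortion, a fixed leakage factor, and a known point spread function for the LEDs. None of these modeling choices are claimed to match the paper's exact cost function.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_backlights(psf, target, leak=0.005):
    """LP: choose LED intensities minimizing total luminance error.

    psf:    (n_pixels, n_leds) point spread matrix; backlight = psf @ b
    target: (n_pixels,) desired luminance in [0, 1]
    Variables are [b_1..b_m, e_1..e_n]: LED levels and per-pixel errors.
    """
    n, m = psf.shape
    c = np.concatenate([np.zeros(m), np.ones(n)])    # minimize sum of errors
    # e_i >= target_i - (psf b)_i  (clipping)   ->  -psf b - e <= -target
    # e_i >= leak*(psf b)_i - target_i (leakage) -> leak*psf b - e <= target
    A_ub = np.block([[-psf, -np.eye(n)],
                     [leak * psf, -np.eye(n)]])
    b_ub = np.concatenate([-target, target])
    bounds = [(0, 1)] * m + [(0, None)] * n    # LED levels in [0,1], errors >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:m], res.x[m:]

# Tiny 1-D example: 8 pixels lit by 2 LEDs with overlapping falloff.
x = np.arange(8)
psf = np.stack([np.exp(-0.3 * np.abs(x - 1)), np.exp(-0.3 * np.abs(x - 6))], axis=1)
target = np.array([0.0, 0.1, 0.1, 0.9, 0.9, 0.1, 0.0, 0.0])
leds, errors = optimal_backlights(psf, target)
print("LED levels:", leds, " total distortion:", errors.sum())
```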
KEYWORDS: Code division multiplexing, Color difference, Statistical analysis, Error analysis, Visualization, Digital filtering, Colorimetry, Color reproduction, Optical filters, Algorithm development
Single sensor digital color cameras capture only one of the three primary colors at each pixel and a process called color demosaicking (CDM) is used to reconstruct the full color images. Most CDM algorithms assume the existence of high local spectral redundancy in estimating the missing color samples. However, for images with sharp color transitions and high color saturation, such an assumption may be invalid and visually unpleasant CDM errors will occur. In this paper, we exploit the image nonlocal redundancy to improve the local color reproduction result. First, multiple local directional estimates of a missing color sample are computed and fused according to local gradients. Then, nonlocal pixels similar to the estimated pixel are searched to enhance the local estimate. An adaptive thresholding method rather than the commonly used nonlocal means filtering is proposed to improve the local estimate. This allows the final reconstruction to be performed at the structural level as opposed to the pixel level. Experimental results demonstrate that the proposed local directional interpolation and nonlocal adaptive thresholding method outperforms many state-of-the-art CDM methods in reconstructing the edges and reducing color interpolation artifacts, leading to higher visual quality of reproduced color images.
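The local directional step can be illustrated compactly. The sketch below fuses horizontal and vertical green estimates at a red/blue site with weights inversely proportional to the local gradients; the weights and estimators are the generic pattern, not the paper's exact ones, and the nonlocal adaptive-thresholding enhancement is not shown.

```python
import numpy as np

def fuse_green(cfa, i, j, eps=1e-6):
    """Estimate the missing green value at a red/blue site (i, j) of a Bayer
    mosaic by fusing horizontal and vertical interpolations according to the
    local gradients (illustrative weights, not the paper's)."""
    g_h = 0.5 * (cfa[i, j - 1] + cfa[i, j + 1])      # horizontal estimate
    g_v = 0.5 * (cfa[i - 1, j] + cfa[i + 1, j])      # vertical estimate
    d_h = abs(cfa[i, j - 1] - cfa[i, j + 1]) + eps   # horizontal gradient
    d_v = abs(cfa[i - 1, j] - cfa[i + 1, j]) + eps   # vertical gradient
    w_h, w_v = 1.0 / d_h, 1.0 / d_v                  # smoother direction wins
    return (w_h * g_h + w_v * g_v) / (w_h + w_v)

# Example: a vertical edge, so the vertical estimate should dominate.
cfa = np.array([[10, 10, 90.0],
                [10, 10, 90.0],
                [10, 10, 90.0]])
print(fuse_green(cfa, 1, 1))   # close to 10, not the 50 a plain average gives
```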
In most applications, video deinterlacing has to be performed in real time. Numerous algorithms have been developed to strike a good balance between throughput and quality. The motion adaptive deinterlacing algorithm switches between two modes: direct merging of two fields in areas of no motion, or intrafield adaptive interpolation where motion is detected. In this paper, we propose a fast GPU-aided implementation of a motion adaptive deinterlacing algorithm using NVIDIA CUDA (Compute Unified Device Architecture) technology. We discuss the techniques of adapting the computations in motion detection and adaptive directional interpolation to the GPU architecture for the maximum possible video throughput. The objective is to fully utilize the processing power of the GPU without compromising the visual quality of the deinterlaced video. Experimental results are reported and discussed to demonstrate the performance of the proposed GPU-aided motion adaptive video deinterlacer in both speed and visual quality.
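The CUDA kernels themselves are not given in the abstract; the NumPy reference below sketches the mode switch the text describes: weave (merge) the two fields where no motion is detected, otherwise interpolate within the current field. The motion threshold and the plain vertical average are placeholders for the paper's motion detector and directional interpolator.

```python
import numpy as np

def deinterlace_field(cur, opposite, opposite_prev, thresh=8):
    """Fill in the missing lines of the current field.

    cur:           (H/2, W) lines belonging to the output frame
    opposite:      (H/2, W) the other field of the same frame pair
    opposite_prev: (H/2, W) previous field of the same parity as `opposite`
    Where the opposite-parity field is static, merge it (weave); where it
    moved, average the current field's neighboring lines (intrafield bob).
    """
    h2, w = cur.shape
    out = np.empty((2 * h2, w), dtype=cur.dtype)
    out[0::2] = cur                                  # known lines
    motion = np.abs(opposite.astype(np.int16)
                    - opposite_prev.astype(np.int16)) > thresh
    above = cur                                      # line above each gap
    below = np.vstack([cur[1:], cur[-1:]])           # line below (edge repeated)
    bob = ((above.astype(np.int16) + below.astype(np.int16)) // 2).astype(cur.dtype)
    out[1::2] = np.where(motion, bob, opposite)      # per-pixel mode switch
    return out

rng = np.random.default_rng(2)
f = rng.integers(0, 256, size=(240, 640), dtype=np.uint8)
frame = deinterlace_field(f, f, f)   # static input: pure weave everywhere
print(frame.shape)                   # (480, 640)
```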
We propose a practical context-based adaptive image resolution upconversion algorithm. The basic idea is to use a low-resolution (LR) image patch as a context in which the missing high-resolution (HR) pixels are estimated. The context is quantized into classes and for each class an adaptive linear filter is designed using a training set. The training set incorporates the prior knowledge of the point spread function, edges, textures, smooth shades, etc. into the upconversion filter design. For low complexity, two 1-D context-based adaptive interpolators are used to generate the estimates of the missing pixels in two perpendicular directions. The two directional estimates are fused by linear minimum mean-squares weighting to obtain a more robust estimate. Upon the recovery of the missing HR pixels, an efficient spatial deconvolution is proposed to deblur the observed LR image. Also, an iterative upconversion step is performed to further improve the upconverted image. Experimental results show that the proposed context-based adaptive resolution upconverter performs better than the existing methods in both peak SNR and visual quality.
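The fusion step lends itself to a one-line formula. In the sketch below, the two directional estimates are treated as noisy measurements of the missing pixel with locally estimated error variances, which is the usual form of linear minimum mean-square weighting; the variance values in the example are hypothetical.

```python
def lmmse_fuse(est_h, est_v, var_h, var_v):
    """Fuse two directional estimates of the same missing pixel.

    Treating the horizontal and vertical estimates as noisy measurements
    with error variances var_h and var_v, the linear minimum mean-square
    weighting favors the more reliable direction."""
    w_h = var_v / (var_h + var_v + 1e-12)
    w_v = 1.0 - w_h
    return w_h * est_h + w_v * est_v

# A pixel on a horizontal edge: the horizontal estimate is far more reliable.
print(lmmse_fuse(est_h=120.0, est_v=200.0, var_h=4.0, var_v=400.0))  # ~120.8
```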
Large-scale, widespread distribution of high definition multimedia contents using IP networks is extremely resource intensive. Service providers have to employ an expensive network of servers, routers, link infrastructures and set-top boxes to accommodate the generated traffic. The goal in this paper is to develop network-aware media communication solutions that help service providers efficiently utilize their deployed network infrastructures for media delivery. In particular, we investigate the following fundamental problem: given a fixed network infrastructure, what is the best strategy to multicast multiple multimedia contents from a set of server nodes to a set of clients, so as to realize the best reconstruction quality at the client nodes? We use rate-distortion theory to formalize the notion of media quality and to formulate the corresponding optimization problem. We show that current approaches, in which multimedia compression and network delivery mechanisms are designed separately, are inherently suboptimal. Thus, better utilization of network resources requires a joint consideration of media compression and network delivery. We develop one such approach based on optimized delivery of balanced Multiple Description Codes (MDC), in which the MDC itself is also optimized with respect to the optimized delivery strategy. Simulation results are reported, verifying that our solution can significantly outperform existing layered solutions. As a byproduct, our solution introduces a fundamentally different use of MDC. Until now, MDC has been adopted to combat losses, mostly in packet-lossy networks. We show that MDC is an efficient tool for network communication even in error-free networks. In particular, MDC, when properly duplicated at routers, can exploit the rich topological structures in networks to maximize the utilization of network resources, beyond conventional coding techniques.
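To make the byproduct concrete, here is a toy calculation under an assumed distortion model: with two balanced descriptions, a client's distortion depends only on how many descriptions arrive, whereas with layered coding the enhancement layer is useless without the base. The topology scenario and all numbers below are illustrative, not from the paper.

```python
# Hypothetical per-client distortion tables (lower is better).
# Balanced MDC: distortion depends only on HOW MANY descriptions arrive.
mdc_distortion = {0: 100.0, 1: 30.0, 2: 10.0}
# Layered coding: the enhancement layer helps only if the base also arrives.
layered_distortion = {(): 100.0, ("base",): 25.0,
                      ("enh",): 100.0, ("base", "enh"): 10.0}

# Assumed scenario: the fixed topology lets two of the three clients receive
# only one of the two streams routed to their side of the network.
clients_mdc = [1, 1, 2]                      # descriptions received per client
clients_layered = [("base",), ("enh",), ("base", "enh")]

avg_mdc = sum(mdc_distortion[k] for k in clients_mdc) / 3
avg_layered = sum(layered_distortion[c] for c in clients_layered) / 3
print(avg_mdc, avg_layered)   # 23.3 vs 45.0: MDC wins in this topology
```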
Almost all existing color demosaicking algorithms for digital cameras are designed on the assumption of high correlation between the red, green, and blue (or other primary color) bands. They exploit spectral correlations between the primary color bands to interpolate the missing color samples. The interpolation errors increase in areas of no or weak spectral correlation. Consequently, objectionable artifacts tend to occur on highly saturated colors and in the presence of large sensor noise, whenever the assumption of high spectral correlation does not hold. This paper proposes a remedy to the above problem, which has long been overlooked in the literature. The main contribution of this work is a technique for correcting the interpolation errors of any existing color demosaicking algorithm by piecewise autoregressive modeling.
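The abstract names the technique without detail. As a hedged sketch of the general idea (window size, neighbor set, and blending factor are assumptions): fit a 2-D autoregressive model to the demosaicked output in a local window by least squares, then pull the interpolated sample toward the model's prediction.

```python
import numpy as np

def ar_refine(channel, i, j, half=3, blend=0.5):
    """Refine a demosaicked sample with a local autoregressive model.

    Fits y = sum_k a_k * x_k over a (2*half+1)^2 window, where x_k are the
    4 axial neighbors of each pixel, then re-predicts pixel (i, j)."""
    win = channel[i - half:i + half + 1, j - half:j + half + 1].astype(float)
    ys, xs = [], []
    for r in range(1, win.shape[0] - 1):
        for c in range(1, win.shape[1] - 1):
            ys.append(win[r, c])
            xs.append([win[r - 1, c], win[r + 1, c],
                       win[r, c - 1], win[r, c + 1]])
    a, *_ = np.linalg.lstsq(np.array(xs), np.array(ys), rcond=None)
    pred = a @ np.array([channel[i - 1, j], channel[i + 1, j],
                         channel[i, j - 1], channel[i, j + 1]], dtype=float)
    # Blend the model prediction with the originally interpolated value.
    return blend * pred + (1 - blend) * channel[i, j]

rng = np.random.default_rng(3)
green = rng.random((16, 16)) * 255
print(ar_refine(green, 8, 8))
```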
Functional Magnetic Resonance Imaging (fMRI) data sets are four-dimensional (4D) and very large in size. Compression can enhance system performance in terms of storage and transmission capacities. Two approaches are investigated: adaptive DPCM and integer wavelets. In the DPCM approach, each voxel is coded as a 1D signal in time. Due to the spatial coherence of human anatomy and the similarities in the responses of a given substance to stimuli, we classify the voxels by quantizing the autoregressive coefficients of the associated time sequences. The resulting 2D classification map is sent as side information. Each voxel time sequence is DPCM coded using a quantized autoregressive model. The prediction residuals are coded by simple Rice coding for high decoder throughput.
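Rice coding of the residuals is standard and easy to make concrete; a minimal encoder for one residual (after the usual mapping of signed errors to nonnegative integers) might look like the sketch below. In practice the parameter k would be adapted per context; that rule is not specified here.

```python
def zigzag(e):
    """Map a signed prediction error to a nonnegative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(value, k):
    """Rice code: unary quotient, then the k low-order bits of the remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# Encode a few DPCM residuals with k = 2.
residuals = [0, -1, 3, -4]
print([rice_encode(zigzag(e), 2) for e in residuals])
# ['000', '001', '1010', '1011']: short codes for the small, common errors
```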
In the wavelet approach, the 4D fMRI data set is mapped to a 3D data set, with the 3D volume at each time instance laid out into a 2D plane as a slice mosaic. 3D integer wavelet packets are used for lossless compression of the fMRI data. The wavelet coefficients are compressed by 3D context-based adaptive arithmetic coding. An object-oriented compression mode is also introduced in the wavelet codec: an elliptic mask, combined with classification of the background, is used to segment the regions of interest from the background.
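The 4D-to-3D mapping is straightforward to illustrate: each time instant's 3D volume is tiled into a single 2D plane. The grid width below is an arbitrary choice for the sketch.

```python
import numpy as np

def slice_mosaic(volume, cols):
    """Lay a 3D volume (slices, H, W) out as one 2D plane of tiled slices."""
    s, h, w = volume.shape
    rows = -(-s // cols)                       # ceiling division
    mosaic = np.zeros((rows * h, cols * w), dtype=volume.dtype)
    for k in range(s):
        r, c = divmod(k, cols)
        mosaic[r * h:(r + 1) * h, c * w:(c + 1) * w] = volume[k]
    return mosaic

# A 4D fMRI set (T, S, H, W) becomes 3D: one mosaic per time instant.
fmri = np.zeros((100, 16, 64, 64), dtype=np.int16)
mapped = np.stack([slice_mosaic(v, cols=4) for v in fmri])
print(mapped.shape)    # (100, 256, 256)
```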
Both methods achieve significantly higher lossless compression of 4D fMRI data than JPEG 2000 and JPEG-LS. The 2D classification map used for compression can also serve image segmentation in 3D space for analysis and recognition purposes. This segmentation supports object-based random access to very large 4D data volumes. The time sequence of DPCM prediction residuals can be analyzed to yield information on the responses of the imaged anatomy to the stimuli. The proposed wavelet method provides object-oriented progressive (lossy to lossless) compression of the 4D fMRI data set.
KEYWORDS: Digital watermarking, Transparency, Visualization, Video, Data communications, Multimedia, Internet, Motion analysis, Analog electronics, Digital imaging
Digital cinema, a new frontier and crown jewel of digital multimedia, has the potential of revolutionizing the science, engineering and business of movie production and distribution. The advantages of digital cinema technology over traditional analog technology are numerous and profound. But without effective and enforceable copyright protection measures, digital cinema can be more susceptible to widespread piracy, which can dampen or even prevent the commercial deployment of digital cinema. In this paper we propose a novel approach of fingerprinting each individual distribution copy of a digital movie for the purpose of tracing pirated copies back to their source. The proposed fingerprinting technique presents a fundamental departure from the traditional digital watermarking/fingerprinting techniques. Its novelty and uniqueness lie in a so-called semantic or subjective transparency property. The fingerprints are created by editing those visual and audio attributes that can be modified with semantic and subjective transparency to the audience. Semantically-transparent fingerprinting or watermarking is the most robust kind among all existing watermarking techniques, because it is content-based not sample-based, and semantically-recoverable not statistically-recoverable.
Color demosaicking of CCD data has been thoroughly studied for still digital cameras. But, much to our surprise, there has seemingly been an absence of research on color demosaicking techniques tailored to CCD video cameras. The temporal dimension of a sequence of color mosaic images can reveal new information on the color components missing due to mosaic subsampling, information that is otherwise unavailable in the spatial domain of individual frames. In the temporal approach to color demosaicking, a pixel of the current frame is matched to another in a reference frame via motion analysis, such that the CCD camera samples the same position in different colors in the two frames in question. As a result, a color sample that is missing in the spatial domain may be recovered from the temporal domain. Or, even better, the intra-frame and inter-frame color demosaicking techniques can be combined via data fusion to achieve more robust color restoration.
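A minimal sketch of the temporal recovery step described above: if motion analysis maps a pixel missing green in the current frame onto a reference-frame position where the mosaic sampled green, that sample can be fetched directly. The Bayer layout, the given motion vector, and the fallback policy are illustrative; the actual method would use real motion estimation and data fusion with the spatial estimate.

```python
import numpy as np

def bayer_color(i, j):
    """Color sampled at (i, j) of a GRBG Bayer mosaic (illustrative layout)."""
    if (i % 2, j % 2) in [(0, 0), (1, 1)]:
        return "G"
    return "R" if i % 2 == 0 else "B"

def temporal_sample(cur_pos, motion, ref_mosaic, want="G"):
    """Fetch a missing color from the motion-matched reference pixel, if the
    mosaic happens to sample that color there; otherwise report a miss."""
    i, j = cur_pos[0] + motion[0], cur_pos[1] + motion[1]
    if bayer_color(i, j) == want:
        return ref_mosaic[i, j]
    return None   # fall back to (or fuse with) the spatial estimate

ref = np.arange(36.0).reshape(6, 6)
# A red site at (0, 1) moved by (0, 1): the reference samples green there.
print(temporal_sample((0, 1), (0, 1), ref, want="G"))
```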
KEYWORDS: Wavelets, 3D modeling, Image compression, 3D image processing, Wavelet transforms, 3D acquisition, Performance modeling, Data modeling, Data centers, Medical imaging
We examine progressive lossy-to-lossless compression of medical volumetric data using three-dimensional (3D) integer wavelet packet transforms and set partitioning in hierarchical trees (SPIHT). To achieve good lossy coding performance, we describe a 3D integer wavelet packet transform that allows implicit bit shifting of wavelet coefficients to approximate a 3D unitary transformation. We also address context modeling for efficient entropy coding within the SPIHT framework. Both lossy and lossless coding performances are better than those recently reported in [1].
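The reversible integer wavelet machinery can be illustrated with the simplest member of the family, the S (integer Haar) transform built by lifting; the paper's packet transform uses longer filters and implicit bit shifting, which are omitted in this sketch.

```python
import numpy as np

def s_transform(x):
    """One level of the reversible integer Haar (S) transform via lifting."""
    even, odd = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = odd - even                 # detail: integer difference
    s = even + (d >> 1)            # approximation: integer average
    return s, d

def s_inverse(s, d):
    even = s - (d >> 1)            # undo the update step
    odd = d + even                 # undo the predict step
    out = np.empty(s.size + d.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([10, 12, 200, 202, 5, 9])
s, d = s_transform(x)
assert np.array_equal(s_inverse(s, d), x)   # perfectly invertible in integers
```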
Recent progress in wavelet image coding has brought the field to maturity. Major developments in the process are rate-distortion (R-D) based wavelet packet transformation, zerotree quantization, subband classification and trellis-coded quantization, and sophisticated context modeling in entropy coding. Drawing from past experience and recent insight, we propose a new wavelet image coding technique with trellis coded space-frequency quantization (TCSFQ). TCSFQ aims to explore space-frequency characterizations of wavelet image representations via R-D optimized zerotree pruning, trellis coded quantization, and context modeling in entropy coding. Experiments indicate that the TCSFQ coder achieves twice as much compression as the baseline JPEG coder at the same peak signal-to-noise ratio (PSNR), making it better than all other coders described in the literature.
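The trellis coded quantization at the heart of TCSFQ can be sketched in a few lines: a union scalar codebook is partitioned into subsets, the branches of a small trellis are labeled with subsets, and a Viterbi search picks the minimum-distortion level sequence. The 4-state branch labeling below is illustrative rather than Ungerboeck's exact set partitioning, and the R-D optimized zerotree pruning is not shown.

```python
# Toy 4-state trellis: TRELLIS[state] lists (subset, next_state) branch pairs.
TRELLIS = {0: [(0, 0), (2, 1)], 1: [(1, 2), (3, 3)],
           2: [(2, 0), (0, 1)], 3: [(3, 2), (1, 3)]}

def nearest_in_subset(x, subset, delta=1.0):
    """Nearest level of the union codebook {i*delta} with i % 4 == subset."""
    i = round((x / delta - subset) / 4) * 4 + subset
    return i * delta

def tcq(samples, delta=1.0):
    """Viterbi search over the trellis for the minimum-distortion level path."""
    cost = {0: 0.0}                       # start in state 0
    path = {0: []}
    for x in samples:
        new_cost, new_path = {}, {}
        for s, c in cost.items():
            for subset, ns in TRELLIS[s]:
                level = nearest_in_subset(x, subset, delta)
                cc = c + (x - level) ** 2
                if ns not in new_cost or cc < new_cost[ns]:
                    new_cost[ns], new_path[ns] = cc, path[s] + [level]
        cost, path = new_cost, new_path
    best = min(cost, key=cost.get)
    return path[best], cost[best]

levels, sq_err = tcq([0.2, 3.7, -1.1, 2.4])
print(levels, sq_err)
```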
We examine color quantization of images using trellis coded quantization (TCQ). Together with a simple halftoning scheme, an eight-bit trellis coded color quantizer reproduces images that are visually indistinguishable from the 24-bit originals. The proposed algorithm can be viewed as a predictive trellis coded color quantization scheme. It is universal in the sense that no training or look-up table is needed. The complexity of TCQ is linear with respect to image size, making trellis coded color quantization suitable for interactive graphics and a window-based display environment.
The near-lossless CALIC is one of the best near-lossless intraframe image coding schemes; it exploits and removes the local context correlation of images. The wavelet transform localizes the frequency domain and exploits the frequency-based global correlation of images. Applying context modeling to the wavelet transform coefficients yields a state-of-the-art intraframe near-lossless coding scheme. In this paper, we generalize the intraframe wavelet transform CALIC to interframe coding to form a hybrid near-lossless multispectral image compression scheme. Context modeling techniques lend themselves easily to the modeling of image sequences. While the wavelet transform exploits the global redundancies, the interframe context modeling can thoroughly exploit the statistical redundancies both between and within the frames. First, the image frame is wavelet transformed in the near-lossless mode to obtain a set of orthogonal subclasses of images. Then the coefficients of the interframes are predicted using the gradient-adjusted predictor, based on both intra- and inter-frame coefficient contexts. The predicted coefficients are adjusted using the sample mean of prediction errors conditioned on the current context, and the residues are quantized. An incremental scheme is applied to the prediction errors in a moving time window for prediction bias cancellation. All the components are distortion controlled in the minimax metric to ensure near-lossless compression. The decompression is the inverse of this process. It is demonstrated that the near-lossless wavelet transform and context modeling interframe image compression is one of the best schemes for high-fidelity multispectral image compression, outperforming its intraframe counterpart with 10-20 percent compression gains while maintaining high fidelity.
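The gradient-adjusted predictor (GAP) that the scheme borrows from CALIC has a compact closed form; the thresholds below are the ones commonly quoted for CALIC's intraframe GAP, whereas the paper applies the same idea to wavelet coefficients with interframe contexts.

```python
def gap_predict(im, i, j):
    """CALIC's gradient-adjusted prediction of pixel (i, j) from its causal
    neighbors (west, north, north-west, north-east, and their extensions)."""
    W, N = im[i][j - 1], im[i - 1][j]
    NW, NE = im[i - 1][j - 1], im[i - 1][j + 1]
    WW, NN = im[i][j - 2], im[i - 2][j]
    NNE = im[i - 2][j + 1]
    d_h = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal activity
    d_v = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical activity
    if d_v - d_h > 80:
        return W                                     # sharp horizontal edge
    if d_h - d_v > 80:
        return N                                     # sharp vertical edge
    p = (W + N) / 2 + (NE - NW) / 4                  # smooth-area prediction
    if d_v - d_h > 32:
        return (p + W) / 2
    if d_v - d_h > 8:
        return (3 * p + W) / 4
    if d_h - d_v > 32:
        return (p + N) / 2
    if d_h - d_v > 8:
        return (3 * p + N) / 4
    return p

print(gap_predict([[10] * 5] * 5, 2, 2))   # flat region -> predicts 10.0
```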
KEYWORDS: Digital cameras, Color reproduction, Image quality, Visualization, Reconstruction algorithms, Raster graphics, Multimedia, Data processing, CCD image sensors, Linear filtering
Digital cameras are gaining popularity in many applications of multimedia information processing. But the CCD sensor used by digital cameras does not provide all three red, green, blue primaries for each pixel. Instead it uses an interlaced sampling scheme with only one primary per pixel. This article considers the problem of reconstructing a 24-bit/pixel color image from the interlaced sampling. A simple, efficient, and effective algorithm for color restoration from digital camera data is proposed. The proposed algorithm uses a pattern matching technique to reconstruct the missing color primaries based on the pixel contexts. Experimental results show that the proposed algorithm outperforms the technique of color interpolation.
Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors seems to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity, at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains becomes possible for specific images in the test set, at a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to its basic architecture, retaining its essential simplicity.
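For reference, the baseline JPEG-LS predictor that these inter-band techniques build on is the median edge detector (MED); the inter-band variant appended below, which offsets a reference band's co-located sample by a locally tracked inter-band bias, is an assumed illustration and not the paper's specific method.

```python
def med_predict(W, N, NW):
    """JPEG-LS median edge detector (MED) prediction from causal neighbors."""
    if NW >= max(W, N):
        return min(W, N)      # edge above or to the left: pick the low side
    if NW <= min(W, N):
        return max(W, N)      # edge the other way: pick the high side
    return W + N - NW         # smooth region: planar prediction

def interband_predict(band, ref, i, j):
    """Assumed inter-band scheme: track the local offset between the current
    band and a reference band, and add it to the co-located sample."""
    offset = (band[i][j - 1] - ref[i][j - 1]
              + band[i - 1][j] - ref[i - 1][j]) / 2
    return ref[i][j] + offset

print(med_predict(W=100, N=60, NW=80))   # between the neighbors: planar, 80
```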