This PDF file contains the front matter associated with SPIE
Proceedings Volume 7248, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
The Joint Photographic Experts Group (JPEG) committee is a joint working group of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The word "Joint" in JPEG, however, does not refer to the joint efforts of ISO and IEC, but to the fact that the JPEG activities are the result of an additional collaboration with the International Telecommunication Union (ITU). Inspired by technology and market evolutions, i.e., the advent of wavelet technology and the need for additional functionality such as scalability, the JPEG committee launched a new standardization process in 1997 that resulted in a new standard in 2000: JPEG 2000. JPEG 2000 is a collection of standard parts, which together shape the complete toolset. Currently, the JPEG 2000 standard is composed of 13 parts. In this paper, we review these parts and additionally address recent standardization initiatives within the JPEG committee, such as JPSearch, JPEG XR, and AIC.
The directional wavelet used in image processing has orientation selectivity and can provide a sparse representation of edges in natural images. Multiwavelets offer the possibility of better performance in image processing applications compared to the scalar wavelet. Applying directionality to multiwavelets may thus combine both advantages. This paper proposes a scheme, named multiridgelets, which is an extension of ridgelets. We consider the application of the balanced multiwavelet transform to the Radon transform of an image and, specifically, its use in image texture analysis. The regular polar angle method is employed to realize the discrete transform. Three statistical features (standard deviation, median, and entropy) are computed from the multiridgelet coefficients. A comparative study was made with results obtained using 2D wavelets, scalar ridgelets, and curvelets. Classification of mura defects on LCD screens is used to quantify the performance of the proposed texture analysis methods. A total of 240 normal images and 240 simulated defective images are supplied to train the support vector machine classifier, and another 40 normal and 40 defective images are used for testing. The results indicate that multiridgelets are comparable to or better than curvelets and perform significantly better than 2D wavelets and scalar ridgelets.
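Since the abstract gives no implementation details, the following is only a minimal sketch of the feature-extraction and classification stage it describes: three statistics (standard deviation, median, entropy) computed per subband of a directional transform and fed to a support vector machine. The `multiridgelet_subbands` input is a hypothetical placeholder for the coefficients produced by the multiridgelet transform, which is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

def subband_features(subbands, bins=64):
    """Standard deviation, median, and histogram entropy per subband,
    concatenated into one feature vector (as described in the abstract)."""
    feats = []
    for c in subbands:                      # c: 2D array of coefficients
        c = np.abs(np.asarray(c, dtype=float)).ravel()
        hist, _ = np.histogram(c, bins=bins)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        entropy = -np.sum(p * np.log2(p))
        feats.extend([c.std(), np.median(c), entropy])
    return np.array(feats)

# Hypothetical usage: `train_subbands`/`test_subbands` would hold the
# multiridgelet coefficients of each image (240 normal + 240 defective
# for training, 40 + 40 for testing, as in the paper).
# X_train = np.stack([subband_features(s) for s in train_subbands])
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# y_pred = clf.predict(np.stack([subband_features(s) for s in test_subbands]))
```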
This paper deals with the decomposition of multivariate functions into sums and compositions of monovariate functions. The overall purpose of this work is to find a suitable strategy to express complex multivariate functions using simpler functions that can be analyzed with well-known techniques, instead of developing complex N-dimensional tools. More precisely, most signal processing techniques are applied in 1D or 2D and cannot easily be extended to higher dimensions. We recall that such a decomposition exists according to Kolmogorov's superposition theorem. According to this theorem, any multivariate function can be decomposed into two types of univariate functions, called inner and external functions. Inner functions are associated with each dimension and linearly combined to construct a hash function that associates every point of a multidimensional space with a value of the real interval [0, 1]. Each inner function is the argument of one external function. The external functions map real values in [0, 1] to the image of the corresponding point of the multidimensional space under the multivariate function.
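For reference, the classical statement of Kolmogorov's superposition theorem can be written as below, with the inner functions playing the role of the per-dimension functions described above and the external functions denoted by Phi (Sprecher's construction further reduces the inner functions to shifted copies of a single function):

```latex
f(x_1,\dots,x_d) \;=\; \sum_{q=0}^{2d} \Phi_q\!\left(\sum_{p=1}^{d} \psi_{q,p}(x_p)\right)
```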
Sprecher, in Ref. 1, has proved that internal functions can be used to construct space filling curves, i.e. there
exists a curve that sweeps the multidimensional space and uniquely matches corresponding values into [0, 1]. Our
goal is to obtain both a new decomposition algorithm for multivariate functions (at least bi-dimensional) and
adaptive space filling curves. Two strategies can be applied. Either we construct fixed internal functions to obtain
space filling curves, which allows us to construct an external function such that their sums and compositions
exactly correspond to the multivariate function; or the internal function is constructed by the algorithm and is
adapted to the multivariate function, providing different space filling curves for different multivariate functions.
We present two of the most recent constructive algorithms for the monovariate functions. The first method is due to Sprecher (Ref. 2 and Ref. 3). We provide additional explanations for the existing algorithm and present several decomposition results for gray level images. We point out the main drawback of this method: all the function parameters are fixed, so the univariate functions cannot be modified; in particular, the inner function cannot be modified, and neither can the space filling curve. The number of layers depends on the dimension of the decomposed function. The second algorithm, proposed by Igelnik in Ref. 4, increases the flexibility of the parameters, but only approximates the monovariate functions: the number of layers is variable, and a neural network optimizes the monovariate functions and the weights associated with each layer to ensure convergence to the decomposed multivariate function.
We have implemented both Sprecher's and Igelnik's algorithms and present the results of the decompositions of gray level images. There are artifacts in the reconstructed images, which leads us to apply the algorithm to wavelet-decomposed images. We detail the reconstruction quality and the quantity of information contained in Igelnik's network.
We introduce a new patch-based multi-resolution analysis of semi-regular mesh surfaces. This analysis brings patch-specific wavelet decomposition, quantization, and encoding to the mesh compression process. Our underlying mesh partitioning relies on surface roughness (based on frequency magnitude variations) in order to produce patches representative of semantic attributes of the object. With current compression methods based on wavelet decomposition, some regions of the mesh still have wavelet coefficients with a non-negligible magnitude or polar angle (the angle with the normal vector), reflecting the high frequencies of the model. For each non-smooth region, our adaptive compression chain offers the possibility of choosing the prediction filter best adjusted to its characteristics. Our hierarchical analysis is based on a semi-regular mesh decomposition produced by second-generation wavelets. Apart from progressive compression, other types of applications can benefit from this adaptive decomposition, such as error-resilient compression, view-dependent reconstruction, indexing, or watermarking. Selective refinement examples are given to illustrate the concept of ROI (Region Of Interest) decoding, which few authors have considered, although it is possible with JPEG2000 for images.
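The abstract does not give the roughness criterion in detail; the sketch below only illustrates the kind of per-patch decision it describes, namely classifying a patch from the magnitudes and polar angles of its wavelet coefficient vectors and selecting a prediction filter accordingly. The thresholds, patch data layout, and filter names are hypothetical, not the paper's.

```python
import numpy as np

def classify_patch(coeff_vectors, normals, mag_thresh=0.05,
                   angle_thresh=np.radians(30)):
    """Label a patch 'rough' if its wavelet coefficient vectors have a
    non-negligible average magnitude or deviate strongly from the local
    normals (hypothetical criterion inspired by the abstract)."""
    mags = np.linalg.norm(coeff_vectors, axis=1)
    cosang = np.einsum('ij,ij->i', coeff_vectors, normals) / (
        mags * np.linalg.norm(normals, axis=1) + 1e-12)
    angles = np.arccos(np.clip(cosang, -1.0, 1.0))   # polar angles
    rough = (mags.mean() > mag_thresh) or (angles.mean() > angle_thresh)
    return 'rough' if rough else 'smooth'

def choose_prediction_filter(label):
    # Hypothetical mapping from patch class to a prediction filter tag.
    return {'smooth': 'lifting_butterfly', 'rough': 'lifting_short'}[label]
```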
Next-generation image compression systems should be optimized for the way the human visual system (HVS) works. The HVS has evolved over millions of years for the images that exist in our environment. This idea is reinforced by the fact that sparse codes extracted from natural images resemble the receptive fields of the primary visual cortex. We introduce a novel technique in which basis functions trained by Independent Component Analysis (ICA) are used to transform the image. ICA is used to extract independent features (basis functions) that are localized, band-limited, and oriented, like those of the HVS, and resemble wavelet and Gabor bases. A greedy algorithm, matching pursuit (MP), is used to transform the image into the ICA domain, followed by quantization and multistage entropy coding. We have compared our codec with JPEG from the DCT family and JPEG2000 from the wavelet family. For fingerprint images, results are also compared with the wavelet scalar quantization (WSQ) codec, which has been specially tailored for this type of image. Our codec outperforms JPEG and WSQ and performs comparably to JPEG2000, with lower complexity than the latter.
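Matching pursuit itself is a standard greedy decomposition; the sketch below shows that step over a learned (e.g. ICA-trained) dictionary of patch-sized atoms. The dictionary `D` is assumed given (columns are unit-norm atoms); dictionary training, quantization, and entropy coding are not reproduced.

```python
import numpy as np

def matching_pursuit(x, D, n_atoms=16, tol=1e-6):
    """Greedy matching pursuit: represent patch vector x as a sparse
    combination of dictionary atoms (columns of D, assumed unit-norm)."""
    residual = x.astype(float).copy()
    indices, coeffs = [], []
    for _ in range(n_atoms):
        corr = D.T @ residual              # correlation with every atom
        k = int(np.argmax(np.abs(corr)))   # best-matching atom
        c = corr[k]
        if abs(c) < tol:
            break
        indices.append(k)
        coeffs.append(c)
        residual -= c * D[:, k]            # remove the atom's contribution
    return indices, coeffs, residual

# Hypothetical usage on an 8x8 image patch with a 64x256 learned dictionary D:
# idx, c, r = matching_pursuit(patch.ravel(), D, n_atoms=10)
# reconstruction = sum(ci * D[:, i] for i, ci in zip(idx, c))
```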
In lossy packet networks such as the Internet, information often gets lost due to, e.g., network congestion. While these problems are typically addressed by active error concealment techniques such as error-correcting codes, these do not always work for applications such as real-time video. In such cases, passive error concealment is essential. Passive error concealment exploits the redundancy in the video: lost data are estimated from their correctly received neighboring data. In this paper, we focus on wavelet-based video coding. We compress video frames by dispersively spreading neighboring wavelet coefficients over different packets and by coding these packets independently of each other. If a packet gets lost during transmission, we estimate the missing data (wavelet coefficients in I-frames and P-frames, and motion vectors) with passive error concealment techniques.
In the proposed method, we extend our earlier image concealment method to video. This technique applies a locally adaptive interpolation for the reconstruction of lost coefficients in the I-frames of wavelet-coded video. We also investigate how the lost coefficients in P-frames can be reconstructed. For the reconstruction of lost motion vectors, we use the vector median filtering reconstruction technique. Compared to related video reconstruction methods of similar complexity, the proposed scheme estimates the lost data considerably more accurately, and the reconstructed video is also of visibly higher quality. As the proposed method is fast and of low complexity, it is widely applicable.
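The vector median filter used for lost motion vectors is a standard tool: among the candidate vectors (here, the motion vectors of correctly received neighbours), choose the one minimizing the total distance to all others. A minimal sketch, assuming the neighbouring vectors are already collected:

```python
import numpy as np

def vector_median(neighbors):
    """Return the candidate motion vector minimizing the sum of Euclidean
    distances to all other candidates (the vector median)."""
    v = np.asarray(neighbors, dtype=float)          # shape (N, 2)
    dists = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    return v[np.argmin(dists.sum(axis=1))]

# Example: conceal a lost motion vector from four received neighbours.
# vector_median([(1, 0), (1, 1), (1, 1), (9, 8)])  ->  array([1., 1.])
```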
This paper analyzes the statistical dependencies between wavelet coefficients in wavelet-based decompositions of 3D meshes. These dependencies are estimated using the interband, intraband, and composite mutual information. For images, the literature shows that the composite and the intraband mutual information are approximately equal, and both are significantly larger than the interband mutual information. This indicates that intraband coding designs should be favored over interband zerotree-based coding approaches in order to better capture the residual dependencies between wavelet coefficients. This motivates the design of intraband wavelet-based image coding schemes, such as quadtree-limited (QT-L) coding, or the state-of-the-art JPEG-2000 scalable image coding standard. In this paper, we empirically investigate whether these findings hold in the case of meshes as well. The mutual information estimation results show that, although the intraband mutual information is significantly larger than the interband mutual information, the composite case cannot be discarded, as the composite mutual information is also significantly larger than the intraband mutual information. We conclude that intraband and composite codec designs should be favored over the traditional interband zerotree-based coding approaches commonly followed in scalable coding of meshes.
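Such mutual-information comparisons can be reproduced with a simple histogram estimator; the sketch below (a rough sketch, not necessarily the estimator used in the paper) computes the mutual information between two paired sets of wavelet coefficients, e.g. a coefficient and its spatial (intraband) neighbour or its parent (interband).

```python
import numpy as np

def mutual_information(x, y, bins=64):
    """Histogram-based estimate of I(X;Y) in bits for paired samples x, y
    (e.g. wavelet coefficient magnitudes and their neighbours)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Hypothetical usage: compare I(c; spatial neighbour) with I(c; parent)
# for the wavelet coefficients c of a mesh (or image) decomposition.
```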
Active infrared thermography is a method for non-destructive testing (NDT) of materials and components. In pulsed thermography (PT), a brief, high-intensity flash is used to heat the sample. The decay of the sample surface temperature is detected and recorded by an infrared camera. Any subsurface anomaly (e.g., an inclusion or delamination) gives rise to a local temperature increase (thermal contrast) on the sample surface. Conventionally, in Pulsed Phase Thermography (PPT) the analysis of PT time series is done by means of the Discrete Fourier Transform, producing phase images that can suppress unwanted physical effects (due to surface emissivity variations or non-uniform heating). The drawback of the Fourier-based approach is the loss of temporal information, making quantitative inversion procedures (e.g., defect depth measurement) difficult. In this paper, the complex Morlet wavelet transform is used to preserve the time information of the signal and thus provide information about the depth of a subsurface defect. Additionally, we propose to use the corresponding phase contrast value to derive supplementary information about the thermal reflection properties at the defect interface. This provides additional information (e.g., about the thermal mismatch factor between the specimen and the defect), making the interpretation of PPT results easier and perhaps unambiguous.
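As an illustration of the wavelet stage only (the thermographic processing chain is not reproduced), a complex Morlet continuous wavelet transform of a single pixel's temperature-decay signal can be computed with PyWavelets; the decay model, sampling rate, scale range, and Morlet parameters below are placeholders.

```python
import numpy as np
import pywt

# Hypothetical surface-temperature decay for one pixel, sampled at fs Hz.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
signal = 1.0 / np.sqrt(t + 1e-2)          # idealized 1/sqrt(t) cooling curve

scales = np.arange(2, 128)
coeffs, freqs = pywt.cwt(signal, scales, 'cmor1.5-1.0', sampling_period=1 / fs)

magnitude = np.abs(coeffs)                 # time-frequency magnitude
phase = np.angle(coeffs)                   # phase, retaining time information
```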
In this paper, we propose a new algorithm for objective blur estimation using wavelet decomposition. The central idea of our method is to estimate blur as a function of the center of gravity of the average cone ratio (ACR) histogram. The key properties of the ACR are twofold: it is powerful in estimating local edge regularity, and it is nearly insensitive to noise. We use these properties to estimate the blurriness of the image, irrespective of the level of noise. In particular, the center of gravity of the ACR histogram serves as a blur metric. The method is applicable both when a reference image is available and when there is no reference. The results demonstrate consistent performance of the proposed metric for a wide class of natural images and over a wide range of out-of-focus blur. Moreover, the proposed method shows a remarkable insensitivity to noise compared to other wavelet-domain methods.
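The average cone ratio itself is specific to the paper and is not reproduced here; the sketch below only shows the final step the abstract describes, turning a histogram of some per-edge-point measure into a single blur score via its centre of gravity. The bin count and the name `acr_values` are assumptions.

```python
import numpy as np

def histogram_center_of_gravity(values, bins=50, value_range=None):
    """Centre of gravity (bin centres weighted by counts) of the histogram
    of `values`; in the paper this is applied to ACR values."""
    hist, edges = np.histogram(values, bins=bins, range=value_range)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return float(np.sum(centers * hist) / max(hist.sum(), 1))

# Hypothetical usage: `acr_values` would hold the average cone ratios
# computed over edge locations of the (possibly blurred) image.
# blur_metric = histogram_center_of_gravity(acr_values)
```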
In this paper, we present an innovative approach to objective quality evaluation based on the mean difference between the original and the tested image in different wavelet subbands. The properties of the DWT subband decomposition are similar to the characteristics of the human visual system (HVS), facilitating the integration of the DWT into image quality evaluation. The DWT decomposition is carried out with a multiresolution analysis of the signal, which allows us to decompose it into approximation and detail subbands. The DWT coefficients were computed using reverse biorthogonal spline wavelet filter banks. The coefficients of the HH subband at level 2 are used to compute a new image quality measure (IQM), defined as the difference between the level-2 HH coefficients of the original and the degraded image.
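A minimal sketch of the measure as described (level-2 decomposition with a reverse biorthogonal wavelet and a difference of the HH subbands); the particular rbio filter order and the use of the mean absolute difference are assumptions, since the abstract does not fix them.

```python
import numpy as np
import pywt

def hh2_iqm(original, degraded, wavelet='rbio2.2'):
    """Image quality measure: mean absolute difference between the level-2
    HH (diagonal detail) subbands of the original and degraded images."""
    def hh2(img):
        coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=2)
        # coeffs = [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]; cD2 is the HH band
        return coeffs[1][2]
    return float(np.mean(np.abs(hh2(original) - hh2(degraded))))
```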
Power-law scaling behavior is observed in data from many fields. The velocity of fully developed turbulent flow, telecommunication traffic in networks, and financial time series are some examples among many others. The goal of the present contribution is to show the scaling behavior of physiological time series in marathon races using wavelet leaders and Detrended Fluctuation Analysis.
The marathon race is an exhausting exercise; it is referenced as a model for studying the limits of human ambulatory abilities.
We analyzed the athletes' heart rate and speed time series, recorded simultaneously. We find that the heart cost time series (the number of heart beats per meter) increases with the fatigue appearing during the marathon race; its trend grows in the second half of the race for all athletes.
For most physiological time series, we observed a concave behavior of the wavelet leader scaling exponents, which suggests multifractal behavior. In addition, Detrended Fluctuation Analysis shows short-range and long-range time-scale power-law exponents with the same break point for each physiological time series and each athlete. The short-range time-scale exponent increases with fatigue in most physiological signals.
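Detrended Fluctuation Analysis is a standard method; the sketch below implements the usual first-order (DFA-1) version for a single series, returning the fluctuation function whose log-log slope gives the scaling exponent (and whose break point separates the short-range and long-range regimes discussed above).

```python
import numpy as np

def dfa(x, window_sizes):
    """First-order DFA: RMS fluctuation of the integrated, window-wise
    detrended series for each window size."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # integrated profile
    fluct = []
    for n in window_sizes:
        n_win = len(y) // n
        segs = y[:n_win * n].reshape(n_win, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            a, b = np.polyfit(t, seg, 1)       # least-squares line per window
            rms.append(np.mean((seg - (a * t + b)) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    return np.asarray(fluct)

# Scaling exponent = slope of log F(n) versus log n, e.g. (window sizes are
# illustrative and must stay well below the series length):
# sizes = np.unique(np.logspace(1, 3, 20).astype(int))
# alpha = np.polyfit(np.log(sizes), np.log(dfa(heart_rate, sizes)), 1)[0]
```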
With the emerging Knowledge Society, enriched video is nowadays a hot research topic from both academic and industrial perspectives. The principle consists of associating with the video stream metadata of various types (textual, audio, video, executable code, ...). This new content can then be further exploited in a large variety of applications, such as interactive DTV, games, e-learning, and data mining. This paper brings into evidence the potential of watermarking techniques for such applications. By inserting the enrichment data into the very video to be enriched, three main advantages are ensured. First, no additional complexity is required from the point of view of the terminal or of the representation format. Second, no backward-compatibility issue is encountered, thus allowing a single system to accommodate services from several generations. Finally, the network adaptation constraints are alleviated. The discussion is structured around both theoretical aspects (the accurate evaluation of the watermarking capacity in several real-life scenarios) and applications developed within the framework of the R&D contracts conducted at the ARTEMIS Department.
A new approach to wavelet-based demosaicing of color filter array (CFA) images is presented. It is observed that conventional wavelet-based demosaicing results in demosaicing artifacts in high spatial frequency regions of the image. By introducing a framework of locally adaptive demosaicing in the wavelet domain, the presented method provides computationally simple techniques to avoid these artifacts. In order to reduce computation time and memory requirements even further, we propose the use of the dual-tree complex wavelet transform. The results show that wavelet-based demosaicing, using the proposed locally adaptive framework, is visually comparable with state-of-the-art pixel-based demosaicing. This result is very promising when considering a low-complexity wavelet-based demosaicing and denoising approach.
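The paper's pipeline is not reproduced here; the sketch below only illustrates the idea of locally adaptive processing in the wavelet domain: build a high-frequency mask from detail-coefficient magnitudes (using an ordinary stationary DWT in place of the dual-tree complex wavelet transform the paper uses) and switch between two demosaicing estimates. The wavelet, threshold, and estimate names are assumptions.

```python
import numpy as np
import pywt

def high_frequency_mask(gray, wavelet='db2', threshold=10.0):
    """Boolean mask marking pixels in high spatial frequency regions,
    based on level-1 stationary-wavelet detail magnitudes.
    (swt2 requires even image dimensions at level 1.)"""
    cA, (cH, cV, cD) = pywt.swt2(np.asarray(gray, dtype=float),
                                 wavelet, level=1)[0]
    detail = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)
    return detail > threshold

# Hypothetical use inside a demosaicing chain: `smooth_estimate` and
# `edge_aware_estimate` would be two full-resolution reconstructions.
# mask = high_frequency_mask(green_channel_estimate)
# output = np.where(mask[..., None], edge_aware_estimate, smooth_estimate)
```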
In this paper, a universal embedding distortion model for wavelet-based watermarking is presented. The present work extends our previous work on modelling embedding distortion for watermarking algorithms that use orthonormal wavelet kernels to non-orthonormal wavelet kernels, such as biorthogonal wavelets. Using a common framework for major wavelet-based watermarking algorithms and Parseval's energy conservation theorem for orthonormal transforms, we propose that the distortion performance, measured using the mean square error (MSE), is proportional to the sum of the energies of the wavelet coefficients to be modified by watermark embedding. The extension of the model to non-orthonormal wavelet kernels is obtained by rescaling this sum of energies with a weighting parameter that follows the energy conservation theorems for wavelet frames. The proposed model is useful for finding optimum input parameters, such as the wavelet kernel, coefficient selection, and subband choice, for a given wavelet-based watermarking algorithm.
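One possible formal reading of the abstract (our paraphrase, not the paper's equation) is the following, with S the set of coefficients selected for modification and alpha the weighting parameter for non-orthonormal kernels:

```latex
\mathrm{MSE} \;\propto\; \alpha \sum_{i \in \mathcal{S}} c_i^{2},
\qquad \alpha = 1 \ \text{for orthonormal kernels (Parseval)} .
```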
Geometric shapes embedded in 2D or 3D images often have boundaries with both high and low curvature regions. These
boundaries of varying curvature can be efficiently captured by adaptive grids such as quadtrees and octrees. Using these
trees, we propose to store sample values at the centers of the tree cells in order to simplify the tree data structure, and to take
advantage of the image pyramid. The difficulty with using a cell-centered tree approach is the interpolation of the values
sampled at the cell centers. To solve this problem, we first restrict the tree refinement and coarsening rules so that only a
small number of local connectivity types are produced. For these connectivity types, we can precompute the weights for
a continuous interpolation. Using this interpolation, we show that region-based image segmentation of 2D and 3D images
can be performed efficiently.
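The paper's restricted refinement rules and precomputed interpolation weights are not reproduced; the sketch below only illustrates the cell-centred quadtree layout it builds on: each cell stores one sample at its centre and is refined while the local variation exceeds a tolerance. The intensity-range criterion and the power-of-two image size are assumptions.

```python
import numpy as np

class QuadCell:
    """Quadtree cell storing one sample at its centre (cell-centred layout)."""
    def __init__(self, x0, y0, size, value):
        self.x0, self.y0, self.size, self.value = x0, y0, size, value
        self.children = None

def build(image, x0, y0, size, tol=8.0):
    """Refine a cell while the intensity variation inside it exceeds `tol`
    (a hypothetical criterion; the paper further restricts refinement and
    coarsening so that only a few local connectivity types occur)."""
    block = image[y0:y0 + size, x0:x0 + size]
    cell = QuadCell(x0, y0, size, float(block.mean()))   # sample at centre
    if size > 1 and block.max() - block.min() > tol:
        h = size // 2
        cell.children = [build(image, x0 + dx, y0 + dy, h, tol)
                         for dy in (0, h) for dx in (0, h)]
    return cell

# Hypothetical usage on a square image whose side is a power of two:
# root = build(image, 0, 0, image.shape[0])
```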
A framework for evaluating wavelet-based watermarking schemes against scalable coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet-based watermarking schemes under MPEG-21 part-7 digital item adaptations (DIA). WEBCAM accommodates all major wavelet-based watermarking schemes in a single generalised framework by considering a global parameter space from which the optimum parameters for a specific algorithm may be chosen. WEBCAM considers the traversal of media content along various links and the required content adaptations at various nodes of media supply chains. In this paper, content adaptation is emulated by JPEG2000 coded bit stream extraction for various spatial resolution and quality levels of the content. The proposed framework is beneficial not only as an evaluation tool but also as a design tool for new wavelet-based watermarking algorithms, by picking and mixing available tools and finding the optimum design parameters.
In this paper, a multipurpose watermarking scheme is proposed. Here, multipurpose means that the proposed scheme can act as a single watermarking scheme (SWS) or a multiple watermarking scheme (MWS), according to requirements and convenience. We first segment the host image into blocks by means of a Hilbert space filling curve and, based on the amount of DCT energy in the blocks, select the threshold values that make the proposed scheme multipurpose. For embedding n watermarks, (n - 1) thresholds are selected. If the DCT energy of a block is less than the threshold value, ENOPV decomposition is performed and the watermark is embedded in the low, high, or all frequency sub-bands by modifying the singular values. If the DCT energy of the block is greater than the threshold value, embedding is done by modifying the singular values. This process of embedding through ENOPV-SVD and SVD is applied alternately for all (n - 1) threshold values. Finally, the modified blocks are mapped back to their original positions using the inverse Hilbert space filling curve to obtain the watermarked image. A reliable extraction process is developed for extracting all watermarks from the attacked image. Experiments are performed on different standard grayscale images, and robustness is evaluated against a variety of attacks.
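The block-ordering and energy steps can be illustrated compactly: the Hilbert-curve index function below is the standard distance-to-coordinates conversion, and the DCT energy is taken here as the sum of squared DCT coefficients of each block. The block size, the energy definition (some variants exclude the DC term), and the surrounding ENOPV/SVD embedding are assumptions or omissions relative to the paper.

```python
import numpy as np
from scipy.fft import dctn

def d2xy(order, d):
    """Convert distance d along the Hilbert curve covering a 2^order x 2^order
    grid into (x, y) coordinates (standard iterative conversion)."""
    x = y = 0
    t, s, n = d, 1, 1 << order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                          # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def block_dct_energy(block):
    """Sum of squared 2-D DCT coefficients of one block (its 'DCT energy')."""
    return float(np.sum(dctn(block.astype(float), norm='ortho') ** 2))

def hilbert_block_order(image, block=8):
    """Visit the non-overlapping blocks of a square image (side a power of two
    times `block`) in Hilbert-curve order; return (position, energy) pairs."""
    n_blocks = image.shape[0] // block
    order = int(np.log2(n_blocks))
    out = []
    for d in range(n_blocks * n_blocks):
        bx, by = d2xy(order, d)
        b = image[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
        out.append(((bx, by), block_dct_energy(b)))
    return out
```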