When estimating the standard deviation of a quantity with spatial dependence, several measurements should be carried out at every location where the quantity is estimated. In imaging applications, however, the standard deviation is commonly measured from a single highly uniform image. This procedure tends to overestimate the standard deviation because of the residual spatial nonuniformity of the image. In this work we propose a new technique based on the variogram, a function that measures the spatial correlation of the image, which can be used to accurately estimate the standard deviation of nonuniform images obtained with charge-coupled device (CCD) cameras.
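As a rough illustration of the idea (not the authors' exact estimator), the sketch below computes an empirical one-dimensional variogram and extrapolates it to lag zero; for a slowly varying nonuniformity plus white noise, that "nugget" value approximates the noise variance. The synthetic ramp image, the maximum lag, and the linear extrapolation are all illustrative choices.

import numpy as np

def semivariogram(img, max_lag=5):
    """Empirical semivariogram along the horizontal axis:
    gamma(h) = mean((z(x+h) - z(x))**2) / 2 for each lag h."""
    img = img.astype(float)
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((img[:, h:] - img[:, :-h]) ** 2) for h in lags])
    return lags, gamma

def noise_std_from_variogram(img, max_lag=5):
    """Extrapolate the variogram to lag 0 (the 'nugget'); for a smooth
    nonuniformity plus white noise the nugget approximates the noise
    variance, so its square root estimates the noise standard deviation."""
    lags, gamma = semivariogram(img, max_lag)
    slope, intercept = np.polyfit(lags, gamma, 1)   # gamma(h) ~ slope*h + nugget
    return np.sqrt(max(intercept, 0.0))

# Synthetic check: a slowly varying ramp (nonuniformity) plus white noise of std 2.
rng = np.random.default_rng(0)
ramp = np.linspace(0, 20, 512)[None, :] * np.ones((512, 1))
noisy = ramp + rng.normal(0, 2.0, size=(512, 512))
print("plain std estimate    :", noisy.std())                       # inflated by the ramp
print("variogram std estimate:", noise_std_from_variogram(noisy))   # close to 2.0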
Displacement co-occurrence is used to describe the spatial dependence relationships of binary digital images, which in turn are used for disconnectedness detection and texture analysis. An "m-points-and-M-cells" model, in particular, is used to study one-dimensional displacement co-occurrence statistics, which are then used for row-wise and column-wise two-dimensional image analysis. A compact state characterizes the case in which the pixels aggregate into a connected region, resulting in a regular straight-line distribution of the displacement co-occurrence histogram. An image pattern is described by a scattering state, which usually yields an irregular histogram distribution. With reference to the compact state's histogram, the vector norms of the scattering states' histograms can be used to characterize the image's disconnectedness. A connected line along the horizontal or vertical orientation results in a zero vector norm; a nonzero vector norm therefore indicates the presence of disconnectedness. The displacement co-occurrence histogram is invariant under translation and shear deformation. Using a "W-points-and-M×N-lattice" model and the concept of rectangular rectification, any binary pattern can be considered as one of its scattering states. Two-dimensional displacement co-occurrence matrices yield the statistics of two-dimensional displacement configurations and can be used in texture analysis.
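The sketch below shows one plausible reading of the one-dimensional model (the exact definitions in the paper may differ): the displacement co-occurrence histogram counts pairs of foreground pixels at each displacement, the compact state of m contiguous points yields the straight-line reference h(d) = m - d, and the norm of the deviation from that reference is zero only for a connected run.

import numpy as np

def displacement_cooccurrence(row):
    """Histogram over displacements d = 1..len(row)-1 counting pairs of
    foreground pixels (value 1) separated by exactly d cells."""
    row = np.asarray(row, dtype=int)
    n = len(row)
    return np.array([np.sum(row[:n - d] * row[d:]) for d in range(1, n)])

def disconnectedness(row):
    """Norm of the deviation from the compact-state reference histogram
    (m contiguous points give h(d) = m - d for d < m, else 0); zero for a
    connected run, positive otherwise."""
    row = np.asarray(row, dtype=int)
    m, n = int(row.sum()), len(row)
    reference = np.array([max(m - d, 0) for d in range(1, n)])
    return np.linalg.norm(displacement_cooccurrence(row) - reference)

print(disconnectedness([0, 1, 1, 1, 1, 0, 0, 0]))  # 0.0 -> connected run
print(disconnectedness([1, 0, 1, 1, 0, 0, 1, 0]))  # > 0 -> disconnected pattern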
A color image segmentation methodology based on a self-organizing map (SOM) is proposed. The method takes into account the color similarity and spatial relationships of objects within an image. According to color-similarity features, the image is first segmented into coarse cluster regions. The resulting regions are then refined by computing the spatial distance between any two cluster regions and applying the SOM with a labeling process. The selection of the parameters for the SOM algorithm is also investigated experimentally. The experimental results show that the proposed system is feasible and that the segmented object regions are similar to those perceived by human vision.
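A minimal sketch of the coarse color-clustering stage is given below, using a small hand-rolled one-dimensional SOM over RGB pixel vectors; the subsequent spatial-distance merging and labeling steps of the proposed method are omitted, and the node count, learning rates, and random test image are illustrative.

import numpy as np

def train_som(pixels, n_nodes=8, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Tiny 1-D self-organizing map over RGB pixel vectors (coarse color
    clustering only; the spatial merging stage is not reproduced here)."""
    rng = np.random.default_rng(seed)
    weights = pixels[rng.choice(len(pixels), n_nodes)].astype(float)
    for t in range(iters):
        x = pixels[rng.integers(len(pixels))].astype(float)
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best matching unit
        lr = lr0 * (1 - t / iters)                             # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 1e-3                # shrinking neighborhood
        dist = np.abs(np.arange(n_nodes) - bmu)                # lattice distance to BMU
        h = np.exp(-dist**2 / (2 * sigma**2))[:, None]         # neighborhood function
        weights += lr * h * (x - weights)
    return weights

def segment(image, weights):
    """Assign every pixel to its nearest SOM node (a coarse color label)."""
    flat = image.reshape(-1, 3).astype(float)
    labels = np.argmin(np.linalg.norm(flat[:, None, :] - weights[None], axis=2), axis=1)
    return labels.reshape(image.shape[:2])

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64, 3))     # stand-in for a real RGB image
weights = train_som(image.reshape(-1, 3))
label_map = segment(image, weights)
print(label_map.shape, label_map.max())            # coarse label map and number of labels used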
We address the problem of transform-domain image denoising in two parts. In the first part, considering images corrupted by additive, data-independent noise, we review a class of local transform-domain filters and compare their performance. We improve the performance of local transform-domain filters by proposing averaging over overlapping windows. The comparisons include a discussion of the relationship to wavelet denoising and simulations on different images. In the second part, we consider images corrupted by data-dependent noise and propose a novel transform-domain denoising method. We study the performance of the method for the case of film-grain noise. Experimental results confirm the effectiveness of the studied transform-domain filters.
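A minimal sketch of a local transform-domain filter with averaging over overlapping windows is shown below, assuming block DCT with hard thresholding; the block size, step, and threshold are illustrative, and the filters studied in the paper may differ in detail.

import numpy as np
from scipy.fft import dctn, idctn

def local_dct_denoise(img, block=8, step=4, thresh=30.0):
    """Hard-threshold DCT coefficients of overlapping blocks, inverse-transform,
    and average the overlapping reconstructions (the averaging over
    overlapping windows is the key step)."""
    img = img.astype(float)
    acc = np.zeros_like(img)
    weight = np.zeros_like(img)
    H, W = img.shape
    for i in range(0, H - block + 1, step):
        for j in range(0, W - block + 1, step):
            patch = img[i:i + block, j:j + block]
            coef = dctn(patch, norm='ortho')
            dc = coef[0, 0]
            coef[np.abs(coef) < thresh] = 0.0     # hard thresholding
            coef[0, 0] = dc                       # always keep the DC term
            acc[i:i + block, j:j + block] += idctn(coef, norm='ortho')
            weight[i:i + block, j:j + block] += 1.0
    return acc / np.maximum(weight, 1.0)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 128), (128, 1))
noisy = clean + rng.normal(0, 20, clean.shape)
denoised = local_dct_denoise(noisy)
print(np.std(noisy - clean), np.std(denoised - clean))   # the latter should be smaller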
Numerous problems in electronic imaging systems involve the need to interpolate from irregularly spaced data. One example is the calibration of color input/output devices with respect to a common intermediate objective color space, such as XYZ or L*a*b*. In the present report we survey some of the most important methods of scattered data interpolation in two-dimensional and three-dimensional spaces. We review both single-valued cases, where the underlying function has the form f: R² → R or f: R³ → R, and multivalued cases, where the underlying function is f: R² → R² or f: R³ → R³. The main methods we review include linear triangular (or tetrahedral) interpolation, cubic triangular (Clough–Tocher) interpolation, triangle-based blending interpolation, inverse distance weighted methods, radial basis function methods, and natural neighbor interpolation methods. We also review one method of scattered data fitting, as an illustration of the basic differences between scattered data interpolation and scattered data fitting.
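As a small worked example of one surveyed family, the sketch below implements Shepard's inverse distance weighted method for multivalued three-dimensional scattered data; the power parameter and the toy data are illustrative.

import numpy as np

def idw_interpolate(sites, values, queries, power=2.0, eps=1e-12):
    """Shepard's inverse distance weighting: each query value is a weighted
    average of the sampled values, with weights 1 / distance**power.
    Works for single- or multi-valued data (e.g. device calibration tables)."""
    d = np.linalg.norm(queries[:, None, :] - sites[None, :, :], axis=2)  # (Q, S)
    w = 1.0 / (d ** power + eps)
    w /= w.sum(axis=1, keepdims=True)
    return w @ values                                 # (Q, value_dim)

# Toy example: 3-D scattered samples of a known two-channel vector field.
rng = np.random.default_rng(0)
sites = rng.uniform(0, 1, size=(200, 3))                          # scattered sample locations
values = np.stack([sites.sum(axis=1), sites[:, 0] ** 2], axis=1)  # two output channels
queries = rng.uniform(0, 1, size=(5, 3))
print(idw_interpolate(sites, values, queries))                    # interpolated estimates
print(np.stack([queries.sum(axis=1), queries[:, 0] ** 2], axis=1))  # exact values for comparison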
This paper presents a fast and robust algorithm for recovering the control points of Bézier curves. The method is based on a slope-following and learning algorithm that provides an efficient way of finding the control points of any type of cubic Bézier curve. Experimental results demonstrate that our method is fast and efficient in recovering the control points accurately. The control points are applied to line-image indexing and employed for the identification of actors drawn in traditional Japanese paintings known as Ukiyoe pictures.
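The slope-following and learning algorithm itself is not reproduced here; as a hedged illustration of what recovering the control points entails, the sketch below fits the four control points of a cubic Bézier segment to sampled curve points by linear least squares under an assumed chord-length parameterization, so the recovery is approximate.

import numpy as np

def bernstein_matrix(t):
    """Cubic Bernstein basis evaluated at parameters t; returns shape (n, 4)."""
    t = np.asarray(t)[:, None]
    return np.hstack([(1 - t) ** 3, 3 * (1 - t) ** 2 * t, 3 * (1 - t) * t ** 2, t ** 3])

def fit_cubic_bezier(points):
    """Recover the four control points of a cubic Bezier segment from sampled
    curve points by linear least squares, using chord-length parameters."""
    points = np.asarray(points, dtype=float)
    chord = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = chord / chord[-1]
    B = bernstein_matrix(t)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl

# Sample a known curve, then recover its control points.
true_ctrl = np.array([[0, 0], [1, 3], [3, 3], [4, 0]], dtype=float)
ts = np.linspace(0, 1, 50)
samples = bernstein_matrix(ts) @ true_ctrl
print(fit_cubic_bezier(samples).round(3))   # approximately true_ctrl (chord-length t is only an approximation)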
Range sensors that employ structured-light triangulation techniques often require calibration procedures, based on the system optics and geometry, to relate the captured image data to object coordinates. A Bernstein basis function (BBF) neural network that directly maps measured image coordinates to object coordinates is described in this paper. The proposed technique eliminates the need to explicitly determine the sensor's optical and geometric parameters by creating a functional map from image to object coordinates. The training and test data used to determine the map are obtained by capturing successive images of the points of intersection between a projected light line and horizontal markings on a calibration bar, which is stepped through the object space. The surface coordinates corresponding to the illuminated pixels in the image are then determined from the neural network. An experimental study involving the calibration of a range sensor with a BBF network is presented to demonstrate the effectiveness and accuracy of this approach. The root mean squared errors for the x and y coordinates in the calibrated plane, 0.25 and 0.15 mm, respectively, are quite low and are suitable for many reverse engineering and part inspection applications. Once the network is trained, a hand-carved wooden mask of unknown shape is placed in the work envelope and translated perpendicular to the projected light plane, and the surface shape of the mask is determined using the trained network.
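As a simplified stand-in for the BBF network (not necessarily the authors' architecture or training procedure), the sketch below fits a tensor-product Bernstein-basis map from normalized image coordinates to object coordinates by linear least squares; the degree and the synthetic calibration data are illustrative.

import numpy as np
from scipy.special import comb

def bernstein_basis(u, degree):
    """Bernstein polynomials B_{i,degree}(u) for u in [0, 1]; returns (n, degree+1)."""
    u = np.asarray(u, dtype=float)[:, None]
    i = np.arange(degree + 1)[None, :]
    return comb(degree, i) * u**i * (1 - u)**(degree - i)

def fit_bbf_map(image_xy, object_xy, degree=4):
    """Fit a tensor-product Bernstein-basis map from normalized image
    coordinates to object coordinates by linear least squares."""
    Bu = bernstein_basis(image_xy[:, 0], degree)
    Bv = bernstein_basis(image_xy[:, 1], degree)
    design = np.einsum('ni,nj->nij', Bu, Bv).reshape(len(image_xy), -1)
    weights, *_ = np.linalg.lstsq(design, object_xy, rcond=None)
    return weights, degree

def apply_bbf_map(weights, degree, image_xy):
    Bu = bernstein_basis(image_xy[:, 0], degree)
    Bv = bernstein_basis(image_xy[:, 1], degree)
    design = np.einsum('ni,nj->nij', Bu, Bv).reshape(len(image_xy), -1)
    return design @ weights

# Toy calibration: a smooth, unknown image-to-object distortion sampled at scattered points.
rng = np.random.default_rng(0)
img_pts = rng.uniform(0, 1, size=(400, 2))
obj_pts = np.stack([50 * img_pts[:, 0] + 3 * img_pts[:, 1] ** 2,
                    40 * img_pts[:, 1] - 2 * img_pts[:, 0] * img_pts[:, 1]], axis=1)
w, d = fit_bbf_map(img_pts, obj_pts)
pred = apply_bbf_map(w, d, img_pts)
print(np.sqrt(np.mean((pred - obj_pts) ** 2, axis=0)))   # RMS error per object coordinate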
We propose a metric to predict the visibility of color halftone textures. The metric is expressed as the critical viewing distance below which the halftone textures can be discriminated. It is intended for evaluating the texture visibility of uniform color halftone patterns, which plays an important role in halftone design and optimization. The metric utilizes the visual threshold versus intensity function and the contrast sensitivity functions for luminance and chrominance. To verify the metric, texture visibility was measured in a psychovisual experiment. The critical viewing distances determined by the experiment were compared with those predicted by the metric, and a good correlation was achieved. The results show that the metric is capable of predicting visibility over a wide range of texture characteristics.
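The geometric core of a critical-viewing-distance metric is the conversion from print frequency to angular frequency; the sketch below illustrates it with an assumed fixed visibility cutoff, whereas the proposed metric uses the full visual-threshold and contrast sensitivity functions.

import numpy as np

def cycles_per_degree(freq_cpmm, distance_mm):
    """Angular frequency (cycles/degree) of a print frequency (cycles/mm)
    viewed at a given distance: one degree subtends ~distance*tan(1 deg) mm."""
    return freq_cpmm * distance_mm * np.tan(np.deg2rad(1.0))

def critical_distance(freq_cpmm, cutoff_cpd=30.0):
    """Distance beyond which a texture of the given print frequency exceeds an
    assumed visibility cutoff (cutoff_cpd is a placeholder, not the paper's
    calibrated threshold)."""
    return cutoff_cpd / (freq_cpmm * np.tan(np.deg2rad(1.0)))

freq = 150 / 25.4                        # a 150 lines-per-inch halftone, ~5.9 cycles/mm
print(cycles_per_degree(freq, 300.0))    # angular frequency at a 300 mm viewing distance
print(critical_distance(freq))           # distance (mm) at which it reaches the assumed cutoff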
In this work we comprehensively categorize image quality measures, extend measures defined for gray-scale images to their multispectral case, and propose novel image quality measures. They are categorized into pixel-difference-based, correlation-based, edge-based, spectral-based, context-based, and human visual system (HVS)-based measures. Furthermore, we compare these measures statistically for still-image compression applications. The statistical behavior of the measures and their sensitivity to coding artifacts are investigated via analysis-of-variance techniques. Their similarities and differences are illustrated by plotting their Kohonen maps. Measures that give consistent scores across an image class and that are sensitive to coding artifacts are pointed out. It was found that measures based on the phase spectrum, the multiresolution distance, or the HVS-filtered mean square error are computationally simple and more responsive to coding artifacts. We also demonstrate the utility of combining selected quality metrics in building a steganalysis tool.
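Two of the highlighted measure families are easy to sketch; the code below shows a Fourier phase-spectrum distance and an HVS-weighted error using the commonly cited Mannos-Sakrison contrast sensitivity approximation. The frequency scaling and exact formulations are assumptions, not the paper's definitions.

import numpy as np

def spectral_phase_distance(ref, test):
    """Spectral (phase-based) distortion: mean squared difference between the
    Fourier phase spectra of the two images, with phase differences wrapped."""
    dphi = np.angle(np.fft.fft2(ref)) - np.angle(np.fft.fft2(test))
    dphi = np.angle(np.exp(1j * dphi))          # wrap to (-pi, pi]
    return np.mean(dphi ** 2)

def hvs_weighted_error(ref, test):
    """HVS-filtered error: weight the error spectrum by a band-pass contrast
    sensitivity curve (Mannos-Sakrison form) and measure the weighted energy."""
    err = np.fft.fftshift(np.fft.fft2(ref - test))
    H, W = ref.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(H)),
                         np.fft.fftshift(np.fft.fftfreq(W)), indexing='ij')
    r = np.hypot(fx, fy) * 60.0                 # crude cycles/degree scaling (assumption)
    csf = 2.6 * (0.0192 + 0.114 * r) * np.exp(-(0.114 * r) ** 1.1)
    return np.mean(np.abs(csf * err) ** 2)

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (128, 128))
test = ref + rng.normal(0, 5, ref.shape)
print(spectral_phase_distance(ref, test), hvs_weighted_error(ref, test))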
Perceptual difference models (PDMs) have become popular for evaluating the perceived degradation of an image by a process such as compression. We used a PDM to evaluate interventional magnetic resonance imaging (iMRI) methods that rapidly acquire an image at the expense of some anticipated image degradation compared to a conventional, slower diagnostic technique. In particular, we examined MR keyhole techniques whereby only a portion of the spatial frequency domain, or k-space, is acquired, thereby reducing the time needed to create image updates. We used a PDM based on the architecture of another visual difference model and validated it for noise and blur, the degrading processes present in fast iMRI. The PDM showed superior correlation with human observer ratings of noise and blur compared to the mean squared error (MSE). In an example application, we simulated four keyhole techniques and compared them to a slower, full k-space diagnostic acquisition. For keyhole images, the MSE gave erratic results compared to the ratings by interventional radiologists. The PDM performed much better and gave an Az value >0.9 in a receiver operating characteristic analysis. Keyhole simulations showed that a single, central-stripe acquisition, which sampled 25% of k-space, provided stable image quality within a clinically acceptable range, unlike three other keyhole schemes described in the literature. Our early experience shows the PDM to be an objective, promising tool for the evaluation of fast iMRI methods. It allows one to make quantitative engineering decisions in the design of iMRI pulse sequences.
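A keyhole acquisition can be simulated by retaining only a central stripe of k-space; the sketch below zeroes the remaining lines for illustration (in practice they would be filled from a previously acquired full reference frame), with an artificial rectangular phantom standing in for real iMRI data.

import numpy as np

def keyhole_central_stripe(image, keep_fraction=0.25):
    """Simulate a single central-stripe keyhole acquisition: keep only the
    central fraction of phase-encode lines in k-space, zero the rest, and
    reconstruct by inverse FFT."""
    k = np.fft.fftshift(np.fft.fft2(image))
    H = image.shape[0]
    keep = int(round(H * keep_fraction))
    lo = (H - keep) // 2
    mask = np.zeros_like(k)
    mask[lo:lo + keep, :] = 1.0
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))

phantom = np.zeros((128, 128))
phantom[32:96, 48:80] = 1.0                    # simple rectangular "phantom"
fast = keyhole_central_stripe(phantom, 0.25)   # 25% of k-space -> blurred update
print(np.mean((fast - phantom) ** 2))          # MSE relative to the full acquisition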
In a hybrid raster/vector system, two representations of the image are stored. The digitized raster image preserves the original drawing in its exact visual form, whereas the additional vector data can be used for resolution-independent reproduction, image editing, analysis, and indexing operations. We introduce two techniques for utilizing the vector features in context-based compression of the raster image. In both techniques, the Hough transform is used to extract line features from the raster image. The first technique uses a feature-based filter to remove noise near the borders of the extracted line elements. This improves the image quality and results in a more compressible raster image. The second technique utilizes the line features to improve the prediction accuracy of the context modeling. In both cases, we achieve better compression performance.
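The line-extraction stage relies on the standard Hough transform; a minimal accumulator-style sketch is shown below (the feature-based filtering and the context-modeling integration are not reproduced), with the accumulator resolution chosen arbitrarily.

import numpy as np

def hough_lines(binary, n_theta=180, n_rho=200):
    """Minimal Hough transform: accumulate votes for (rho, theta) line
    parameters from the foreground pixels of a binary raster image."""
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    diag = np.hypot(*binary.shape)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for theta_idx, theta in enumerate(thetas):
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        rho_idx = np.digitize(rho, rhos) - 1        # quantize rho into accumulator bins
        np.add.at(acc, (rho_idx, theta_idx), 1)
    return acc, thetas, rhos

# A diagonal line should produce one dominant accumulator peak.
img = np.eye(64, dtype=int)
acc, thetas, rhos = hough_lines(img)
peak = np.unravel_index(np.argmax(acc), acc.shape)
print(np.degrees(thetas[peak[1]]), rhos[peak[0]], acc[peak])   # theta ~135 deg, rho ~0, 64 votes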
A joint classification-compression scheme that provides the user with the added capability to prioritize classes of interest in the compression process is proposed. The dual compression system includes a primary unit for conventional coding of a multispectral image set, followed by an auxiliary unit to code the resulting error induced on pixel vectors that represent classes of interest. This technique effectively allows classes of interest in the scene to be coded at a relatively higher level of precision than nonessential classes. Prioritized classes are selected from a thematic map or directly specified by their unique spectral signatures. Using the specified spectral signatures of the prioritized classes as endmembers, a modified linear spectral unmixing procedure is applied to the original data as well as to the decoded data. The resulting two sets of concentration maps, which represent the prioritized classes before and after compression, are compared, and the differences between them are coded via the auxiliary compression unit and transmitted to the receiver along with the conventionally coded image set. At the receiver, the differences are blended back into the decoded data for enhanced restoration of the prioritized classes. The utility of this approach is that it works with any multispectral compression scheme. The method has been applied to test imagery from various platforms, including NOAA's AVHRR (1.1 km GSD), LANDSAT 5 TM (30 m GSD), and LANDSAT 5 MSS (79 m GSD).
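The concentration (abundance) maps come from linear spectral unmixing; the sketch below solves the per-pixel non-negative least-squares problem for assumed endmember signatures, which illustrates the idea but is not necessarily the modified unmixing procedure used in the paper.

import numpy as np
from scipy.optimize import nnls

def unmix(pixels, endmembers):
    """Linear spectral unmixing: for each pixel spectrum x, find non-negative
    abundances a minimizing ||x - E a||, where the columns of E are the
    endmember (prioritized-class) signatures."""
    E = np.asarray(endmembers, dtype=float)            # (bands, classes)
    return np.array([nnls(E, x)[0] for x in pixels])   # (pixels, classes)

# Toy example: two endmembers mixed with known abundances plus a little noise.
rng = np.random.default_rng(0)
E = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.2, 0.8],
              [0.1, 0.9]])                             # 4 bands, 2 classes
true_a = rng.dirichlet([1, 1], size=100)               # abundance (concentration) maps
pixels = true_a @ E.T + rng.normal(0, 0.01, (100, 4))
est_a = unmix(pixels, E)
print(np.abs(est_a - true_a).mean())                   # small mean unmixing error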
An image compression algorithm exploiting the interband and intraband dependencies in a multiresolution decomposition structure is introduced. The transform coefficients, other than those belonging to the direct current (DC) band, are adaptively scanned into two different one-dimensional arrays. These arrays are generated by using the observed statistical behavior of wavelet transform coefficients, in an effort to maximize local stationarity. The arrays are then entropy coded separately by a recently introduced entropy coding method. The algorithm is also extended to the compression of wavelet packet transform coefficients by proposing a new, efficient way of defining the parent-children link for wavelet packet decompositions. Experimental results indicate that the algorithms have competitive performance.
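The interband dependency rests on the standard parent-children link of a dyadic wavelet decomposition, sketched below; the paper's new link for wavelet packet decompositions is not reproduced here.

def children(row, col):
    """In a standard dyadic wavelet decomposition, a detail coefficient at
    (row, col) in a coarse subband has a 2x2 block of children at the
    corresponding location in the next finer subband of the same orientation."""
    return [(2 * row + dr, 2 * col + dc) for dr in (0, 1) for dc in (0, 1)]

def descendants(row, col, levels):
    """All descendants of a coefficient across 'levels' finer scales, i.e. the
    interband dependency tree exploited when scanning the coefficients."""
    tree, frontier = [], [(row, col)]
    for _ in range(levels):
        frontier = [c for rc in frontier for c in children(*rc)]
        tree.extend(frontier)
    return tree

print(children(3, 5))               # [(6, 10), (6, 11), (7, 10), (7, 11)]
print(len(descendants(3, 5, 3)))    # 4 + 16 + 64 = 84 descendants over three finer scales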
The recent proliferation of digital multimedia content has raised concerns about authentication mechanisms for multimedia data. A number of authentication techniques based on digital watermarks have been proposed in the literature. In this paper we examine the security of the Yeung–Mintzer authentication watermarking technique and show that it is vulnerable to different types of impersonation and substitution attacks, whereby an attacker is able to either create or modify images that would be considered authentic. We present two attacks. The first attack infers the secret watermark insertion function, which enables an attacker to embed a valid watermark in any image. The attack works without knowledge of the binary watermark inserted in the image, provided the attacker has access to a few images that have been watermarked with the same secret key (insertion function) and contain the same watermark. We show simulation results in which the watermark and the watermark insertion function can be mostly reconstructed in a few (1–5) minutes of computation, using as few as two images. The second attack, which we call the "collage attack," is a variation of the Holliman–Memon counterfeiting attack. The proposed variation does not require knowledge of the watermark logo and produces counterfeits of superior quality by means of a sophisticated dithering process.
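A toy reconstruction of the first attack is sketched below under simplifying assumptions: a Yeung-Mintzer-style scheme is modeled as a secret binary lookup table over gray levels, and equality constraints between two images carrying the same watermark group the gray levels into the two table classes, recovering the table up to a global flip. All sizes and the embedding rule are illustrative.

import numpy as np

def embed(image, lut, wm):
    """Toy Yeung-Mintzer-style embedding: replace each pixel by the nearest
    gray level whose lookup-table entry equals the watermark bit there."""
    levels = {b: np.flatnonzero(lut == b) for b in (0, 1)}
    out = np.empty_like(image)
    for idx in np.ndindex(image.shape):
        cand = levels[int(wm[idx])]
        out[idx] = cand[np.argmin(np.abs(cand - image[idx]))]
    return out

def infer_lut_groups(images):
    """Attack sketch: images watermarked with the same key and the same logo
    satisfy lut[a[x, y]] == lut[b[x, y]] everywhere, so grouping gray levels
    by these equalities recovers the table up to a global bit flip."""
    parent = list(range(256))
    def find(g):
        while parent[g] != g:
            parent[g] = parent[parent[g]]
            g = parent[g]
        return g
    for a, b in zip(images[:-1], images[1:]):
        for ga, gb in zip(a.ravel(), b.ravel()):
            parent[find(int(ga))] = find(int(gb))
    return np.array([find(g) for g in range(256)])   # group id per gray level

rng = np.random.default_rng(0)
true_lut = rng.integers(0, 2, 256)                   # the secret insertion function
logo = rng.integers(0, 2, (64, 64))                  # same binary watermark in every image
imgs = [embed(rng.integers(0, 256, (64, 64)), true_lut, logo) for _ in range(2)]
groups = infer_lut_groups(imgs)
seen = np.unique(np.concatenate([im.ravel() for im in imgs])).astype(int)
pure = all(len(np.unique(true_lut[seen[groups[seen] == g]])) == 1
           for g in np.unique(groups[seen]))
print(len(np.unique(groups[seen])), pure)            # ideally 2 groups, each mapping to one LUT bit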
This paper focuses on digital image authentication, considered as the process of evaluating the integrity of image contents relative to the original picture and of detecting, in an automatic way, malevolent content modifications. A computationally efficient spatial watermarking technique for the authentication of visual information, robust to small distortions caused by compression, is described. In essence, content-dependent authentication data are embedded into the picture by modifying the relationship of image projections throughout the entire image. To obtain a secure data embedding and extraction procedure, the directions onto which image parts are projected depend on a secret key. To guarantee minimum visibility of the embedded data, the insertion process is used in conjunction with perceptual models exploiting spatial-domain masking effects. The viability of the method as a means of protecting content is assessed under JPEG compression and semantic content modifications. With the present system, robustness to JPEG compression up to compression factors of about 1:10 can be achieved while maintaining subjective image quality after watermark insertion. At the same time, it is possible to detect and localize small image manipulations.
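The projection-based embedding can be illustrated, very loosely, as follows: a block is projected onto a key-dependent pseudo-random direction and nudged so that the projection encodes an authentication bit. This is a simplified stand-in, not the paper's construction, and the block size, margin, and seeding scheme are all assumptions.

import numpy as np

def key_direction(key, index, shape):
    """Zero-mean pseudo-random projection pattern derived from the secret key."""
    w = np.random.default_rng([key, index]).normal(size=shape)
    return w - w.mean()

def embed_bit(block, key, index, bit, margin=50.0):
    """Nudge the block along the key direction so that the sign of its
    projection encodes one authentication bit with a robustness margin."""
    w = key_direction(key, index, block.shape)
    p = float((block * w).sum())
    target = margin if bit else -margin
    return block + (target - p) * w / (w * w).sum()

def check_bit(block, key, index):
    w = key_direction(key, index, block.shape)
    return int((block * w).sum() > 0)

rng = np.random.default_rng(1)
block = rng.uniform(0, 255, (16, 16))
marked = embed_bit(block, key=1234, index=0, bit=1)
print(check_bit(marked, 1234, 0))        # 1 -> bit verifies, block considered authentic
tampered = marked.copy()
tampered[4:8, 4:8] += 60                 # local content manipulation
print(check_bit(tampered, 1234, 0))      # a single bit flips only part of the time; many blocks/bits are used to localize tampering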
Initial work on JPEG2000 started in 1997, when a call for proposals was issued by the JPEG committee to create a new standard to "address areas where current standards failed to produce the best quality or performance," "provide capabilities to markets that currently do not use compression," and "provide an open system approach to imaging applications." The ISO committee approved Part 1 of JPEG2000 as an International Standard in December 2000. Considering the rich set of functionalities provided by JPEG2000, and the central role that compression plays in modern visual communications and storage, this successor to the highly acclaimed JPEG standard is expected to appear soon in a diverse set of existing and emerging applications.