Embedding data in hard copy is widely used for applications that include pointing the reader to online
content by means of a URL, tracing the source of a document, labeling, and packaging. Most solutions place
overt marks on the page, most commonly 1D, 2D, and 3D (color) barcodes. However, while barcodes are a
popular means of encoding information in printed matter, they add unsightly overt content. Stegatones avoid
such overt content: they are clustered-dot halftones that encode a data payload through single-pixel shifts
of selected dot clusters. With a Stegatone, we can embed information in images or graphics – not in the
image file, as is done in traditional watermarking, but in the halftone on the printed page. However, the
recovery performance of Stegatones is not well understood across the wide variety of printing technologies,
printer models, and print resolutions, along with variations in scanning resolution. A tool to quantify
Stegatone performance under these variables would therefore be very useful, and its results could be used to
better calibrate the encoding system. We develop and conduct a test procedure to characterize Stegatone
performance, and report experimental results for a number of printers, scanners, and resolutions.
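The core embedding mechanism described above can be illustrated with a minimal sketch. The function below is a toy illustration, not the paper's actual encoder: it assumes a binary halftone array and a caller-supplied window containing one dot cluster, and encodes a single bit by shifting that cluster one pixel to the right (the window and shift direction are assumptions for the example).

```python
import numpy as np

def embed_bit(halftone, cluster_slice, bit):
    """Encode one payload bit by shifting a dot cluster one pixel.

    A '1' shifts the cluster one pixel to the right; a '0' leaves it
    in place. `cluster_slice` is a (rows, cols) pair of slices giving
    a window that contains the cluster plus one pixel of slack.
    """
    out = halftone.copy()
    rows, cols = cluster_slice
    if bit:
        cluster = out[rows, cols].copy()
        out[rows, cols] = np.roll(cluster, 1, axis=1)  # shift right by 1
    return out

# A toy 6x6 halftone cell with a 2x2 dot cluster at rows 2-3, cols 1-2.
cell = np.zeros((6, 6), dtype=np.uint8)
cell[2:4, 1:3] = 1

# Embed a '1': the cluster moves one pixel right, to cols 2-3.
encoded = embed_bit(cell, (slice(2, 4), slice(1, 4)), bit=1)
```

A decoder would recover the bit by comparing the cluster's observed position in a scan against its nominal halftone-screen position.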
In anticipation of the proliferation of micro-projectors in handheld imaging devices, we designed and tested a
camera-projector system that allows a distant user to point into a remote 3D environment with a projector. The solution
involves a means of locating a projected dot and adjusting its location to correspond to a position indicated by a remote
user viewing the scene through a camera. It was designed to operate efficiently, even in the presence of camera noise.
While many camera-projector display systems require a calibration phase, the presented approach allows calibration-free
operation. The tracking algorithm is implemented with a modified 2D gradient descent method that performs well even in the
presence of spatial discontinuities. Our prototype uses a standard web camera and a network connection to perform
real-time tracking, navigating the projected dot accurately across irregularly shaped and colored surfaces. Our tests
included a camera-projector system and client on opposite sides of the Atlantic Ocean with no loss of responsiveness.
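The calibration-free tracking loop can be sketched as a simple feedback iteration. The code below is an illustrative reduction, not the paper's algorithm: `project` and `detect` are hypothetical callbacks standing in for the projector output and the camera's dot detector, and the update assumes the camera and projector axes are roughly aligned (the real system's modified gradient descent also handles discontinuities, which this sketch omits).

```python
import numpy as np

def track_dot(project, detect, target, steps=50, lr=0.5, tol=1.0):
    """Nudge the projected dot until the camera sees it at `target`.

    `project(p)` draws the dot at projector coordinates p; `detect()`
    returns the dot's observed camera coordinates. Each iteration steps
    the projector position against the camera-space error -- a basic
    2D descent that needs no explicit camera-projector calibration.
    """
    p = np.zeros(2)  # initial projector coordinates
    for _ in range(steps):
        project(p)
        err = detect() - np.asarray(target, dtype=float)
        if np.hypot(err[0], err[1]) < tol:
            break
        p -= lr * err  # move against the observed camera-space error
    return p

# Simulated setup: the camera sees the dot at a fixed offset from
# where the projector draws it (a stand-in for the unknown mapping).
state = {"p": None}
project = lambda p: state.update(p=p.copy())
detect = lambda: state["p"] + np.array([3.0, -2.0])

final = track_dot(project, detect, target=(10.0, 10.0))
```

Because the loop only ever reacts to the observed error, it converges without recovering the camera-projector mapping itself, which is what makes calibration-free operation possible.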
In wavelet-based image coding, a variety of masking properties have been exploited that result in spatially-adaptive quantization
schemes. It has been shown that carefully selecting uniform quantization step-sizes across entire wavelet subbands
or subband codeblocks results in considerable gains in efficiency with respect to visual quality. These gains have been
achieved through analysis of wavelet distortion additivity in the presence of a background image; in effect, how wavelet
distortions from different bands mask each other while being masked by the image itself at and above threshold. More
recent studies have illustrated how the contrast and structural class of natural image data influence masking properties
at threshold. Though these results have been extended in a number of methods to achieve supra-threshold compression
schemes, the relationship between inter-band and intra-band masking at supra-threshold rates is not well understood. This
work aims to quantify the importance of spatially-adaptive distortion as a function of compressed target rate. Two experiments
are performed that require the subject to specify the optimal balance between spatially-adaptive and non-spatially-adaptive
distortion. Analyses of the resulting data indicate that, on average, the balance between spatially-adaptive and
non-spatially-adaptive distortion is equally important across all tested rates. Furthermore, though it is known that
mean-squared error alone is not a good indicator of image quality, it can be used to predict the outcome of this experiment
with reasonable accuracy. This result has convenient implications for image coding, which are also discussed.
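The finding that plain MSE predicts the experiment's outcome can be made concrete with a small sketch. This is an illustration of the prediction rule only, not the experimental procedure: given a reference image and two distorted renditions (one spatially adaptive, one not), the version with lower MSE is predicted to be the one subjects prefer.

```python
import numpy as np

def mse(ref, img):
    """Mean-squared error between a reference and a distorted image."""
    ref = np.asarray(ref, dtype=float)
    img = np.asarray(img, dtype=float)
    return float(np.mean((ref - img) ** 2))

def predict_preferred(ref, adaptive, uniform):
    """Predict the preferred rendition by MSE alone (the abstract's
    finding: MSE predicts the outcome with reasonable accuracy)."""
    return "adaptive" if mse(ref, adaptive) < mse(ref, uniform) else "uniform"

# Toy example with synthetic distortions of differing magnitude.
ref = np.zeros((4, 4))
choice = predict_preferred(ref, ref + 0.1, ref + 0.3)
```

The convenience for coder design is that MSE is trivially computable inside a rate-distortion loop, whereas a full psychovisual model is not.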
Wavelet-based transform coding is well known for its utility in perceptual image compression. Psychovisual modeling has led to a variety of perceptual quantization schemes for efficient at-threshold compression. Successfully extending these models to supra-threshold compression, however, is a more difficult task. This work attempts to bridge the gap between at-threshold modeling and supra-threshold compression by combining a spatially-selective quantization scheme, designed for at-threshold compression, with simple MSE-based rate-distortion optimization. A psychovisual experiment is performed to determine how textured image regions can be used to mask quantization-induced distortions. Texture masking results from this experiment are used to derive a spatial quantization scheme that hides distortion in high-contrast image regions. Unlike many spatial quantizers, this technique requires explicit side information to convey the contrast thresholds used to generate step sizes. A simple coder is presented that applies spatially-selective quantization to meet rate constraints near and above threshold, and leverages this side information to reduce the rate required to code the quantized data. Compression examples are compared with JPEG-2000 examples using visual frequency weighting. When matched for rate, the spatially quantized images are highly competitive with, and in some cases superior to, the JPEG-2000 results in terms of visual quality.
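The idea of deriving per-region quantizer step sizes from texture masking can be sketched as follows. This is a minimal illustration under assumed parameters, not the paper's scheme: local contrast is approximated by the per-block standard deviation of subband coefficients, and the `gain` saturation curve is invented for the example. The per-block steps are exactly the side information such a coder would need to transmit.

```python
import numpy as np

def spatial_step_sizes(subband, base_step, block=8, gain=0.5):
    """Per-block quantizer steps scaled by local contrast.

    High-contrast (textured) blocks mask distortion, so they receive
    larger steps; flat blocks keep the base step. The saturating
    contrast term is an illustrative choice, not a calibrated model.
    """
    h, w = subband.shape
    steps = np.full((h // block, w // block), base_step, dtype=float)
    for i in range(h // block):
        for j in range(w // block):
            tile = subband[i*block:(i+1)*block, j*block:(j+1)*block]
            c = tile.std()  # crude local-contrast proxy
            steps[i, j] = base_step * (1.0 + gain * c / (c + 1.0))
    return steps

def quantize(subband, steps, block=8):
    """Round-to-nearest uniform quantization with per-block steps."""
    out = np.empty(subband.shape, dtype=np.int32)
    for i in range(steps.shape[0]):
        for j in range(steps.shape[1]):
            r, cl = slice(i*block, (i+1)*block), slice(j*block, (j+1)*block)
            out[r, cl] = np.round(subband[r, cl] / steps[i, j])
    return out

# Toy subband: left half flat, right half strongly textured.
sub = np.zeros((8, 16))
sub[:, 8::2], sub[:, 9::2] = 10.0, -10.0
steps = spatial_step_sizes(sub, base_step=1.0)
q = quantize(sub, steps)
```

The decoder needs the same `steps` array to dequantize, which is why this style of quantizer carries explicit side information rather than inferring thresholds from causal context.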