In a recent work,1 chirp spread spectrum (CSS) was proposed as a low probability of intercept (LPI) waveform. CSS has previously been proposed for low-power IoT applications and adopted in the LoRa standard. However, the LPI use of the waveform was new. The key to adopting CSS for LPI applications is the use of waveforms with a large time-bandwidth product: the pulse compression gain available to the intended receiver is not available to the intercept receiver. This report builds upon the previous work by testing the waveform in a 15-mile terrestrial link from atop Sandia Crest to points west. The transmit power level is swept from +27 dBm to -33 dBm. The intended receiver uses channelized matched processing, whereas the intercept receiver deploys a wideband radiometer. The experiment measured the minimum transmit power level and maximum intercept range at which the signal stays below the intercept receiver's noise floor yet remains detectable by the intended receiver.
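As a rough illustration of the asymmetry the abstract describes, the following Python sketch (all parameters illustrative, not those of the Sandia Crest experiment) buries a linear chirp below the noise floor and compares the matched filter's pulse compression gain with what a wideband radiometer sees:

```python
import numpy as np

fs = 1e6             # sample rate (Hz); illustrative
T, B = 10e-3, 125e3  # duration and swept bandwidth: time-bandwidth product = 1250
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)     # complex baseband linear chirp

rng = np.random.default_rng(0)
noise = (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size)) / np.sqrt(2)
rx = 10**(-20 / 20) * chirp + noise             # chirp buried at -20 dB SNR

# Intended receiver: matched filter (FFT-based circular correlation) recovers
# roughly 10*log10(T*B) ~ 31 dB of processing gain, so the peak stands out.
mf = np.abs(np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(chirp))))**2
print(f"matched-filter peak-to-mean: {10 * np.log10(mf.max() / mf.mean()):.1f} dB")

# Intercept receiver: a radiometer integrates energy and gains nothing from the
# chirp structure; at -20 dB SNR the signal raises total energy by only ~0.04 dB.
print(f"radiometer energy rise over noise: {10 * np.log10(np.mean(np.abs(rx)**2)):.2f} dB")
```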
KEYWORDS: Digital watermarking, Sensors, Signal to noise ratio, Acoustics, Signal detection, Bismuth, Receivers, Interference (communication), Holmium, Signal attenuation
Sonar watermarking is the practice of embedding low-power, secure digital signatures in the time-frequency space of a waveform. The algorithm is designed for a single source/receiver configuration. However, in a multiuser environment, multiple sources broadcast sonar waveforms that overlap in both time and frequency. The receiver can be configured as a filter bank where each branch is dedicated to detecting a specific watermark. However, a filter bank is prone to mutual interference as multiple sonar waveforms are simultaneously present at the detector input. To mitigate mutual interference, a multiuser watermark detector is formulated as a decorrelating detector that decouples detection amongst the watermark signatures. The acoustic channel is simulated in software and modeled by an FIR filter. This model is used to compensate for the degradation of the spreading sequences used for watermark embedding. The test statistic generated at the output of the decorrelating detector is used in a joint maximum likelihood ratio detector to establish the presence or absence of the watermark in each sonar waveform. ROC curves are produced for multiple sources positioned at varying ranges subject to ambient ocean noise controlled by varying sea states.
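A minimal numerical sketch of the decorrelating step, with random ±1 signatures standing in for the actual watermark spreading sequences and no channel model:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 1024, 4                             # samples per window, number of sources
S = rng.choice([-1.0, 1.0], size=(N, K))   # stand-in watermark signatures

# Received window: superposition of the active watermarks plus ambient noise.
amplitudes = np.array([1.0, 0.0, 0.7, 0.0])        # sources 2 and 4 absent
r = S @ amplitudes + 2.0 * rng.standard_normal(N)

# Matched filter bank: suffers mutual interference through S^T S cross terms.
y_mf = S.T @ r / N

# Decorrelating detector: (S^T S)^{-1} S^T r decouples the statistics.
y_dec = np.linalg.solve(S.T @ S, S.T @ r)

print("filter bank :", np.round(y_mf, 2))
print("decorrelator:", np.round(y_dec, 2))
```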
An image watermarking scheme that combines Hermite functions expansion and space/spatial-frequency analysis is proposed. In the first step, the Hermite functions expansion is employed to select busy regions for watermark embedding. In the second step, the space/spatial-frequency representation and Hermite functions expansion are combined to design the imperceptible watermark, using the host's local frequency content. The Hermite expansion is computed using the fast Hermite projection method. Recursive realization of the Hermite functions significantly speeds up the algorithms for region selection and watermark design. The watermark detection is performed within the space/spatial-frequency domain. The detection performance is increased due to the high information redundancy in that domain in comparison with the space or frequency domain alone. The performance of the proposed procedure has been tested experimentally for different watermark strengths, i.e., for different values of the peak signal-to-noise ratio (PSNR). The proposed approach provides high detection performance even for high PSNR values. It offers a good compromise between detection performance (including robustness to a wide variety of common attacks) and imperceptibility.
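The recursive realization mentioned above is the standard three-term recurrence for the orthonormal Hermite functions; a minimal sketch (not the authors' optimized implementation):

```python
import numpy as np

def hermite_functions(n_max, x):
    """Orthonormal Hermite functions psi_0..psi_n_max via the recurrence
    psi_n(x) = x*sqrt(2/n)*psi_{n-1}(x) - sqrt((n-1)/n)*psi_{n-2}(x)."""
    psi = np.zeros((n_max + 1, x.size))
    psi[0] = np.pi**-0.25 * np.exp(-x**2 / 2)
    if n_max >= 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for n in range(2, n_max + 1):
        psi[n] = x * np.sqrt(2.0 / n) * psi[n - 1] - np.sqrt((n - 1) / n) * psi[n - 2]
    return psi

x = np.linspace(-5, 5, 512)
psi = hermite_functions(8, x)
coeffs = psi @ np.exp(-x**2) * (x[1] - x[0])   # projection of a test signal
```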
This paper reports on the performance of a previously reported sonar watermarking algorithm in actual
sea trials. Tests were conducted at the South Florida Ocean Measurement Facility in shallow water
depths of 200 m and a range of 7 km, subject to an 80 dB propagation loss. The watermark was designed
to match the acoustic channel simulated in the Sonar Simulation Toolset (SST), but no access to the
actual acoustic channel was available prior to the test. Watermark detection was carried out over
multiple ping cycles. For a 10-ping cycle, it was possible to achieve zero false alarms and a single
miss at SWR = 27 dB. At SWR = 30 dB, zero false alarms were maintained, but the miss count increased to
three. This experiment has reaffirmed the detectability of the watermark in actual sea deployment.
KEYWORDS: Digital watermarking, Sensors, Signal detection, Acoustics, Receivers, Acoustic emission, Signal to noise ratio, Data hiding, Control systems, Computer simulations
Undersea communication channels are filled with acoustic emissions of various kinds. From sonar to the signals
used in acoustic communications, man-made noise, and biological signals generated by marine life, the ocean is
a complex conduit for diverse emissions. In this work we propose an algorithm in which an acoustic emission
such as a sonar signal is transparently and securely embedded with signatures known as a digital watermark.
Extracting the watermark helps to distinguish, for example, a friendly sonar from other acoustic emissions that
may exist as part of the natural undersea environment, or from pings that may have originated from hostile forces
or echoes fabricated by an adversary. We have adopted spread spectrum as an embedding technique. Spread
spectrum allows for matching the watermark to propagation, multipath, and noise profiles of the channel. The
sonar is first characterized by its spectrogram and divided into non-overlapping blocks in time. Each block
is individually embedded with a single bit drawn from the watermark payload. The seeds used to generate the
spreading codes are the keys used by authorized receivers to recover the watermark. The detector is a maximum
likelihood detector using test statistics obtained by integrating a correlation detector output over the entire sonar
pulse width. Performance of the detector is controlled by signal-to-watermark ratio, specific frequency bands
selected for watermarking, watermark payload, and processing gain. For validation, we use Sonar Simulation
Toolset (SST). SST is a software tool that is custom-made for the simulation of undersea channels using realistic
propagation properties in oceans. Probabilities of detection and false alarm rates, as well as other performance
boundaries, are produced for a shallow water channel subject to multipath and additive noise.
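A minimal sketch of the block-wise spread-spectrum embedding and correlation detection described above, with white-noise blocks standing in for SST-simulated sonar and a shared seed standing in for the key:

```python
import numpy as np

def embed_block(block, bit, seed, strength=0.05):
    """Embed one payload bit into a sonar block via direct-sequence spreading."""
    pn = np.random.default_rng(seed).choice([-1.0, 1.0], size=block.size)
    return block + strength * (1.0 if bit else -1.0) * pn

def detect_block(block, seed):
    """Correlate against the keyed spreading code; the sign gives the bit."""
    pn = np.random.default_rng(seed).choice([-1.0, 1.0], size=block.size)
    return float(np.dot(block, pn))

rng = np.random.default_rng(7)
payload = [1, 0, 1, 1]
key = 1234                                               # shared secret seed
blocks = [rng.standard_normal(4096) for _ in payload]    # stand-in sonar blocks

marked = [embed_block(b, bit, key + i)
          for i, (b, bit) in enumerate(zip(blocks, payload))]
stats = [detect_block(b, key + i) for i, b in enumerate(marked)]
decoded = [int(s > 0) for s in stats]
print(decoded)   # -> [1, 0, 1, 1] at sufficient processing gain
```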
Radar has established itself as an effective all-weather, day or night sensor. Radar signals can penetrate walls
and provide information on moving targets. Recently, radar has been used as an effective biometric sensor for
classification of gait. The return from a coherent radar system contains a frequency offset in the carrier frequency,
known as the Doppler effect. The movements of arms and legs give rise to micro-Doppler, which can be clearly
detailed in the time-frequency domain using traditional or modern time-frequency signal representations. In this
paper we propose a gait classifier based on subspace learning using principal components analysis (PCA). The
training set consists of feature vectors defined as either time or frequency snapshots taken from the spectrogram
of radar backscatter. We show that the gait signature is captured effectively in these feature vectors. Feature vectors are
then used in training a minimum-distance classifier based on the Mahalanobis distance metric. Results show that
gait classification with high accuracy and short observation window is achievable using the proposed classifier.
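A compact sketch of the proposed pipeline on synthetic stand-in features (PCA subspace learning followed by a Mahalanobis minimum-distance rule); dimensions and class statistics are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in feature vectors: frequency snapshots from spectrograms of two gaits.
X_walk = rng.standard_normal((200, 64)) + 1.0
X_run = rng.standard_normal((200, 64)) - 1.0
X = np.vstack([X_walk, X_run])
y = np.array([0] * 200 + [1] * 200)

# PCA subspace learning on the training set.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
P = Vt[:8].T                        # keep 8 principal components
Z = (X - mean) @ P

# Per-class means and a pooled covariance for the Mahalanobis metric.
mus = [Z[y == c].mean(axis=0) for c in (0, 1)]
cov_inv = np.linalg.inv(np.cov(Z, rowvar=False))

def classify(x):
    z = (x - mean) @ P
    d = [(z - mu) @ cov_inv @ (z - mu) for mu in mus]
    return int(np.argmin(d))

print(classify(rng.standard_normal(64) + 1.0))   # -> 0 (walk)
```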
Using radar in a through-the-wall imaging application is an expanding field of research both for civilian and
military uses. Thus far, most of the attention has been directed toward building radar imaging systems to detect
objects within a room or building. The resulting images are full of ambiguity, making it difficult to interpret what
they display. Presented here is a novel approach that addresses the interpretation of the images produced
by the aforementioned imaging systems. We propose a classification scheme that provides an interpretation
of an urban environment imaged in 3D. This approach builds probabilistic object models from feature vectors
extracted from a volumetric radar image. A minimum-distance classifier is used to label radar image data and
provide a 3D visualization of an urban scene. Results using real radar backscatter data validate the effectiveness
of our method.
KEYWORDS: Digital watermarking, Video, Target detection, Video surveillance, Sensors, Matrices, Unmanned aerial vehicles, Data storage, Databases, 3D modeling
In this work we propose the use of digital watermarks to establish track history of multiple tracked objects.
Target tracks extracted from a frame sequence can be represented and analyzed using spatio-temporal graphs.
However, this approach requires extensive computation and data storage. It is possible to achieve the same result
by digitally watermarking each tracked target. This self-contained video can then be searched to extract track
history. Track history is obtained by computing a quantity called the target adjacency matrix. Interpretation of the
elements of this matrix, along with their location, magnitude, and movement, establishes the desired track history. Given two
randomly selected frames, we can establish changes in the spatial relationships among targets in the two frames as
well as the entry or exit of targets in each frame.
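Since the abstract does not spell out the matrix construction, the sketch below uses one plausible, assumed definition: a binary matrix matching watermark IDs recovered from two frames, from which entries and exits fall out as empty rows or columns:

```python
import numpy as np

def adjacency(ids_p, ids_q):
    """Illustrative target adjacency matrix for two frames: A[i, j] = 1 when
    the watermark recovered from target i in frame p matches the watermark
    of target j in frame q. (An assumed definition for illustration; the
    paper's construction may differ.)"""
    A = np.zeros((len(ids_p), len(ids_q)), dtype=int)
    for i, wi in enumerate(ids_p):
        for j, wj in enumerate(ids_q):
            A[i, j] = int(wi == wj)
    return A

# Watermark IDs detected in two randomly selected frames.
frame_p = ["wm3", "wm7", "wm9"]
frame_q = ["wm7", "wm9", "wm4"]         # wm3 exited, wm4 entered
A = adjacency(frame_p, frame_q)
exited = [frame_p[i] for i in np.where(A.sum(axis=1) == 0)[0]]
entered = [frame_q[j] for j in np.where(A.sum(axis=0) == 0)[0]]
print(A, exited, entered)
```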
Automatic tracking of targets in image sequences is an important capability. Although effective algorithms exist
to implement frame-to-frame registration, connecting the tracks across frames is of recent interest. The current
approach to this problem is by building a spatio-temporal graph. In this work we argue that the same rationale
used to fingerprint multimedia content for tracing purposes can be used to follow targets across frames. Riding
on top of a tracker, tracked targets receive unique watermarks which propagate throughout the video. These
watermarks can then be searched for and used in a newly defined target adjacency matrix. The properties of
this matrix establish how target sequencing evolves across frames. The watermarked video is self-contained
and does not require building and maintaining a spatio-temporal graph.
In this work we report on the first H.264 authentication watermarker that operates directly in the bitstream, requiring
no video decoding or partial decompression. The main contribution of the work is the identification of a watermarkable
code space in the H.264 protocol. The algorithm creates "exceptions" in the H.264 code space that only the decoder
understands while keeping the bitstream syntax-compliant. The code space is defined over the Context Adaptive
Variable Length Coded (CAVLC) portion of the protocol. What makes this algorithm possible is the discovery that most
of the H.264 code space is in fact unused. The watermarker securely maps eligible CAVLC codewords to unused portions of the
code space. Security is achieved through a shared key between embedder and decoder. The watermarked stream
retains its file size, remains visually transparent, and is secure against forging and detection. Since the watermark is
placed post-compression, it remains fragile to re-encoding and other tampering attempts.
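The keyed mapping into unused codespace can be illustrated abstractly; the codeword lists below are a toy stand-in, not the actual CAVLC tables of H.264:

```python
import random

# Toy stand-in for a VLC table: only some codewords ever occur in real streams.
used_codewords = ["0101", "0110", "1110"]
unused_codewords = ["11110", "111110", "011100"]   # valid but never emitted

def keyed_mapping(key, bit):
    """Map each eligible used codeword to an unused one under a shared key.
    Writing the mapped codeword embeds a '1'; leaving it embeds a '0'."""
    rng = random.Random(key)
    targets = unused_codewords[:]
    rng.shuffle(targets)
    return dict(zip(used_codewords, targets)) if bit else {}

key = 0xC0FFEE                        # shared secret between embedder and decoder
fwd = keyed_mapping(key, bit=1)
inv = {v: k for k, v in fwd.items()}

marked = fwd.get("0101", "0101")      # embed: substitute if eligible
restored = inv.get(marked, marked)    # decode: invert with the same key
print(marked, restored)
```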
Digital watermarks for images can be made relatively robust to luminance and chrominance changes. More challenging problems are geometric or combined intensity/geometric attacks. In this work we use an additive watermarking model, common in spread spectrum, with a new spreading function. The spreading function is a 2D circular chirp that can simultaneously resist JPEG compression and image rotation. The circular chirp is derived from a block chirp by polar mapping. The resistance to compression is achieved through the available tuning parameters of the block chirp, namely its initial frequency and chirp rate. These two parameters can be used to perform spectral shaping to avoid JPEG compression effects. Rotational invariance is achieved by mapping the block chirp to a ring whose inner and outer diameters are selectable. The watermark is added in the spatial domain, but detection is performed in the polar domain, where rotation translates to translation.
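A sketch of the construction, assuming a separable cosine block chirp as the prototype (the paper's exact chirp definition may differ); radius maps to the block's rows and angle to its columns:

```python
import numpy as np

def block_chirp(B, f0=0.02, rate=0.0005):
    """2D block chirp with tunable initial frequency f0 and chirp rate
    (separable product form, assumed for illustration)."""
    u = np.arange(B)
    phase = 2 * np.pi * (f0 * u + 0.5 * rate * u**2)
    return np.cos(phase)[None, :] * np.cos(phase)[:, None]

def circular_chirp(size, r_in, r_out, B=64):
    """Map the block chirp onto a ring: radius -> block row, angle -> column."""
    c = block_chirp(B)
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) % (2 * np.pi)
    w = np.zeros((size, size))
    ring = (r >= r_in) & (r <= r_out)
    u = ((r[ring] - r_in) / (r_out - r_in) * (B - 1)).astype(int)
    v = (theta[ring] / (2 * np.pi) * (B - 1)).astype(int)
    w[ring] = c[u, v]
    return w

# Ring diameters are selectable; rotating the image shifts the pattern along
# the angle axis, which becomes a translation after polar resampling.
w = circular_chirp(256, r_in=40, r_out=110)
```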
In this work we propose a new algorithm for fragile, high-capacity yet file-size-preserving watermarking of MPEG-2 bitstreams. Watermarking is performed entirely in the compressed domain with no need for full or even partial decompression, thus providing the speed necessary for real-time video applications. The algorithm is based on the idea that not every codeword in the entropy-coded portion of the video is used. By mapping used variable length codes to unused ones, a watermark can be realized. However, by examining MPEG-2 streams it was realized that the unused codespace is practically nonexistent. The solution lies in expanding the codespace. This expansion is achieved by creating a paired codespace defined over individual blocks of video. Although every VLC may have been used in the video, there are numerous VLC pairs that do not occur together in any block. This situation creates the redundancy within the codespace necessary for watermarking. We show that the watermarked video is resistant to forgery attacks and remains secure against watermark detection attempts.
In a previous work we proposed an algorithm to embed watermark bits directly in a compressed bitstream. The embedding algorithm is reversible, requires no decompression, causes no increase in file size, and is fast. The problem is that the watermarked bitstream will either lose format compliance or suffer unacceptable visual degradation. We have now solved these problems for JPEG streams. The algorithm determines the AC VLC codespace that is actually used by the image. Watermarking specific VLCs places the resulting codewords outside of the used codespace, but still within the original VLC table specified in the JPEG standard. To keep the watermarked stream format-compliant and visually acceptable, the run/size values of watermarked VLCs are remapped in a way that keeps visual degradation to a minimum. Since watermarked VLCs are never used in the image, this remapping does not alter the image outside of the watermarked VLC codespace.
In this paper, we propose two-dimensional (2-D) frequency
modulated (FM) signals for digital watermarking. The hidden
information is embedded into an image using the binary phase
information of a 2-D FM prototype. The original image is properly
partitioned into several blocks. In each block, a 2-D FM watermark
waveform is used and the watermark information is embedded using
the binary phase. The parameters of the FM watermark are selected
in order to achieve low bit error rate (BER) detection of the
watermark. A detailed performance analysis and parameter
optimization is performed for 2-D chirp signals as an example of
2-D FM waveforms. Experimental results compare the proposed
methods and support their effectiveness.
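A minimal single-block sketch of the binary-phase embedding and its correlation detector, with illustrative chirp rates and embedding strength:

```python
import numpy as np

def chirp2d(B, ax=0.001, ay=0.001, bit=0):
    """2-D chirp prototype; the payload bit selects the binary phase (0 or pi)."""
    m, n = np.mgrid[:B, :B]
    phase = 2 * np.pi * (ax * m**2 + ay * n**2) + np.pi * bit
    return np.cos(phase)

rng = np.random.default_rng(3)
block = rng.standard_normal((64, 64)) * 20 + 128   # stand-in image block
bit = 1
marked = block + 2.0 * chirp2d(64, bit=bit)        # additive embedding

# Detection: correlate with the zero-phase prototype; a negative sign -> bit 1.
stat = np.sum((marked - marked.mean()) * chirp2d(64, bit=0))
print(int(stat < 0))   # -> 1
```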
Traditional variable length codes (VLC) used in compressed media are brittle and suffer synchronization loss following bit errors. To counter this situation, resynchronizing VLCs (RVLC) have been proposed to help identify, limit, and possibly reverse channel errors. In this work we observe that watermark bits are in effect forced bit errors and are amenable to the application of error-resilient techniques. We have developed a watermarking algorithm around a two-way decodable RVLC. The inherent error control property of the code is exploited to implement reversible watermarking in the compressed domain. A new decoding algorithm is developed to reestablish synchronization that is lost as a result of watermarking. Resynchronization is achieved by disambiguating among many potential markers that are abundantly emulated in the data. The algorithm is successfully applied to several MPEG-2 streams.
Algorithms that perform data hiding directly in the compressed domain, without the need for partial decompression or transcoding, are highly desirable. This work is based on the recognition that only a limited portion of the possible codespace is actually used by any specific code. Therefore, if bits are chosen appropriately, watermarking them will place a codeword outside of the valid codespace. Variable length codes in compressed bitstreams, however, have virtually no redundancy to losslessly carry hidden data; altered VLCs will likely remain valid. In this work, we examine the bitstream not as individual VLCs but as codeword-pairs. Pairing codewords conceptually shrinks the percentage of the available codespace that is actually being used. This idea has a number of key advantages: the watermark embedding is mathematically lossless, file size is preserved, and the watermarked bitstream remains format-compliant. The algorithm is most appropriate for compression schemes that are error-resilient. For example, the error concealment property of MPEG-4 or H.263+ can counter the bit “errors” caused by the watermarking while playing the video. The off-line portion of the algorithm needs to be run only once for a given VLC table, regardless of how many media employ the table. This allows the algorithm to be applied in real time, both in embedding and removal, due to its implementation in the compressed domain.
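The pairing argument can be made concrete with toy codes: every individual codeword below occurs somewhere in the stream, yet most ordered pairs never co-occur inside a block, and those absent pairs are the writable codespace:

```python
from itertools import product

# Toy VLC alphabet and per-block codeword sequences from a stand-in stream.
alphabet = ["00", "01", "10", "110", "111"]
blocks = [["00", "01", "110"], ["01", "10", "111"], ["110", "00", "01"]]

# Every single codeword is used somewhere...
used_singles = {c for blk in blocks for c in blk}
assert used_singles == set(alphabet)

# ...but many ordered pairs never occur together inside any block.
used_pairs = {(a, b) for blk in blocks for a, b in zip(blk, blk[1:])}
unused_pairs = set(product(alphabet, repeat=2)) - used_pairs

print(f"{len(used_pairs)} pairs used, {len(unused_pairs)} pairs free "
      f"out of {len(alphabet)**2}")
# Mapping a used pair to an unused one embeds a bit while every individual
# codeword stays within the valid VLC table.
```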
KEYWORDS: Digital watermarking, Time-frequency analysis, Image compression, Detection and tracking algorithms, Signal processing, Fermium, Frequency modulation, Sensors, Signal generators, Electronic filtering
In this work we propose a new domain for digital watermarking. The proposed domain is the joint time-frequency cells of the Wigner distribution of the image. This domain provides an expanded, richer environment for watermark embedding. The watermark is placed in selected time-frequency cells of the Wigner distribution. An inverse mapping then generates the watermarked signal. It is recognized that this approach is similar to the time-frequency filtering problem. Since the watermarked Wigner distribution may no longer be admissible, the algorithm estimates a signal whose Wigner distribution is closest to the desired watermarked distribution. Selecting watermarkable time-frequency cells is a similar, but not identical, problem to the one encountered in DCT watermarking. For watermarking that is robust to JPEG, only those cells whose time-frequency signatures are relatively untouched by compression are selected. As a result, the watermark remains detectable across all JPEG quality factors.
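For concreteness, a discrete pseudo-Wigner distribution and a cell-selection step can be sketched for a 1-D signal as below; the signal re-estimation step that enforces admissibility is the hard part and is not shown:

```python
import numpy as np

def wigner(x):
    """Discrete pseudo-Wigner distribution W[k, n] of a 1-D signal."""
    x = np.asarray(x, dtype=complex)
    N = x.size
    W = np.empty((N, N))
    for n in range(N):
        tmax = min(n, N - 1 - n)
        tau = np.arange(-tmax, tmax + 1)
        r = np.zeros(N, dtype=complex)
        r[tau % N] = x[n + tau] * np.conj(x[n - tau])   # lag products
        W[:, n] = np.fft.fft(r).real                    # FFT over lag
    return W

t = np.arange(256)
x = np.cos(2 * np.pi * (0.05 + 0.0005 * t) * t)   # test chirp
W = wigner(x)
cells = np.argsort(np.abs(W).ravel())[-64:]       # strongest T-F cells
# Watermark bits would perturb W at selected cells before signal re-estimation.
```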
KEYWORDS: Digital watermarking, Video, Video surveillance, Algorithm development, Video compression, Interferometric modulator displays, RGB color model, Heart, Quantization, Data hiding
A plausible motivation for video tampering is to alter the sequence of events. This goal can be achieved by re-indexing attacks such as frame removal, insertion, or shuffling. In this work, we report on the development of an algorithm to identify and, subject to certain limitations, reverse such acts. At the heart of the algorithm lies the concept of a frame-pair [f, f*]. Frame-pairs are unique in two ways. The first frame is the basis for watermarking of the second frame sometime in the future. A key that is unique to the location of frame f governs the frame-pair temporal separation. Watermarking is done by producing a low-resolution version of the 24-bit frame, spreading it, and then embedding it in the color space of f*. As such, watermarking f* is tantamount to embedding a copy of frame f in a future frame. Having tied one frame, in content and timing, to another frame downstream, frame removal and insertion can be identified and, subject to certain limitations, reversed.
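A sketch of the frame-pair mechanics under stated assumptions: the keyed offset derivation and the spreading below are illustrative stand-ins, and a single luminance-like channel stands in for the 24-bit color space:

```python
import numpy as np

KEY = 982451653          # shared secret; illustrative

def pair_offset(frame_index, max_sep=60):
    """Temporal separation to the partner frame, keyed to the frame location."""
    return 1 + int(np.random.default_rng(KEY + frame_index).integers(max_sep))

def embed_pair(f, f_star, frame_index, strength=2.0):
    """Spread a low-resolution copy of f into f* (one channel for simplicity)."""
    thumb = f[::8, ::8].astype(float)                # low-resolution version of f
    pn = np.random.default_rng(KEY ^ frame_index).choice([-1.0, 1.0],
                                                         size=f_star.shape)
    up = np.kron(thumb / 255.0, np.ones((8, 8)))     # expand back to full size
    return np.clip(f_star + strength * pn * up, 0, 255)

video = [np.random.default_rng(i).integers(0, 256, (240, 320)) for i in range(90)]
i = 10
j = i + pair_offset(i)                               # location-keyed separation
video[j] = embed_pair(video[i], video[j].astype(float), i)
```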
Spread spectrum has long been used as a technique for secure communications, and it has been the method of choice for still image watermarking for some time. In this paper we argue that spread spectrum in the form of CDMA has a more natural application in the watermarking of uncompressed digital video. The reason is that digital video, by virtue of its time-space property, fits direct sequence spread spectrum more readily. The problem, however, is that conventional CDMA achieves its data-hiding capability through a massive increase in bitrate. Successful implementation of CDMA for video watermarking therefore requires a reformulation of the concept. To this end, digital video is modeled as a bitplane stream along the time axis. Using a modified m-sequence, bitplanes of specific order are pseudorandomly marked for watermarking. Then, the desired watermark is mapped to a single bitplane and spread via another 2D m-sequence, not necessarily related to the first one, along a stream parallel to that of the video. The tagged planes are removed and replaced by the spread watermark. We show that the above approach resists noise as well as attacks aimed at destroying synchronization at the watermark detector. Such attacks may include regular and/or random frame removal.
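A minimal sketch of the bitplane tagging: a small LFSR generates the m-sequence that selects marked bitplanes, and a second sequence, reshaped to 2D, replaces the tagged plane (register sizes and taps are illustrative):

```python
import numpy as np

def lfsr(taps, state, n):
    """Binary m-sequence from a Fibonacci LFSR (primitive taps -> period 15
    for this 4-bit register)."""
    out = []
    for _ in range(n):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << max(taps))
    return np.array(out, dtype=np.uint8)

# Select, per frame, whether the target bitplane of that frame is tagged.
sel = lfsr(taps=[0, 3], state=0b1001, n=64)

def mark_frame(frame, wm_plane, plane=1):
    """Replace the chosen bitplane with the spread watermark plane."""
    frame = frame & ~np.uint8(1 << plane)            # clear the target bitplane
    return frame | (wm_plane.astype(np.uint8) << plane)

rng = np.random.default_rng(5)
frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)
wm = lfsr(taps=[0, 3], state=0b0111, n=64 * 64).reshape(64, 64)  # 2D spread plane
marked = mark_frame(frame, wm) if sel[0] else frame
```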
Energy minimization approaches in the form of deformable model fittings have recently attracted considerable attention. Such models are particularly effective when the sought-after shapes undergo elastic deformations that cannot readily be accounted for by RST effects. Deformable models are by their nature iterative processes. Such approaches are computationally costly and require a detailed model-building step, frequently in the form of cubic B-splines. It is reported that almost all misclassifications in the energy minimization approaches can be attributed to two problems: local minima and modeling difficulties [3]. In this paper we have taken the handwritten character recognition problem and proposed a solution that bypasses the two bottlenecks above. The algorithm requires no parametric shape models and is non-iterative. Character classes are modeled from a library of handwritten digits and described by discrete spatial processes. Contour classification is then performed by a MAP rule implemented on an unknown observed digit. We have shown classification rates that match or exceed those obtained by complex deformable templates. Keywords: deformable models, character recognition
KEYWORDS: Zoom lenses, Optics manufacturing, Cameras, Mobile robots, 3D image processing, Motion models, Data modeling, Computer engineering, 3D modeling, Lenses
It has been demonstrated recently that images obtained under varying focal lengths contain information on the 3D structure of the scene. This depth-from-zoom approach relies on knowledge of the 'center of zoom'. Under zoom, image flow expands radially outward from the image center. In real lenses, however, this center is itself moving on a trajectory determined by the zoom action as well as lens manufacture. Therefore, optic flow vectors will follow an arbitrary curvilinear path in the image plane. In this paper we develop a general model for the determination of image flow given a corresponding locus for the center of zoom.
In this work, we have proposed the use of flow induced not by motion but by active manipulation of the lens focal length. The sequence of images thus obtained is to a great extent equivalent to that generated by camera translation, with several distinct advantages: (1) the platform undergoes no perturbation while capturing the image sequences, (2) the focus of expansion (FOE) is theoretically constrained to be at the center of the image, or in close proximity to it given camera imperfections, and (3) the flow is radial, and as a result the correspondence problem, so difficult elsewhere, reduces to a 1-D search problem.
Outdoor navigation is characterized by motions through regions of varied terrain. The weighted region problem (WRP) is a generalization of the obstacle avoidance problem with its 1/∞ cost structure. By assigning indices to surface patches proportional to their traversability, WRP seeks a path with the shortest length in the weighted sense. This work generalizes the WRP paradigm by stating that the traversability indices may only be available through a probability distribution. The reported indices are therefore not an exact description of the terrain; rather, they are observations drawn from their respective distributions. Development of a decision basis directing the search is an objective of this paper.
The weighted region problem is a generalization of the obstacle avoidance problem. By defining a multiple-valued index, terrain patches can be distinguished on the basis of their traversability. This paper generalizes the weighted region problem by defining a probability distribution over the traversability indices, hence reflecting one's uncertainty as to the true terrain condition. Path planning under uncertain traversability requires a proper framework for decision making. This paper proposes a formal approach to this problem using elements of decision analysis and the states-of-the-world paradigm. The uncertainty over the traversability index, the path length, and the planner's preferences defined over the consequence space are all encapsulated into a single decision-making step.
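One simple decision basis consistent with this framing, sketched below: collapse each patch's traversability distribution to its expected cost and search the weighted grid (Dijkstra here; the paper's decision-analytic treatment is richer than this):

```python
import heapq
import numpy as np

rng = np.random.default_rng(2)
H, W = 20, 20
# Each patch's traversability index is uncertain: model it here as a Gamma
# distribution and plan against the expected cost (one possible decision basis).
shape, scale = rng.uniform(1, 5, (H, W)), rng.uniform(0.5, 2, (H, W))
expected_cost = shape * scale                  # E[Gamma(k, theta)] = k * theta

def dijkstra(cost, start, goal):
    """Shortest weighted path over a 4-connected grid of patch costs."""
    dist = np.full(cost.shape, np.inf)
    dist[start] = cost[start]
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return np.inf

print(dijkstra(expected_cost, (0, 0), (H - 1, W - 1)))
```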