The performance of Bayesian detection of Gaussian signals using noisy observations is investigated via the error exponent for the average error probability. When the signal correlation structure is unknown or processing capability is limited, it is reasonable to use the simple quadratic detector that is optimal for an independent and identically distributed (i.i.d.) signal. Using the large deviations principle, the performance of this detector (which is suboptimal for non-i.i.d. signals) is compared with that of the optimal detector for correlated signals via the asymptotic relative efficiency (ARE), defined as the ratio between the sample sizes the two detectors require for the same performance in the large-sample-size regime. The effects of SNR on the ARE are investigated. It is shown that the asymptotic efficiency of the simple quadratic detector relative to the optimal detector converges to one as the SNR increases without bound for any bounded spectrum, and that the simple quadratic detector performs as well as the optimal detector over a wide range of correlation values at high SNR.
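As a minimal illustration of the detector under study (not the paper's analysis), the simple quadratic detector compares the average observed energy against a threshold; the sample size, SNR, and threshold below are arbitrary choices:

```python
import numpy as np

# Illustrative sketch: the simple quadratic (energy) detector
# T(x) = mean(x_i^2), which is the optimal Bayesian statistic when the
# Gaussian signal is i.i.d. SNR, sample size, and threshold are arbitrary.
rng = np.random.default_rng(0)
n, snr = 200, 2.0          # sample size and signal-to-noise power ratio
trials = 2000

def quadratic_detector(x, threshold):
    """Declare 'signal present' when the average energy exceeds threshold."""
    return np.mean(x**2) > threshold

threshold = 1.0 + snr / 2  # midpoint between noise power and signal+noise power
errors = 0
for _ in range(trials):
    h = rng.integers(2)                      # equiprobable hypotheses
    noise = rng.standard_normal(n)
    x = noise + (np.sqrt(snr) * rng.standard_normal(n) if h else 0)
    if bool(quadratic_detector(x, threshold)) != bool(h):
        errors += 1
print("average error probability:", errors / trials)
```

The large-deviations analysis in the abstract characterizes how this error probability decays exponentially with the sample size n.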
Space-time coding techniques can achieve very high spectral efficiencies in highly scattering environments using multiple transmit and receive antennas. At the remote station, the space allotted to the antenna array is usually more limited than at the base station. Since spectral efficiency improves with the number of antennas, one is interested in how many antennas can be packed into the limited space on the remote station. This paper addresses some of the issues that affect the allowable density of antennas in the remote station. In particular, the mutual impedance between antenna elements in the remote array, the correlation between the signal and noise fields received by these elements, and amplifier noise contributions all affect the channel capacity achievable by such arrays. We assume the transmitter radiates from nT uncoupled half-wave dipole elements and knows nothing of the channel. A formula is given for the maximum channel capacity to a receiving array of nR elements, coupled to each other, in the presence of ambient noise or interference with a uniform angle-of-arrival distribution. This formula neglects amplifier noise in the receivers. It is shown that the channel capacity is already determined at the terminals of the receiving array and cannot be improved by coupling networks following the receiving array. When propagation is by full three-dimensional scattering, the channel capacity is unaffected by mutual coupling in the receiving array.
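For reference, a standard sketch (not the paper's coupled-array formula) of the capacity gain that motivates packing in more antennas: for an i.i.d. Rayleigh channel with a transmitter that knows nothing of the channel and splits power equally, C = log2 det(I + (snr/nT) H H^H). The antenna counts and SNR below are arbitrary:

```python
import numpy as np

# Ergodic capacity of an nR x nT i.i.d. Rayleigh channel with equal power
# allocation at a blind transmitter; compared against a single-antenna link.
rng = np.random.default_rng(1)
nt, nr, snr = 4, 4, 10.0

def mimo_capacity(H, snr):
    eig = np.linalg.eigvalsh(H @ H.conj().T)   # channel mode gains
    return float(np.sum(np.log2(1 + (snr / nt) * eig)))

caps = [mimo_capacity((rng.standard_normal((nr, nt))
                       + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2), snr)
        for _ in range(200)]
ergodic = float(np.mean(caps))
siso = float(np.log2(1 + snr))   # single-antenna reference at the same SNR
print(ergodic, siso)
```

Mutual coupling and correlated noise, the subject of the abstract, modify the effective H and noise covariance in this formula.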
Maximum system mutual information is considered for a group of interfering users employing single-user detection and antenna selection over multiple transmit and receive antennas, for flat Rayleigh fading channels with independent fading coefficients for each path. The case considered is that of very limited channel state information at the transmitter, with channel state information assumed at the receiver. The focus is on extreme cases with very weak or very strong interference. It is shown that the optimum signaling covariance matrix sometimes differs from the standard scaled identity matrix; in fact, this is true even for cases without interference if the SNR is sufficiently low. Further, the scaled identity matrix is actually the covariance matrix that yields the worst performance if the interference is sufficiently strong.
It is of interest to compare optimum beamforming communications between a random antenna array of sensors and a uniform antenna array base station with MIMO communications between the two arrays. For this purpose we examine a specific example. Channel capacity is compared for various versions of MIMO communications, with channel state information assumed to be known (a) at the receiving array only, and (b) at both the transmitting and receiving arrays. When the signal-to-noise ratio is high, both the blind-transmitter and the knowledgeable-transmitter MIMO provide higher channel capacity than the beamformer, but for very low signal-to-noise ratio only the knowledgeable-transmitter MIMO equals the beamformer channel capacity.
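The high/low-SNR crossover can be illustrated with a deterministic toy comparison (the channel mode gains below are made-up values, not the paper's example): a transmitter that knows the channel can beamform all power onto the strongest mode, C_bf = log2(1 + snr * lam_max), while a blind transmitter splits power equally over the modes:

```python
import numpy as np

# Hypothetical channel mode gains (eigenvalues of H^H H) for illustration.
lam = np.array([0.5, 1.0, 2.0, 4.0])
nt = len(lam)

def cap_blind(snr):
    # blind transmitter: equal power on every antenna / mode
    return float(np.sum(np.log2(1 + (snr / nt) * lam)))

def cap_beamform(snr):
    # knowledgeable transmitter: all power on the strongest mode
    return float(np.log2(1 + snr * lam[-1]))

for snr in (0.01, 100.0):
    print(snr, cap_blind(snr), cap_beamform(snr))
```

At low SNR concentrating power on the best mode wins; at high SNR the parallel spatial channels of MIMO dominate, matching the trend described in the abstract.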
Wireless sensor networks (WSNs) have attracted considerable attention in recent years, motivating a host of new challenges for distributed signal processing. The problem of distributed or decentralized estimation has often been considered in the context of parametric models. However, the success of parametric methods is limited by the appropriateness of the strong statistical assumptions made by the models. In this paper, a more flexible nonparametric model for distributed regression is considered that is applicable in a variety of WSN applications, including field estimation. Starting with the standard regularized kernel least-squares estimator, a message-passing algorithm for distributed estimation in WSNs is derived. The algorithm can be viewed as an instantiation of the successive orthogonal projection (SOP) algorithm. Various practical aspects of the algorithm are discussed, and several numerical simulations validate the potential of the approach.
Orthogonal Frequency Division Multiplexing (OFDM) is a communications technique that transmits a signal over multiple, evenly spaced, discrete frequency bands. OFDM offers some advantages over traditional single-carrier modulation techniques, such as increased immunity to inter-symbol interference. For this reason OFDM is an attractive candidate for sensor network applications; it has already been included in several standards, including Digital Audio Broadcast (DAB); digital television standards in Europe, Japan, and Australia; asymmetric digital subscriber line (ADSL); and wireless local area networks (WLAN), specifically IEEE 802.11a. Many of these applications currently use a standard convolutional code with Viterbi decoding to perform forward error correction (FEC). Replacing such convolutional codes with advanced coding techniques using iterative decoding, such as Turbo codes, can substantially improve the performance of the OFDM communications link. This paper demonstrates such improvements using the 802.11a wireless LAN standard.
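The OFDM mechanics described above can be sketched generically (the subcarrier count and prefix length are illustration choices, not the 802.11a parameters): symbols are mapped onto subcarriers with an IFFT, a cyclic prefix guards against inter-symbol interference, and an FFT recovers the symbols:

```python
import numpy as np

# One OFDM symbol round trip: IFFT modulation, cyclic prefix, FFT demodulation.
rng = np.random.default_rng(4)
n_sc, cp = 64, 16                       # subcarriers and cyclic-prefix length

symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_sc)  # QPSK

tx = np.fft.ifft(symbols)               # time-domain OFDM symbol
tx_cp = np.concatenate([tx[-cp:], tx])  # prepend the cyclic prefix

rx = tx_cp[cp:]                         # receiver strips the prefix...
recovered = np.fft.fft(rx)              # ...and demodulates with an FFT
print(np.max(np.abs(recovered - symbols)))
```

The FEC comparison in the paper sits on top of this modulation layer, protecting the per-subcarrier symbols.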
Infrasonic sensor arrays are very useful for detecting natural and man-made events. This paper describes part of an ongoing project for compressing and transmitting a set of infrasonic signals that must be delivered to a remote location for decompression and processing. The project also evaluates the effect of the compression distortion on the signals using task-specific distortion metrics. We evaluate the effectiveness of the scheme using one hour's worth of signals collected during a Space Shuttle launch with a small array of 4 microphones. The approach described here is to combine the 4 signals/channels using a transmultiplexer and then to apply an off-the-shelf audio compression method, namely the popular MP3 method, which is based on subband coding. The transmultiplexer is a 5-channel cosine-modulated filterbank of which only the first 4 channels are used. The codec used in this study is the readily available LAME software package, which allows one to choose the output bit rate and to turn off the psychoacoustic model. To use an audio coder, the combined signal is first converted to 16 bits per sample and then associated with a 16 kHz sampling frequency. In the application considered, the microphone signals are used to compute time-evolving quantities including average spectral coherence, beamforming, and phase velocity. These same quantities are used as task-specific metrics that reveal the distortion caused by the MP3 compressor, so that the user can evaluate distortion tolerances. From visual evaluation of these metrics we conclude that a compression ratio between 6.4:1 and 8:1 produces negligible distortion in the three task-specific metrics; the beamforming metric is the most sensitive to the compression distortion.
Sensor network technology can revolutionize the study of animal ecology by providing a means of non-intrusive, simultaneous monitoring of interactions among multiple animals. In this paper, we investigate the design, analysis, and testing of acoustic arrays for localizing acorn woodpeckers using their vocalizations. Each acoustic array consists of four microphones arranged in a square, with all four audio channels synchronized to within a few microseconds. We apply the approximate maximum likelihood (AML) method to the synchronized audio channels of each acoustic array to estimate the direction-of-arrival (DOA) of woodpecker vocalizations. The woodpecker location is then estimated by applying least-squares (LS) methods to the DOA bearing crossings of multiple acoustic arrays. We reveal the critical relation between the microphone spacing of the acoustic arrays and the robustness of beamforming of woodpecker vocalizations. Woodpecker localization experiments using robust array element spacing in different types of environments are conducted and compared. Practical issues in calibrating acoustic array orientation are also discussed.
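The bearing-crossing step can be sketched as follows (array positions and the source location are made up for illustration): each array i at position p_i reports a bearing u_i, the source lies near every line p_i + t*u_i, and the LS estimate minimizes the summed squared perpendicular distances to those lines:

```python
import numpy as np

# Least-squares localization from DOA bearing crossings of multiple arrays.
def localize(positions, bearings):
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, u in zip(positions, bearings):
        u = u / np.linalg.norm(u)
        P = np.eye(2) - np.outer(u, u)   # projector onto the line's normal space
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

source = np.array([30.0, 20.0])
positions = [np.array([0.0, 0.0]), np.array([50.0, 0.0]), np.array([0.0, 50.0])]
bearings = [source - p for p in positions]  # ideal, noise-free DOAs
est = localize(positions, bearings)
print(est)
```

With noisy DOAs the same normal equations give the least-squares crossing point.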
Distributed sensor networks have been proposed for a wide range of applications. In this paper, our goal is to locate a wideband source, generating both acoustic and seismic signals, using both seismic and acoustic sensors. For a far-field acoustic source, only the direction-of-arrival (DOA) in the coordinate system of the sensors is observable. We use the approximate maximum likelihood (AML) method for DOA estimation from several acoustic arrays. For a seismic source, we use data collected at a single tri-axial accelerometer to perform DOA estimation, with two methods: eigen-decomposition of the sample covariance matrix and the surface-wave method. Field measurements of acoustic and seismic signals generated by vertically striking a heavy metal plate placed on the ground in an open field are collected. Each acoustic array uses four low-cost microphones placed in a square configuration and separated by one meter. The microphone outputs of each array are collected by a synchronized A/D recording system and processed locally with the AML algorithm for DOA estimation. An array of six tri-axial accelerometers, arranged in two rows, feeds its outputs into an ultra-low-power, high-resolution, network-aware seismic recording system. Field-measured data from the acoustic and seismic arrays show that the estimated DOAs, and the consequent localizations of the source, are quite accurate and useful.
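The covariance eigen-decomposition idea for a single tri-axial sensor can be sketched as follows (the direction and noise level are made-up illustration values): rectilinear particle motion along the propagation direction makes the dominant eigenvector of the 3x3 sample covariance point along the DOA:

```python
import numpy as np

# DOA from one tri-axial accelerometer via sample-covariance eigenanalysis.
rng = np.random.default_rng(5)
true_dir = np.array([0.6, 0.8, 0.0])          # hypothetical unit DOA

t = np.linspace(0, 1, 500)
motion = np.sin(2 * np.pi * 12 * t)           # rectilinear ground motion
data = np.outer(motion, true_dir) + 0.05 * rng.standard_normal((500, 3))

C = data.T @ data / len(data)                 # 3x3 sample covariance
w, v = np.linalg.eigh(C)
doa = v[:, -1]                                # eigenvector of largest eigenvalue
doa *= np.sign(doa @ true_dir)                # resolve the sign ambiguity
print(doa)
```

The surface-wave method mentioned in the abstract instead exploits the elliptical particle motion of Rayleigh waves.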
Target tracking is an essential capability for wireless sensor networks (WSNs) and serves as a canonical problem for collaborative signal and information processing, in which sensor resources are managed dynamically and distributed sensor measurements are processed efficiently. In existing work on target tracking in WSNs, such as the information-driven sensor query (IDSQ) approach, the tasking sensors are scheduled at a uniform sampling interval, ignoring changes in the target dynamics and in the estimation accuracy obtained. This paper proposes an adaptive sensor scheduling strategy that jointly selects the tasking sensor and determines the sampling interval according to the predicted tracking accuracy and tracking cost. The sensors are scheduled in two tracking modes: a fast-approaching mode when the predicted tracking accuracy is not satisfactory, and a tracking-maintenance mode when it is. The approach employs an extended Kalman filter (EKF) based estimation technique to predict the tracking accuracy and adopts a linear energy model to predict the energy consumption. Simulation results demonstrate that, compared to the non-adaptive approach, the proposed approach achieves significant savings in energy consumption without degrading the tracking accuracy.
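A toy sketch of the two-mode rule (the thresholds, intervals, and scalar random-walk model are invented for illustration, not the paper's EKF or energy model): predict the tracking variance ahead, and pick the fast mode's short interval only when the slow interval would miss the accuracy target:

```python
# Adaptive sampling-interval selection driven by predicted tracking variance.
q = 0.5            # process-noise intensity of a scalar random-walk target
r = 1.0            # measurement-noise variance
target_var = 3.0   # accuracy requirement on the predicted variance

def predicted_var(p, dt):
    return p + q * dt            # variance growth between measurements

def choose_interval(p, slow=4.0, fast=1.0):
    # tracking-maintenance mode if the slow interval still meets the target,
    # otherwise fast-approaching mode
    return slow if predicted_var(p, slow) <= target_var else fast

p, schedule = 5.0, []            # start with poor (large) variance
for _ in range(6):
    dt = choose_interval(p)
    schedule.append(dt)
    p = predicted_var(p, dt)                 # predict ahead by dt
    p = 1.0 / (1.0 / p + 1.0 / r)            # measurement update
print(schedule)
```

The schedule starts in the fast mode while the variance is large and then relaxes to the long maintenance interval, which is the energy-saving behavior the paper targets.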
When one calculates a time-frequency distribution of white noise, transients of short duration sometimes appear. Superficially, these transients appear to be real signals, but they are not. They arise by random chance in the noise and also because particular types of distributions do not resolve components well in time. These fictitious signals can be misclassified by detectors, so it is important to understand their origin and statistical properties. We present experimental studies of these false transients and, by simulation, statistically quantify their duration for various distributions. We compare the number and duration of the false transients when different distributions are used.
Extended interpolatory approximation is discussed for classes of signals expressed as a finite sum of sinusoids in the time domain. We assume that these signals have weighted norms smaller than a given positive number. The presented method attains the minimum measure of approximation error among all linear and nonlinear approximations that use the same measure of error and the same generalized sample values.
Renyi entropy is receiving considerable attention as a data analysis tool in many practical applications, due to its relevant properties when dealing with time-frequency representations (TFRs): it provides a generalized measure of the information content (entropy) of a given signal. The use of Renyi entropy can also be extended to spatial-frequency applications. In this paper we present results of applying Renyi entropy to an image fusion method, recently developed by the authors, based on a 1-D pseudo-Wigner distribution (PWD). The fused image is obtained by applying a Renyi entropy measure to the point-wise PWD of the images. The Renyi measure allows us to identify, by an entropic criterion, which pixels carry the most information among the given input images. The method is illustrated with diverse related images of the same scene, with different amounts of spatially variant degradation or coming from different sources. In addition, some numerical results are presented, providing a quantitative estimate of the accuracy of the fusion method.
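The entropic criterion can be sketched directly from the definition: the Renyi entropy of order alpha for a normalized distribution P is H_alpha = log2(sum P^alpha) / (1 - alpha), which approaches the Shannon entropy as alpha tends to 1. The toy distributions below stand in for a point-wise PWD (order 3 is a common choice in time-frequency work):

```python
import numpy as np

# Renyi entropy: low for concentrated distributions, high for spread ones.
def renyi_entropy(P, alpha=3.0):
    P = np.asarray(P, dtype=float)
    P = P / P.sum()                      # normalize to a unit-sum distribution
    return float(np.log2(np.sum(P**alpha)) / (1.0 - alpha))

peaked = np.array([0.97, 0.01, 0.01, 0.01])   # concentrated: low entropy
flat = np.ones(4) / 4                         # spread out: high entropy
print(renyi_entropy(peaked), renyi_entropy(flat))
```

In the fusion method, pixels whose local PWD yields the more informative entropy value are the ones selected from the input images.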
For assessing power quality in power distribution systems, classical Fourier-series-based power quality indices are normally employed. These indices assume that the disturbance is periodic, so their application is limited to harmonics. Hence it is necessary to redefine power quality indices for "transient" disturbances. In this paper, the development of time-frequency-based power quality indices for assessing transient power quality is discussed. The time- and frequency-localized information of the transient disturbance signals is used for a new definition of transient power quality indices. As an example, a new definition of the transient telephone interference factor is carefully derived and verified against the traditional telephone interference factor. Time-frequency-based power quality indices allow one to quantify the effects of transient disturbances using time- and frequency-localized information.
We present a method for computing a theoretically exact estimate of the instantaneous frequency (IF) of a signal from local values of its short-time Fourier transform, under the assumption that the complex logarithm of the signal is a polynomial in time. We apply the method to the problem of estimating and separating non-stationary components of a multi-component signal. Signal estimation and separation are based on a linear time-frequency model in which the value of the signal at each time is distributed in frequency. This is a significant departure from the conventional nonlinear model, in which signal energy is distributed in time and frequency. We further demonstrate by a simple example that the IF estimated by the higher-order method is significantly better than that obtained with previously used first-order methods.
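For reference, a first-order baseline of the kind the paper improves upon (the exact higher-order method is not reproduced here): estimate the IF from the phase advance of consecutive short-time Fourier coefficients at the dominant frequency bin. The tone, window, and hop are illustration choices:

```python
import numpy as np

# First-order IF estimate from the STFT phase difference at the peak bin.
fs, n = 1000.0, 1024
t = np.arange(n) / fs
f0 = 123.0
x = np.cos(2 * np.pi * f0 * t)          # stationary tone with a known answer

win, hop = 128, 1
w = np.hanning(win)
f1 = np.fft.rfft(x[0:win] * w)
f2 = np.fft.rfft(x[hop:hop + win] * w)  # window shifted by one sample

k = np.argmax(np.abs(f1))               # dominant bin
dphi = np.angle(f2[k] * np.conj(f1[k])) # phase advance over hop samples
if_est = dphi * fs / (2 * np.pi * hop)  # radians/sample -> Hz
print(if_est)
```

For a chirp or other non-stationary component, this first-order estimate lags the true IF, which is the limitation the higher-order method addresses.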
When the short-time Fourier transform (STFT) of an audio signal is arbitrarily modified, it no longer truly represents a time-domain signal. Classically, the accepted solution for obtaining a time-domain signal from a modified STFT (MSTFT) is to invert the MSTFT to the time-domain signal whose STFT is closest to the MSTFT in a least-squares sense. This is also the approach currently taken by our modulation filtering techniques. However, it was never established that using the original, unmodified STFT phase in this reconstruction is optimal for modulation filtering. In this paper, we compare our signal reconstruction approach to a well-known iterative procedure that approximates a time-domain signal using only the STFT magnitude. We analyze the reconstruction of speech signals after filtering them with low-pass, band-pass, and high-pass modulation filters. Our study shows that the iterative procedure yields quantitatively and qualitatively comparable signals at significantly higher computational cost. It therefore does not seem a worthwhile alternative to our current reconstruction technique, but it may prove useful for IIR modulation filtering.
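The iterative magnitude-only procedure compared in the text can be sketched in the style of the Griffin-Lim iteration: repeatedly impose the target STFT magnitude while keeping the current phase estimate. The window, hop, iteration count, and test signal below are arbitrary illustration choices:

```python
import numpy as np

def stft(x, w, hop):
    n = len(w)
    idx = range(0, len(x) - n + 1, hop)
    return np.fft.rfft([x[i:i + n] * w for i in idx], axis=1)

def istft(S, w, hop, length):
    n = len(w)
    x, norm = np.zeros(length), np.zeros(length)
    for k, frame in enumerate(np.fft.irfft(S, n=n, axis=1)):
        x[k * hop:k * hop + n] += frame * w
        norm[k * hop:k * hop + n] += w * w
    return x / np.maximum(norm, 1e-12)      # weighted overlap-add inverse

def magnitude_reconstruct(mag, w, hop, length, iters=60, seed=1):
    x = np.random.default_rng(seed).standard_normal(length)
    for _ in range(iters):
        S = stft(x, w, hop)
        x = istft(mag * np.exp(1j * np.angle(S)), w, hop, length)
    return x

w, hop, length = np.hanning(64), 16, 1024
target = np.sin(2 * np.pi * 0.05 * np.arange(length))
mag = np.abs(stft(target, w, hop))

x0 = np.random.default_rng(1).standard_normal(length)
err0 = float(np.linalg.norm(np.abs(stft(x0, w, hop)) - mag))
x = magnitude_reconstruct(mag, w, hop, length)
err = float(np.linalg.norm(np.abs(stft(x, w, hop)) - mag))
print(err0, err)
```

Each iteration requires a full STFT/inverse-STFT pair, which is the computational cost the study weighs against the single least-squares inversion.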
In this work, we present a tool that generates division hardware units. This generator, called divgen, allows fast and broad design-space exploration for circuits that involve division operations. The generator produces synthesizable VHDL descriptions of optimized division units for various algorithms and parameters. The results of our generator have been demonstrated on FPGA circuits.
This paper presents a simple and efficient technique for extending the usefulness of Number Theoretic Transforms (NTTs). The technique is used with Fermat and Mersenne transforms, as well as transforms using general moduli. The constraint on transform length and wordlength is relaxed by the proposed modified overlap technique, yielding practical architectures for convolution. The technique relies on transforms of different lengths operating in parallel with their output samples time-aligned. Its usefulness is illustrated with an application in 2-D optical storage. Optical disks of the future may use a multi-track spiral (with a multi-spot laser) instead of the current single-track spiral, yielding increased capacity and transfer speeds. However, this introduces increased complexity in the signal processing blocks, due to the two-dimensional nature of the read signals. This paper highlights the benefits of the proposed modified overlap-save method in a 2-D equalizer, yielding a significant reduction in complexity compared to conventional equalizer approaches. The reduction is achieved by the novel way in which the transforms are applied and by using NTTs with the modified overlap method. Despite using very short transform lengths, a significant decrease in computational complexity is still achieved compared to an equivalent time-domain approach, through repeated use of the transformed input samples within the multi-track equalizer. A transform-domain architecture based on an NTT implementation for 7 rows is detailed.
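The NTT machinery itself can be sketched with a small example over the Fermat prime 257 (the paper's architectures and overlap method are not reproduced): 64 is a primitive 8th root of unity mod 257, so an 8-point transform has the same structure as a DFT but uses exact modular arithmetic, giving exact circular convolution:

```python
# Exact circular convolution via an 8-point NTT over the Fermat prime 257.
P, N, W = 257, 8, 64          # modulus, length, N-th root of unity (64^8 = 1 mod 257)

def ntt(a, root):
    return [sum(a[n] * pow(root, k * n, P) for n in range(N)) % P
            for k in range(N)]

def circular_convolution_ntt(a, b):
    A, B = ntt(a, W), ntt(b, W)
    C = [(x * y) % P for x, y in zip(A, B)]
    inv_w, inv_n = pow(W, P - 2, P), pow(N, P - 2, P)   # inverses via Fermat
    return [(c * inv_n) % P for c in ntt(C, inv_w)]

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
print(circular_convolution_ntt(a, b))
```

As long as every output sample stays below the modulus, the convolution is exact, which is what makes NTTs attractive for equalizer arithmetic.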
The Logarithmic Number System (LNS) has area and power advantages over fixed-point and floating-point number systems in some applications that tolerate moderate precision. LNS multiplication and division require only addition and subtraction of logarithms. Normally, LNS is implemented with ripple-carry binary arithmetic for manipulating the logarithms; this paper uses carry-free residue arithmetic instead. The Residue Logarithmic Number System (RLNS) has the advantage of faster multiplication and division. In contrast, RLNS addition requires table lookup, which is its main area and delay cost. The bipartite approach, which uses two tables and an integer addition, is introduced here to optimize RLNS addition. Using the techniques proposed here, RLNS with dynamic range and precision suitable for MPEG applications can be synthesized. Synthesis results show that bipartite RLNS achieves area savings and shorter delays compared to naive RLNS.
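The reason LNS multiplication is cheap can be sketched in a few lines (the 12-bit fractional precision is an arbitrary illustration choice): representing a positive value by its fixed-point base-2 logarithm turns multiplication into integer addition:

```python
import math

# LNS sketch: multiply two values by adding their quantized logarithms.
FRAC = 12                                    # fractional bits of the logarithm

def to_lns(x):
    return round(math.log2(x) * (1 << FRAC)) # quantized log2(x)

def lns_mul(lx, ly):
    return lx + ly                           # multiplication = log addition

def from_lns(l):
    return 2.0 ** (l / (1 << FRAC))

a, b = 3.5, 12.25
prod = from_lns(lns_mul(to_lns(a), to_lns(b)))
print(prod)   # close to 42.875, up to the log quantization error
```

Addition of two LNS values, by contrast, has no such shortcut, which is why the RLNS addition tables dominate the area and delay budget.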
With aggressive technology scaling there has been a rapid increase in leakage currents in modern CMOS processes. Leakage is mainly composed of sub-threshold and gate leakage, and it has been increasing exponentially with the scaling of threshold voltage and gate oxide thickness, drastically increasing static power consumption. With the explosive growth in the use of embedded multimedia processors, high-performance circuits need to be ultra-low-power in standby mode while keeping a moderate power budget at runtime. One of the most power-consuming high-performance macros of an embedded processor is the floating-point unit. In this paper we examine various macros of a floating-point unit designed in a dynamic-power-optimized logic style called Limited Switch Dynamic Logic, together with circuit solutions for reducing the impact of leakage on these macros. Results are presented for the 90 nm and 65 nm CMOS technology nodes.
This paper presents a scalable elliptic curve crypto-processor (ECCP) architecture for computing point multiplication for curves defined over binary extension fields GF(2^n). The processor computes modular inversion and Montgomery modular multiplication using a new efficient algorithm. The scalability of the proposed crypto-processor allows a fixed-area datapath to handle operands of any size, and the word size of the datapath can be adjusted to meet area and performance requirements. The processor is also reconfigurable in the sense that the user can choose the value of the field parameter n. Experimental results show that the proposed crypto-processor is competitive with many previous designs.
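The binary-extension-field arithmetic such a datapath operates on can be sketched with carry-less multiplication reduced by the field polynomial (the paper's scalable architecture is not reproduced; GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 is chosen only to keep the example small and checkable):

```python
# Shift-and-add multiplication in GF(2^n): XOR is addition, and overflow
# past bit n is reduced modulo the irreducible field polynomial.
def gf2n_mul(a, b, poly=0x11B, n=8):
    result = 0
    while b:
        if b & 1:
            result ^= a          # add (XOR) the current shifted multiplicand
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= poly            # reduce modulo the field polynomial
    return result

# Known value from the AES specification: {57} * {83} = {C1}
print(hex(gf2n_mul(0x57, 0x83)))
```

A word-serial version of exactly this loop is what lets a fixed-width datapath handle operands of any size.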
Today FPGAs are used in many digital signal processing applications. To design high-performance, area-efficient DSP pipelines, various arithmetic functions and algorithms must be used. In this work, FPGA-based functional units for the cosine, arctangent, and square-root functions are designed using bipartite tables and iterative algorithms. The bipartite tabular approach was 4 to 12 times faster than the iterative approach but requires 8 to 40 times more FPGA hardware resources to implement these functions. These functions, together with the FPGA hardware multipliers and a reciprocal bipartite-table unit, are then used to build rectangular-to-polar and polar-to-rectangular conversion macro-functions. These macro-functions allow a 7 to 10 times performance improvement for high-performance pipelines, or an area reduction of 9 to 17 times for low-cost implementations. In addition, a software tool to design FPGA-based DSP pipelines using the cosine, sine, arctangent, square-root, and reciprocal units with the hardware multipliers is presented.
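The bipartite-table idea can be sketched numerically (the split sizes and target function are arbitrary illustration choices, not this paper's designs): instead of one table indexed by all input bits, sum two much smaller tables, one holding the function at a coarse point and one holding a slope correction:

```python
# Bipartite-table approximation of 1/x on [1, 2) with a 12-bit input split
# into three 4-bit fields: f(x) ~ T1(hi, mid) + T2(hi, lo).
N0, N1, N2 = 4, 4, 4

def f(x):
    return 1.0 / x                   # target function

def value(i0, i1, i2):               # input encoded by the three bit fields
    return 1.0 + i0 / 2**N0 + i1 / 2**(N0 + N1) + i2 / 2**(N0 + N1 + N2)

center = (2**N2 - 1) / 2             # midpoint of the low field's range
T1 = {(i0, i1): f(value(i0, i1, center))
      for i0 in range(16) for i1 in range(16)}
slope = {i0: (f(value(i0, 8, 0)) - f(value(i0, 7, 0))) * 2**(N0 + N1)
         for i0 in range(16)}        # one derivative estimate per coarse segment
T2 = {(i0, i2): slope[i0] * (i2 - center) / 2**(N0 + N1 + N2)
      for i0 in range(16) for i2 in range(16)}

worst = max(abs(T1[i0, i1] + T2[i0, i2] - f(value(i0, i1, i2)))
            for i0 in range(16) for i1 in range(16) for i2 in range(16))
print(worst)   # maximum approximation error over all 4096 inputs
```

Two 256-entry tables replace one 4096-entry table, which is the area saving that trades against the iterative units' latency.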
We present an online arithmetic scheme for hardware evaluation of multinomials arising in Bayesian networks. The design approach consists of representing the multinomial in a factored form as an arithmetic circuit, which is then partitioned into subgraphs and mapped to FPGA hardware as a network of online modules connected serially and operating in an overlapped manner. This minimizes the interconnect demand without a drastic increase in computation latency. We developed a partitioning/mapping algorithm, designed basic radix-2 online operators and modules, and determined their cost/performance characteristics. We also evaluated the cost/performance characteristics of implementing a Bayesian network on an FPGA chip.
The Residue Number System (RNS) has features that are attractive for some implementations of cryptographic protocols. The main property of RNS is that it distributes computation on large values over small residues, allowing parallelization. This property also means that the distribution of the base elements can be randomized; the resulting arithmetic is leak resistant, i.e., robust against side-channel attacks. One drawback of RNS, however, is that modular inversion is not straightforward, so RNS is well suited to RSA but not really to ECC. In this paper we analyze the features of modular inversion in RNS over GF(P), and we propose an RNS Extended Euclidean Algorithm which uses a quotient approximation module.
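The ingredients can be sketched in plain software: residue decomposition, Chinese Remainder Theorem reconstruction, and the classic integer extended Euclidean inversion that the paper adapts to RNS. The sketch below shows the textbook versions, not the proposed RNS quotient-approximation variant, and the moduli are arbitrary illustrative choices:

```python
def to_rns(x, base):
    """Residue representation of x w.r.t. pairwise-coprime moduli."""
    return [x % m for m in base]

def from_rns(residues, base):
    """Chinese Remainder Theorem reconstruction of x from its residues."""
    M = 1
    for m in base:
        M *= m
    x = 0
    for r, m in zip(residues, base):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(., -1, m) needs Python >= 3.8
    return x % M

def mod_inverse(a, p):
    """Plain extended Euclidean algorithm: a^-1 mod p (requires gcd(a, p) = 1).
    This is the operation that is awkward to carry out directly in RNS."""
    old_r, r = a % p, p
    old_s, s = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    assert old_r == 1, "a must be invertible modulo p"
    return old_s % p
```

The difficulty the paper addresses is visible in `mod_inverse`: the quotient `q` requires magnitude comparison of full-size values, which RNS does not support directly, hence the quotient approximation module.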
The multidimensional logarithmic number system (MDLNS) is a recently developed number representation that is very efficient for implementing the inner product step processor (IPSP). The MDLNS provides more degrees of freedom than the classical LNS by virtue of its orthogonal bases and the ability to reduce hardware complexity through the use of multiple digits. This paper presents an analysis of the errors introduced when mapping data from real numbers to the 2-dimensional LNS (2-DLNS). Because the error distribution is non-uniform, the mapping space is divided into pre-assigned segments within which the error performance can be uniquely characterized, and mapping errors are collected piecewise over all of the segments. In the 1-digit 2-DLNS, error collection can be simplified by using a pattern-matching scheme. Expressions for the error variance are derived. It is shown that the use of a 2-DLNS representation results in significantly lower error variance compared to floating-point number systems. The hardware complexity required for error performance comparable to the classic LNS can be significantly reduced, owing to the smaller ROMs required compared with LNS. The results of the error analysis have been verified by numerical simulations.
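The non-uniform mapping error can be observed directly with a brute-force sketch of 1-digit 2-DLNS mapping; bases 2 and 3 and the exponent range [-8, 8] below are illustrative assumptions, not this paper's parameters:

```python
def dlns_map(x, R=8):
    """Best 1-digit 2-DLNS approximation x ~ 2^a * 3^b, found by exhaustive
    search; the exponent range [-R, R] is a hypothetical design choice."""
    best = None
    for a in range(-R, R + 1):
        for b in range(-R, R + 1):
            v = (2.0 ** a) * (3.0 ** b)
            if best is None or abs(v - x) < best[0]:
                best = (abs(v - x), a, b, v)
    return best[1], best[2], best[3]

# relative mapping error over a sample of (0, 1): its size varies strongly
# with position, which is why segment-wise error collection is natural
errs = [abs(dlns_map(x)[2] - x) / x for x in (i / 100.0 for i in range(1, 100))]
```

Because the representable values 2^a * 3^b are irregularly spaced on the real line, the relative error jumps from near zero (e.g., at exactly representable points) to a few percent between them.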
Several classes of structured matrices (e.g., the Hadamard-Sylvester matrices and the pseudo-noise matrices) are used in the design of error-correcting codes. In particular, the columns of matrices belonging to these two classes are often used as codewords. In this paper we show that the two classes essentially coincide: the pseudo-noise matrices can be obtained from the Hadamard-Sylvester matrices by means of row/column permutations.
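Both families are easy to sketch: the Sylvester recursion doubles a Hadamard matrix at each step, while a maximal-length LFSR generates a pseudo-noise sequence whose cyclic shifts have two-valued autocorrelation. The 3-stage LFSR below is a minimal illustrative choice:

```python
def sylvester_hadamard(k):
    """Sylvester recursion: H_1 = [1], H_{2n} = [[H, H], [H, -H]]."""
    H = [[1]]
    for _ in range(k):
        H = ([row + row for row in H] +
             [row + [-v for v in row] for row in H])
    return H

def pn_sequence(length=7):
    """+/-1 pseudo-noise sequence of period 7 from a 3-stage maximal LFSR."""
    state = [1, 0, 0]
    out = []
    for _ in range(length):
        out.append(1 - 2 * state[-1])          # map bit 0/1 to +1/-1
        feedback = state[2] ^ state[1]         # taps chosen to give period 7
        state = [feedback] + state[:-1]
    return out
```

The periodic autocorrelation of the PN sequence equals -1 at every nonzero shift, which is the property that lets a PN matrix be rearranged, by row/column permutations, into Hadamard-Sylvester form.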
In this paper, we restore a one-dimensional signal that is a priori known to be a smooth function with a few jump discontinuities from a blurred, noisy specimen signal, using a local regularization scheme derived in a Bayesian statistical inversion framework. The proposed method is computationally effective and reproduces the jump discontinuities well, and is thus an alternative to using the total variation (TV) penalty as a regularizing functional. Our approach avoids the non-differentiability problems encountered in TV methods and is completely data driven, in the sense that parameter selection is done automatically and requires no user intervention. A computed example illustrating the performance of the method when applied to the solution of a deconvolution problem is presented.
We consider the deconvolution problem of estimating an image from a noisy blurred version of it. In particular, we are interested in boundary effects: since the convolution operator is non-local, the blurred image depends on the scenery outside the field of view. Ignoring this dependency leads to the image distortion known as the boundary effect. In this article, we consider two different approaches to treating the non-locality. One is to estimate the image extended outside the field of view. The other is to treat the influence of the out-of-view scenery as boundary clutter. Both approaches are considered from the Bayesian point of view.
Given a 2D/3D image containing one or more objects of interest, we address the general problem of segmenting these objects while providing control over the smoothness of their boundaries. Boundary smoothing is performed by a nonlinear diffusion process which simultaneously detects and segments the objects, controlling their boundary fairness while preserving the edges of the objects. We call this process regularized segmentation. We address the efficient computation of regularized segmentation using semi-implicit and implicit complementary volume schemes. Finally, we show segmentation results on artificial and real-world 2D images and preliminary results in 3D.
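The edge-preserving character of such diffusion can be conveyed with a minimal Perona-Malik-style sketch: a conductivity g falls to nearly zero across strong gradients, so boundaries survive while interior noise is smoothed away. This uses a simple explicit scheme with periodic borders, not the semi-implicit or implicit complementary volume schemes of this work:

```python
import numpy as np

def nonlinear_diffusion(img, iters=50, k=0.3, dt=0.2):
    """Explicit edge-preserving diffusion (Perona-Malik type).
    k sets the gradient scale treated as an edge; dt is the time step."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / k) ** 2)   # conductivity: ~1 in flat regions
    for _ in range(iters):
        # differences to the four neighbours (periodic borders for brevity)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

The explicit step shown here needs a small dt for stability; the semi-implicit schemes the paper uses remove that restriction, which is their main computational advantage.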
The Linear Canonical Transform (LCT) describes the effect of any Quadratic Phase System (QPS) on an input optical wavefield. Special cases of the LCT include the fractional Fourier transform (FRT), the Fourier transform (FT), and the Fresnel transform (FST) describing free-space propagation. Recently we published theory for the Discrete Linear Canonical Transform (DLCT), which is to the LCT what the Discrete Fourier Transform (DFT) is to the FT, and we derived the Fast Linear Canonical Transform (FLCT), an N log N algorithm for its numerical implementation, using an approach similar to that used in deriving the FFT from the DFT. While the algorithm is significantly different from the FFT, it can be used to generate a new type of FFT algorithm that uses both time and frequency decimation intermittently. Because it is based purely on the properties of the LCT, it can be used for fast FT, FRT and FST calculations and, in the most general case, to rapidly calculate the effect of any QPS. In this paper we provide experimental validation of the algorithm in the simulation of an arbitrary two-lens QPS.
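For orientation, the familiar radix-2 decimation-in-time recursion underlying the FFT can be sketched as follows; the FLCT itself decimates differently (intermittently in time and frequency) and is not reproduced here:

```python
import cmath

def fft_dit(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of 2.
    Splits the input into even and odd samples and recombines the two
    half-size DFTs with twiddle factors, giving N log N complexity."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_dit(x[0::2])
    odd = fft_dit(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

The FLCT derivation follows the same divide-and-conquer pattern, but the chirp factors of the LCT kernel replace the pure twiddle factors above.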
The classic Lanczos method is an effective method for tridiagonalizing real symmetric matrices. Its block variant can significantly improve performance by exploiting memory hierarchies. In this paper, we present a block Lanczos method for tridiagonalizing complex symmetric matrices. We also propose a novel componentwise technique for detecting the loss of orthogonality, used to stabilize the block Lanczos algorithm. Our experiments show that the componentwise technique reduces the number of orthogonalizations.
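The classic (unblocked, real symmetric) Lanczos recursion that this work builds on can be sketched as follows; for simplicity the sketch uses full reorthogonalization at every step, the expensive strategy that selective techniques such as the proposed componentwise detection aim to avoid:

```python
import numpy as np

def lanczos(A, q0, m):
    """Classic Lanczos tridiagonalization of a real symmetric A:
    builds orthonormal Q (n x m) and tridiagonal T with Q^T A Q = T."""
    n = len(q0)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    q = q0 / np.linalg.norm(q0)
    for j in range(m):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        # full reorthogonalization against all previous Lanczos vectors
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T
```

In exact arithmetic the three-term recurrence alone keeps Q orthonormal; in floating point, orthogonality decays, and the question the paper addresses is how to detect that decay cheaply enough to reorthogonalize only when necessary.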
The classical Kharitonov theorem on interval stability cannot be carried over from polynomials to arbitrary entire functions. In this paper we identify a class of entire functions for which the desired generalization of the Kharitonov theorem can be proven. The class is wide enough to include the classes of quasi-polynomials occurring in the study of retarded systems with time delays, and some classes of entire functions that could be useful in studying systems with distributed delays. We also derive results for matrix polynomials and matrix entire functions.
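In the polynomial case, the classical theorem reduces the stability of an entire interval family to checking four vertex polynomials, which is easy to sketch (the interval family in the test is an illustrative example):

```python
import numpy as np

def kharitonov_polys(lo, hi):
    """The four Kharitonov vertex polynomials (coefficients a0..an) for the
    interval family lo[i] <= a_i <= hi[i]: the coefficient bounds alternate
    in the fixed patterns low-low-high-high etc., with period 4."""
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
    return [[hi[i] if p[i % 4] else lo[i] for i in range(len(lo))]
            for p in patterns]

def is_hurwitz(coeffs):
    """Stability check: all roots in the open left half-plane.
    `coeffs` are ordered constant term (a0) first."""
    return bool(np.all(np.roots(coeffs[::-1]).real < 0))
```

The whole interval family is Hurwitz stable if and only if these four vertices are; the paper's contribution is identifying classes of entire functions for which an analogous finite vertex test still works.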
We propose an output signal selection method for directional diversity in a VSB receiver, intended to improve the reception performance of the VSB system in severe Rayleigh fading channels. The VSB system devotes only about 0.3% of a data field to a known training signal for the receiver, and the reception performance of a VSB receiver degrades significantly when near-0 dB ghosts are present in the received signal. To overcome this problem, directional diversity has been suggested. In directional diversity, selecting the output signal with the best channel condition from the viewpoint of the VSB equalizer is very important for improving reception performance. To select the optimal signal, we extract time-domain channel profiles for all the signals by correlating against the PN511 sequence in the VSB field sync, and we select one signal by comparing the channel profiles. Simulation results show that the proposed method selects the signal with the best channel condition, so the reception performance of the VSB system can be improved in severe Rayleigh channels.
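The profile-extraction step can be sketched generically: correlate the received signal against the known training sequence and read off the path delays. A random ±1 sequence stands in for the actual PN511 sequence here, and the two-path channel is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 511
pn = rng.choice([-1.0, 1.0], size=N)   # stand-in for the PN511 training sequence

# two-path channel: direct path plus a weaker ghost delayed by 38 samples
h = np.zeros(N)
h[0], h[38] = 1.0, 0.55
received = np.real(np.fft.ifft(np.fft.fft(pn) * np.fft.fft(h)))
received += rng.normal(0.0, 0.05, N)   # additive receiver noise

# time-domain channel profile via circular correlation with the known sequence
profile = np.real(np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(pn)))) / N
delays = np.argsort(np.abs(profile))[-2:]   # delays of the two strongest paths
```

Comparing such profiles across antenna outputs (e.g., by the strength of the dominant path relative to the ghosts) is what allows the selection of the output with the most benign channel.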
In this paper, we propose new convolutive source separation and selective null beamforming methods that use a pilot-based channel estimation technique in reverberant environments. First, we present convolutive sound source separation methods for the determined and overdetermined cases. For acoustic channel estimation, we propose a new method using a pilot sequence, conceptually similar to a pilot code in wireless communication. The pilot sequence is a sum of sinusoids at frequencies matching the STFT (Short-Time Fourier Transform) frequencies. Because it accounts for the effect of spectral sampling at these frequencies, the proposed estimation method provides precise channel information and has a lower computational load than competing methods. After acoustic channel estimation, we separate the signals from the observations using the inverse (or pseudo-inverse) of the estimated channel matrix at each frequency bin. We show that the proposed method outperforms competing methods on various measures. Second, we propose selective null beamforming using the pilot-based channel technique. If the same pilot sequence is used at each transmitter, the relative differences of the acoustic channel between transmitters, as well as between microphone array elements, can be obtained. From the estimated channel information, beamforming weight vectors are calculated, using a singular value decomposition at each frequency bin, to null the signals at a target location and to transmit the signals to a desired location without distortion. We evaluate the performance of the proposed methods through experiments as well as computer simulations.
We investigate the optimal structure for a DTV (digital TV) receiver system with an array antenna. In this paper, we apply the ML (maximum likelihood) approach, a classical method in the theory of detection and estimation, and examine its limitation: the number of multipaths should be less than the number of antennas, which conflicts with the indoor channel environment. To deal with this problem, we propose a sub-optimal structure with lower complexity and computational load than the optimal structure. To verify the performance, we compare the SER (symbol error rate) curves of the two systems by computer simulation.
In this paper, web-based intermediate view reconstruction for a multiview stereoscopic 3D display system is proposed, using stereo cameras and disparity maps, an Intel Xeon server computer system, and Microsoft's DirectShow programming library; its performance is analyzed in terms of image-grabbing frame rate and number of views. In the proposed system, stereo images are initially captured by stereo digital cameras and processed on the Intel Xeon server. The captured two-view image data are compressed by extracting the disparity data between the views and transmitted over the information network to a client system, where the received stereo data are displayed on the 16-view stereoscopic 3D display using intermediate view reconstruction. The program controlling the overall system is developed with the Microsoft DirectShow SDK. Experimental results show that the proposed system can display 16-view 3D images with 8-bit grayscale at a frame rate of 15 fps in real time.
Clouds have a nonstationary nature in that their local spectrum changes with position. We model this nonstationarity by extending the classical 1/f^γ-type spectrum: we make γ a function of position and show that with this choice we can generate nonstationary clouds. Our model can be used to improve denoising algorithms.
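A minimal sketch of the idea, assuming a simple blend of two stationary 1/f^γ fields so that the effective γ drifts with position (this is only an illustration, not the model proposed here):

```python
import numpy as np

def spectrum_noise(n, gamma, rng):
    """Stationary 1/f^gamma noise: spectral shaping of random phases."""
    freqs = np.fft.fftfreq(n)
    amp = np.zeros(n)
    nz = freqs != 0
    amp[nz] = np.abs(freqs[nz]) ** (-gamma / 2.0)   # power ~ 1/f^gamma
    x = np.real(np.fft.ifft(amp * np.exp(2j * np.pi * rng.random(n))))
    return x / np.std(x)

rng = np.random.default_rng(1)
n = 1024
smooth = spectrum_noise(n, 2.0, rng)    # large gamma: smooth, cloud-like
rough = spectrum_noise(n, 0.5, rng)     # small gamma: rough texture
w = np.linspace(0.0, 1.0, n)            # position-dependent mixing weight
cloud = (1.0 - w) * smooth + w * rough  # local spectrum drifts with position
```

The resulting field is smooth near one end and rough near the other, i.e., its local spectral exponent varies with position, which a single global γ cannot capture.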