Iris recognition is increasingly deployed at population-wide scales for important applications such as border security, social-service administration, criminal identification, and general population management. The error rates of this highly accurate form of biometric identification are established using well-known, laboratory-quality datasets. However, it has long been acknowledged in biometric theory that not all individuals have the same likelihood of being correctly serviced by a biometric system. Typically, techniques for identifying clients likely to experience a false non-match or a false match are applied on a per-subject basis. This research makes the novel hypothesis that certain ethnic groups are more or less likely to experience a biometric error. Using established statistical techniques, we demonstrate this hypothesis to be true and document the notable effect that the ethnicity of the client has on iris similarity scores. Understanding the expected impact of ethnic diversity on iris recognition accuracy is crucial to the future success of this technology as it is deployed in areas where the target population comprises clientele from a range of geographic backgrounds, such as border crossings and immigration checkpoints.
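One established way to test whether two cohorts differ in their similarity-score distributions is a two-sample test on the score means. Below is a minimal sketch using Welch's t-statistic; the cohort labels, sample sizes, and score distributions are invented for illustration and are not the study's data.

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t-statistic and degrees of freedom."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Synthetic impostor-score samples for two hypothetical cohorts
# (fractional Hamming distances; the shift between them is invented).
rng = np.random.default_rng(0)
group_a = rng.normal(0.45, 0.02, 500)
group_b = rng.normal(0.46, 0.02, 500)
t, df = welch_t(group_a, group_b)   # a large |t| suggests the cohorts differ
```

Welch's form is used because it does not assume the two cohorts share a common variance, which is safer when comparing demographic groups of different sizes.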
Iris recognition systems have recently become an attractive identification method because of their extremely high
accuracy. Most modern iris recognition systems are currently deployed on traditional sequential digital systems, such as
a computer. However, modern advances in configurable hardware, most notably Field-Programmable Gate Arrays
(FPGAs), have provided an exciting opportunity to exploit the parallel nature of modern image processing algorithms.
In this study, iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an
FPGA system. We demonstrate a 19-fold speedup of the parallelized algorithm on the FPGA system when compared to
a state-of-the-art CPU-based implementation.
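The matching kernel being parallelized is, at its core, a masked XOR-and-count over binary iris codes, repeated across circular shifts. The following NumPy sketch shows that sequential kernel; the code length and shift range here are illustrative, not the paper's parameters.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance over the mutually unoccluded bits."""
    valid = mask_a & mask_b
    n = int(valid.sum())
    if n == 0:
        return 1.0                      # nothing comparable: worst score
    return int(((code_a ^ code_b) & valid).sum()) / n

def match(code_a, code_b, mask_a, mask_b, max_shift=8):
    """Best score over circular bit shifts, compensating for eye rotation."""
    return min(
        hamming_distance(np.roll(code_a, s), code_b, np.roll(mask_a, s), mask_b)
        for s in range(-max_shift, max_shift + 1)
    )
```

On an FPGA, the XOR/popcount inner loop and the independent shift comparisons map naturally onto parallel logic; this sequential sketch only shows what is being parallelized.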
A one-dimensional approach to iris recognition is presented. It is translation-, rotation-, illumination-, and scale-invariant. Traditional iris recognition systems typically use a two-dimensional iris signature that requires circular rotation for pattern matching. The new approach uses the Du measure as a matching mechanism, and generates a set of the most probable matches (ranks) instead of only the best match. Since the method generates one-dimensional signatures that are rotation-invariant, the system could work with eyes that are tilted. Moreover, the system will work with less of the iris than commercial systems, and thus could enable partial-iris recognition. In addition, this system is more tolerant of noise. Finally, this method is simple to implement, and its computational complexity is relatively low.
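Generating a set of the most probable matches rather than a single best match can be sketched as a simple ranking over the gallery of one-dimensional signatures. In the sketch below, Euclidean distance stands in for the Du measure, and the gallery entries are hypothetical three-sample signatures; real signatures would come from the encoder.

```python
import numpy as np

def top_k_matches(probe, gallery, k=3):
    """Rank gallery signatures by distance to the probe and return the k
    most probable identities, not just the single best match."""
    dists = {name: float(np.linalg.norm(probe - sig))
             for name, sig in gallery.items()}
    return sorted(dists, key=dists.get)[:k]

# Hypothetical 1-D signatures, purely for illustration.
gallery = {"subject_a": np.array([0.10, 0.20, 0.30]),
           "subject_b": np.array([0.90, 0.80, 0.70]),
           "subject_c": np.array([0.50, 0.50, 0.50])}
probe = np.array([0.12, 0.21, 0.29])
ranked = top_k_matches(probe, gallery, k=2)
```

Because the signatures are rotation-invariant, no circular shifting appears in the matching loop; each comparison is a single distance evaluation.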
National security today requires identification of people, things and activities. Biometrics plays an important role in the identification of people, and indirectly, in the identification of things and activities. Therefore, the development of technology and systems that provide faster and more accurate biometric identification is critical to the defense of our country. In addition, the development of a broad range of biometrics is necessary to provide the range of options needed to address flexible and adaptive adversaries. This paper will discuss the importance of a number of critical areas in the development of an environment to support biometrics, including research and development, biometric education, standards, pilot projects, and privacy assurance.
In this paper, we investigate the accuracy of using a partial iris image for identification and determine which portion of the iris has the most distinguishable patterns. Moreover, we compare these results with the results of Du et al. using the CASIA database. The experimental results show that it is challenging but feasible to use only a partial iris image for human identification.
An iris identification algorithm is proposed based on adaptive thresholding. The iris images are processed entirely in the spatial domain using the distinct features (patterns) of the iris. A simple adaptive thresholding method is used to segment these patterns from the rest of the iris image. This method could be applied to partial iris recognition, since it relaxes the requirement of using a majority of the iris to produce an iris template for comparison with the database. In addition, the simple thresholding scheme can improve the computational efficiency of the algorithm. Preliminary results have shown that the method is very effective, although further testing and improvements are envisioned.
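A simple local-mean adaptive threshold of the kind described can be sketched as follows. The window radius and offset are illustrative, not the paper's parameters, and the integral-image local mean is one common way to implement the neighbourhood statistic cheaply.

```python
import numpy as np

def box_mean(img, r):
    """Local mean over a (2r+1) x (2r+1) window, via an integral image."""
    pad = np.pad(img.astype(float), ((r + 1, r), (r + 1, r)), mode="edge")
    ii = pad.cumsum(0).cumsum(1)
    w = 2 * r + 1
    return (ii[w:, w:] - ii[:-w, w:] - ii[w:, :-w] + ii[:-w, :-w]) / w ** 2

def adaptive_threshold(img, r=7, offset=5.0):
    """Mark pixels darker than their local mean minus an offset, so dark
    iris patterns are segmented from the surrounding texture."""
    return img.astype(float) < box_mean(img, r) - offset

# A dark patch on a bright field is picked out as foreground.
img = np.full((32, 32), 200.0)
img[10:14, 10:14] = 50.0
binary = adaptive_threshold(img)
```

Because the threshold adapts to each neighbourhood, the same parameters tolerate the illumination gradients typical of iris images better than a single global threshold would.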
A novel approach to iris recognition is proposed in this paper. It differs from traditional iris recognition systems in that it generates a one-dimensional iris signature that is translation-, rotation-, illumination- and scale-invariant. The Du measure is used as the matching mechanism, and the approach generates the most probable matches instead of only the best match. The merit of this method is that it allows users to enroll with, or to identify, poor-quality iris images that would be rejected by other methods. In this way, users could potentially identify an iris image through another level of analysis. Another merit of this approach is that it could potentially improve iris identification efficiency: the system only needs to store a one-dimensional signal, and no circular rotation is needed in the matching process. This means that matching could be much faster.
Many future systems will be smaller and smarter. The technology to support these future systems will require an emphasis in nanoscience research in materials and in nano-techniques for advanced electronics. An important catalyst for this research will be collaboration among universities, government, and industry.
Most digital signal processing methods have an underlying assumption of regularly-spaced data samples. However, many real-world data collection techniques generate data sets which are not sampled at evenly-spaced intervals, or which may have significant data dropout problems. Therefore, a method of interpolation is needed to model the signal on an even grid of arbitrary granularity. We propose the interpolation of nonuniformly sampled fields using a least-squares fit of the data to a wavelet basis in a multiresolution setting.
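The least-squares projection step can be sketched with any linear basis. In the sketch below, a cubic polynomial design matrix stands in for the wavelet basis, purely for illustration; the irregular sample locations and the target grid are invented.

```python
import numpy as np

def lstsq_resample(x, y, basis, grid):
    """Least-squares fit of irregular samples (x, y) to a linear basis,
    evaluated on a regular grid of arbitrary granularity."""
    coef, *_ = np.linalg.lstsq(basis(x), y, rcond=None)
    return basis(grid) @ coef

# Cubic polynomial design matrix; a stand-in for the wavelet basis.
poly = lambda t: np.vander(t, 4)

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 50)        # nonuniform sample locations
y = x ** 2                           # signal observed at those locations
grid = np.linspace(0.0, 1.0, 11)     # even grid to resample onto
est = lstsq_resample(x, y, poly, grid)
```

Swapping in a wavelet basis only changes the `basis` function handed to the solver; the multiresolution setting fits the coefficients scale by scale rather than in one dense solve.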
Though wavelets have been used extensively for image coding, compression, and denoising, they are also gaining popularity in the geophysical sciences as an analysis tool. Capitalizing on the wavelet's relatively tight localization in both the time and frequency domains, the wavelet transform of a data field can yield significant information about the localized frequency content of the underlying process. As an analysis technique, though, standard wavelet transforms suffer from some of the real-world constraints that data sets often impose. Primary among these is the fact that the data set is not typically supported on a standard rectangular grid. We investigate the application of a boundary-compensated wavelet transform supported on an arbitrarily-shaped region. Applications to satellite-based altimetry of ocean basins are presented.