Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a growing presence of 'digital cameras' aimed primarily at the home market. This latter category is not considered here. The term 'computer camera' herein means one in which the computer (and software) has low-level control of the CCD clocking. Such cameras can often satisfy some of the more demanding machine vision tasks, in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs that offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera these numbers are guaranteed to match, which removes some measurement inaccuracies and leads to higher effective resolution.
Vision system productivity depends in part on maintenance and change-over time. Optics designed for the on-line environment can cost less over their lifetime than simple off-the-shelf hardware. We discuss ways to design vision optics that reduce maintenance and facilitate change-over. In a flexible manufacturing environment, one production line may produce several different products each week. A fixed optical system cannot make the required measurements, while adjustable optics are not robust and require long setup times. We have built vision system optics that use 'optical tooling' as an alternative. Optical components are pre-aligned on interchangeable tooling plates. Changeover is accomplished by bolting on a new plate, without further alignment. Pre-configured optical tooling is as robust as a fixed installation, and there is a minimum number of adjustable components that can be misaligned. Optical tooling makes vision optics more compatible with flexible manufacturing.
This paper presents a test of a vision system's capability to measure consistently the color of blotches and other distinctly colored regions on a flat surface. The system consists of an off-the-shelf color camera, a color-corrected halogen illumination source, and a personal computer equipped with a frame grabber. Correction algorithms to compensate for spatial variations in illumination and sensitivity differences between pixels and the RGB channels are employed along with a calibration procedure to assure consistent measurements of the differently colored regions over a number of samples. A custom conversion of RGB to XYZ color space based on a linear-least squares fit within a certain portion of the color space was also explored to determine its suitability.
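The abstract does not give the fitting procedure in detail, so the following is only a minimal sketch of one common way to realize such a conversion: a 3 by 3 matrix M is fit by linear least squares (normal equations) so that XYZ is approximately M times RGB over a set of calibration patches. The patch values, function names, and the restriction to a 3 by 3 linear map are illustrative assumptions, not the authors' implementation.

```cpp
#include <array>
#include <cstdio>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Invert a 3x3 matrix by the adjugate/cofactor method.
static Mat3 invert3x3(const Mat3& m) {
    double det =
        m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
        m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
        m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    Mat3 inv{};
    inv[0][0] =  (m[1][1] * m[2][2] - m[1][2] * m[2][1]) / det;
    inv[0][1] = -(m[0][1] * m[2][2] - m[0][2] * m[2][1]) / det;
    inv[0][2] =  (m[0][1] * m[1][2] - m[0][2] * m[1][1]) / det;
    inv[1][0] = -(m[1][0] * m[2][2] - m[1][2] * m[2][0]) / det;
    inv[1][1] =  (m[0][0] * m[2][2] - m[0][2] * m[2][0]) / det;
    inv[1][2] = -(m[0][0] * m[1][2] - m[0][2] * m[1][0]) / det;
    inv[2][0] =  (m[1][0] * m[2][1] - m[1][1] * m[2][0]) / det;
    inv[2][1] = -(m[0][0] * m[2][1] - m[0][1] * m[2][0]) / det;
    inv[2][2] =  (m[0][0] * m[1][1] - m[0][1] * m[1][0]) / det;
    return inv;
}

// Fit a 3x3 matrix M such that xyz ~= M * rgb in the least-squares sense,
// given paired calibration measurements.  Normal equations:
// M = (sum xyz*rgb^T) * (sum rgb*rgb^T)^-1.
Mat3 fitRgbToXyz(const std::vector<Vec3>& rgb, const std::vector<Vec3>& xyz) {
    Mat3 a{};  // sum of xyz * rgb^T
    Mat3 b{};  // sum of rgb * rgb^T
    for (size_t k = 0; k < rgb.size(); ++k)
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) {
                a[i][j] += xyz[k][i] * rgb[k][j];
                b[i][j] += rgb[k][i] * rgb[k][j];
            }
    Mat3 binv = invert3x3(b);
    Mat3 m{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                m[i][j] += a[i][k] * binv[k][j];
    return m;
}

int main() {
    // Hypothetical calibration patches: camera RGB vs. reference XYZ values.
    std::vector<Vec3> rgb = {{0.9, 0.1, 0.1}, {0.1, 0.8, 0.2}, {0.2, 0.1, 0.9}, {0.5, 0.5, 0.5}};
    std::vector<Vec3> xyz = {{0.41, 0.21, 0.02}, {0.35, 0.70, 0.12}, {0.18, 0.07, 0.93}, {0.47, 0.50, 0.53}};
    Mat3 m = fitRgbToXyz(rgb, xyz);
    for (int i = 0; i < 3; ++i)
        std::printf("%8.4f %8.4f %8.4f\n", m[i][0], m[i][1], m[i][2]);
}
```

Restricting the fit to a particular portion of color space, as the paper describes, amounts to limiting the calibration patches used to build the normal equations to that region.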
The traditional method of improving the depth-of-field of an imaging system is to stop down the aperture. Light gathering power and resolution are lost. We present a modified CCD camera system which achieves exceptionally high depth-of-field without stopping down the aperture. A special phase mask placed near the lens modifies the incoming wavefront making it nearly invariant to the state of focus. The resulting image has reduced contrast but still contains complete object information. Straightforward post-processing is then used to regain image contrast. In effect, we trade image SNR instead of aperture size to obtain high depth-of-field.
Non-contact thermal measurement techniques such as on-line thermography can be valuable tools for process monitoring and quality control. Many manufacturing processes such as welding or casting are thermally driven, or exhibit strong correlation between thermal conditions and product characteristics. Infrared inspection of self-emitted radiation can provide valuable insight into process parameters not routinely observed yet which dominate product quality. Recent advances in IR system technology coupled with significant reductions in cost are making thermography a viable tool for such on-line monitoring. This paper describes the implementation of a novel rugged thermal imaging system based on a dual-wavelength technique for a large intelligent process monitoring project. The objective of the portion described here is to deploy a non-contact means of monitoring tooling surface thermal conditions. The technical and practical challenges of developing such a non-contact thermal measurement system for continuous inspection in an industrial environment are discussed, and methods of resolving them are presented. These challenges include implementation of a wavelength filter system for quantitative determination of the surface temperature. Also, unlike visible-spectrum machine vision applications, the surface emissivity of the test object as well as reflections from other IR emitters must be taken into account when measuring infrared radiation from a part or process. However, the primary issues that must be addressed prior to deployment are compensation for ambient temperature conditions and optimization of the calibration process. Other issues center on remote camera control, image acquisition, data synchronization, and data interpretation. An example application of this system, along with preliminary data, is described.
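The paper's dual-wavelength technique is not spelled out in the abstract. As a hedged illustration of why two wavelengths help, the sketch below implements the textbook two-color ratio method under the Wien approximation: for a gray body the emissivity cancels in the radiance ratio, so temperature can be recovered without knowing the emissivity. The wavelengths and the gray-body assumption are placeholders, not the authors' calibration.

```cpp
#include <cmath>
#include <cstdio>

// Second radiation constant, in micrometer-kelvin.
constexpr double C2_UM_K = 14388.0;

// Two-color (ratio) pyrometry under the Wien approximation.
// Spectral radiance: L(lambda,T) = eps * C1 * lambda^-5 * exp(-C2/(lambda*T)).
// For a gray body (eps1 == eps2) the emissivity cancels in the ratio
// R = L1/L2, and the temperature can be solved in closed form.
double ratioTemperatureK(double radianceRatio, double lambda1um, double lambda2um) {
    double num = C2_UM_K * (1.0 / lambda2um - 1.0 / lambda1um);
    double den = std::log(radianceRatio * std::pow(lambda1um / lambda2um, 5.0));
    return num / den;
}

int main() {
    // Hypothetical narrow-band center wavelengths (micrometers).
    double lambda1 = 3.9, lambda2 = 4.8;

    // Forward-model the radiance ratio at 800 K and check that we recover it.
    double T = 800.0;
    double L1 = std::pow(lambda1, -5.0) * std::exp(-C2_UM_K / (lambda1 * T));
    double L2 = std::pow(lambda2, -5.0) * std::exp(-C2_UM_K / (lambda2 * T));
    double ratio = L1 / L2;

    std::printf("recovered temperature: %.1f K\n", ratioTemperatureK(ratio, lambda1, lambda2));
}
```

Reflections from other emitters and non-gray emissivity break the assumption that the ratio depends only on temperature, which is why the abstract emphasizes ambient compensation and calibration.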
Processing of real-time x-ray images of randomly oriented and touching pistachio nuts for product inspection is considered. We describe the image processing used to isolate individual nuts (segmentation). This involves a new watershed transform algorithm. Segmentation results on approximately 3000 x-ray (film) and real time x-ray (linescan) nut images were excellent (greater than 99.9% correct). Initial classification results on film images are presented that indicate that the percentage of infested nuts can be reduced to 1.6% of the crop with only 2% of the good nuts rejected; this performance is much better than present manual methods and other automated classifiers have achieved.
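The paper's new watershed transform algorithm is not reproduced here. A common precursor when separating touching objects is to compute a distance transform of the binary object mask and use its regional maxima as watershed markers; the following is a minimal sketch of a two-pass city-block distance transform under that generic assumption.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Two-pass city-block (L1) distance transform of a binary mask.
// Foreground pixels (value 1) receive their distance to the nearest
// background pixel present in the image; background stays 0.  The regional
// maxima of this map are often used as markers for a subsequent watershed
// that splits touching objects.
std::vector<int> distanceTransform(const std::vector<int>& mask, int w, int h) {
    const int INF = w + h;  // larger than any possible L1 distance
    std::vector<int> d(mask.size());
    for (size_t i = 0; i < mask.size(); ++i) d[i] = mask[i] ? INF : 0;

    // Forward pass: propagate distances from the top-left.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int& v = d[y * w + x];
            if (x > 0) v = std::min(v, d[y * w + x - 1] + 1);
            if (y > 0) v = std::min(v, d[(y - 1) * w + x] + 1);
        }
    // Backward pass: propagate distances from the bottom-right.
    for (int y = h - 1; y >= 0; --y)
        for (int x = w - 1; x >= 0; --x) {
            int& v = d[y * w + x];
            if (x + 1 < w) v = std::min(v, d[y * w + x + 1] + 1);
            if (y + 1 < h) v = std::min(v, d[(y + 1) * w + x] + 1);
        }
    return d;
}

int main() {
    // Two touching "nuts" as a crude binary blob, 12 x 5.
    const int w = 12, h = 5;
    std::vector<int> mask = {
        0,1,1,1,0,0,0,0,1,1,1,0,
        1,1,1,1,1,0,0,1,1,1,1,1,
        1,1,1,1,1,1,1,1,1,1,1,1,
        1,1,1,1,1,0,0,1,1,1,1,1,
        0,1,1,1,0,0,0,0,1,1,1,0};
    std::vector<int> d = distanceTransform(mask, w, h);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) std::printf("%d ", d[y * w + x]);
        std::printf("\n");
    }
}
```

A flooding watershed seeded at the regional maxima of this map would then assign each foreground pixel to a single nut.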
This paper investigates the ability of a set of rotation-invariant features to classify images of wear particles found in the used lubricating oil of machinery. The rotation-invariant attribute of the features derives from the property that the magnitudes of Fourier transform coefficients do not change with a spatial shift of the input elements. By analyzing individual circular neighborhoods centered at every pixel in an image, local and global texture characteristics of the image can be described. A number of input sequences are formed from the intensities of pixels on concentric rings of various radii measured from the center of each neighborhood. Fourier transforming the sequences generates coefficients whose magnitudes are invariant to rotation. Rotation-invariant features extracted from these coefficients were used to classify wear particle images obtained from a number of different particles captured at different orientations. In an experiment involving images of 6 classes, the circular neighborhood features obtained a 91% recognition rate, which compares favorably to the 76% rate achieved by features of a 6 by 6 co-occurrence matrix.
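A minimal sketch of the feature construction described above, assuming nearest-neighbor sampling on a single ring and a naive DFT; the radius, the number of samples, and the use of one ring (rather than several concentric rings) are illustrative choices, not the paper's settings.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

constexpr double kPi = 3.14159265358979323846;

// Sample image intensities on a circle of radius r (pixels) centered at
// (cx, cy), at n equally spaced angles, using nearest-neighbor sampling.
std::vector<double> ringSamples(const std::vector<unsigned char>& img, int w, int h,
                                int cx, int cy, double r, int n) {
    std::vector<double> s(n);
    for (int k = 0; k < n; ++k) {
        double a = 2.0 * kPi * k / n;
        int x = std::min(std::max((int)std::lround(cx + r * std::cos(a)), 0), w - 1);
        int y = std::min(std::max((int)std::lround(cy + r * std::sin(a)), 0), h - 1);
        s[k] = img[y * w + x];
    }
    return s;
}

// Magnitudes of the DFT of the ring sequence.  Rotating the neighborhood
// circularly shifts the sequence, which only changes coefficient phases,
// so the magnitudes are rotation invariant.
std::vector<double> dftMagnitudes(const std::vector<double>& s) {
    int n = (int)s.size();
    std::vector<double> mag(n);
    for (int u = 0; u < n; ++u) {
        double re = 0.0, im = 0.0;
        for (int k = 0; k < n; ++k) {
            double ang = -2.0 * kPi * u * k / n;
            re += s[k] * std::cos(ang);
            im += s[k] * std::sin(ang);
        }
        mag[u] = std::sqrt(re * re + im * im);
    }
    return mag;
}

int main() {
    // Tiny synthetic 16x16 image with a bright diagonal stripe.
    const int w = 16, h = 16;
    std::vector<unsigned char> img(w * h, 0);
    for (int i = 0; i < w; ++i) img[i * w + i] = 255;

    // Features for the center pixel: one ring of radius 4 sampled at 16 angles.
    std::vector<double> mag = dftMagnitudes(ringSamples(img, w, h, 8, 8, 4.0, 16));
    for (double m : mag) std::printf("%.1f ", m);
    std::printf("\n");
}
```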
In this paper a new approach to image recognition using feature extraction based on a revised nearest-neighbor clustering method is described. A set of candidate feature vectors is formed by using the Gabor transform of the sample image to compute a number of Gabor kernels with different frequency and orientation parameters. Each candidate feature vector is then sequentially input to a self-organizing neural network architecture that is used in conjunction with a revised nearest-neighbor algorithm. The revised nearest-neighbor method assigns an input vector to the nearest prototype (code book vector) when the distance between them is within a preset threshold, and creates a new prototype when the distance is larger than the preset threshold value. The distance computation is conducted by measuring the saliency among the vectors of interest, which differs from traditional norms (e.g., the Euclidean norm). Simulation results show that the proposed method is efficient in extracting feature vectors from images. These feature vectors are representative of the image and can be applied to image identification. The novelty of this work lies in the use of the saliency of feature vectors as the distance norm and a growing-cell self-organizing structure to capture the feature vectors.
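The clustering rule itself is simple to sketch. The code below implements the revised nearest-neighbor rule described above (assign to the nearest prototype if within a preset threshold, otherwise create a new prototype); plain Euclidean distance is used here as a stand-in for the paper's saliency measure, which the abstract does not specify.

```cpp
#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

using Vec = std::vector<double>;

// Placeholder distance.  The paper measures "saliency" between vectors;
// plain Euclidean distance is used here as a stand-in.
double distance(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

// Revised nearest-neighbor clustering: each input is assigned to the
// nearest prototype if the distance is within `threshold`, otherwise it
// becomes a new prototype (code book vector).
struct Clusterer {
    double threshold;
    std::vector<Vec> prototypes;

    // Returns the index of the prototype the input was assigned to.
    size_t add(const Vec& v) {
        size_t best = 0;
        double bestDist = std::numeric_limits<double>::max();
        for (size_t i = 0; i < prototypes.size(); ++i) {
            double d = distance(v, prototypes[i]);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        if (prototypes.empty() || bestDist > threshold) {
            prototypes.push_back(v);          // grow a new cell
            return prototypes.size() - 1;
        }
        return best;
    }
};

int main() {
    Clusterer c{1.0, {}};
    std::vector<Vec> inputs = {{0.0, 0.0}, {0.2, 0.1}, {5.0, 5.0}, {5.1, 4.9}, {0.1, -0.2}};
    for (const Vec& v : inputs)
        std::printf("assigned to prototype %zu\n", c.add(v));
    std::printf("total prototypes: %zu\n", c.prototypes.size());
}
```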
Region segmentation provides a valid information checkpoint and background knowledge for higher-level machine vision tasks that rely on fundamental image processing and analysis results such as segmentation. Our study of segmenting regions of interest in range images shows that knowledge of local surface geometry is the key to successfully solving segmentation problems. In this paper, three topics are presented and experimental results are shown: (1) segmentation based on the morphological watershed algorithm; (2) a multiscale approach using morphology; and (3) a segmentation method based on surface normals. The examples used in this study are range images taken by a laser radar imager. The applications are focused on manufacturing automation and inspection.
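As a small illustration of the surface-normal ingredient, the sketch below estimates per-pixel normals of a range image from central differences; grouping pixels with similar normals, or feeding the normal field to a watershed, is not shown. The synthetic tilted-plane input is an assumption for demonstration only.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

// Estimate a unit surface normal at each interior pixel of a range image
// z(x, y) using central differences: the tangents (1, 0, dz/dx) and
// (0, 1, dz/dy) give the normal (-dz/dx, -dz/dy, 1), which is normalized.
std::vector<Vec3> surfaceNormals(const std::vector<double>& z, int w, int h) {
    std::vector<Vec3> n(z.size(), {0.0, 0.0, 1.0});
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            double dzdx = (z[y * w + x + 1] - z[y * w + x - 1]) * 0.5;
            double dzdy = (z[(y + 1) * w + x] - z[(y - 1) * w + x]) * 0.5;
            double len = std::sqrt(dzdx * dzdx + dzdy * dzdy + 1.0);
            n[y * w + x] = {-dzdx / len, -dzdy / len, 1.0 / len};
        }
    return n;
}

int main() {
    // Synthetic 8x8 range image: a tilted plane z = 0.5 * x, so the normal
    // field is constant and the plane would segment as a single region.
    const int w = 8, h = 8;
    std::vector<double> z(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) z[y * w + x] = 0.5 * x;

    std::vector<Vec3> n = surfaceNormals(z, w, h);
    Vec3 c = n[4 * w + 4];
    std::printf("normal at center: (%.3f, %.3f, %.3f)\n", c.x, c.y, c.z);
}
```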
There are two crucial, complementary issues faced during the design and implementation of practically any image processing library beyond the simplest. The first is the ability to represent a variety of image types, the discriminating feature typically being the pixel type, e.g., binary, short integer, long integer, or floating point. The second issue is the implementation of image processing algorithms that can operate on each of the supported image representations. In many traditional library designs this leads to reimplementation of the same algorithm many times, once for each possible image representation. Some attempts to alleviate this problem introduce elaborate schemes of dynamic pixel representation and registration. This results in a single algorithm implementation; however, due to the dynamic pixel registration, the efficiency of these implementations is poor. In this paper, we investigate the use of parameterized algorithms and the design issues involved in implementing them in C++. We permit a single expression of the algorithm to be used with any concrete representation of an image. Advanced features of C++ and object-oriented programming allow us to use static pixel representations, where pixel types are resolved at compile time instead of run time. This approach leads to very flexible and efficient implementations. We gain both advantages: a single algorithm implementation for numerous image representations, and the best possible speed of execution.
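A minimal sketch of the kind of design described above: the algorithm is written once as a function template, and the pixel type is resolved at compile time, so no per-pixel dynamic dispatch is needed. The Image class and threshold function names are illustrative; they are not the authors' library API.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Minimal image class parameterized by pixel type.  The concrete pixel
// representation is fixed at compile time, so there is no per-pixel
// dynamic dispatch.
template <typename Pixel>
struct Image {
    int width = 0, height = 0;
    std::vector<Pixel> data;
    Image(int w, int h) : width(w), height(h), data((size_t)w * h) {}
    Pixel& at(int x, int y) { return data[(size_t)y * width + x]; }
    const Pixel& at(int x, int y) const { return data[(size_t)y * width + x]; }
};

// A single expression of the algorithm usable with any pixel type:
// binary thresholding written once, instantiated per representation.
template <typename Pixel>
Image<uint8_t> threshold(const Image<Pixel>& in, Pixel level) {
    Image<uint8_t> out(in.width, in.height);
    for (int y = 0; y < in.height; ++y)
        for (int x = 0; x < in.width; ++x)
            out.at(x, y) = in.at(x, y) >= level ? 1 : 0;
    return out;
}

int main() {
    Image<uint8_t> byteImg(4, 4);
    Image<float> floatImg(4, 4);
    byteImg.at(1, 1) = 200;
    floatImg.at(2, 2) = 0.75f;

    // The same source code serves both representations; the compiler
    // generates a specialized version for each.
    Image<uint8_t> a = threshold(byteImg, (uint8_t)128);
    Image<uint8_t> b = threshold(floatImg, 0.5f);
    std::printf("%d %d\n", a.at(1, 1), b.at(2, 2));
}
```

The compiler instantiates a specialized version of the same source for each pixel representation, which is what gives the combination of a single implementation and full execution speed.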
The paper presents a novel three-dimensional network and its application to pattern analysis. This is a multistage architecture which investigates partial correlations between structural image components. Initially the image is partitioned to be processed in parallel channels. In each channel, the structural components are transformed and subsequently separated according to their informational activity, then mixed with the components from other channels for further processing. The output is represented as a pattern vector whose components are computed one at a time to allow the quickest possible response. The paper presents an algorithm applied to facial image decomposition. The input gray-scale image is transformed so that each pixel contains information about the spatial structure of its neighborhood. A three-level representation of the gray-scale image is used so that each pixel contains the maximum amount of structural information. The most correlated information is extracted first, making the algorithm tolerant to minor structural changes.
The SKIPSM finite-state machine image processing paradigm can be extended to 3-dimensional (and to n-dimensional) image processing in a very straightforward way. Two-dimensional SKIPSM involves an R-machine (row machine) and a C-machine (column machine) in sequence (in either order). Three-dimensional SKIPSM uses what will be called X-machines, Y-machines, and Z-machines (in any order). This means that large 3-D structuring elements can be applied to 3-D images in a single scan through the image, with three lookup-table accesses per volume element, and no other operations, regardless of the size of the structuring element. It is even possible to apply more than one 3-D structuring element simultaneously, with no increase in execution time. For binary erosion, which has interesting applications to the 3-D packing problem, many of the same lookup tables used for 2-D erosion can be used for 3-D erosion. This implies that the same software programs used to create these 2-D lookup tables can be used for 3-D tables, so that no new tools are required. For 3-D dilation, some changes are required, but all the tables needed can be created in a routine way from the corresponding 3-D erosion tables. A brief discussion of the use of SKIPSM for 3-D operations other than binary erosion and dilation is also included.
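The SKIPSM lookup tables themselves are not given in the abstract, so the sketch below only illustrates the separated, single-scan structure for erosion by a k by k by k cube: a one-dimensional finite-state machine whose state is the current run length of foreground samples is applied along X, then Y, then Z. The explicit run-length state stands in for the paper's lookup-table encoding, and the cubic structuring element is an illustrative choice.

```cpp
#include <cstdio>
#include <vector>

// One-dimensional erosion by a centered segment of length k, implemented as
// a finite-state machine whose state is the current run length of foreground
// samples (capped at k).  One pass, constant work per sample, independent of k.
void erode1D(const std::vector<int>& in, std::vector<int>& out,
             int start, int stride, int count, int k) {
    int r = (k - 1) / 2;      // center offset of the structuring segment
    int state = 0;
    for (int j = 0; j < count; ++j) {
        int v = in[start + j * stride];
        state = v ? (state < k ? state + 1 : k) : 0;
        out[start + j * stride] = 0;                          // default: eroded away
        if (state >= k && j - r >= 0) out[start + (j - r) * stride] = 1;
    }
}

// Erode a w x h x d binary volume by a k x k x k cube: separate 1-D passes
// along X, then Y, then Z (any order gives the same result).
std::vector<int> erodeCube(const std::vector<int>& vol, int w, int h, int d, int k) {
    std::vector<int> a(vol.size()), b(vol.size());
    for (int z = 0; z < d; ++z)                      // X pass
        for (int y = 0; y < h; ++y)
            erode1D(vol, a, (z * h + y) * w, 1, w, k);
    for (int z = 0; z < d; ++z)                      // Y pass
        for (int x = 0; x < w; ++x)
            erode1D(a, b, z * h * w + x, w, h, k);
    for (int y = 0; y < h; ++y)                      // Z pass
        for (int x = 0; x < w; ++x)
            erode1D(b, a, y * w + x, w * h, d, k);
    return a;
}

int main() {
    // A 7x7x7 all-foreground volume: erosion by a 3x3x3 cube keeps the
    // 5x5x5 core and removes the one-voxel outer shell.
    const int w = 7, h = 7, d = 7, k = 3;
    std::vector<int> vol(w * h * d, 1);
    std::vector<int> e = erodeCube(vol, w, h, d, k);
    int kept = 0;
    for (int v : e) kept += v;
    std::printf("foreground voxels after erosion: %d (expect 125)\n", kept);
}
```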
There is a clear need for integrated and affordable machine vision systems in line-scan applications, e.g. for width measurement and defect detection. These applications require sensor-like solutions in a price range not achievable with traditional machine vision systems consisting of a line-scan camera, host computer, frame grabber and possibly one or more dedicated processing boards. Since an integrated solution would make a separate host computer and associated boards unnecessary, we set out to study the feasibility of integrated machine vision technology for such applications. Analyses of several potential applications were used to define the requirements for an integrated line-scan camera-based vision system. In order to demonstrate the feasibility of the concept, a research prototype was designed based on these requirements. This is a complete machine vision system with a camera front end, fast hardware for corrections, the necessary logic, and a computer for higher-level data analysis and I/O. A 4096-pixel CCD array followed by 20 MHz, 10-bit A/D conversion forms the front end. Illumination correction, geometric correction, 7 by 7 convolution, multilevel pixelwise thresholding and histogramming are all implemented with fast erasable programmable logic device (EPLD) circuits. A compact PC/104 computer with a 486 processor takes care of the high-level processing and control. Communication facilities include 12 TTL-level I/O lines, a serial line and a video output.
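Illumination correction is one of the front-end operations listed above. The following is a generic sketch of pixelwise flat-field correction using a dark reference and a white reference scan; the reference data and the 8-bit arithmetic are assumptions for illustration, not the EPLD implementation.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Pixelwise flat-field correction for a line-scan sensor: each pixel gets
// its own dark offset and gain derived from a white reference scan,
// compensating illumination fall-off and pixel-to-pixel sensitivity
// differences.
std::vector<unsigned char> flatField(const std::vector<unsigned char>& raw,
                                     const std::vector<unsigned char>& dark,
                                     const std::vector<unsigned char>& white,
                                     int target = 255) {
    std::vector<unsigned char> out(raw.size());
    for (size_t i = 0; i < raw.size(); ++i) {
        int denom = std::max(1, (int)white[i] - (int)dark[i]);
        int v = ((int)raw[i] - (int)dark[i]) * target / denom;
        out[i] = (unsigned char)std::min(255, std::max(0, v));
    }
    return out;
}

int main() {
    // One hypothetical scan line of 8 pixels with uneven illumination.
    std::vector<unsigned char> dark  = {5, 5, 6, 5, 5, 6, 5, 5};
    std::vector<unsigned char> white = {250, 240, 220, 200, 200, 220, 240, 250};
    std::vector<unsigned char> raw   = {128, 123, 113, 103, 103, 113, 123, 128};
    std::vector<unsigned char> c = flatField(raw, dark, white);
    for (unsigned char v : c) std::printf("%d ", v);
    std::printf("\n");
}
```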
The morphological image processing operation of binary dilation, as usually defined, cannot be implemented as a single-pass pipelined operation because it is a 'one-pixel-to-many-pixels' operation, whereas pipelining is possible only for 'one-to-one' or 'many-to-one' operations. Fortunately, there is an indirect equivalent (negate-erode-negate) which can be pipelined, and which can therefore be implemented in either hardware or software using the single-pass SKIPSM FSM (finite-state machine) paradigm. The great speed advantage of SKIPSM, which offers execution time independent of structuring element size, can therefore be extended to binary dilation also. This paper provides a procedure for incorporating these negations into SKIPSM erosion lookup tables, thus creating dilation lookup tables. It also discusses the relationship between FSM initial conditions and image boundary conditions, and 180-degree structuring element rotation. Examples are included.
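The identity behind the negate-erode-negate equivalence is easy to demonstrate. The sketch below checks, on a small 1-D example, that dilation equals the complement of the erosion of the complement (for a symmetric structuring element), using naive loops rather than the SKIPSM lookup tables; note that the boundary value passed to the erosion matters, echoing the paper's point about FSM initial conditions and image boundary conditions.

```cpp
#include <cstdio>
#include <vector>

using Img = std::vector<int>;

Img negate(const Img& a) {
    Img r(a.size());
    for (size_t i = 0; i < a.size(); ++i) r[i] = 1 - a[i];
    return r;
}

// Naive 1-D erosion by a centered segment of length k; samples outside the
// image are taken to have value `pad` (the boundary condition matters for
// the duality demonstrated below).
Img erode(const Img& a, int k, int pad) {
    int r = (k - 1) / 2, n = (int)a.size();
    Img out(n, 0);
    for (int i = 0; i < n; ++i) {
        int all = 1;
        for (int j = i - r; j <= i + r; ++j) {
            int v = (j < 0 || j >= n) ? pad : a[j];
            if (v == 0) { all = 0; break; }
        }
        out[i] = all;
    }
    return out;
}

// Naive 1-D dilation by the same symmetric segment (outside = background).
Img dilate(const Img& a, int k) {
    int r = (k - 1) / 2, n = (int)a.size();
    Img out(n, 0);
    for (int i = 0; i < n; ++i)
        for (int j = i - r; j <= i + r; ++j)
            if (j >= 0 && j < n && a[j] == 1) { out[i] = 1; break; }
    return out;
}

int main() {
    Img a = {0, 0, 1, 1, 0, 0, 0, 1, 0, 0};
    // Duality: dilation(A) == NOT( erosion( NOT A ) ), with the complemented
    // image padded with foreground at the boundary.
    Img d1 = dilate(a, 3);
    Img d2 = negate(erode(negate(a), 3, 1));
    for (int v : d1) std::printf("%d ", v);
    std::printf("  (direct dilation)\n");
    for (int v : d2) std::printf("%d ", v);
    std::printf("  (negate-erode-negate)\n");
}
```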
One of the aims of industrial machine vision is to develop computer and electronic systems intended to replace human vision in the quality control of industrial production. In this paper we discuss the development of a new design environment for real-time defect detection using a reconfigurable FPGA and a DSP processor mounted inside a DALSA programmable CCD camera. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The system is targeted at web inspection but has the potential for broader application areas. We describe and show test results of the prototype system board, mounted inside a DALSA camera, and discuss some of the algorithms currently simulated and implemented for web inspection applications.
Low-cost PC-based machine vision systems have become more common due to faster processing capabilities and the availability of compatible high-speed image acquisition and processing hardware. One development which is likely to have a very favorable impact on this trend is the enhanced multimedia capability present in new processor chips such as the Intel MMX and Cyrix M2 processors. Special instructions are provided with this type of hardware which, combined with a SIMD parallel processing architecture, provide a substantial speed improvement over more traditional processors. Eight simultaneous byte or four double-byte operations are possible. The new instructions, such as multiply-and-accumulate, are similar to those provided by DSP chips and are quite useful for linear processing operations like convolution. However, only four pixels may be processed simultaneously because of the limited dynamic range of byte data. Given the inherent limitations with respect to looping in SIMD hardware, nonlinear operations such as erosion and dilation would seem to be difficult to implement. However, special instructions are available for the required operations. Benchmarks for a number of image-processing operations are provided in the paper to illustrate the advantages of the new multimedia extensions for vision applications.
The earlier papers on SKIPSM (separated-kernel image processing using finite state machines) concentrated mainly on implementations using pipelined hardware. Because of the potential for significant speed increases, the technique has even more to offer for software implementations. However, the gigantic structuring elements (e.g., 51 by 51 in one pass) readily available in binary morphology using SKIPSM are not practical in gray-level morphology. Nevertheless, useful structuring element sizes can be achieved. This paper describes two such applications: dilation with a 7 by 7 square and a 7 by 7 octagon. Previous 2-D SKIPSM implementations had one row machine and one column machine. Two of the implementations described here follow this pattern, but the other has four machines: row, column, and the two 45-degree diagonals. In operation, all of these are one-pass algorithms: The next pixel is 'fetched' from the input device, the two (or four) machines are updated in turn, and the resulting output pixel is written to the output device. All neighborhood information needed for processing is encoded in the state vectors of the finite-state machines. Therefore, no intermediate image stores are needed. Furthermore, even the input and output image stores can be eliminated if the image processor can keep up with the input pixel rate. Comparisons are provided between these finite-state-machine implementations and conventional implementation of the 2-step and 4-step decompositions, all based on the same structuring elements.
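A minimal sketch of the separated structure for the 7 by 7 flat square: a one-dimensional maximum filter is applied along rows and then along columns. The explicit window maximum used here does O(k) work per pixel, whereas the paper's finite-state machines carry the needed neighborhood information in their state vectors and do constant work per pixel; the octagonal structuring element additionally requires the two diagonal passes, which are not shown.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

using Img = std::vector<unsigned char>;

// 1-D gray-scale dilation by a centered flat segment of length k along a
// strided sequence: each output sample is the maximum of the k input
// samples in its window (samples outside the image are ignored).
void dilate1D(const Img& in, Img& out, int start, int stride, int count, int k) {
    int r = (k - 1) / 2;
    for (int i = 0; i < count; ++i) {
        unsigned char m = 0;
        for (int j = std::max(0, i - r); j <= std::min(count - 1, i + r); ++j)
            m = std::max(m, in[start + j * stride]);
        out[start + i * stride] = m;
    }
}

// Gray-scale dilation by a flat k x k square, separated into a row pass
// followed by a column pass.
Img dilateSquare(const Img& img, int w, int h, int k) {
    Img tmp(img.size()), out(img.size());
    for (int y = 0; y < h; ++y) dilate1D(img, tmp, y * w, 1, w, k);   // rows
    for (int x = 0; x < w; ++x) dilate1D(tmp, out, x, w, h, k);       // columns
    return out;
}

int main() {
    // 9x9 image with a single bright pixel: dilation by a 7x7 square
    // spreads it into a 7x7 bright block.
    const int w = 9, h = 9;
    Img img(w * h, 10);
    img[4 * w + 4] = 200;
    Img d = dilateSquare(img, w, h, 7);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) std::printf("%3d ", d[y * w + x]);
        std::printf("\n");
    }
}
```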
SKIPSM (separated-kernel image processing using finite state machines) is a technique for implementing large-kernel binary- morphology operators and many other operations. While earlier papers on SKIPSM concentrated mainly on implementations using pipelined hardware, there is considerable scope for achieving major speed improvements in software systems. Using identical control software, one-pass binary erosion and dilation structuring elements (SEs) ranging from the trivial (3 by 3) to the gigantic (51 by 51, or even larger), are readily available. Processing speed is independent of the size of the SE, making the SKIPSM approach practical for work with very large SEs on ordinary desktop computers. PIP (prolog image processing) is an interactive machine vision prototyping environment developed at the University of Wales Cardiff. It consists of a large number of image processing operators embedded within the standard AI language Prolog. This paper describes the SKIPSM implementation of binary morphology operators within PIP. A large set of binary erosion and dilation operations (circles, squares, diamonds, octagons, etc.) is available to the user through a command-line driven dialogue, via pull-down menus, or incorporated into standard (Prolog) programs. Little has been done thus far to optimize speed on this first software implementation of SKIPSM. Nevertheless, the results are impressive. The paper describes sample applications and presents timing figures. Readers have the opportunity to try out these operations on demonstration software written by the University of Wales, or via their WWW home page at http://bruce.cs.cf.ac.uk/bruce/index.html .
A hardware device is presented that converts color to speech for use by the blind and visually impaired. The use of audio tones for conveying the identified colors to the user was investigated but was discarded in favor of direct speech. A unique color-clustering algorithm was implemented using a hardware description language (VHDL), which in turn was used to program an Altera Corporation programmable logic device (PLD). The PLD maps all possible incoming colors into one of 24 color names and outputs an address to a speech device, which in turn plays back one of 24 voice-recorded color names. To the author's knowledge, there are only two such color-to-speech systems available on the market. However, both are designed to operate at a distance of less than an inch from the surface whose color is to be checked. The device presented here uses original front-end optics to increase the range of operation from less than an inch to sixteen feet and greater. Because of the increased range of operation, the device can be used not only for color identification, but also as a navigation aid.
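The 24 color names and the clustering rule used in the device are not given in the abstract. The sketch below shows the generic idea with a handful of hypothetical reference colors: each incoming RGB value is assigned to the nearest named color, a mapping that could be precomputed for every quantized RGB input and stored in a PLD lookup table.

```cpp
#include <cstdio>
#include <limits>
#include <string>
#include <vector>

struct NamedColor { std::string name; int r, g, b; };

// Hypothetical reference colors; the actual device maps to 24 names.
const std::vector<NamedColor> kPalette = {
    {"black", 0, 0, 0},       {"white", 255, 255, 255}, {"red", 200, 30, 30},
    {"green", 30, 160, 60},   {"blue", 40, 60, 200},    {"yellow", 230, 210, 40},
    {"orange", 240, 140, 30}, {"gray", 128, 128, 128},
};

// Classify a measured RGB value to the nearest reference color (squared
// Euclidean distance in RGB space).  In hardware the same mapping can be
// precomputed for every quantized RGB input and burned into a lookup table.
const std::string& classify(int r, int g, int b) {
    size_t best = 0;
    long bestDist = std::numeric_limits<long>::max();
    for (size_t i = 0; i < kPalette.size(); ++i) {
        long dr = r - kPalette[i].r, dg = g - kPalette[i].g, db = b - kPalette[i].b;
        long d = dr * dr + dg * dg + db * db;
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return kPalette[best].name;
}

int main() {
    std::printf("%s\n", classify(220, 150, 50).c_str());  // nearest: "orange"
    std::printf("%s\n", classify(20, 20, 25).c_str());    // nearest: "black"
}
```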
Driving requires two basic visual components: 'visual sensory function' and 'higher order skills.' Among the elderly, it has been observed that when attention must be divided in the presence of multiple objects, attentional skills and relational processes are markedly impaired, along with basic visual sensory function. A high frame rate imaging system was developed to assess the elderly driver's ability to locate and distinguish computer-generated images of vehicles and to determine their direction of motion in a simulated intersection. Preliminary experiments were performed at varying target speeds and angular displacements to study the effect of these parameters on motion perception. Results for subjects in four different age groups, ranging from the mid-twenties to the mid-sixties, show significantly better performance for the younger subjects as compared to the older ones.
Wood edge-glued panels are used extensively in the furniture and cabinetry industries. They are used to make doors, tops, and sides of solid wood furniture and cabinets. Since lightly stained furniture and cabinets are gaining in popularity, there is an increasing demand to color sort the parts used to make these edge-glued panels. The goal of the sorting process is to create panels that are uniform in both color and intensity across their visible surface. If performed manually, the color sorting of edge-glued panel parts is very labor intensive and prone to error. This paper describes a complete machine vision system for performing this sort. This system uses two color line scan cameras for image input and a specially designed custom computing machine to allow real-time implementation. Users define the number of color classes that are to be used. An 'out' class is provided to handle unusually colored parts. The system removes areas of character mark, e.g., knots, mineral streak, etc., from consideration when assigning a color class to a part. The system also includes a better-face algorithm for determining which face of a part is the better one to place on the visible side of the panel. The throughput is two linear feet per second, and only a four-inch between-part spacing is required. This system has undergone extensive in-plant testing and will be commercially available in the very near future. The results of this testing will be presented.
In this paper, we present an original lighting system used for inspecting the surface appearance of 3D reflective products. The most important feature of this system is that it provides an image in which defects appear clearly against the background, wherever the defects are located on the product. The processing applied to this image allows us to compute the defect area and the number of defects.
A generalized application area of machine vision is in the classification of different objects based on specified criteria. Applications of this nature are encountered more and more often in real industrial situations and the need to design robust classification architectures is now being felt more intensely than ever before. In designing such systems, it is being increasingly realized that judicious combination of multiple experts forming an integral configuration can achieve a higher overall performance than any of the individual experts on its own. Many configurations, taking advantage of different individual strengths of different experts, have been investigated. One particular class of structure seeks to exploit the a priori knowledge about the behavior of a particular basic classifier on a particular reference data base and uses that information to form a hierarchical classification structure that treats the structurally similar and dissimilar objects separately. The basic classifier performs an initial separation of the input objects. Based on a priori knowledge, initially separated objects are regrouped to form structurally similar groups, incorporating objects that have a high probability of being confused. A number of such groups having two or three classes in each group can be formed. The structurally dissimilar objects are classified using a generalized classifier. On the other hand, the different groups formed in the previous stage undergo group-wise classification. The final decision of the classifier structure is formed by combining the decisions of the generalized classifier and the specialized group-wise classifiers.
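The individual classifiers are application specific, so the sketch below only shows the two-stage structure described above, with classifiers represented as placeholder functions: a generalized classifier makes the initial decision, and if that decision falls in a group of easily confused classes, a specialized group-wise classifier refines it. The toy one-dimensional classifiers and groupings are assumptions for illustration.

```cpp
#include <cstdio>
#include <functional>
#include <map>
#include <vector>

using Feature = std::vector<double>;
using Classifier = std::function<int(const Feature&)>;  // returns a class label

// Two-stage classification: a generalized classifier makes an initial
// decision; if that decision belongs to a group of structurally similar
// (easily confused) classes, a specialized group-wise classifier refines it.
struct HierarchicalClassifier {
    Classifier generalized;
    std::map<int, int> classToGroup;            // class label -> group id
    std::map<int, Classifier> groupClassifiers;

    int classify(const Feature& f) const {
        int label = generalized(f);
        auto g = classToGroup.find(label);
        if (g == classToGroup.end()) return label;      // structurally dissimilar
        return groupClassifiers.at(g->second)(f);         // group-wise refinement
    }
};

int main() {
    // Toy 1-D example with classes 0..3; classes 1 and 2 are assumed to be
    // frequently confused, so they form group 0 with a finer classifier.
    HierarchicalClassifier hc;
    hc.generalized = [](const Feature& f) { return f[0] < 1.0 ? 0 : (f[0] < 3.0 ? 1 : 3); };
    hc.classToGroup = {{1, 0}, {2, 0}};
    hc.groupClassifiers[0] = [](const Feature& f) { return f[0] < 2.0 ? 1 : 2; };

    for (double x : {0.5, 1.5, 2.5, 3.5})
        std::printf("x=%.1f -> class %d\n", x, hc.classify({x}));
}
```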
This paper describes four state-of-the-art digital video cameras which provide advanced features that benefit computer image enhancement, manipulation, and analysis. These cameras were designed to reduce the complexity of imaging systems while increasing the accuracy, dynamic range, and detail enhancement of product inspections. Two cameras utilize progressive scan CCD sensors, enabling the capture of high-resolution images of moving objects without the need for strobe lights or mechanical shutters. The second progressive scan camera has an unusually high resolution of 1280 by 1024 pixels and a choice of serial or parallel digital interface for data and control. The other two cameras incorporate digital signal processing (DSP) technology for improved dynamic range, more accurate determination of color, white balance stability, and enhanced contrast of part features against the background. Successful applications and future product development trends are discussed. A brief description of analog and digital image capture devices will address the most common questions regarding interface requirements within a typical machine vision system overview.
The highest barrier to wide-scale implementation of vision systems has been cost. This is closely followed by the level of difficulty of putting a complete imaging system together. As anyone who has ever been in the position of creating a vision system knows, the various bits and pieces supplied by the many vendors are not under any type of standardization control. In short, unless you are an expert in imaging, electrical interfacing, computers, digital signal processing, and high speed storage techniques, you will likely spend more money trying to do it yourself than buying the exceedingly expensive systems available. Another alternative, however, is making headway into the imaging market. The growing investment in highly integrated CMOS-based imagers is addressing both the cost and the system integration difficulties. This paper discusses the benefits gained from CMOS-based imaging, and how these benefits are already being applied.
In the modern era of flexible manufacturing, short production runs and strict quality requirements, flexible, easily trained inspection systems are essential. It is no longer unusual to have lines in which the product changes several times a shift. In such circumstances inspection system setup and retraining times of the order of a couple of minutes or less may be required. There is a large class of assembly and packaging processes which require verification that the correct components are present in the correct locations. In many of these applications the relative proportion of different colors in a particular region can be used as the basis of inspection. Since the color distributions are generally complex and defy simple description, train-by-showing is the only practical solution. The 'minimum description' paradigm, which uses a full 3-dimensional color space without the information loss inherent in color coordinate transformations or separation, provides a key to easy, robust automation of this type of inspection.
Machine vision is an enabling technology for many applications, but 'alignment' is arguably the most useful application class. Alignment is the task of 'finding the position of a landmark or work piece in the electronic image' so that it can be tracked, moved, followed, or otherwise adjusted. Many early alignment applications were in aerospace and defense. The visual 'landmark' they used was a star, a constellation or a laser-designated target. These applications made possible highly stable satellite platforms, accurate antenna aiming, and accurate military ordnance that are simply not possible with any other technology. These 'aiming' applications were extensions of traditional gunsighting techniques and nautical navigation. In factory automation, vision-based alignment continues to play a key role in the semiconductor and electronics manufacturing revolution. Robotic machinery requires precision guidance to mate work pieces (dice and printed wiring boards) with process machinery (bonders, saws, and robots). Machine vision technology arrived just in time to make this possible, and new developments continue to improve precision and productivity in this area. New alignment applications are emerging in unexpected areas, such as the automotive service garage. This paper describes a new automotive service application for vehicle wheel alignment. Two machine vision cameras measure the position and attitude of four wheel-mounted targets as the vehicle rolls and is steered. Six axes of rotation are used to define the locations and orientations of the axles in three-dimensional space. Their values are visually inferred and measured, and their geometric relationships computed. The measurements are compared against the vehicle's ideal design tolerances for adjustment and repair purposes.
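The paper's six-axis formulation and target geometry are not given in the abstract. Purely as a hedged illustration of the kind of geometric relationship computed, the sketch below derives two familiar alignment angles, toe and camber, from a vector along a wheel's measured spin axis in an assumed vehicle coordinate frame (x forward, y left, z up); the conventions, signs, and numbers are illustrative assumptions, not the system's actual algorithm.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

constexpr double kPi = 3.14159265358979323846;

// Toe: angle, measured in the horizontal plane, between the wheel's spin axis
// and the vehicle's lateral (y) axis.  Camber: tilt of the spin axis out of
// the horizontal plane.  `axis` points along the measured spin axis in vehicle
// coordinates (x forward, y left, z up); these conventions are assumptions.
void toeAndCamber(Vec3 axis, double* toeDeg, double* camberDeg) {
    double len = std::sqrt(axis.x * axis.x + axis.y * axis.y + axis.z * axis.z);
    axis = {axis.x / len, axis.y / len, axis.z / len};
    *toeDeg = std::atan2(axis.x, axis.y) * 180.0 / kPi;
    *camberDeg = std::asin(axis.z) * 180.0 / kPi;
}

int main() {
    // Hypothetical spin axis recovered from a wheel-mounted target pose.
    Vec3 axis = {0.02, 0.999, 0.017};
    double toe, camber;
    toeAndCamber(axis, &toe, &camber);
    std::printf("toe = %.2f deg, camber = %.2f deg\n", toe, camber);
}
```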
High speed lathes using diamond cutting tools are used to 'turn' standard and diffractive optical elements without the need to do any finish polishing operations. In order to turn these optical parts, lathes that have positioning accuracy better than 1 millionth of an inch are used with diamond cutting tools to make very precise cuts. A system has been developed and implemented to calibrate the location of the cutting edge of the diamond tool using machine vision. This allows the operator to determine the cutting contact point within plus or minus 20 millionths of an inch and provides shape information about the cutting edge of the tool. The vision system allows calibrated 'setup' of the cutting tool without test cutting of material to determine where the cutting edge of the diamond tool contacts the material.
The paper describes an inspection system architecture for electro-optical module inspection. The manufacturing of electro-optical modules involves assembly of electronic components and optical component mounts on a printed circuit board. The system consists of a controls layer and a machine vision layer. The controls layer manages a seven-axis positioning system. The machine vision layer performs automated inspection of the module at the component level as well as at the joint level. The system has three operating modes. User-friendly graphical interfaces have been developed for system operation. The system can be programmed off-line for positioning information. Once the coordinates for the various inspection locations are programmed, the system automatically learns fuzzy classification prototypes. The prototypes are then utilized for module inspection. A module scanning algorithm has been developed that can detect and identify missing electronic or optical components. A fuzzy average square weighted Euclidean distance classifier has been used for missing-component classification. Results are reported to demonstrate the missing-component algorithm.
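The fuzzy averaging step of the classifier is not detailed in the abstract, so the sketch below shows only the core weighted squared Euclidean distance decision: a feature vector measured in an inspection window is assigned to the nearest learned prototype, for example 'component present' versus 'component missing'. The features, weights, and prototype values are placeholders.

```cpp
#include <cstdio>
#include <limits>
#include <string>
#include <vector>

struct Prototype {
    std::string label;
    std::vector<double> mean;     // prototype feature vector
    std::vector<double> weight;   // per-feature weights (e.g., inverse variance)
};

// Weighted squared Euclidean distance between a feature vector and a prototype.
double weightedSqDistance(const std::vector<double>& f, const Prototype& p) {
    double d = 0.0;
    for (size_t i = 0; i < f.size(); ++i)
        d += p.weight[i] * (f[i] - p.mean[i]) * (f[i] - p.mean[i]);
    return d;
}

// Assign the feature vector to the nearest prototype.
const std::string& classify(const std::vector<double>& f, const std::vector<Prototype>& protos) {
    size_t best = 0;
    double bestD = std::numeric_limits<double>::max();
    for (size_t i = 0; i < protos.size(); ++i) {
        double d = weightedSqDistance(f, protos[i]);
        if (d < bestD) { bestD = d; best = i; }
    }
    return protos[best].label;
}

int main() {
    // Hypothetical two features measured in an inspection window:
    // mean intensity and edge density.
    std::vector<Prototype> protos = {
        {"component present", {0.35, 0.60}, {4.0, 2.0}},
        {"component missing", {0.80, 0.05}, {4.0, 2.0}},
    };
    std::printf("%s\n", classify({0.78, 0.10}, protos).c_str());
}
```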
This paper discusses a new automated image analysis technique for inspecting and monitoring changes in plastic bumper surfaces during the paint-baking process. This new technique produces excellent performance, and is appropriate for on-line production monitoring as well as laboratory analysis. The objective of this work was to develop an accurate method for determining the paint bake time and temperature at which parts had been treated. This task was accomplished using mathematical morphology to extract differentiating features from samples collected at three magnifications and sending these feature-vectors to a back-propagation neural network for classification.
An application case history is detailed where machine vision was implemented in a closed loop process control setting. Specifically, a machine vision system was used to measure dimensions on aseptic packaging. These dimensions were then compared to a nominal value and an error signal was generated based on the measurement deviation from nominal. This error signal was generated by the vision system in the form of a -10 V dc to +10 V dc analog signal proportional to the measurement deviation. In addition, the application made use of low-angle lighting techniques, normalized correlation, and automatic part changeover and calibration.
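The mapping from measurement deviation to the analog error signal is straightforward to sketch: scale the deviation from nominal by a proportional gain and clamp it to the -10 V to +10 V range of the output. The gain and units below are assumptions; only the proportional-plus-clamp form follows from the description above.

```cpp
#include <algorithm>
#include <cstdio>

// Convert a dimensional measurement deviation (measured - nominal) into a
// -10 V .. +10 V analog error signal, clamped at the converter limits.
// voltsPerMm is an application-specific proportional gain.
double errorVoltage(double measuredMm, double nominalMm, double voltsPerMm) {
    double v = (measuredMm - nominalMm) * voltsPerMm;
    return std::min(10.0, std::max(-10.0, v));
}

int main() {
    double nominal = 25.0;                       // nominal dimension, mm
    for (double measured : {24.2, 25.0, 25.7, 28.0})
        std::printf("measured %.1f mm -> %+.2f V\n", measured,
                    errorVoltage(measured, nominal, 5.0));
}
```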
The World Wide Web (WWW) offers the chance to generate a comprehensive collection of reference material, expert systems, programs, databases, image archives, image analysis and other tools for assisting practicing vision systems design engineers. The WWW is also an ideal medium for disseminating material for training students, educating would-be customers and new users of machine vision systems technology. The paper explores the potential for WWW-based material, and highlights some of the resources that are available today. A major purpose of this article, however, is to appeal for help in developing a comprehensive set of design and reference material that will allow highly reliable and accurate visual inspection, monitoring and control systems to be designed in future, with a minimum of effort.