Industry stands to benefit considerably from robots, since they allow tasks to be performed that humans cannot (work in hostile environments, moving heavy loads, etc.) with better accuracy and increased productivity. The same holds for surgical applications, where robots are used as physical interfaces between computerized data and the real world for the execution of complex and/or accurate interventions. In this paper we present a low-cost system whose purpose is to improve safety through redundant control when using robots. The system is based on continuous monitoring of the positions of a collection of passive markers by a set of standard video cameras. Since the task has been planned in advance, the positions of the visible markers in the images can be predicted and verified with limited image processing. Any significant discrepancy between the predicted position of a marker and the observed one results in an emergency stop of the robot. The calibration procedures are presented and experimental results are described.
The paper addresses multi-sensor data fusion for the navigation of a four-wheel vehicle with two driven wheels. The main advantage of such a configuration is its flexibility with respect to free motion and navigation; this advantage is paid for, however, with increased complexity in the dynamic model of the vehicle. The basic sensors of the vehicle comprise a fiber-optic gyro, continuously delivering angular orientation information (namely the angular velocity), and a landmark sensor, delivering global position information at those instants when a landmark is available and within reach of the sensor. Optionally, an undriven measuring wheel, which is therefore not subject to slippage, can be added. The control inputs to the vehicle are taken to be nominally known but noisy, subject to measurement errors and unknown influences. The approach taken in the paper uses Kalman filtering ideas, namely extended Kalman filtering, to implement multi-model filtering. The Kalman filter incorporates the different noisy measurements in order to `fuse' them into one precise position and orientation estimate, copes with the only intermittently available global information, and automatically falls back on dead reckoning where no global information is available. The paper covers the state-space formulation of the problem and discusses the different models needed to describe the different motions. Based on a realistic state-space model, the corresponding Kalman filter is designed and tested with simulated measurement data delivered by a truth-model simulator.
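The predict/update cycle described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's vehicle model: the unicycle motion model, the noise covariances, and the every-two-seconds landmark schedule are invented assumptions.

```python
import numpy as np

# Hedged sketch of EKF-based fusion: gyro/odometry dead reckoning with an
# intermittent landmark position fix.  All numbers are illustrative.

dt = 0.1
Q = np.diag([0.01, 0.01, 0.001])        # process noise (assumed)
R = np.diag([0.05, 0.05])               # landmark measurement noise (assumed)
H = np.array([[1.0, 0.0, 0.0],          # landmark sensor observes (x, y) only
              [0.0, 1.0, 0.0]])

def predict(x, P, v, omega):
    """Dead-reckoning step from speed v and gyro rate omega."""
    theta = x[2]
    x_new = x + dt * np.array([v * np.cos(theta), v * np.sin(theta), omega])
    F = np.array([[1.0, 0.0, -dt * v * np.sin(theta)],
                  [0.0, 1.0,  dt * v * np.cos(theta)],
                  [0.0, 0.0, 1.0]])      # Jacobian of the motion model
    return x_new, F @ P @ F.T + Q

def update(x, P, z):
    """Correction step, run only when a landmark fix z = (x, y) arrives."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    return x_new, (np.eye(3) - K @ H) @ P

x, P = np.zeros(3), np.eye(3)
for k in range(100):
    x, P = predict(x, P, v=1.0, omega=0.1)
    if k % 20 == 19:                     # global fix only every 2 s (assumed)
        x, P = update(x, P, z=H @ x)     # placeholder measurement
uncertainty = np.trace(P)
```

Between fixes the covariance grows (pure dead reckoning); each landmark update shrinks it again, which is the "fusion" behavior the abstract describes.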
To solve some of the problems associated with using conventional ultrasonic range sensors for mobile robots, we propose the use of tri-aural sensors. A tri-aural sensor consists of one ultrasonic transceiver and two additional receivers. With it the robot can determine accurate position estimates, both distance and bearing, of most of the objects in its field of view. The sensor also has object recognition capabilities, making it possible to discriminate between edges and planes. However, this information is available only if the echoes detected by the three receivers can be combined into groups of echoes generated by the same reflector. This problem is very similar to the matching problem in stereo vision. In this paper we compare two matching algorithms: one based on the maximum-likelihood principle and the other on a multi-layer perceptron neural network. To test how these matching algorithms fare in realistic circumstances we have performed a number of simulations. The results are discussed in the final section of the paper.
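Once echoes have been matched to the same reflector, range and bearing follow from simple geometry. The sketch below assumes a layout not specified in the abstract: transceiver at the origin, one extra receiver at (-b, 0), and a point reflector ahead at bearing theta from the forward axis.

```python
import math

# Illustrative tri-aural geometry (not the paper's algorithm): recover range
# and bearing of a point reflector from two matched path lengths.

def range_and_bearing(p_center, p_left, b):
    """p_center: round-trip path via the transceiver (= 2R),
    p_left: transceiver-to-reflector-to-left-receiver path,
    b: receiver baseline (all same units)."""
    R = p_center / 2.0                       # reflector range
    d_left = p_left - R                      # reflector-to-left-receiver leg
    # Law of cosines in the triangle (origin, left receiver, reflector):
    sin_theta = (d_left**2 - R**2 - b**2) / (2.0 * R * b)
    return R, math.asin(sin_theta)

# Forward-simulate a reflector to check the round trip:
b, R_true, theta_true = 0.1, 2.0, 0.3
x, y = R_true * math.sin(theta_true), R_true * math.cos(theta_true)
p_center = 2.0 * R_true
p_left = R_true + math.hypot(x + b, y)
R_est, theta_est = range_and_bearing(p_center, p_left, b)
```

The second receiver gives an independent bearing estimate; comparing the two is what enables the edge/plane discrimination mentioned above.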
Advanced Sensor Concepts, Systems, and Applications I
Simultaneous interrogation of interferometric fiber-optic sensors by means of spectral signal processing is described. The method provides highly accurate nonincremental measurements of the optical path difference for each sensor in the network and consequently of the external parameter affecting that sensor. Combining different multiplexing techniques allows several tens of sensors to be accommodated in a single network.
A fast and accurate optimization of an aspheric lens of large numerical aperture (ALLNA) is useful as a tool in optical simulation software packages. A series of analytical formulas is obtained by solving differential equations. An ALLNA with two aspheric surfaces, and another with one spherical and one aspheric surface, are analytically optimized. A Monte Carlo method and a Newton-Raphson method are adapted to simulate an optical sensor consisting of a transmitting and a receiving sensor head. The transmission characteristics and other optical simulations from these two methods are in good agreement with each other as well as with experimental results. The optimized ALLNA, with minimum power losses and minimum angle of divergence, is significant for high-performance optical collimator lens cap design. Optimization and simulation of the ALLNA provide an important contribution to the design of advanced sensor heads and the development of optical range finders; e.g., the transmission characteristics are a crucial factor for sensor pre-amplifier circuitry design.
A novel tactile sensor that utilizes the deformation of a holographic membrane is presented. The deformation changes the diffraction efficiency, which results in an intensity-coded signal that can be measured very precisely. In this paper we present the principle, the experimental set-up, and the mode of operation of the sensor. The aim of this research is to construct a prototype sensor with an array of 8 × 8 taxels, each 1.5 × 1.5 mm² in size. The total area of the sensor is about 1.5 cm². The sensitivity of each taxel is 0.01 N, with an operating range of 10 N. The final tactile sensor consists of the following components: a holographic membrane (the actual sensor element), a stiff support defining the grasping plane, and a thin film that transmits the grasping forces. To determine the deformation, a laser diode illuminates the holographic sensor membrane and a CCD detects the non-diffracted light.
One of the greatest problems concerning sensor-guided systems is the cost of the sensor systems. In this paper we describe a new idea for a low-cost sensor. This optical system can give information about an obstacle in the field of view (FOV). The simplest version of the sensor only indicates the presence of an obstacle in an area three to four meters in front of it. With this sensor we can supervise a two-dimensional field of several square meters. Possible applications of the sensor include autonomous vehicles and robots.
Intelligent Signal Processing, Multisensor Data Fusion, Control Concepts II
This paper develops constructions of filters for semi-automatic enhancement of still images. It emphasizes the joint application of linear and nonlinear filtering techniques. The decision as to which filter type is activated depends predominantly on perceptual criteria. These judgments are readily described in terms of linguistic variables, i.e., the user is relieved of the tedious task of evaluating noise statistics. The decision process itself is supported by standard fuzzy reasoning techniques, and corrected updates of the crisp outputs are available on a user-defined frame-by-frame or window-by-window basis. Thus, both the handling of the filter tools and the individual filtering depth (gain, bandwidth, etc.) are controlled by a common rule base. Experiments with noisy grayscale images demonstrate that, compared to non-joint methods, the proposed solution results in acceptable scene interpretation and clear identification of originally concealed objects.
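The idea of steering filters by a linguistic variable can be illustrated with a toy rule base. This is not the paper's rule base: the membership functions, the two rules, and the blending of a "linear" and a "nonlinear" filter strength are invented for illustration.

```python
# Toy fuzzy control of filter selection: triangular memberships for the
# linguistic variable "noise level" and two rules whose normalized firing
# strengths blend a linear (smoothing) and a nonlinear (median-like) filter.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def filter_gains(noise):
    low  = tri(noise, -0.5, 0.0, 0.5)    # "noise is low"
    high = tri(noise, 0.0, 1.0, 1.5)     # "noise is high"
    # Rule 1: IF noise is low  THEN favor gentle linear smoothing.
    # Rule 2: IF noise is high THEN favor the nonlinear filter.
    s = low + high
    return low / s, high / s             # normalized firing strengths

linear_gain, nonlinear_gain = filter_gains(0.3)
```

The user supplies only the linguistic judgment ("a bit noisy"), never a noise statistic, which mirrors the relief from noise-statistics evaluation described above.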
This paper presents a novel scheme for minimizing the delay-time errors of the receiver of a time-of-flight distance sensor based on the principle of correlation. The nonrandom delay-time error is reduced by means of an improved reference technique using alternately activated target and reference photodiodes within the same receiver. The accuracy of the sensor is thus considerably increased.
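The correlation principle the receiver relies on can be sketched in a few lines: the echo delay is estimated from the peak of the cross-correlation between the received signal and the emitted reference. The pulse shape, sample rate, and noise level below are illustrative assumptions, not the paper's hardware parameters.

```python
import numpy as np

# Delay estimation by correlation (illustrative parameters throughout).
fs = 1_000_000                      # sample rate in Hz (assumed)
t = np.arange(2000) / fs
# Reference: a short windowed 40 kHz burst (assumed pulse shape).
ref = np.sin(2 * np.pi * 40_000 * t) * np.exp(-((t - 2e-4) / 5e-5) ** 2)

true_delay = 350                    # echo delay in samples
echo = np.zeros_like(ref)
echo[true_delay:] = 0.4 * ref[:-true_delay]        # attenuated, delayed copy
echo += 0.01 * np.random.default_rng(0).normal(size=echo.size)

corr = np.correlate(echo, ref, mode="full")
lag = int(np.argmax(corr)) - (ref.size - 1)        # sample offset of the echo
distance = 0.5 * (lag / fs) * 343.0                # round trip -> range (m)
```

A fixed receiver delay would bias `lag` identically for target and reference paths, which is why alternating both diodes through the same receiver cancels the nonrandom part of the error.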
Three-dimensional (3D) measurement of surfaces is an essential capability for robot systems. A number of approaches in the literature tackle the problem using different methodologies. One approach that has found its way into several applications over the past ten years is the use of structured light. One major advantage of this technique is the ease of detecting the features that are artificially superimposed on the original scene. Another advantage is the possibility of extracting the 3D information from a single image, as opposed to stereo techniques. The light patterns normally used take the form of stripes with equal or varying spacing or, more generally, a grid of vertical and horizontal stripes. One problem that limits the use of this technique is the need for precise reference calibration to compute the transformation matrix that converts image coordinates into true world coordinates via triangulation. This requires knowledge and use of the true world coordinates of a few control points. The process must be carried out with a high degree of accuracy in a controlled environment, or it will affect all subsequent computations. This normally calls for an expensive fixed setup that may be suitable for some industrial applications but is not flexible enough for dynamic robot vision applications. In this paper we propose a method in which depth-from-focus algorithms are employed to compute the transformation matrix, which is then applied to stripe points to form a complete 3D surface solution. Depth-from-focus calibration can be performed beforehand by varying the distance of a test object, so there is no need for any reference control points in the scene. This adds flexibility to the vision system and makes it feasible for robot applications. A full depth-from-focus solution for the whole scene is computationally expensive compared with triangulation of stripe points; the combination therefore gains the advantages of both methods.
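The core triangulation step, once the calibration is known, can be sketched as a ray/plane intersection: a stripe pixel defines a viewing ray, and intersecting it with the known plane of the light stripe yields the 3D point from a single image. The intrinsics and plane parameters below are invented calibration values, not from the paper.

```python
import numpy as np

# Structured-light triangulation sketch (illustrative calibration numbers).
K = np.array([[800.0, 0.0, 320.0],    # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Light-stripe plane n . X = d, expressed in the camera frame (assumed).
n = np.array([1.0, 0.0, 0.5])
d = 0.8

def stripe_point_to_3d(u, v):
    """Back-project pixel (u, v) and intersect the ray with the stripe plane."""
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))   # ray direction, Z = 1
    t = d / (n @ ray)                                  # scale along the ray
    return t * ray                                     # 3-D point on the plane

X = stripe_point_to_3d(400.0, 260.0)
```

What the depth-from-focus stage supplies is precisely this calibration (the camera/plane transformation) without placing control points in the scene.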
With this paper we present a realistic approach to combining two different methods for three-dimensional object recognition, both using structured-light triangulation: (1) the color-coded phase-shift method achieves very good spatial resolution, but due to the finite period length of the object illumination the measured range values are ambiguous; (2) color-coded triangulation yields an unambiguous three-dimensional image, but its spatial resolution is poor compared to the color-coded phase-shift method. Since both methods can generate a three-dimensional object description by processing only a single RGB image, the ranging process may take place in 40 ms. This `real-time' processing of image data remains possible for the desired combination if the color-coded phase-shift method and color-coded triangulation use the same image, and therefore the same structure of illumination. We present the principle of the combined ranging method as well as the derivation of an optimized illumination structure that enables the application of both methods and achieves the best accuracy for the resulting 3D object description. Since both kinds of algorithms use the same image, even moving objects can be ranged using flashlight illumination. Compared with the constituent methods, we expect better accuracy than color-coded triangulation and better reliability than the color-coded phase-shift method, while the image processing can still be done in less than 40 ms.
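The way a coarse absolute measurement resolves the period ambiguity of a fine periodic one can be shown in a few lines. The function and numbers are an illustrative sketch, assuming the coarse error stays below half a fine period; the paper's actual decoding is not specified here.

```python
# Coarse/fine combination sketch: the coarse but unambiguous range
# (color-coded triangulation) selects the integer fringe order k of the
# fine but periodic phase-shift measurement.

def combine(coarse_mm, fine_mm, period_mm):
    """Resolve the ambiguity of fine_mm (known only modulo period_mm)."""
    k = round((coarse_mm - fine_mm) / period_mm)   # integer fringe order
    return fine_mm + k * period_mm

# Fine value repeats every 10 mm; coarse is 2.8 mm off but unambiguous.
depth = combine(coarse_mm=127.0, fine_mm=4.2, period_mm=10.0)
```

The combined result inherits the fine method's resolution and the coarse method's uniqueness, which is the accuracy/reliability trade the abstract claims.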
In the manufacturing of seamless tubes in a rolling process, mandrel bars are used to produce thick-walled tube blanks. The quality of the inner surface of the tube blanks and of the seamless tubes strongly depends on the surface condition of the applied mandrel bar. Up to now, the surface quality has been inspected manually. With automatic inspection the surface can be continuously observed, so that defects can be recognized and corrected at an early stage. An inspection method based on the application of fuzzy logic has been developed; it determines and reports the quality of the mandrel bar's surface, classifying bars as `smooth,' `good,' or `rough.'
This paper discusses print inspection systems based on digital image processing technology. The basic print inspection task is comparing a reference image to a current image. A number of different applications are based on this technology: print quality checking, print process control, order mix-up detection/prevention, and (label-based) sorting. The main process parameters are the position/orientation of the print, the number of colors printed, the complexity of the print, and the set-up of the print inspection system (inline / inline with process control / offline). Important reasons for using print inspection systems are: 100% quality control, an operator-independent quality standard, yield increase, and a reduced need for manpower. Print inspection systems consist of a sensor unit containing the illumination and camera, and a control computer system equipped with the image processing computer (a frame grabber with dedicated image processing hardware and optionally a CPU), the process computer (controlling the complete system and the I/O), and various I/O facilities (screen, keyboard, operator interface, digital I/O). Three examples of advanced print inspection systems are presented (all of which have been developed, implemented and installed by Basler GmbH): CD label inspection featuring a 3-chip color camera, CD paperwork inspection for detecting order mix-ups, and a mileage-indicator inspection system with a line-scan-camera-based sensor unit.
This paper presents a vision-based on-line level control system for use in beverage filling machines. The motivation for developing this sensor system was the need for an intelligent filling valve that can provide constant filling levels for all container/product combinations (i.e., juice, milk, beer, water, etc. in glass or PET bottles of varying transparency and shape) using a non-tactile and completely sterile measurement method. The sensor concept presented in this paper is based on several CCD cameras imaging the moving containers from the outside. The stationary lighting system illuminating the bottles is located within the filler circle. The field of view covers between 5 and 8 bottles, depending on the bottle diameter and the filler partitioning. Each filling element's number is identified from the signals of an angular encoder. The electro-pneumatic filling valves can be opened and closed under computer control. The cameras continuously monitor the final stages of the filling process, i.e., after the filling height has reached the upper half of the bottle. The sensor system measures the current filling height and derives the filling speed. Based on static a priori knowledge and dynamic process knowledge, the sensor system generates a best estimate of the time at which each valve is to be closed. After every new level measurement the system updates the closing time. The measurement process continues until the result of the next level calculation would only become available after the estimated closing time has passed. The vision-based filling valve control enables the filling machine to adapt the filling time of each valve to the individual bottle shape. In this way a standard deviation between 2 and 4 mm (depending on the slew rate in the bottle neck) can be accomplished, even at filling speeds above 70,000 bottles per hour.
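The closing-time estimate described above reduces, in its simplest form, to a linear extrapolation from the last measured height and the derived filling speed, re-evaluated after every frame. The function and all numbers below are an invented illustration, not the paper's estimator.

```python
# Valve closing-time prediction sketch (illustrative numbers throughout).

def closing_time(t_now, height_mm, speed_mm_s, target_mm):
    """Predict the instant at which the valve should close, assuming the
    filling speed stays constant until the target level is reached."""
    return t_now + (target_mm - height_mm) / speed_mm_s

# Level at 180 mm, rising 40 mm/s, target 220 mm -> close 1.0 s from now.
t_close = closing_time(t_now=0.0, height_mm=180.0,
                       speed_mm_s=40.0, target_mm=220.0)
```

Each new measurement refines `height_mm` and `speed_mm_s`, so the prediction keeps improving until a further measurement could no longer arrive before the predicted closing instant.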
Sensor System Applications, Robotics, Automated Work Cells, and Automation
This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry (CSG) model. The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this the Simplex algorithm, known from linear optimization, is used: it computes a point common to two convex polyhedra, and the polyhedra intersect if such a point exists. For the simplified geometrical model of Ropsus, this algorithm also runs in linear time. In conjunction with the first step, the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.
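The linear-time first step can be sketched as cheap broad-phase culling. The abstract does not say which reduction is used; axis-aligned bounding-box overlap is a common assumption, and the geometry below is invented. Only the surviving pairs would then go to the exact Simplex-based common-point test.

```python
# Broad-phase culling sketch: discard solid pairs whose axis-aligned
# bounding boxes do not overlap; only the rest need the exact test.
# (Illustrative reduction step; the paper's exact method is not specified.)

def aabb_overlap(a, b):
    """a, b: (min_corner, max_corner) pairs of 3-D axis-aligned boxes."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

robot_link = ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
fixture    = ((0.5, 0.5, 0.5), (2.0, 2.0, 2.0))   # overlaps the link box
far_part   = ((5.0, 5.0, 5.0), (6.0, 6.0, 6.0))   # safely far away

candidates = [p for p in (fixture, far_part) if aabb_overlap(robot_link, p)]
```

One pass over the cell's solids is linear in their number, which is what keeps the overall detection linear when combined with the second step.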
The paper presents a vision system which provides robust model-based identification and localization of 2-D objects in industrial scenes. A symbolic image description based on the polygonal approximation of the object silhouettes is extracted in video real time by the use of dedicated hardware. Candidate objects are selected from the model database using a time- and memory-efficient hashing algorithm. Each candidate object is submitted to the next computation stage, which generates pose hypotheses by assigning model contours to image contours. Corresponding continuous measures of similarity are derived from the turning functions of the curves. Finally, the previously generated hypotheses are verified using a voting scheme in transformation space. Experimental results reveal the fault tolerance of the vision system with regard to noisy and split image contours as well as partial occlusion of objects. The short cycle time and the easy adaptability of the vision system make it well suited for a wide variety of applications in industrial automation.
Poster Session: Advanced Sensor Concepts, Systems, and Applications II
Off-line programming aims to increase the flexibility of robot systems and thereby the degree of automation in production. The commercially available off-line programming systems can currently be grouped into very efficient but expensive workstation-based systems and PC-based ones, which often possess only restricted efficiency with regard to their numerical and graphical power. This paper presents the PC-based, and thereby low-cost, robot off-line programming system of the University of Siegen (Ropsus) as a very efficient alternative. This efficiency is achieved by using optimized algorithms to solve the inverse kinematics problem as well as the collision detection problem, and by using CAD hardware optimized for 3D animation; i.e., graphical standards for the presentation and manipulation of cell objects are not used. The underlying operating system is Unix System V.
The paper deals with location calibration of a camera that is rigidly mounted on a robot hand or on its base. The proposed approach is based on computing a homogeneous transformation matrix that describes the relation between the real camera location and the model of it maintained by the robot control system. The algorithms are based on a two-stage computational procedure and are designed for two cases of input: vector and matrix data. It was established that the algorithms provide similar accuracy but differ essentially in required computer time. The high-speed performance of the developed algorithm with matrix input data results from the low dimension of the input data in the first stage and the low time consumption of the second stage, which deals with the whole amount of data.
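The basic quantity such a calibration recovers can be written out directly: the homogeneous 4×4 transform that maps the controller's modeled camera pose onto the measured one. The poses below are invented numbers for illustration; the paper's two-stage estimation from noisy data is not reproduced here.

```python
import numpy as np

# Pose-correction sketch: relate a modeled camera pose to a measured one.

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

T_model = homogeneous(rot_z(0.0), [0.5, 0.0, 0.3])     # controller's belief
T_real  = homogeneous(rot_z(0.05), [0.52, 0.01, 0.3])  # measured pose

# Correction transform: applying it to the model pose yields the real one.
T_corr = T_real @ np.linalg.inv(T_model)
```

With noisy measurements, `T_corr` would be estimated from many such pose pairs rather than computed from a single one as here.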
The paper is devoted to computer-aided design, path planning, and programming of welding work cells. It presents specially developed algorithms and software tools for the simulation of arc and spot welding robotic cells and the generation of technological programs. The software has been tested in the automotive industry on the manufacturing floor.
The properties of thin-film-on-ASIC image sensors are discussed, starting from the dynamic range and linearity of the a-Si:H detector element. Depending on the operation mode, the dynamic range of the detector exceeds the performance of conventional CCDs by far. Limitations arising from a non-optimized operation mode are examined employing a semi-empirical simulation model. They can be overcome with pixel electronics adapted to the demands of the detector. In order to maximize the dynamic range of the complete array, different modes of signal transport from the pixels to the peripheral circuitry are investigated.
Intelligent image sensors are becoming increasingly important in the field of production automation and artificial vision. Pixel detectors based on the TFA (thin film on ASIC) concept represent a promising alternative to existing conventional sensor concepts. A TFA array is a vertically integrated device consisting of an unpatterned amorphous silicon (a-Si:H) photodetector on top of a crystalline ASIC. The entire area of a pixel is used both for the thin-film photodetector and for analog or digital signal processing in local pixel processors. a-Si:H photodetectors such as Schottky, pin, or nipin diodes approximate the spectral response of the human eye much better than crystalline detectors. In this paper a prototype TFA sensor is presented. It consists of an array of 32 × 32 pixels and performs digital contour extraction. The performance of the sensor is evaluated, and the influence of parasitic effects such as crosstalk between pixels and capacitive coupling inside the pixels is discussed. Both parasitic effects can be eliminated by technological as well as electronic means.
A new thin-film color sensor array has been developed. In this device a single pixel consists of a combination of an amorphous silicon nipin detector and a crystalline operational amplifier. The carrier transport mechanisms of the diode under dark conditions as well as the steady-state optoelectronic behavior of the nipin structure have been studied in theory and experiment in order to optimize the design of the image sensor. As a result, nipin structures with excellent dynamic range and linearity have been fabricated. Our study has also demonstrated that a three-color detector can be obtained either by optimizing the design of the detector or by appropriate signal processing. The limitations arising from the design rules of the crystalline electronics for a single-channel MOSFET process, and their impact on readout performance and signal distortion, are discussed. A novel two-stage operational amplifier with optimized design and layout has been fabricated. Because of the superior performance of the amplifier and the diode, this sensor is especially suitable for highly sensitive color image processing applications.
Advanced Sensor Concepts, Systems, and Applications I
An ultra-sensitive underwater super-HARP color TV camera has been developed. The characteristics -- spectral response, lag, etc. -- of the super-HARP tube had to be designed for use underwater because the propagation of light in water is very different from that in air, and also depends on the light's wavelength. The tubes have new electrostatic focusing and magnetic deflection functions and are arranged in parallel to miniaturize the camera. A deep-sea robot (DOLPHIN 3K) was fitted with this camera and used for the first sea test in Sagami Bay, Japan. The underwater visual information was clear enough to promise significant improvements in both deep-sea surveying and safety. It was thus confirmed that the super-HARP camera is very effective for underwater use.
Dynamic development in digital signal processing is inseparably bound to the discovery of the fast Fourier transform (FFT). Implications from the application of these efficient algorithms for calculating the discrete (inverse) Fourier transform are significant in many ways. The applicability of FFT algorithms ranges far into almost every aspect of physics, and they perform a central role in the analysis, design and implementation of DSP algorithms and digital systems. Computation time almost ceases to be a problem when using the FFT compared with the straightforward discrete Fourier transform (DFT). The reduction in computation time offered by FFT algorithms holds even greater promise for multidimensional applications, which in general pose more complex tasks and heavier data loads. Without multidimensional FFT algorithms for high-speed convolution or spectral analysis, the successes in SAR, tomography, data compression or picture processing, for example, could not have been achieved. Since the introduction of the Cooley-Tukey algorithm in 1965, methods to calculate the two- or N-dimensional Fourier transform of a set of data have been based essentially on the separability of the 2D FFT. With a 1D FFT algorithm the data set is `combed' row- and columnwise to form the 2D transform from the calculated 1D transforms. After some basics and a recall of different conventional approaches to the 1D and 2D Fourier transform, the paper concentrates on Vector-Radix algorithms, which decimate and transform a 2D data set simultaneously in both index directions and therefore seem suitable for parallelization. Vector-Radix approaches are derived for general radices and, in the 2D case, also for nonsquare data sets.
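The row-column "combing" decomposition described above can be sketched in a few lines. The sketch below is a textbook radix-2 Cooley-Tukey routine combined row- and columnwise, not the Vector-Radix algorithm the paper derives, and it assumes both data dimensions are powers of two:

```python
import cmath

def fft(a):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2])           # transform of even-indexed samples
    odd = fft(a[1::2])            # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def fft2(mat):
    """2-D FFT by separability: 1-D FFT of every row, then of every column."""
    n_rows, n_cols = len(mat), len(mat[0])
    rows = [fft(list(r)) for r in mat]
    cols = [fft([rows[i][j] for i in range(n_rows)]) for j in range(n_cols)]
    # transpose back: result[u][v] is the column transform of column v at row frequency u
    return [[cols[j][i] for j in range(n_cols)] for i in range(n_rows)]
```

A Vector-Radix algorithm would instead decimate in both index directions at once, processing the data in sub-blocks per stage, which removes the intermediate row/column pass and is what makes it attractive for parallelization.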
Maintenance of steam generators occupies a substantial proportion of scheduled shutdowns at nuclear power stations. Maintenance operations are broken down into a number of distinct phases; these are performed separately to ensure accountability for the work carried out at each stage, thereby guaranteeing the quality of the maintenance process as a whole. One of these phases, known as `marking,' consists in locating certain tubes in the steam generator tube plate and marking them using a suitable system. The list of tubes for marking may be determined on the basis of prior tests. Marked tubes will undergo subsequent operations as required, such as plugging for example. Clearly, the quality of the marking process will have a significant impact on all subsequent maintenance operations on tubes in the secondary bundle. Present-day marking tools make little use of automation, and over-reliance on human judgement means that the marking phase is liable to error. Moreover, depending on the number of tubes to mark, this phase can be long and tedious. With these considerations in mind, the EDF Research Division has developed a display system for locating steam generator tubes, with the main purpose of facilitating marking operations. Following an initialization phase, this system (named LUCANER) provides the operator with a simple, reliable and fully automatic method for locating tubes in the tube plate. Besides reducing the risk of error, the system also reduces the time required for the marking phase. The system can also be used for complementary phases involving checks on markings, checks on plugging, etc. In a wider context, it provides visual inspection capabilities over a large part of the bowl.
The ability to estimate the position of a mobile vehicle is a key task for navigation over large distances in complex indoor environments such as nuclear power plants. Schematics of the plants are available, but they are incomplete, as real settings contain many objects, such as pipes, cables or furniture, that mask part of the model. The position estimation method described in this paper matches 3-D data with a simple schematic of a plant. It is basically independent of odometry information and viewpoint, robust to noisy data and spurious points and largely insensitive to occlusions. The method is based on a hypothesis/verification paradigm and its complexity is polynomial; it runs in O(m^4 n^4), where m represents the number of model patches and n the number of scene patches. Heuristics are presented to speed up the algorithm. Results on real 3-D data show good behavior even when the scene is very occluded.
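The hypothesis/verification paradigm can be illustrated with a much simplified 2-D point-matching analogue (the paper itself matches 3-D patches; the function and point sets below are hypothetical, for illustration only). Every ordered pair of model features combined with every ordered pair of scene features proposes a rigid transform, which is then verified against the whole scene; enumerating feature pairs on both sides is what produces the polynomial pairing cost.

```python
import itertools
import math

def match_hypothesis_verify(model, scene, tol=1e-6):
    """Toy 2-D hypothesise-and-verify matcher for point sets.

    Each (model pair, scene pair) combination hypothesises a rotation and
    translation; the hypothesis is verified by counting how many model
    points it maps onto scene points. Returns (best score, best transform).
    """
    best = (0, None)
    for m1, m2 in itertools.permutations(model, 2):
        for s1, s2 in itertools.permutations(scene, 2):
            dm, ds = math.dist(m1, m2), math.dist(s1, s2)
            # rigidity constraint: distances must agree, pair must be non-degenerate
            if dm < tol or abs(dm - ds) > tol:
                continue
            ang = (math.atan2(s2[1] - s1[1], s2[0] - s1[0])
                   - math.atan2(m2[1] - m1[1], m2[0] - m1[0]))
            c, s = math.cos(ang), math.sin(ang)
            tx = s1[0] - (c * m1[0] - s * m1[1])
            ty = s1[1] - (s * m1[0] + c * m1[1])
            # verification step: score = number of model points landing on scene points
            mapped = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in model]
            score = sum(1 for p in mapped
                        if any(math.dist(p, q) < tol for q in scene))
            if score > best[0]:
                best = (score, (ang, tx, ty))
    return best
```

Spurious scene points simply fail verification, which is why this scheme tolerates clutter and partial occlusion; the heuristics mentioned in the abstract prune the hypothesis enumeration rather than change the verification step.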
The paper describes the investigations carried out to implement a line-of-sight control and communication link for a mobile robot vehicle for use in structured, semi-hazardous nuclear environments. Line-of-sight free-space optical laser communication links for remote teleoperation have important applications in hazardous environments. They have certain advantages over radio/microwave links and umbilical control, such as greater protection against the generation of, and susceptibility to, electromagnetic fields. The cable-less environment provides increased integrity and mechanical freedom for the mobile robot. However, to maintain the communication link, continuous pointing and tracking is required between the base station and the mobile vehicle. This paper presents a novel two-ended optical tracking system utilizing the communication laser beams and photodetectors. The mobile robot is a six-wheel-drive vehicle with a manipulator arm which can operate over a variety of terrain. The operator obtains visual feedback from cameras placed on the vehicle; from this information, the speed and direction of the vehicle can be controlled from a joystick panel. We describe the investigations carried out for the communication of analogue video and digital data signals over the laser link for speed and direction control.
Free-space communication links for remote tele-operation of robots have important applications in hazardous environments. They have a distinct advantage over conventional umbilical control in that they allow potentially greater freedom when manoeuvring around obstacles, and obviate any unnecessary cable payload. Omnidirectional, short-wave radio links are widely used for such purposes. Very little consideration has yet been given to the use of microwave frequencies (10 GHz and above), whose fixed line-of-sight operating mode and high bandwidth have made them ideal for external local area networks. In this paper, we examine the broadcast characteristics of a commercial 23 GHz microwave link using a 25 cm diameter horn. Investigations were carried out in a representative environment over a wide range of distances, alignment criteria, and pathways. We describe investigations of received-signal quality for the communication of analogue video and digital data. Received-signal tests using image processing equipment show the signal-to-noise ratio obtained under test conditions, compared with that required for adequate quality. We conclude by assessing the relative merits and disadvantages of using a microwave link for telemetry applications, compared with conventional radio and laser free-space systems.
A novel artificial vision system with fully automatic operation that complements the results obtained by eddy current techniques in the inspection of heat exchanger tubes of nuclear power plants is presented. The system has a specifically developed fiberoptic probe based on a reflectometric principle that uses a linear array of optical fibers to form an image of the inner surface of the tubes, allowing the user to detect and measure defects open to this surface. The system architecture, principle of measurement, main features and experimental results are presented.
In the manufacturing process certain workpieces are inspected for dimensional measurement using sophisticated quality control techniques. During the operation phase, these parts are deformed by the high temperatures involved in the process. The evolution of the workpiece structure is reflected in its dimensional changes, which can be characterized by a set of dimensional parameters. In this paper, a three-dimensional automatic inspection of these parts is proposed. The aim is to measure certain workpiece features through 3D control methods using directional lighting and a computer artificial vision system. The results of these measurements are compared with the parameters obtained after the manufacturing process in order to determine the degree of deformation of the workpiece and decide whether it is still usable. Workpieces outside a predetermined specification range must be discarded and replaced by new ones. The advantage of artificial vision methods is that there is no need to touch the object under inspection, which makes them feasible for use in hazardous environments unsuitable for human beings.

A system has been developed and applied to the inspection of fuel assemblies in nuclear power plants. It has been implemented in a very high-radiation environment and operates in underwater conditions. The physical dimensions of a nuclear fuel assembly are modified after its operation in a nuclear power plant in relation to the original dimensions after its manufacturing. The whole system (camera, mechanical and illumination systems and the radioactive fuel assembly) is submerged in water to minimize radiation effects and is remotely controlled by a human operator. The developed system has to inspect accurately a set of measurements on the fuel assembly surface, such as length, twist, arching, etc.
The present project, called SICOM (nuclear fuel assembly inspection system), is part of the P.I.E. R&D program created for electrical utilities and is being developed jointly with Iberdrola, Tecnatom and Enusa.