Sensor-based, computer-controlled end effectors for mechanical arms are receiving increasing attention in the robotics industry, because commonly available grippers are adequate only for simple pick-and-place tasks. This paper describes the current status of research at JPL on a smart hand for a PUMA 560 robot arm. The hand is a self-contained, autonomous system capable of executing high-level commands from a supervisory computer. The mechanism consists of parallel fingers powered by a DC motor and controlled by a microprocessor embedded in the hand housing. Special sensors are integrated in the hand for measuring the grasp force of the fingers and the forces and torques applied between the arm and the surrounding environment. The fingers can be exercised under position, velocity, and force control modes. The single-chip microcomputer in the hand handles communication, data acquisition, and sensor-based motor control, with a sample cycle of 2 ms and a transmission rate of 9600 baud. The smart hand represents a new development in end effector design because of its multi-functionality and autonomy, and it will also serve as a versatile test bed for experimenting with advanced control schemes for dexterous manipulation.
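As an illustration of one of the hand's control modes, the sketch below shows a grasp-force loop running at the 2 ms sample cycle mentioned in the abstract. The PI gains, the toy finger/contact model, and all function names are hypothetical; the abstract does not describe the JPL controller's actual control law.

import numpy as np

DT = 0.002          # 2 ms sample cycle, as in the hand controller
KP, KI = 4.0, 40.0  # illustrative PI gains (assumed, not from the paper)

def force_control_step(f_desired, f_measured, integral):
    """One 2 ms cycle of a PI grasp-force loop: returns (motor command, integral)."""
    error = f_desired - f_measured
    integral += error * DT
    command = KP * error + KI * integral
    return command, integral

if __name__ == "__main__":
    # Toy first-order finger/contact model used only to exercise the loop.
    f, integral, contact_gain = 0.0, 0.0, 3.0
    for _ in range(500):                            # one second of 2 ms cycles
        u, integral = force_control_step(f_desired=5.0, f_measured=f, integral=integral)
        f += contact_gain * (u - f) * DT            # contact force follows the command
    print(f"grasp force after 1 s: {f:.2f} N (target 5.0 N)")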
Whenever a manipulator handles an unknown object, it is necessary to compensate for the object's weight, which otherwise causes position errors. Although the integral action of a PID control law eliminates position errors, it takes considerable time to converge to the set point, and it can also destabilize the system by increasing its order. Therefore, a feedforward control method is usually used to compensate for the weight. In weight compensation methods that do not use a force sensor, the servo stiffnesses of the manipulator joints are used to identify the load mass; however, such methods require calibration of the servo stiffnesses. This paper presents a weight compensation method that uses only the joint servo inputs corresponding to the object weight, so the complicated calibration of the joint servo stiffnesses is not needed. The paper clarifies the compensation algorithm, which cancels the joint servo stiffnesses, and presents experimental results for compensating an object's weight.
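The abstract does not give the algorithm itself, but the general idea of identifying a payload from extra joint servo torque and feeding its weight forward can be sketched for a hypothetical two-link planar arm. The link lengths, function names, and least-squares estimator below are illustrative assumptions, not the paper's formulation.

import numpy as np

# Hypothetical 2-link planar arm used only for illustration.
L1, L2 = 0.4, 0.3          # link lengths [m]
G = 9.81                   # gravity [m/s^2]

def payload_gravity_torque(q, m_load):
    """Joint torques needed to hold a point mass m_load at the tool tip.

    tau = J(q)^T * F, with F = [0, -m*g] acting at the end effector.
    """
    q1, q2 = q
    # Planar Jacobian of the tool-tip position w.r.t. the joint angles.
    J = np.array([
        [-L1*np.sin(q1) - L2*np.sin(q1+q2), -L2*np.sin(q1+q2)],
        [ L1*np.cos(q1) + L2*np.cos(q1+q2),  L2*np.cos(q1+q2)],
    ])
    F = np.array([0.0, -m_load*G])
    return J.T @ F

def estimate_load_mass(q, tau_extra):
    """Least-squares mass estimate from the extra steady-state servo torque
    observed after grasping (difference of servo inputs with and without load)."""
    basis = payload_gravity_torque(q, 1.0)           # torque per unit mass
    return float(basis @ tau_extra) / float(basis @ basis)

if __name__ == "__main__":
    q = np.array([0.6, -0.4])                        # current joint angles [rad]
    true_mass = 1.8
    tau_extra = payload_gravity_torque(q, true_mass) # what the servos would report
    m_hat = estimate_load_mass(q, tau_extra)
    tau_ff = payload_gravity_torque(q, m_hat)        # feedforward compensation term
    print(f"estimated mass {m_hat:.2f} kg, feedforward torque {tau_ff}")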
This paper presents a new concept of robotic systems, the Dynamically Reconfigurable Robotic System (DRRS). Robots developed so far cannot reorganize themselves automatically by changing the linkage of their arms, replacing some links with others, or reforming their shapes in order to adapt to changes in the working environment and task demands. In DRRS, each cell of a robotic module can detach and recombine autonomously depending on the task, forming manipulators, mobile robots, or other structures, so that the system can reorganize itself into an optimal overall shape. The proposed robotic system can thus be reconfigured dynamically for a given task, giving it a much higher level of flexibility and adaptability than conventional robots. DRRS offers many unique advantages, such as optimal shaping for the circumstances, fault tolerance, and self-repair. Experimental demonstrations are shown, and a method for deciding the configuration of such a cell-structured manipulator is also proposed.
In this paper, it is shown that a natural-language-style task instruction can easily be translated into a motion-level robot program by utilizing the procedures that a human being associates with the assembly parts. In this research, the concept of object-oriented programming is used as the knowledge representation form: the operational procedures and other attributes of the assembly parts are described in an object-oriented model, and the translation into the robot program is carried out through the message-passing facility of the objects. In this translation process, task procedures are required, such as tightening a bolt only after a washer has been placed on it. Therefore, an object in which the task procedure is described is prepared separately from the parts objects, in order to raise the modularity of the objects and to allow easy linking with a task planning system in the future.
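A minimal sketch of the message-passing idea, assuming hypothetical part classes and command strings (none of these names come from the paper): the part objects carry their own operational procedures, and a separate task object holds the ordering knowledge (washer before bolt) and emits the motion-level program by sending messages to the parts.

# Illustrative only: hypothetical part classes whose methods emit
# motion-level commands; the ordering knowledge lives in a task object.

class Part:
    def __init__(self, name, location):
        self.name, self.location = name, location

    def pick(self):
        return [f"MOVE {self.location}", f"GRASP {self.name}"]

class Washer(Part):
    def place_on(self, target):
        return self.pick() + [f"MOVE {target.location}", "RELEASE"]

class Bolt(Part):
    def tighten_into(self, target):
        return self.pick() + [f"MOVE {target.location}", "INSERT", "ROTATE 5 TURNS", "RELEASE"]

class FastenTask:
    """Task object holding the procedural knowledge: washer first, then bolt."""
    def __init__(self, bolt, washer, hole):
        self.bolt, self.washer, self.hole = bolt, washer, hole

    def to_robot_program(self):
        program = []
        program += self.washer.place_on(self.hole)    # message to the washer object
        program += self.bolt.tighten_into(self.hole)  # message to the bolt object
        return program

if __name__ == "__main__":
    hole = Part("hole", "P_HOLE")
    prog = FastenTask(Bolt("bolt", "P_BOLT"), Washer("washer", "P_WASHER"), hole).to_robot_program()
    print("\n".join(prog))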
A novel approach to bilateral motion is realized in a master-slave robot hand system. In this approach, there are neither force sensors nor tactile sensors in the slave hand; the contact force of the slave hand is instead estimated by a disturbance observer in the microprocessor-based controller. The estimated force is calculated from the velocity and the current of the slave hand and fed back to the master hand, so that the operator can feel, to some extent, the touch quality of the object grasped by the slave hand. The position reference of the slave hand is given by the master position. These two cross-feedback loops are implemented in one microprocessor. The algorithm is tested on two similar robot hands, and the results obtained show sufficient performance of the bilateral motion.
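A minimal discrete-time sketch of a disturbance observer of this kind, estimating contact torque from motor current and velocity; the motor constants, observer bandwidth, and toy plant below are illustrative assumptions rather than the paper's parameters.

import numpy as np

KT   = 0.05    # motor torque constant [Nm/A] (assumed)
J    = 2e-4    # motor + finger inertia [kg m^2] (assumed)
G_DO = 50.0    # observer cutoff frequency [rad/s]
DT   = 0.001   # sample time [s]

def disturbance_observer(current, velocity):
    """Estimate the external (contact) torque from motor current and velocity.

    The raw estimate is Kt*i - J*dw/dt; a first-order low-pass filter keeps the
    differentiation noise from dominating the estimate.
    """
    alpha = G_DO * DT / (1.0 + G_DO * DT)
    tau_hat, prev_vel, estimates = 0.0, velocity[0], []
    for cur, w in zip(current, velocity):
        accel = (w - prev_vel) / DT
        raw = KT * cur - J * accel
        tau_hat += alpha * (raw - tau_hat)           # first-order low-pass
        estimates.append(tau_hat)
        prev_vel = w
    return np.array(estimates)

if __name__ == "__main__":
    n = 500
    current = np.full(n, 0.4)                            # constant motor current [A]
    contact = np.where(np.arange(n) > 250, 0.015, 0.0)   # contact torque after 0.25 s
    velocity = np.zeros(n)
    for k in range(1, n):                                # simple forward-Euler plant
        velocity[k] = velocity[k-1] + (KT*current[k] - contact[k]) / J * DT
    est = disturbance_observer(current, velocity)
    print(f"estimated contact torque at the end: {est[-1]:.4f} Nm (true 0.0150)")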
A ping-pong robot system using a seven-degree-of-freedom direct-drive arm has been developed. The system is composed of a binocular camera for measuring the position and speed of the ball, a pitching machine, a direct-drive robot, and a controller built on a multi-microcomputer system. Three key techniques have been developed. The first is a fast ball-position measurement technique using the binocular camera, which has two linear sensors located horizontally on the focal plane of each lens. The second is a technique for forecasting the hitting position and time. The third is a direct-drive robot control technique with real-time calculation of the inverse kinematics. These techniques are essential for performing unstructured tasks with a robot in real time.
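The forecasting step can be illustrated with a simple ballistic predictor: given two stereo position measurements, estimate the ball velocity and propagate it to the hitting plane under gravity. Drag and spin are ignored, and all numbers and names below are illustrative; the paper's actual forecasting technique may differ.

import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def predict_hit(p0, p1, dt, x_hit):
    """Forecast when and where the ball crosses the vertical plane x = x_hit.

    p0, p1: ball positions (x, y, z) from two successive stereo measurements
    separated by dt seconds; drag and spin are ignored in this sketch.
    """
    v = (np.asarray(p1) - np.asarray(p0)) / dt        # finite-difference velocity
    if v[0] <= 0:
        raise ValueError("ball is not moving toward the hitting plane")
    t_hit = (x_hit - p1[0]) / v[0]                    # time of flight to the plane
    y_hit = p1[1] + v[1] * t_hit
    z_hit = p1[2] + v[2] * t_hit - 0.5 * G * t_hit**2 # gravity acts on z only
    return t_hit, (x_hit, y_hit, z_hit)

if __name__ == "__main__":
    t, pos = predict_hit(p0=(0.00, 0.10, 0.30),
                         p1=(0.05, 0.11, 0.33),
                         dt=0.01, x_hit=2.0)
    print(f"hit in {t:.3f} s at y={pos[1]:.3f} m, z={pos[2]:.3f} m")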
This paper presents in detail the design and performance of a 4x4-cell photosensor array for parallel input of optical data. This 2-D photoarray of 4x4 cells is the kernel for the construction of a larger NxN array intended to serve as a parallel input device for array processors. In particular, the 2-D photosensitive array will be used on the HERMES systolic array vision machine. We investigated phototransistor and photoresistor devices through both theoretical and experimental work.
This paper deals with an exact state-space dynamic model for a manipulator with a single flexible link. The Bernoulli-Euler beam equations are used to derive the matrix transfer function, which can describe the link or beam displacement at any position along the beam. A 6th-order state model is derived and an optimal linear regulator controller is determined. It is shown that using a "raised cosine" reference command signal yields a smooth yet quick response. Finally, unmodeled modes are added to the plant, making it 8th order, while the controller and observer remain based on the 6th-order model. The performance then degrades significantly; however, adding a very simple low-pass filter operating on the control signal restores the behavior to excellent quality.
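The raised-cosine reference command is easy to state concretely. The sketch below generates such a set-point trajectory, which starts and ends with zero velocity and therefore excites the flexible modes far less than a step input; the time scale and amplitude are arbitrary examples, not values from the paper.

import numpy as np

def raised_cosine_reference(q0, qf, T, dt):
    """Raised-cosine set-point command: q(t) = q0 + (qf - q0) * (1 - cos(pi*t/T)) / 2.

    Velocity is zero at both ends and the command is smooth, so it excites the
    flexible modes far less than a step while still reaching qf in time T.
    """
    t = np.arange(0.0, T + dt, dt)
    q = q0 + (qf - q0) * 0.5 * (1.0 - np.cos(np.pi * t / T))
    return t, q

if __name__ == "__main__":
    t, q = raised_cosine_reference(q0=0.0, qf=0.5, T=1.0, dt=0.01)
    print(f"start {q[0]:.3f}, midpoint {q[len(q)//2]:.3f}, end {q[-1]:.3f}")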
This paper describes the quadruped walking robot TURTLE-1. A new link mechanism named ASTBALLEM is used for the legs of this robot, giving highly rigid and easily controllable legs. Each leg has two degrees of freedom and is driven by two DC servo motors. The motion of the legs is controlled by a microcomputer, and various gaits are generated. Static stability is maintained as the robot walks. Moreover, its walk is quasi-dynamic; that is, its gait includes periods in which only two legs support the body.
The development of an execution module capable of functioning as an on-line supervisor for a robot equipped with a vision sensor and a tactile-sensing gripper system is described. The on-line module is supported by two off-line software modules, which provide a procedure-based assembly constraints language in which the assembly task is defined; this input is then converted into a normalised and minimised form. The host robot programming language permits high-level motions to be issued at the top level, keeping the programming overhead low for the designer, who must describe only the assembly sequence. Components are selected for pick-and-place robot movements based on information derived from two cameras, one static and the other mounted on the end effector of the robot. The approach taken is the multi-path scheduling described by Fox. The system is seen to permit robot assembly in a less constrained parts-presentation environment, making full use of the sensory detail available on the robot.
The motion planning and control system of a six-axis robot manipulator is realized by multiple microprocessors arranged in a hierarchical structure. The design and implementation of this multi-microprocessor-based system are described. The objectives are the realization of more advanced control techniques and real-time Cartesian-space control of robot motion.
The paper presents a programming environment for robotics based on a threaded code architecture. The environment is supported by a multiprocessing operating system and includes a programming language, an interactive interpreter, a debugger, and a multi-window manager. The paper illustrates how the threaded code organization on which the system is based has proved beneficial in obtaining a good integration of the programming tools.
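To make the threaded-code idea concrete, here is a minimal Forth-style sketch in which a program is a thread of word references and the inner interpreter simply walks the thread; the primitives and word names are invented for illustration and are not those of the paper's language.

# Minimal direct-threaded-code sketch in the Forth tradition: a "word" is either
# a Python primitive or a thread (list) of previously defined words, and the
# inner interpreter simply walks the thread.

stack = []

def lit(value):
    """Return a primitive that pushes a literal onto the data stack."""
    return lambda: stack.append(value)

def add():
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

def mul():
    b, a = stack.pop(), stack.pop()
    stack.append(a * b)

def dup():
    stack.append(stack[-1])

def execute(thread):
    """Inner interpreter: run each entry; nested lists are threaded sub-words."""
    for word in thread:
        if isinstance(word, list):
            execute(word)
        else:
            word()

SQUARE = [dup, mul]                       # a user-defined word is itself a thread
PROGRAM = [lit(3), lit(4), add, SQUARE]   # computes (3 + 4) ** 2

if __name__ == "__main__":
    execute(PROGRAM)
    print(stack.pop())                    # -> 49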
A machine vision system should be designed in the context of providing tools for process improvement. Obvious benefits of a vision system, such as the reduction in labor costs and the potential for 100% part inspection, are important, but providing process control and improving the manufacturing process can often yield the greatest long-term economic savings. Process improvements can be achieved by the operator adaptively tuning the manufacturing operation in response to on-line process statistics (provided through a menu-driven user interface) and color graphic images on a video monitor. This paper discusses the design, development, and integration of a specific turnkey vision inspection system into an existing production facility.
A microprocessor-controlled line-scan camera system for measuring the edges and lengths of steel strips is described, and the problem of subpixel edge detection and estimation is considered, based on measurements of the gray values of the line images at a limited number of pixels; the true edge locations may lie between pixels. A two-stage approach is presented. At the first stage, a simple template matching method is used to place the estimated edge point at the nearest pixel. Three second-stage methods, designed for subpixel estimation, are examined. For the case of nonstationary Poisson noise, a recursive maximum likelihood method is proposed. Monte Carlo simulations for Gaussian and Poisson noise indicate that subpixel estimation accuracy can be obtained if the noise level is not very high.
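The two-stage structure can be sketched as follows: a step-edge template locates the edge to the nearest pixel, and a second stage refines it to subpixel precision. The refinement shown here is a simple parabolic interpolation of the matching score, used purely as an illustration; the paper's second-stage estimators, including the recursive maximum likelihood method for Poisson noise, are more elaborate.

import numpy as np

def ideal_edge(n, position, lo, hi):
    """Step-edge template: lo for pixels below 'position', hi above."""
    x = np.arange(n)
    return np.where(x < position, lo, hi).astype(float)

def subpixel_edge(profile, lo, hi):
    """Two-stage edge estimate on a 1-D gray-value profile.

    Stage 1: template matching -> nearest-pixel edge location.
    Stage 2: quadratic interpolation of the matching score around the best pixel.
    """
    n = len(profile)
    scores = np.array([-np.sum((profile - ideal_edge(n, k, lo, hi))**2)
                       for k in range(1, n - 1)])
    i = int(np.argmax(scores))
    i = min(max(i, 1), len(scores) - 2)             # keep a neighbor on each side
    s_m, s_0, s_p = scores[i-1], scores[i], scores[i+1]
    denom = s_m - 2*s_0 + s_p
    delta = 0.0 if denom == 0 else 0.5*(s_m - s_p)/denom
    return (i + 1) + delta                          # +1 because scores start at pixel 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_edge = 12.3
    x = np.arange(32)
    clean = 40 + 60/(1 + np.exp(-(x - true_edge)/0.7))   # slightly blurred edge
    noisy = rng.poisson(clean).astype(float)
    print(f"estimated edge at {subpixel_edge(noisy, 40, 100):.2f} px (true {true_edge})")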
A new method for boundary estimation in textured images is described in this paper. The method aims at segmentation of images containing 3-dimensional textured scenes. Its essence is the estimation of boundaries between regions of homogeneous texture in an image, using a linear recursive filter and a random search procedure. Boundaries are modeled by cubic B-spline functions, and the measurement model employs the measures of inertia and energy calculated from co-occurrence matrices. The results obtained are very satisfactory. The method is an integral part of a 3-dimensional vision system that determines distances to various obstacles placed on textured surfaces.
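The inertia and energy measures from co-occurrence matrices are standard texture features and can be computed as in the sketch below; the quantization, displacement, and test patterns are arbitrary choices for illustration, and the paper's recursive filter and B-spline boundary model are not shown.

import numpy as np

def cooccurrence(patch, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy)."""
    q = (patch.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    P = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()

def inertia_energy(P):
    """Inertia (contrast) and energy (angular second moment) of a GLCM."""
    i, j = np.indices(P.shape)
    inertia = np.sum((i - j) ** 2 * P)
    energy = np.sum(P ** 2)
    return inertia, energy

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    smooth = rng.integers(100, 120, (32, 32))       # low-contrast texture
    coarse = rng.integers(0, 256, (32, 32))         # high-contrast texture
    print("smooth texture:", inertia_energy(cooccurrence(smooth)))
    print("coarse texture:", inertia_energy(cooccurrence(coarse)))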
A new approach for recognizing 3-D objects using a combined overhead and eye-in-hand vision system is presented, together with a novel eye-in-hand vision system using a fiber-optic image array. The significance of this approach is fast and accurate recovery of 3-D object information compared to traditional stereo image processing. For the recognition of 3-D objects, the overhead vision system takes a 2-D top-view image and the eye-in-hand vision system takes side-view images orthogonal to the top-view image plane. We have developed and demonstrated a unique approach for integrating this 2-D information into a 3-D representation, called "3-D Volumetric Description from 2-D Orthogonal Projections". A Unimate PUMA 560 and a TRAPIX 5500 real-time image processor have been used to test the complete system.
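The core of a volumetric description from orthogonal projections can be illustrated by voxel carving with two binary silhouettes: a voxel is kept only if it projects inside both the top view and the side view. This is a simplified sketch under assumed array conventions, not the paper's implementation.

import numpy as np

def volume_from_orthogonal_views(top, side):
    """Carve a voxel volume from two orthogonal binary silhouettes.

    top[x, y]  : overhead view (projection along z)
    side[x, z] : eye-in-hand side view (projection along y)
    A voxel (x, y, z) is occupied only if both projections contain it, which
    yields an upper bound on the true shape.
    """
    vol = top[:, :, None] & side[:, None, :]      # broadcast to (nx, ny, nz)
    return vol

if __name__ == "__main__":
    top = np.zeros((8, 8), dtype=bool);  top[2:6, 3:7] = True    # 4x4 footprint
    side = np.zeros((8, 5), dtype=bool); side[2:6, 0:2] = True   # 2 voxels tall
    vol = volume_from_orthogonal_views(top, side)
    print("occupied voxels:", int(vol.sum()))     # 4 * 4 * 2 = 32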
An ultrasonic imaging system for use in factory automation and computer vision is proposed in this paper. In order to remain effective in measuring range at large incidence angles, multiple ultrasonic sensors are used. A three-element delay-sum beamformer has been designed, and the circuit of the ultrasonic signal processor is described. The ultrasonic system is fully programmable and interfaces to an IBM PC. An X-Y axis mover, controlled via RS-232, is installed so that range profiles, which are presented in this paper, can be obtained.
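A delay-and-sum beamformer for three elements can be sketched as below: each element signal is delayed according to the steering angle and the results are summed, so that echoes from the steered direction add coherently. The element spacing, sample rate, and pulse are assumed values for illustration only, not the paper's hardware parameters.

import numpy as np

C = 343.0        # speed of sound in air [m/s]
FS = 400_000     # sample rate [Hz] (assumed)
D  = 0.02        # element spacing [m] (assumed)

def delay_sum(signals, steer_deg):
    """Three-element delay-and-sum beamformer (far-field, uniform line array).

    signals: array of shape (3, n), one row per element. Each element is
    delayed by n*D*sin(theta)/C and the rows are summed.
    """
    out = np.zeros(signals.shape[1])
    tau = D * np.sin(np.radians(steer_deg)) / C            # inter-element delay [s]
    for n, s in enumerate(signals):
        shift = int(round(n * tau * FS))                   # delay in whole samples
        out += np.roll(s, -shift)
    return out / signals.shape[0]

if __name__ == "__main__":
    t = np.arange(0, 1e-3, 1/FS)
    pulse = np.sin(2*np.pi*40_000*t) * np.exp(-((t - 2e-4)/5e-5)**2)  # 40 kHz echo
    angle = 20.0                                           # true arrival angle [deg]
    tau = D*np.sin(np.radians(angle))/C
    sig = np.stack([np.roll(pulse, int(round(n*tau*FS))) for n in range(3)])
    print(f"peak steered at +20 deg: {delay_sum(sig, 20.0).max():.3f}")
    print(f"peak steered at -20 deg: {delay_sum(sig, -20.0).max():.3f}")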
A 3-D vision sensor and a recognition system for machine parts using the vision sensor have been developed. The 3-D vision sensor, with its supporting hardware, is capable of acquiring range data of 150 x 241 points in 2.5 sec. The recognition system can discriminate any of nine similarly shaped machine parts moving on a belt conveyor in 3.5 sec, with a correct recognition rate of 99.6%.
This paper describes an automated color defect inspection system for color imaging devices. The system uses color image processing technology, but does not rely on the conventional primary color (R, G, B) components; instead, two chrominance signals (R-Y, B-Y) are introduced to analyze the color images. Newly developed color edge detection and color contrast calculation methods are used to detect and evaluate the color defects. Color contrast is calculated by analyzing the scatterplot of the two-dimensional chrominance signals. The color defect inspection algorithms and experimental results are presented.
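The chrominance signals themselves are easy to compute from RGB. The sketch below uses the standard BT.601 luminance weights (an assumption; the paper's device may use different weights) and reduces the scatterplot-based contrast to a simple distance between the mean chrominance vectors of two regions, purely to illustrate the idea.

import numpy as np

def chrominance(rgb):
    """Convert an RGB image (float, 0-1) to luminance Y and the two
    chrominance signals R-Y and B-Y."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299*r + 0.587*g + 0.114*b                 # BT.601 weights (assumed)
    return y, r - y, b - y

def color_contrast(rgb, region_a, region_b):
    """Distance between the mean chrominance of two regions (boolean masks):
    a simplified contrast measure between a defect candidate and its surround."""
    _, ry, by = chrominance(rgb)
    ca = np.array([ry[region_a].mean(), by[region_a].mean()])
    cb = np.array([ry[region_b].mean(), by[region_b].mean()])
    return float(np.linalg.norm(ca - cb))

if __name__ == "__main__":
    img = np.zeros((16, 16, 3)); img[...] = (0.2, 0.5, 0.3)   # background color
    img[6:10, 6:10] = (0.6, 0.3, 0.3)                         # reddish defect patch
    defect = np.zeros((16, 16), bool); defect[6:10, 6:10] = True
    print(f"chrominance contrast: {color_contrast(img, defect, ~defect):.3f}")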
The ability to acquire and process images of various resolutions is required when working with nonstandard sizes of sensors or image data. A commercially available system is described that can connect to virtually any sensor, acquire at virtually any resolution, and process sensor data at very high speeds. The system is able to acquire the output of sensors such as line-scan devices (CCD, laser, diode), area-scan devices (CCD, vidicon, scanning electron beam), continuous analog signals, digital signals, etc. The advantages of processing and communicating variable-resolution images at a fixed data rate are also discussed. Variable-resolution image processing can concentrate hardware processing resources on regions of interest within a larger image. Previously, these functions were implemented with unique hardware designed for each particular task; this system is instead configured at the software and systems integration levels, rather than at the hardware design level. Additionally, this modular family uses the MAXbus for image transfers, allowing it to interface with a large body of available image processing hardware.
The era of "blind" laser soldering is coming to an end. The new Intelligent Laser Soldering system has an infrared "eye" focused on the solder joint being heated by the laser beam. As soon as all the solid solder has melted, the laser shuts off; in this way every joint receives precisely the amount of heat necessary to reach full liquefaction of the solder alloy. There are no more overheated joints, no more "cold" joints, and no more need for subsequent inspection, because for each joint the infrared detector's output is an "infrared signature" containing all the information about the joint's quality characteristics.
Machine vision is an integral part of industrial automation. This paper discusses 3-D vision technology and its applications, and briefly describes several turnkey automation systems, developed or under development, that use 3-D vision technology for inspection and for robotic guidance and control. The applications range from advanced robotic technology in automotive car production to sophisticated robotic systems for the U.S. Navy and Air Force. This 3-D vision measuring capability has proved to be a versatile key to successfully implementing adaptively controlled robot motion and robot paths. Extensions of the technology to provide 3-D volumetric sensing, and research efforts in integrating 3-D vision with CAD/CAM systems, are also examined.
This paper presents the development of an intelligent vision-guided system through the integration of a vision system into a robot. Systems like the one described here are able to work unattended and can be used in many automated assembly operations. They can perform repetitive tasks more efficiently and accurately than human operators because machines are immune to human factors such as boredom, fatigue, and stress. To illustrate what such systems can accomplish, the paper details the development of one such system, which has already been built and is functional.
Chamfer matching is a method for identifying and locating predetermined objects in two-dimensional arrays, and is reported to be very tolerant to noise because of its cumulative error calculation. This paper describes an efficient implementation of chamfer matching for estimating rotational and translational parameters. It is suitable for use in vision systems where fast execution is essential but the scenes to be analyzed may be noisy and occluded. The novel contributions are the method of estimating the angle of the best rotational fit of the model object with a scene and the fast rotational scanning applied. Our method speeds up the calculation, especially in a proposed parallel processing environment.
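A basic chamfer matching loop, for reference: a distance transform of the scene edge image is computed once, and each candidate pose is scored by the mean distance at the transformed model edge points. The 3-4 distance transform and the brute-force pose search below are generic textbook components for illustration, not the paper's accelerated rotational-scanning method.

import numpy as np

def chamfer_distance_transform(edges):
    """Two-pass 3-4 chamfer distance transform of a binary edge image."""
    INF = 10**6
    d = np.where(edges, 0, INF).astype(np.int64)
    h, w = d.shape
    for y in range(h):                                   # forward pass
        for x in range(w):
            if x > 0:             d[y, x] = min(d[y, x], d[y, x-1] + 3)
            if y > 0:
                d[y, x] = min(d[y, x], d[y-1, x] + 3)
                if x > 0:         d[y, x] = min(d[y, x], d[y-1, x-1] + 4)
                if x < w - 1:     d[y, x] = min(d[y, x], d[y-1, x+1] + 4)
    for y in range(h - 1, -1, -1):                       # backward pass
        for x in range(w - 1, -1, -1):
            if x < w - 1:         d[y, x] = min(d[y, x], d[y, x+1] + 3)
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y+1, x] + 3)
                if x < w - 1:     d[y, x] = min(d[y, x], d[y+1, x+1] + 4)
                if x > 0:         d[y, x] = min(d[y, x], d[y+1, x-1] + 4)
    return d

def chamfer_score(dist, model_pts, angle, tx, ty):
    """Mean chamfer distance of the model edge points placed at (angle, tx, ty)."""
    h, w = dist.shape
    c, s = np.cos(angle), np.sin(angle)
    total = 0
    for x, y in model_pts:
        u = int(round(c*x - s*y + tx))
        v = int(round(s*x + c*y + ty))
        total += dist[v, u] if (0 <= u < w and 0 <= v < h) else 1000
    return total / len(model_pts)

if __name__ == "__main__":
    scene = np.zeros((40, 40), dtype=bool)
    scene[10, 10:20] = True                              # an "L"-shaped edge set
    scene[10:20, 10] = True
    model = [(x, 0) for x in range(10)] + [(0, y) for y in range(10)]
    dist = chamfer_distance_transform(scene)
    best = min((chamfer_score(dist, model, a, tx, ty), a, tx, ty)
               for a in (0.0, np.pi/2)
               for tx in range(8, 13) for ty in range(8, 13))
    print("best (score, angle, tx, ty):", best)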
The FAMA (Fine granularity Advanced Multiprocessor Architecture), currently being developed in the Department of Electrical Engineering and Computer Science of the University of Zaragoza, is an SIMD array architecture optimized for computer-vision applications. Because of its high cost-effectiveness, it is a very interesting alternative for industrial systems. Papers describing the processor element of FAMA have been submitted to several conferences; this paper focuses on the remaining components that complete the architecture: the controller, the I/O interface, and the software. The controller generates instructions at a 10 MHz rate and allows efficient access to two-dimensional data structures, and the I/O interface is capable of reordering information for efficient I/O operations. Development tools and modules for classical computer-vision tasks are being developed in a first stage; the implementation of models based on existing theories of human vision will follow.
We propose a new contour representation and object recognition system for visual inspection. The extracted contour is represented in two internal data structures: an edge pointer map, which is a labeled edge image, and an edge feature list, which contains various characteristics of the labeled edges pointed to by the edge map. This contour representation makes the matching process fast, because the internal database suggests the next position to search and the edge list points to the next entry to check.
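A toy version of the two data structures, with invented field names: a labeled edge image serves as the edge pointer map, and a per-label dictionary of features, cross-linked by proximity, plays the role of the edge feature list that tells the matcher where to look next. The labeling, features, and linking rule are assumptions for illustration, not the paper's design.

import numpy as np

def build_edge_structures(edge_img):
    """Label 8-connected edge pixels and build a feature list per edge.

    Returns (edge_map, feature_list): edge_map[y, x] holds the edge label
    (0 = no edge); feature_list[label] records length, centroid, bounding box,
    and the labels of nearby edges to try next during matching.
    """
    h, w = edge_img.shape
    edge_map = np.zeros((h, w), dtype=int)
    label = 0
    for y in range(h):
        for x in range(w):
            if edge_img[y, x] and edge_map[y, x] == 0:
                label += 1
                stack = [(y, x)]                          # flood-fill one edge chain
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and edge_img[cy, cx] and edge_map[cy, cx] == 0:
                        edge_map[cy, cx] = label
                        stack += [(cy+dy, cx+dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    feature_list = {}
    for lab in range(1, label + 1):
        ys, xs = np.nonzero(edge_map == lab)
        feature_list[lab] = {
            "length": len(ys),
            "centroid": (float(ys.mean()), float(xs.mean())),
            "bbox": (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())),
        }
    # Cross-link: each edge lists the other edges sorted by centroid distance,
    # so the matcher knows which edge to examine next.
    for lab, f in feature_list.items():
        others = [o for o in feature_list if o != lab]
        others.sort(key=lambda o: np.hypot(f["centroid"][0] - feature_list[o]["centroid"][0],
                                           f["centroid"][1] - feature_list[o]["centroid"][1]))
        f["next"] = others
    return edge_map, feature_list

if __name__ == "__main__":
    img = np.zeros((10, 10), bool)
    img[2, 2:8] = True        # one horizontal edge
    img[6:9, 5] = True        # one short vertical edge
    edge_map, feats = build_edge_structures(img)
    print(feats)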