The restricted field of view of traditional camera technology is increasingly limiting in many relevant applications such
as security, surveillance, automotive, robotics, autonomous navigation or domotics. Omnidirectional cameras with their
horizontal field of view of 360° would be ideal devices for these applications if they were small, cost-effective, robust
and lightweight. Conventional catadioptric system designs require mirror diameters and optical path lengths of several
centimeters, often leading to solutions that are too large and too heavy to be practical. We present a novel optical
design for an ultra-miniature camera that is so small and lightweight that it can be used as a key navigation aid for an
autonomous flying micro-robot. The catadioptric system consists of two components with a field stop in between: the
first subsystem consists of a reflecting mirror and two refracting lens surfaces, and the second subsystem contains the
imaging lens with two refractive surfaces. The field of view is 10° (upward) and 35° (downward). A field stop diameter of
1 mm and a back focal length of 2.3 mm have been achieved. For
low-cost mass fabrication, the lens designs are
optimised for production by injection moulding. Measurements of the first omnidirectional lens prototypes with a high-resolution
imager show a performance close to the simulated values concerning spot size and image formation. The total
weight of the optics is only 2 g including all mechanical mounts. The system's outer dimensions are 14.4 mm in height,
with an 11.4 mm × 11.4 mm footprint, including the image sensor and its casing.
Optical metrology methods are classified into three fundamental techniques: Triangulation makes use of different
positions of cameras and/or light projectors; interferometry employs standing light wave patterns; time-of-flight uses
temporal light modulation. Using the unifying framework of linear shift-invariant system theory, it is shown that in all
three cases the phase delay of a harmonic function must be determined. Since the precision of such phase measurements
is photon noise limited, the distance resolution and the dynamic range are governed by the same functional relationship
for the three fundamental optical metrology methods. This equation is derived under the assumption of Gaussian noise in
the photogenerated charges in the photodetector, an assumption that holds for almost all light sources, optical
elements and photosensors. The equation for the precision of all types of optical distance measurement techniques
contains the method's experimental parameters in a single factor, from which the optimum distance range of each of the
three fundamental techniques can be deduced. For interferometry this range is 1 nm - 1 μm, for triangulation it is 1 μm -
10 m, and for time-of-flight ranging it is > 0.1 m, if visible or near-infrared light is used.
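Since all three techniques reduce to measuring the phase delay of a harmonic function under photon noise, the photon-noise scaling of the phase precision can be illustrated with a small simulation (an illustrative sketch using a generic 4-sample phase estimator and a Gaussian approximation of shot noise, not the derivation in the paper):

```python
import math, random

def estimate_phase(a0, a1, a2, a3):
    # 4-sample phase estimator for a harmonic signal (samples 90 deg apart)
    return math.atan2(a3 - a1, a0 - a2)

def phase_rms_error(n_photons, true_phase=0.5, trials=2000, seed=1):
    """Monte-Carlo estimate of the phase error when each sample carries
    photon shot noise (Gaussian approximation, valid for large counts)."""
    rng = random.Random(seed)
    sq_sum = 0.0
    for _ in range(trials):
        samples = []
        for i in range(4):
            # mean photon count per sample for a 50%-modulated signal
            mean = n_photons * (1 + 0.5 * math.cos(true_phase + i * math.pi / 2)) / 4
            samples.append(rng.gauss(mean, math.sqrt(mean)))
        err = estimate_phase(*samples) - true_phase
        sq_sum += err * err
    return math.sqrt(sq_sum / trials)
```

Increasing the photon count a hundredfold reduces the rms phase error roughly tenfold, i.e. the precision follows the expected inverse-square-root photon-noise law.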
Optical time-of-flight (TOF) distance measurements can be performed using so-called smart lock-in pixels. By sampling the optical signal 2, 4 or n times in each pixel synchronously with the modulation frequency, the phase between the emitted and reflected signal is extracted and the object's distance is determined. The high
integration-level of such lock-in pixels enables the real-time acquisition of the three-dimensional environment without using any moving mechanical components. A novel design of the 2-tap lock-in pixel in a 0.6 μm semiconductor technology is presented. The pixel was implemented on a sensor with QCIF resolution. The optimized
pixel design allows for high-speed operation of the device, resulting in a nearly-optimum demodulation performance and precise distance measurements which are almost exclusively limited by photon shot noise. In-pixel background-light suppression allows the sensor to be operated in an outdoor environment with sunlight incidence. The highly complex pixel functionality of the sensor was successfully demonstrated on the new SwissRanger SR3000 3D-TOF camera design. Distance resolutions in the millimeter range have been achieved
while the camera operates at frame rates of more than 20 Hz.
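As a concrete sketch of the n-tap sampling scheme (shown here for n = 4; a generic software illustration, not the sensor's actual circuit), phase, amplitude and offset follow directly from four samples spaced a quarter of a modulation period apart, and the phase maps linearly onto distance:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def demodulate_4tap(a0, a1, a2, a3):
    """Recover phase, amplitude and offset from four samples of a
    sinusoid taken 90 degrees apart (the 4-tap lock-in scheme)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    amplitude = 0.5 * math.hypot(a3 - a1, a0 - a2)
    offset = (a0 + a1 + a2 + a3) / 4.0
    return phase, amplitude, offset

def distance_from_phase(phase, f_mod):
    """Phase delay maps linearly onto distance within the
    non-ambiguity range c / (2 * f_mod)."""
    return C * phase / (4 * math.pi * f_mod)
```

For a 20 MHz modulation frequency, a full phase turn of 2π corresponds to the non-ambiguity range of about 7.5 m.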
Semiconductor technology progresses at a relentless pace, making it possible to provide image sensors and each pixel with an increasing amount of custom analog and digital functionality. As experience with such photosensor functionality grows, an increasing variety of modular building blocks become available for smart pixels, single-chip digital cameras and functional image sensors. Examples include a non-linear pixel response circuit for high-dynamic range imaging with a dynamic range exceeding 180 dB, low-noise amplifiers and avalanche-effect pixels for high-sensitivity detection performance approaching single-photoelectron resolution, lock-in pixels for optical time-of-flight range cameras with sub-centimeter distance resolution and in-pixel demodulation circuits for optical coherence tomography imaging. The future is seen in system-on-a-chip machine vision cameras (“seeing chips”), post-processing with non-silicon materials for the extension of the detection range to the X-ray, ultraviolet and infrared spectrum, the use of organic semiconductors for low-cost large-area photonic microsystems, as well as imaging of fields other than electromagnetic radiation.
A new miniaturised 256 pixel silicon line sensor, which allows for
the acquisition of depth-resolved images in real-time, is
presented. It reliably and simultaneously delivers intensity data
as well as distance information on the objects in the scene. The
depth measurement is based on the time-of-flight (TOF) principle.
The device allows the simultaneous measurement of the phase,
offset and amplitude of a radio frequency modulated light field
that is emitted by the system and reflected back by the camera
surroundings, without requiring any mechanical scanning parts. The
3D line sensor will be used on a mobile robot platform to
substitute the laser range scanners traditionally used for
navigation in dynamic and/or unknown environments.
A new pixel structure for the demodulation of intensity modulated
light waves is presented. The integration of such pixels in line
and area array sensors finds application in time-of-flight
three-dimensional imaging. In 3D range imaging an illumination
module sends a modulated optical signal to a target, where it is
reflected back to the sensor. The phase shift of the reflected
signal compared to the emitted signal is proportional to the
distance to one point of the target. The detection and
demodulation of the signal is performed by a new pixel structure
named drift field pixel. The sampling process is based on the fast
separation of photogenerated charge due to lateral electrical
fields below a high-resistive transparent poly-Si photogate. The
dominant charge transfer phenomenon of drift, instead of diffusion
as in conventional CCD pixels, allows much higher modulation
frequencies of up to 1 GHz and a much higher ultimate distance
accuracy as a consequence. First measurements performed with a
prototype pixel array of 3x3 pixels in a 0.8 micron technology
confirm the suitability of the pixels for applications in the
field of 3D imaging. Depth accuracies in the sub-centimeter range
have already been achieved.
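The advantage of drift over diffusion can be made plausible with a back-of-the-envelope transit-time comparison (generic room-temperature silicon values assumed for illustration, not measured parameters of the drift field pixel):

```python
# Order-of-magnitude comparison of charge transport by drift and by
# diffusion over one gate length.  All numbers are generic silicon
# room-temperature values assumed for illustration.
L_GATE = 10e-6   # transport length [m]
MU_E   = 0.135   # electron mobility in Si [m^2/(V s)]
E_LAT  = 1e5     # assumed lateral drift field [V/m] (1 kV/cm)
D_E    = 0.0035  # electron diffusivity [m^2/s] (Einstein relation, 300 K)

t_drift = L_GATE / (MU_E * E_LAT)   # transit time under a constant field
t_diff  = L_GATE ** 2 / (2 * D_E)   # characteristic 1-D diffusion time
speedup = t_diff / t_drift          # why drift permits higher modulation frequencies
```

With these assumed numbers the drift transit is roughly an order of magnitude faster than diffusion over the same distance, and the gap widens further at higher fields, which is the qualitative reason drift-based pixels can follow much higher modulation frequencies.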
Optical Coherence Tomography (OCT) is an optical imaging technique allowing the acquisition of three-dimensional images with micrometer resolution. It is very well suited to cross-sectional imaging of highly scattering materials, such as most biomedical tissues. A novel custom image sensor based on smart pixels dedicated to parallel OCT (pOCT) is presented. Massively parallel detection and signal processing enables a significant increase in the 3D frame rate and a reduction of the mechanical complexity of the complete setup compared to conventional point-scanning OCT. This renders the parallel OCT technique particularly advantageous for high-speed applications in industrial and biomedical domains while also reducing overall system costs. The sensor architecture presented in this article overcomes the main challenges for OCT using parallel detection such as data rate, power consumption, circuit size, and optical sensitivity. Each pixel of the pOCT sensor contains a low-power signal demodulation circuit allowing the simultaneous detection of the envelope and the phase information of the optical interferometry signal. An automatic photocurrent offset-compensation circuit, a synchronous sampling stage, programmable time averaging, and random pixel accessing are also incorporated at the pixel level. The low-power demodulation principle chosen as well as alternative implementations are discussed. The characterization results of the sensor exhibit a sensitivity of at least 74 dB, which is within 4 dB of the theoretical limit of a shot-noise limited OCT system. Real-time high-resolution three-dimensional tomographic imaging is demonstrated along with corresponding performance measurements.
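The in-pixel envelope and phase detection is, at its core, a lock-in (quadrature) demodulation of the interference signal. A minimal software analogue, assuming a sampled narrow-band signal and an integer number of carrier cycles in the window, might look like:

```python
import math

def demodulate(signal, f_carrier, f_sample):
    """Recover envelope and phase of a narrow-band interference signal
    by mixing with quadrature carriers and averaging - the same lock-in
    idea as the in-pixel circuit, here as a plain software sketch."""
    n = len(signal)
    i_sum = q_sum = 0.0
    for k, s in enumerate(signal):
        w = 2 * math.pi * f_carrier * k / f_sample
        i_sum += s * math.cos(w)   # in-phase mixing
        q_sum += s * math.sin(w)   # quadrature mixing
    i, q = 2 * i_sum / n, 2 * q_sum / n
    return math.hypot(i, q), math.atan2(-q, i)  # (envelope, phase)
```

Any DC offset (the photocurrent offset compensated on-chip) averages out over full carrier cycles, so only the modulated interference term survives.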
The relentless progress of semiconductor technology makes it possible to provide image sensors and pixels with additional analog and digital functionality. Growing experience with such photosensor functionality leads to the development of modular building blocks that can be employed for smart pixels, single-chip digital cameras and
functional image sensors. Examples given include a non-linear pixel response circuit for high-dynamic range imaging offering a dynamic range of more than 180 dB, low-noise amplifiers and avalanche-effect pixels for high-sensitivity detection performance that approaches single-photoelectron resolution, lock-in pixels for optical time-of-flight range cameras with sub-centimeter distance resolution and in-pixel demodulation circuits for optical coherence tomography
imaging. The future is seen in even higher levels of integration, such as system-on-a-chip machine vision cameras (“seeing chips”), post-processing with non-silicon materials for the extension of the detection range to the X-ray, ultraviolet and infrared spectrum, the exploitation of all properties of the incident light and imaging of fields other than electromagnetic radiation.
A novel concept for video-rate parallel acquisition of optical coherence tomography imaging is presented based on in-pixel demodulation. The main restrictions for parallel detection such as data rate, power consumption, circuit size and poor sensitivity are overcome with a smart pixel architecture incorporating an offset compensation circuit, a synchronous sampling stage, programmable time averaging and random pixel accessing, allowing envelope and phase detection in large 1D and 2D arrays.
CMOS image sensors offer several advantages over the standard and ubiquitous charge-coupled devices in terms of power consumption, miniaturization, and on-chip integration of analog-to-digital converters and signal processing for dedicated functionality. Due to the typically higher readout noise of CMOS cameras compared to CCD cameras, applications demanding ultimate sensitivity have so far not been accessible to CMOS cameras. This paper presents an analysis of the major noise sources, concepts to reduce them, and results obtained on a single-chip digital camera with a QCIF resolution of 144 by 176 pixels and a dynamic range in excess of 120 dB.
An active pixel sensor (APS) array with programmable resolution was realized in a standard 0.5 μm CMOS technology. For operation under poor lighting conditions, the charge of sub-regions of 2 by 2 or 4 by 4 pixels can be summed, yielding a corresponding sensitivity enhancement. In this way the maximum resolution of 1024 by 1024 can be reduced to 512 by 512 or 256 by 256. Based on a charge-skimming mechanism, the required circuitry can be implemented in any logic CMOS technology without process modifications. Output through 1, 2 or 4 analog channels clocked at pixel rates of up to 40 MHz each allows a frame rate of up to 160 frames/sec at an overall power dissipation of 70 mW.
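The resolution-for-sensitivity trade can be sketched in software as a k × k summation over sub-regions (the chip performs this in the charge domain via charge skimming; the code below is only a functional analogue):

```python
def bin_pixels(frame, k):
    """Sum k x k sub-regions of a frame, trading spatial resolution for
    sensitivity - a software analogue of on-chip charge binning.
    Assumes the frame dimensions are multiples of k."""
    h, w = len(frame), len(frame[0])
    return [[sum(frame[y + dy][x + dx]
                 for dy in range(k) for dx in range(k))
             for x in range(0, w, k)]
            for y in range(0, h, k)]
```

Binning 2 by 2 quarters the pixel count (1024 by 1024 becomes 512 by 512) while each output value collects four times the photocharge.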
This article presents the design and realization of a CMOS digital image sensor optimized for button-battery-powered applications. First, a pixel with a local analog memory was designed, allowing efficient global-shutter operation of the sensor. The exposure time becomes independent of the readout speed, and a lower readout frequency can be used without causing image distortion. Second, a multi-path readout architecture was developed, allowing efficient use of the power consumption in sub-sampling modes. These techniques were integrated in a 0.5 μm CMOS digital image sensor with a resolution of 648 by 648 pixels. The peak supply current is 7 mA for a readout frequency of 4 Mpixel/s at Vdd = 3 V. The die size is 55 mm2 and the overall SNR is 55 dB. The global-shutter performance was demonstrated by acquiring pictures of fast-moving objects without observing any distortion, even at a low readout frequency of 4 MHz.
A new generation of smart pixels, so-called demodulation or lock-in pixels, is introduced in this paper. These devices are capable of measuring the phase, amplitude and offset of modulated light up to some tens of MHz, making them ideally suited as receivers in 3D time-of-flight (TOF) distance measurement systems. Different architectures of such devices are presented and their specific advantages and disadvantages are discussed. Furthermore, a simple model is introduced giving the shot-noise-limited range resolution of a range camera working with these demodulation pixels. Finally, a complete TOF range camera based on an array of one of the new lock-in pixels is described. This TOF camera uses only standard components and does not need any mechanically scanning parts. With this camera, non-cooperative targets can be measured with a few centimeters resolution over a distance of up to 20 meters.
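A commonly quoted form of such a shot-noise model (the prefactor and the exact electron counts depend on the demodulation scheme, so the constants below should be treated as assumptions) relates the range resolution to the non-ambiguity range and the detected electrons:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def range_resolution(f_mod, n_signal, n_total):
    """Shot-noise-limited range resolution of a lock-in TOF camera.

    One commonly quoted form:
        Delta_L = (L_max / sqrt(8)) * sqrt(N_total) / N_signal,
    with L_max = c / (2 * f_mod) the non-ambiguity range, N_signal the
    demodulated signal electrons and N_total the total detected electrons
    (signal plus background).  The prefactor is scheme-dependent and is
    an assumption here."""
    l_max = C / (2 * f_mod)
    return (l_max / math.sqrt(8)) * math.sqrt(n_total) / n_signal
```

With 20 MHz modulation and on the order of 10^5 signal electrons per pixel, this model lands in the centimeter regime, consistent with the resolution quoted above.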
Novel optical and optoelectronic technologies make it possible to provide real-time, highly precise metrological tools for microsystems fabrication. Custom light sources with unprecedented efficiency, polymer replication of micro-optical components, optical monoblocks of glass or polymers comprising many optical functions, assembly techniques adopted from microelectronics and smart CMOS photosensors are the basis of this development. It is illustrated with three practical examples: (1) An absolute, high-precision, low-cost optical encoder, (2) low-noise, low-power and high-speed minicameras, and (3) real-time range imaging with micron resolution based on low-coherence optical tomography.
A complete range camera system, working with the time-of-flight principle, is introduced. This ranging system uses modulated LEDs as active illumination source and a new lock-in CCD sensor as demodulator and detector. It requires no mechanically scanning parts because every pixel of the sensor contains a lock-in amplifier, enabling both intensity and range measurement for all pixels in parallel. Two such lock-in imagers are realized in 2.0 micrometer CMOS/CCD technology, (1) a line sensor with 108 pixels and an optical fill factor of 100% and (2) a 64 X 25 pixel image sensor with 20% fill factor. The basics of time-of-flight ranging are introduced with a detailed description of the elements necessary. Shot noise limitation to ranging resolution is deduced and confirmed by simulation. An optical power budget is offered giving the relation between the number of photons in a pixel depending on the light source, the observed object, and several camera parameters. With the described lab setup, non-cooperative targets can be measured over a distance of several meters with a resolution of some centimeters.
Avalanche photodiode (APD) imaging arrays offering programmable gain are a long-awaited achievement in electronic imaging. In view of the recent boom in CMOS imaging, a logical next step for increasing responsivity was to integrate APDs in CMOS. Once the feasibility of these diodes had been proven, we could combine the devices with control and readout circuitry, thus creating an integrated 2D APD array. Such arrays exploit the sub-Geiger mode, where the applied voltage is just slightly less than the breakdown voltage. The diodes used in the 2D array were implemented in a standard 2 micrometer BiCMOS process. To keep the readout circuitry simple, a small transimpedance amplifier was designed, taking into account the significant trade-off between noise performance and silicon area. As with other CMOS imagers, we use a random-access active pixel sensor readout. The complete imaging array consists of 12 by 24 pixels, each of size 71.5 micrometers by 154 micrometers, to fit on a 5 mm2 chip. First images prove the feasibility of avalanche photodiode imaging using standard BiCMOS technology, and important data for improving sensor operation have been collected. The complexity of the imager design is increased by special noise and high-voltage requirements. Area and calibration restrictions must also be considered for this photosensor array.
As a step towards a complete CMOS avalanche photodiode imager, various avalanche photodiodes have been integrated in a commercially available CMOS process. In this paper, design considerations are discussed and experimental results are compared for a wide variety of diodes. The largest restriction is that no process change is allowed. Even with this restriction, gains of more than 1000 at an incident wavelength of 637 nm, using 83 V for one diode type and 45 V for another, have been shown. Thus, the feasibility of CMOS-compatible avalanche photodiodes has been proven, allowing us to proceed towards the next step of integrating control circuits, readout circuits and avalanche photodiodes on the same chip. Further developments in this area are already in progress.
Today's semiconductor industry is based on silicon as a semiconductor with excellent mechanical, chemical and electrical properties. Additionally, silicon is very effective in converting photons in the wavelength range of 0.1 - 1150 nm into charge carrier pairs. While this photoconversion process occurs essentially noise-free, the electronic detection of the collected photocharge is effectively responsible for the photodetection noise. The limiting physical effect is Johnson (resistor) noise in the channel of the first detection transistor, which depends on the input capacitance, the temperature and the detection bandwidth. This relationship can be exploited in several ways for the realization of image sensors in CCD and CMOS technology that exhibit sub-electron detection noise, reaching the ultimate physical limit of single-photon detection. Additionally, a physical effect can be employed for the amplification of charge signals before the actual electronic detection process: avalanche multiplication. Many of the described low-noise image sensors can be implemented in standard CCD or CMOS fabrication processes, opening up exciting prospects for affordable optical microsystems performing at the physical photodetection limits.
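A closely related back-of-the-envelope figure, the reset (kTC) noise of a capacitive sense node, illustrates why a small input capacitance is decisive for low detection noise (an illustration of the capacitance dependence, not the Johnson-noise derivation referred to above):

```python
import math

K_B = 1.380649e-23     # Boltzmann constant [J/K]
Q_E = 1.602176634e-19  # elementary charge [C]

def ktc_noise_electrons(capacitance, temperature=300.0):
    """rms reset ('kTC') charge noise of a capacitive sense node,
    expressed in electrons: q_n = sqrt(k*T*C) / q.  A smaller node
    capacitance means fewer noise electrons, which is why tiny sense
    nodes (plus avalanche gain) push image sensors towards the
    single-photon limit."""
    return math.sqrt(K_B * temperature * capacitance) / Q_E
```

A 10 fF node at room temperature corresponds to roughly 40 noise electrons, while a 1 fF node is already close to 10, a factor of sqrt(10) improvement.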
Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without substantially compromising the fill factor. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components.
It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, reaching from optical metrology to machine vision on the factory floor and in robotics applications.
The operation of optical position encoders relies on careful mechanical alignment of the detection system relative to a scale. We present a novel optical position encoder based on a glass scale, a dedicated photodetector array and a micro-optical imaging system. The complete detection unit is small enough to fit into the housing of the detector heads of standard Moiré-based incremental position encoders. Operating with a scale of 20 micrometer period, our working demonstrator achieves a resolution of up to 10 nm while offering a tolerance of plus or minus 80 micrometer in the distance from scale to detection system and a high angular mounting tolerance. The interpolation error was experimentally determined to be below 0.1 micrometer for an angular misalignment of plus or minus 12 mrad. The position encoder system is equally well suited for the setup of high-resolution linear and rotary incremental encoders which are employed in precision machine tools.
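Incremental encoders of this kind typically interpolate within one scale period from a quadrature (sine/cosine) signal pair. A minimal sketch of that interpolation step, assuming ideal offset-free quadrature signals and the 20 micrometer scale period mentioned above:

```python
import math

def interpolate_position(sin_sig, cos_sig, period=20e-6):
    """Fine position within one scale period from a quadrature signal
    pair: the phase of the (sin, cos) vector maps linearly onto
    position.  Assumes ideal quadrature signals; the encoder's actual
    detector-array layout is not modeled here."""
    phase = math.atan2(sin_sig, cos_sig) % (2 * math.pi)
    return phase / (2 * math.pi) * period
```

With a 20 micrometer period, resolving 10 nm corresponds to splitting one period into 2000 phase steps, which is why offset, gain and phase errors of the quadrature signals dominate the interpolation error budget.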
A novel CMOS active pixel sensor structure has been designed, fabricated and characterized. It greatly increases the working range for all imaging applications in which the optical signal information to be detected is superimposed on a large DC offset signal. This is achieved by subtracting an offset current at each pixel's photosite. The offset current can be programmed individually by an external programming voltage. Experimental results from a single pixel test-cell fabricated on a standard 2 μm CMOS process show a programmable offset signal range of > 150 dB with a dynamic range of > 60 dB of the photodetector itself. To achieve a similar performance using conventional imaging techniques would require an imager with a dynamic range of > 150 dB.
Modern semiconductor technologies make it possible to fabricate photosensors whose geometry and functionality are adapted to specific sensing tasks of various optical measurement techniques. Several such smart sensors, realized with commercially available
CMOS/CCD processes, have been demonstrated successfully in different optical sensing applications: (1) a novel absolute position encoder with 10-nm resolution and 50-nm differential accuracy, (2) a dynamic CCD image sensor whose pixel size and shape can be varied in real time, (3) an image sensor capable of carrying out convolutions with arbitrary kernel sizes on-chip and during the exposure, and (4) a 2-D, synchronous detector/demodulator ("lock-in CCD") for applications in heterodyne interferometry and time-of-flight (laser-radar) depth imaging. Several of these devices and techniques rely on dynamic, real-time clocking schemes, for which a universal smart sensor controller was developed, capable of controlling and driving almost any existing CCD/CMOS image sensor at clock rates of up to 25 MHz. The availability of design and fabrication facilities for custom photosensors and imagers, produced quickly and quite inexpensively even in prototype quantities, opens up new possibilities for a wide range of traditional and novel optical measurement techniques.
Two types of image sensors are described that exploit the CCD's capability of spatially moving photocharge simultaneously with the exposure to a scene. The lock-in CCD is a two-dimensional array of pixels, each of which is a synchronous detector for oscillating optical wave fields with a spatially varying distribution of phase, amplitude and background offset. Main applications of this novel CCD type are in time-of-flight and heterodyne interferometric range imaging without moving parts. The convolver CCD consists of a two-dimensional array of pixels, connected with their nearest neighbors through short CCD lines, with which photocharge can be transferred vertically and horizontally during the exposure. By suitably timing these charge shifts, freely programmable convolutions with kernels of arbitrary size become possible. Tap weight accuracies of typically 2% of the largest tap value have been obtained for a variety of linear filters that are commonly used in machine vision. An integral part of these CCDs is a programmable, microcontroller-based driver system, capable of generating dynamic pulse sequences and driving virtually any image sensor available commercially or custom designed for special applications.
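The convolver CCD's shift-during-exposure principle can be mimicked in software: each kernel tap corresponds to one whole-image shift followed by a weighted accumulation (a functional sketch in the cross-correlation orientation; flip the kernel for strict convolution):

```python
def shift_accumulate_convolve(image, kernel):
    """Filtering implemented as repeated whole-image shifts plus
    weighted accumulation - the software analogue of the convolver CCD,
    where each tap corresponds to one charge shift during exposure.
    Uses zero padding at the borders; cross-correlation orientation."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0.0] * w for _ in range(h)]
    for dy in range(kh):
        for dx in range(kw):
            weight = kernel[dy][dx]
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy - kh // 2, x + dx - kw // 2
                    if 0 <= sy < h and 0 <= sx < w:
                        out[y][x] += weight * image[sy][sx]
    return out
```

On the chip the accumulation happens in the charge domain during exposure, so the filtering costs no extra readout time; the tap weights are set by the relative dwell times of the charge packets.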
This paper describes the requirements, design, and results of a modular data acquisition system with a resolution of 12 bits at up to 20 MHz sampling frequency. The modularity enables the analog-digital conversion to be separated from the digital processing/storage. This allows the latest, best performing ADC (analog-digital converter) to be easily integrated into the system by a re-design of only the AD board with the rest of the system unchanged. The converter employed operates at frequencies up to 20 MHz. The complete system produces measured quantization noise figures of -75 dB and integral non-linearity of -72 dB. The unit can sample video or non-video waveforms. For video applications, an active clamping system is used to ensure that the black level is accurately maintained. The framestore is connected externally using a high-speed digital data bus. This facilitates the inclusion of real-time digital processing units. The framestore used is doubly buffered to permit simultaneous acquisition and readout. The store is 8 Mbytes to accommodate HDTV images and has an input data rate of 40 Mbytes per second.
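The reported quantization-noise figure can be sanity-checked against the ideal n-bit converter limit for a full-scale sine input, using the standard 6.02 n + 1.76 dB rule of thumb:

```python
def ideal_quantization_snr_db(bits):
    """Ideal quantization-limited SNR of an n-bit ADC for a full-scale
    sine input: 6.02 * n + 1.76 dB (standard rule of thumb, uniform
    quantization noise assumed)."""
    return 6.02 * bits + 1.76
```

A 12-bit converter therefore tops out near 74 dB, so the measured -75 dB quantization-noise figure sits right at the theoretical limit of the resolution used.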
The realization of an integrated, flexible, and robust CIM vision system, suitable for performing quality-assurance surface inspections is discussed. The optimized combination of advanced optics, optomechanics, and flexible image sensor realizes a high 'virtual resolution' without penalizing the pixel transfer rate. High computation rates are obtained by complementing the fractal inspection algorithm with a dynamic hologram, a modular data flow processor, and the system computer. The integrated vision system is validated for the surface quality inspection of concrete tiles in an industrial environment. The overall system performance is discussed in detail and the potential of the system for other application fields will be addressed.
In the course of ESPRIT II project No. 2103 (MASCOT) a high performance color CCD camera was developed. It is based on a 1K X 1K frame-transfer CCD imager whose pixels are covered with an optimized dielectric filter stripe pattern. A microscanning optical unit is employed to displace the image, with a reproducibility of 1/200th of the pixel period, for programmable color image acquisition with a maximum resolution of 3K X 3K color (RGB, XYZ, etc.) pixels. The CCD's output is immediately digitized to 10 bits using an in-house developed ADC subsystem whose performance of 67 dB S/N at 20 MHz is ideal for this application. The data is stored in one of three fast framestores. The raw data is read out simultaneously from these three framestores at a data rate of 30 MBytes per second and processed, fully digitally, in a special color processor. After non-linear transformations to compensate for detector non-linearities, color matrixing is carried out using one set of 16 matrix parameters which have been optimized for different illumination conditions and color temperatures. They also enable the selection of the type of output data to be generated e.g., RGB for specific phosphors, CIE XYZ tristimulus values, etc. After matrixing, a non-linear table-lookup can be used to introduce gamma correction or other calibration functions. The color processor produces 8-bit color pixels at a rate of 20 MBytes per second, writing these data directly into an 8 MBytes commercial framestore plugged into a PC/AT.
A novel active vision system for CIM production and inspection applications has been developed in the framework of ESPRIT II project No. 5194 (CIVIS). The system consists of a unique, integrated combination of novel components: camera head, data acquisition electronics, a custom digital image processor, control hardware and a commercial framestore, all under the direction of control and processing software on a PC-486 platform. The camera head incorporates a fast zoom lens in combination with a pan/tilt mirror system, allowing region-of-interest acquisition. The special 256 X 256 MOS image sensor offers programmable resolution and random pixel access. The unique combination of optics, optomechanics and versatile image sensor has a high `virtual resolution,' corresponding to more than 1k X 1k pixels but without the overhead of a high pixel transfer rate. The fast computation of the algorithm employed for the fractal inspection of surfaces is realized with an unusual combination of an electrically switchable hologram (for performing all linear operations at the speed of light in the optical domain), a module-based digital processor and the host computer. In this way, active vision for the inspection of concrete tile surfaces has been implemented by acquiring only relevant image data and elegantly processing them in the most appropriate domain.
As part of the ESPRIT II project No. 2103 (MASCOT) a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k X 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow, in principle, perfect colorimetric reproduction with imperceptible color noise, with special attention paid to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in a redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems feasible, implying that such an optimized color camera could reproduce colors so accurately that they cannot be distinguished from the original colors in a scene, even in direct comparison.
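The color matrixing step itself is a plain 3 × 3 linear transform per pixel; a minimal sketch (the coefficients in a real camera come from the optimized matrix sets described above, not from the identity placeholder used in the example):

```python
def apply_color_matrix(rgb, matrix):
    """Per-pixel linear color correction: each output channel is a
    weighted sum of the input channels (9 multiply-adds).  In the
    camera this sits between a detector-linearization step and a
    non-linear gamma lookup."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in matrix)
```

Selecting one of the sixteen stored matrices amounts to swapping these nine coefficients per illuminant and output color space.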
Photosensitive elements with well-chosen geometry, combined with suitable analog and digital circuitry on the same CMOS/CCD chip, lead to 'smart image sensors' with interesting capabilities and properties. All our smart sensors were fabricated with commercially available multi-project wafer services in standard CMOS processes, one of them with a buried-channel CCD option. Measurements of the optoelectronic properties of standard CMOS/CCD processes (wavelength-dependent quantum efficiency, lateral homogeneity of quantum efficiency/photoconductivity, CCD charge transport efficiency, etc.) show excellent performance. The smartness that lies in the geometry is illustrated with a single-chip motion detector, a 3-D depth video camera, a single-chip planar distance sensor, and a sine/cosine (Fourier) transform sensor for fast optical phase measurements. The concept of problem-adapted geometry is also shown with a dynamic frame-transfer CCD whose pixel size and shape can be changed electrically in real time through charge-binning. Based on the wavelength-dependent absorption of silicon, all-solid-state color pixels are demonstrated by properly arranging the available pn-junctions in the third (bulk) dimension. Moderate color measurement performance is achieved using an unmodified CMOS/CCD process, with a CIE general color-rendering index of Ra = 69.5.
A new approach to the robust recognition of objects is presented. The fundamental picture primitives employed are local orientations, rather than the more traditionally used edge positions. A simple technique of feature-matching is used, based on the accumulation of evidence in binary channels (similar to the Hough transform) followed by a weighted non-linear sum of the evidence accumulators (matched filters, similar to those used in neural networks). By layering this simple feature-matcher, a hierarchical scheme is produced whose base is a binary representation of local orientations. The individual layers represent increasing levels of abstraction in the search for an object, so that the object can be arbitrarily complex. The universal algorithm presented can be implemented in less than 100 lines of a high-level programming language (e.g., Pascal). As evidenced by practical examples of various complexities, objects can be reliably and robustly identified in a wide variety of surroundings.
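The evidence-accumulation idea can be sketched as a generalized-Hough vote over quantized orientations. The toy Python example below is not the original implementation: the orientation image, the "L"-shaped model and the simple threshold decision are all hypothetical stand-ins for one layer of the scheme.

```python
import numpy as np

# Toy orientation image: each pixel holds a quantized local orientation
# (0..3 for four 45-degree bins), or -1 where no orientation is defined.
orient = -np.ones((8, 8), dtype=int)
orient[5, 2:5] = 0   # horizontal stroke (bin 0) of a hypothetical "L"
orient[2:5, 2] = 2   # vertical stroke (bin 2) of the same "L"

# Model: offsets (dy, dx) from a reference point, with expected orientation.
model = [((0, 0), 2), ((1, 0), 2), ((2, 0), 2),
         ((3, 0), 0), ((3, 1), 0), ((3, 2), 0)]

# Hough-style accumulation: every pixel whose orientation matches a model
# feature votes for the object reference point implied by that feature.
acc = np.zeros(orient.shape, dtype=float)
h, w = orient.shape
for (dy, dx), ob in model:
    for y, x in zip(*np.nonzero(orient == ob)):
        ry, rx = y - dy, x - dx
        if 0 <= ry < h and 0 <= rx < w:
            acc[ry, rx] += 1.0

# Non-linear decision (here a simple maximum) on the accumulated evidence.
peak = np.unravel_index(np.argmax(acc), acc.shape)
print(peak, acc[peak])
```

In the hierarchical scheme, the output of such an accumulator would itself become a binary channel for the next, more abstract layer.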
A CCD camera based optical metrology system has been developed for the accurate measurement of a railway locomotive's wheel movements with respect to the rails. The system is based on the light-sectioning method implemented with four laser diodes projecting light sheets onto the wheel and rail. A high-resolution CCD camera views the four profiles simultaneously using an appropriately folded and combined beam-path. To minimize the effects of ambient light, a special narrow-band dielectric filter was designed, manufactured and fitted in front of the camera lens. The desired measurement accuracy requires pixel-synchronous acquisition of the CCD video data. This is realized with a custom-built universal CCD data acquisition system with which profile tracking, data compression and storage at 12.5 Hz (half frame-rate) is made possible. A prototype system was built and tested on railway tracks at up to 140 km/h. In laboratory experiments the system surpassed the required measurement accuracies about fivefold, attaining an accuracy of 0.02 mm in relative position and better than 0.1 mrad in relative angle.
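The light-sectioning principle converts a lateral shift of the imaged laser line into a surface height by triangulation. A minimal sketch, with entirely hypothetical numbers for the triangulation angle, image scale and measured line shift (the real system's geometry and calibration are not given in the abstract):

```python
import math

# Hypothetical light-sectioning geometry: a laser sheet strikes the
# surface at angle theta to the camera's viewing direction.  A height
# change shifts the imaged laser line sideways; converting the (sub-pixel)
# shift back to millimetres yields the profile height.
theta_deg = 30.0      # triangulation angle between light sheet and view axis
mm_per_pixel = 0.05   # lateral image scale at the object
shift_px = 12.4       # measured sub-pixel shift of the laser line

height_mm = shift_px * mm_per_pixel / math.tan(math.radians(theta_deg))
print(round(height_mm, 3))
```

The achievable accuracy then depends directly on how precisely the laser-line position can be located in the image, which is why pixel-synchronous acquisition matters.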
The use of CCD sensors in optical metrology requires synchronous sampling of the image with a good signal-to-noise performance. A system has been developed to digitize optimally the signals from high-resolution CCD sensors. The data acquisition system is split into two parts: the first is a storage unit for the IBM PC/AT family of computers with a fast digital input/output interface, with 8-bit transmission speeds of DC–40 MHz and 16-bit operation at DC–20 MHz. The digitization of the analogue signal is performed on separate units, up to 2 m from the computer. Separating the analogue processing from the computer and using a separate power supply not only reduces the electrical noise from the digital electronics to a minimum but also allows greater flexibility in designing custom 'front ends' for a wide range of sensors.
The storage card has two 1 Mbyte banks of memory. These are normally used to provide double buffering of 1 Mpixel images, but can also be used to store 2 Mbyte images without double buffering.
Practical experience, using 8- and 10-bit video front ends, indicates that the geometrical resolution possible with modern CCD sensors is approaching 1/100 of the pixel period. The digital signal processing required for this performance does not depend on the CCD camera's PSF and is insensitive to variations of focus and orientation.
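Sub-pixel localization of this kind can be illustrated with a centre-of-gravity estimate on a sampled line profile. The Python sketch below uses synthetic data with a known peak position; the 1/100-pixel figure quoted above refers to the real system, not to this toy example.

```python
import numpy as np

# Synthetic line profile: a smooth peak whose true centre lies between
# pixel 4 and pixel 5 (at 4.3), sampled on the integer pixel grid.
true_centre = 4.3
x = np.arange(10)
profile = np.exp(-0.5 * ((x - true_centre) / 1.2) ** 2)

# Centre-of-gravity (centroid) estimate.  As long as the peak is
# symmetric and well sampled, the estimate shifts with the true position
# regardless of the exact peak shape, which is why such processing is
# largely independent of the camera's PSF and focus.
centroid = float(np.sum(x * profile) / np.sum(profile))
print(round(centroid, 3))
```

With adequate signal-to-noise ratio and pixel-synchronous sampling, the residual error of such estimators can be a small fraction of the pixel period.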