A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high-dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
We present different compact analog VLSI motion sensors that compute the 1-D velocity of optical stimuli over a large range and are suitable for integration in focal plane arrays. They have been extensively tested and optimized for robust performance under varying light conditions. Since their output signals are only weakly dependent on contrast, they directly extract optical flow data from an image. Focal plane arrays of such sensors are particularly interesting for application in single-chip systems that perform navigation tasks for moving robots or vehicles, where light weight, low power consumption, and real-time processing are crucial. Several monolithic motion-processing systems based on such velocity sensors have been built and tested. We describe here three chips, designed for the determination of the focus of expansion, the estimation of the time to contact, and the detection of motion discontinuities, respectively. The first two systems have been specifically designed for vehicle navigation tasks. The choice of this application domain allows us to make a priori assumptions about the optical flow field that simplify the structure of the systems and improve their overall performance. The motion-discontinuity-detection system can be used more generally to segment images based on the velocities of their different regions with respect to the camera. It is particularly useful for background-foreground segregation in the case of ego-motion of an autonomous system in a static environment. Test results of the three systems are presented and their performance is evaluated.
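The a priori assumption exploited by the first two chips can be illustrated with a small software sketch (purely illustrative; the chips compute this with analog circuitry, and the variable names below are hypothetical): under pure translation toward a frontoparallel surface, the 1-D flow field is linear in image position, v(x) = (x − x_foe)/τ, so a linear fit to a row of velocity-sensor outputs recovers both the focus of expansion (the zero crossing of the flow) and the time to contact (the reciprocal of the slope).

```python
import numpy as np

def foe_and_ttc(x, v):
    """Recover focus of expansion and time to contact from 1-D flow.

    Assumes the linear flow model v(x) = (x - x_foe) / tau that holds
    for pure translation toward a frontoparallel surface.
    """
    slope, intercept = np.polyfit(x, v, 1)  # least-squares line fit
    x_foe = -intercept / slope              # focus of expansion: v(x_foe) = 0
    tau = 1.0 / slope                       # time to contact (same units as 1/v)
    return x_foe, tau

# Synthetic readings from a 65-element 1-D sensor row:
# FOE at x = 12, time to contact = 40 frames.
x = np.linspace(-32, 32, 65)
v = (x - 12.0) / 40.0
x_foe, tau = foe_and_ttc(x, v)
```

Because the fit uses all sensor outputs jointly, it is also robust to the weak residual contrast dependence of the individual velocity measurements.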
Conference Committee Involvement (10)
Biosensing and Nanomedicine II
12 August 2012 | San Diego, California, United States
Biosensing and Nanomedicine
21 August 2011 | San Diego, California, United States
Bioelectronics, Biomedical, and Bio-inspired Systems
19 April 2011 | Prague, Czech Republic
Biosensing III
1 August 2010 | San Diego, California, United States
Biosensing II
4 August 2009 | San Diego, California, United States
Bioengineered and Bioinspired Systems
4 May 2009 | Dresden, Germany
Biosensing
12 August 2008 | San Diego, California, United States
Bioengineered and Bioinspired Systems
2 May 2007 | Maspalomas, Gran Canaria, Spain
Bioengineered and Bioinspired Systems II
9 May 2005 | Sevilla, Spain
Bioengineered and Bioinspired Systems
19 May 2003 | Maspalomas, Gran Canaria, Canary Islands, Spain