An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design, development, and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis, considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are incorporated into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm runs as a single thread of the robot's multithreaded application; other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
The development of a vision system for an autonomous ground vehicle designed and constructed for the Intelligent Ground Vehicle Competition (IGVC) is discussed. The requirements for the vision system of the autonomous vehicle are explored via functional analysis considering the flows (materials, energies, and signals) into the vehicle and the changes required of each flow within the vehicle system. Functional analysis leads to a vision system based on a laser range finder (LIDAR) and a camera. Input from the vision system is processed via a ray-casting algorithm whereby the camera data and the LIDAR data are analyzed as a single array of points representing obstacle locations, which, for the IGVC, consist of white lines on the horizontal plane and construction markers on the vertical plane.
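The paper describes the ray-casting step at the algorithmic level only; the sketch below illustrates one common formulation, assuming the fused obstacle locations are already expressed as (x, y) points in the vehicle frame. The function names, ray count, field of view, maximum range, and corridor width are illustrative assumptions, not the authors' implementation.

```python
import math

def cast_rays(obstacles, n_rays=41, fov=math.pi, max_range=10.0, corridor=0.5):
    """For each of n_rays headings spread over `fov`, return the distance
    to the nearest obstacle point lying within `corridor` meters of the
    ray. `obstacles` holds (x, y) points in the vehicle frame
    (x forward, y left); an unobstructed ray reports `max_range`."""
    clearances = []
    for i in range(n_rays):
        theta = -fov / 2 + i * fov / (n_rays - 1)   # ray heading
        ux, uy = math.cos(theta), math.sin(theta)   # unit direction
        nearest = max_range
        for x, y in obstacles:
            along = x * ux + y * uy          # projection onto the ray
            across = abs(y * ux - x * uy)    # perpendicular offset from the ray
            if 0.0 <= along < nearest and across <= corridor:
                nearest = along
        clearances.append((theta, nearest))
    return clearances

def best_heading(clearances):
    """Candidate path: the heading with the greatest clear distance."""
    return max(clearances, key=lambda c: c[1])[0]
```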
Functional analysis also leads to a multithreaded application in which the ray-casting algorithm is a single thread of the vehicle's software, running alongside threads that control motion, provide feedback, and process the data from the camera and LIDAR.
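The paper does not state the implementation language or the interfaces between threads; the following sketch shows one way the thread layout described above might be organized, here in Python with queue hand-offs, reusing cast_rays and best_heading from the sketch above. The sensor and motor stubs are hypothetical placeholders for the real drivers.

```python
import queue
import threading
import time

obstacle_points = queue.Queue()  # sensor thread -> ray-casting thread
drive_commands = queue.Queue()   # ray-casting thread -> motion thread

def read_sensors():
    """Stub standing in for the camera/LIDAR drivers; returns fused
    (x, y) obstacle points in the vehicle frame."""
    time.sleep(0.1)
    return [(1.0, 0.2), (2.5, -0.4)]

def sensor_thread():
    while True:
        obstacle_points.put(read_sensors())

def raycast_thread():
    while True:
        points = obstacle_points.get()
        drive_commands.put(best_heading(cast_rays(points)))

def motion_thread():
    while True:
        heading = drive_commands.get()
        print(f"steering toward {heading:.2f} rad")  # stub motor command

for worker in (sensor_thread, raycast_thread, motion_thread):
    threading.Thread(target=worker, daemon=True).start()
```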
LIDAR data is collected as distances and angles from the front of the vehicle to obstacles.
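Converting those (distance, angle) returns to Cartesian points in the vehicle frame is the natural first step before merging them with the camera data; a minimal sketch, where the frame convention (x forward, y left, angle measured from the forward axis) is an assumption:

```python
import math

def scan_to_points(scan):
    """Convert LIDAR returns, given as (distance, angle) pairs with the
    angle measured from the vehicle's forward axis, into (x, y) points
    in the vehicle frame (x forward, y left)."""
    return [(r * math.cos(a), r * math.sin(a)) for r, a in scan]
```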
Camera data is processed using an adaptive threshold algorithm to identify color changes within the collected image.
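The paper does not detail its adaptive threshold; below is a minimal sketch of one common variant, a mean-based local threshold over a grayscale image, where the window size and offset are assumed tuning values rather than the authors' parameters.

```python
import numpy as np

def adaptive_threshold(gray, win=15, offset=10):
    """Mark pixels brighter than their local neighborhood mean by more
    than `offset`; `gray` is a 2-D uint8 image. Local sums are computed
    with an integral image so the cost is independent of `win`."""
    pad = win // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    # Integral image, with a leading row/column of zeros.
    ii = padded.cumsum(0).cumsum(1)
    ii = np.pad(ii, ((1, 0), (1, 0)), mode="constant")
    h, w = gray.shape
    local_sum = (ii[win:win + h, win:win + w]
                 - ii[:h, win:win + w]
                 - ii[win:win + h, :w]
                 + ii[:h, :w])
    local_mean = local_sum / (win * win)
    return gray > local_mean + offset
```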
The image is also corrected for camera-angle distortion, transformed to the global coordinate system, and processed with a least-squares method to identify the white boundary lines.
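A minimal sketch of the least-squares step, assuming the thresholded white pixels have already been distortion-corrected and mapped to ground-plane (x, y) coordinates, and using np.polyfit as the solver (an assumption; the paper does not name one):

```python
import numpy as np

def fit_boundary_line(points):
    """Least-squares fit of a line y = m*x + b to ground-plane (x, y)
    points flagged as white boundary pixels. Returns (m, b)."""
    xs, ys = np.asarray(points, dtype=float).T
    m, b = np.polyfit(xs, ys, 1)
    return m, b
```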
Our IGVC robot, MAX, is utilized as the continuous example for all methods discussed in the paper, and all testing and results provided are likewise based on MAX.