Expeditionary environments create special challenges for perception systems in autonomous ground vehicles. To address these challenges, a perception pipeline has been developed that fuses data from multiple sensors (color, thermal, LIDAR) with different sensing modalities and spatial resolutions. The paper begins with an in-depth discussion of the multi-sensor calibration procedure. It then follows the flow of data through the perception pipeline, detailing the process by which the sensor data is combined in the world model representation. Topics of interest include stereo filtering, stereo and LIDAR ground segmentation, pixel classification, 3D occupancy grid aggregation, and costmap generation.
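As a rough illustration of the last two pipeline stages named above, the following Python sketch collapses a 3D occupancy grid into a 2D costmap with obstacle inflation. The grid dimensions, height band, occupancy threshold, and inflation radius are illustrative assumptions, not parameters taken from the paper; restricting the collapse to a height band is one way to keep overhanging structure the vehicle can pass under from being marked as an obstacle.

# Hypothetical sketch: collapse a 3D log-odds occupancy grid into a 2D costmap.
# All dimensions and thresholds below are illustrative assumptions.
import numpy as np
from scipy.ndimage import maximum_filter

def occupancy_to_costmap(log_odds_grid, z_min_idx, z_max_idx,
                         occupied_thresh=0.0, inflation_cells=3):
    """Collapse a 3D log-odds grid (x, y, z) into a 2D costmap (x, y)."""
    # Consider only the height band a ground vehicle actually cares about.
    band = log_odds_grid[:, :, z_min_idx:z_max_idx]
    # A column is an obstacle if any voxel in the band is confidently occupied.
    obstacle = (band > occupied_thresh).any(axis=2).astype(np.float32)
    # Inflate obstacles so the planner keeps clearance from them.
    inflated = maximum_filter(obstacle, size=2 * inflation_cells + 1)
    # Scale to the 0-100 cost convention commonly used by grid planners.
    return (inflated * 100).astype(np.uint8)

# Example: a 200 x 200 x 40 voxel grid, initially "free" everywhere.
grid = np.full((200, 200, 40), -2.0, dtype=np.float32)
grid[100, 120, 5:12] = 3.5                               # a tall obstacle column
costmap = occupancy_to_costmap(grid, z_min_idx=2, z_max_idx=15)
print(costmap[100, 120], costmap[0, 0])                  # 100 near the obstacle, 0 elsewhere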
Resupplying forward-deployed units in rugged terrain in the presence of hostile forces creates a high threat to manned air and ground vehicles. An autonomous unmanned ground vehicle (UGV) capable of navigating stealthily at night in off-road and on-road terrain could significantly increase the safety and success rate of such resupply missions for warfighters. Passive night-time perception of terrain and obstacle features is a vital requirement for such missions. As part of the ONR 30 Autonomy Team, the Jet Propulsion Laboratory developed a passive, low-cost night-time perception system under the ONR Expeditionary Maneuver Warfare and Combating Terrorism Applied Research program. Using a stereo pair of forward-looking LWIR uncooled microbolometer cameras, the perception system generates disparity maps using a local window-based stereo correlator to achieve real-time performance while maintaining low power consumption. To overcome the lower signal-to-noise ratio and spatial resolution of LWIR thermal imaging technologies, a series of pre-filters were applied to the input images to increase the image contrast, and stereo correlator enhancements were applied to increase the disparity density. To overcome false positives generated by mixed pixels, noisy disparities from repeated textures, and uncertainty in far-range measurements, a series of consistency-based, multi-resolution, and temporal post-filters were employed to improve the fidelity of the output range measurements. The stereo processing leverages multi-core processors and runs under the Robot Operating System (ROS). The night-time passive perception system was tested and evaluated on fully autonomous testbed ground vehicles at SPAWAR Systems Center Pacific (SSC Pacific) and Marine Corps Base Camp Pendleton, California. This paper describes the challenges, techniques, and experimental results of developing a passive, low-cost perception system for night-time autonomous navigation.
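The abstract names the general recipe (contrast pre-filters, a local window-based correlator, and consistency/speckle post-filters) without giving implementations. The sketch below uses OpenCV's CLAHE, StereoBM, and filterSpeckles as stand-ins for those stages; all parameter values are guesses for illustration, not the tuned values used on the testbed vehicles, and the paper's multi-resolution and temporal post-filters are omitted for brevity.

# Illustrative stand-in for the described LWIR stereo pipeline, using OpenCV.
# Inputs are assumed to be rectified 8-bit grayscale thermal frames.
import cv2
import numpy as np

def lwir_disparity(left_8u, right_8u):
    # Pre-filter: local histogram equalization to boost low thermal contrast.
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    left_eq = clahe.apply(left_8u)
    right_eq = clahe.apply(right_8u)

    # Local window-based correlator (block matching).
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(left_eq, right_eq).astype(np.float32) / 16.0

    # Post-filter: speckle removal to suppress noisy, isolated disparities.
    disp_16 = (disp * 16).astype(np.int16)
    cv2.filterSpeckles(disp_16, newVal=0, maxSpeckleSize=200, maxDiff=32)
    return disp_16.astype(np.float32) / 16.0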
Under the Urban Environment Exploration project, the Space and Naval Warfare Systems Center Pacific (SSC-PAC) is maturing technologies and sensor payloads that enable man-portable robots to operate autonomously within the challenging conditions of urban environments. Previously, SSC-PAC has demonstrated robotic capabilities to navigate and localize without GPS and map the ground floors of buildings of various sizes.1 SSC-PAC has since extended those capabilities to localize and map multiple multi-story buildings within a specified area. To facilitate these capabilities, SSC-PAC developed technologies that enable the robot to detect stairs/stairwells, maintain localization across multiple environments (e.g., in a 3D world, on stairs, with/without GPS), visualize data in 3D, plan paths between any two points within the specified area, and avoid 3D obstacles. These technologies have been developed as independent behaviors under the Autonomous Capabilities Suite, a behavior architecture, and demonstrated at a MOUT site at Camp Pendleton. This paper describes the perceptions and behaviors used to produce these capabilities, as well as an example demonstration scenario.
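The abstract does not disclose how stair detection is performed. Purely as a hypothetical illustration, one simple cue is a run of regularly spaced height jumps in an elevation profile extracted along the robot's heading, as in the sketch below; every threshold here is an assumed value, not the project's algorithm.

# Hypothetical stair-likeness test on an elevation profile (not the paper's method).
import numpy as np

def looks_like_stairs(heights, spacing_m, min_rise=0.12, max_rise=0.25,
                      min_steps=3, spacing_tol=0.35):
    """heights: elevation samples (m) along the heading, spaced spacing_m apart."""
    rises = np.diff(heights)
    step_idx = np.where((rises > min_rise) & (rises < max_rise))[0]
    if len(step_idx) < min_steps:
        return False
    runs = np.diff(step_idx) * spacing_m            # distance between risers
    # Stairs have nearly constant tread depth; reject irregular spacing.
    return np.std(runs) / (np.mean(runs) + 1e-6) < spacing_tol

# Synthetic profile: flat approach followed by four 17 cm risers.
profile = np.concatenate([np.zeros(10),
                          np.repeat(np.arange(1, 5) * 0.17, 6)])
print(looks_like_stairs(profile, spacing_m=0.05))   # True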
Under various collaborative efforts with other government labs, private industry, and academia, SPAWAR Systems
Center Pacific (SSC Pacific) is developing and testing advanced autonomous behaviors for navigation, mapping, and
exploration in various indoor and outdoor settings. As part of the Urban Environment Exploration project, SSC
Pacific is maturing those technologies and sensor payload configurations that enable man-portable robots to
effectively operate within the challenging conditions of urban environments. For example, additional means to augment GPS are needed when operating in and around urban structures. A MOUT site at Camp Pendleton was selected as the test bed because of its variety of building characteristics, paved/unpaved roads, and rough terrain.
Metrics are collected based on the overall system's ability to explore different coverage areas, as well as the
performance of the individual component behaviors such as localization and mapping. The behaviors have been
developed to be portable and independent of one another, and have been integrated under a generic behavior
architecture called the Autonomous Capability Suite. This paper describes the tested behaviors, sensors, and
behavior architecture, the variables of the test environment, and the performance results collected so far.
The fusion of multiple behavior commands and sensor data into intelligent and cohesive robotic movement
has been the focus of robot research for many years. Sequencing low level behaviors to create high level
intelligence has also been researched extensively. Cohesive robotic movement is also dependent on other
factors, such as environment, user intent, and perception of the environment. In this paper, a method for
managing the complexity derived from the increase in sensors and perceptions is described. Our system
uses fuzzy logic and a state machine to fuse multiple behaviors into an optimal response based on the
robot's current task. The resulting fused behavior is filtered through fuzzy logic based obstacle avoidance
to create safe movement. The system also provides easy integration with any communications protocol,
plug-and-play devices, perceptions, and behaviors. Most behaviors and the obstacle avoidance parameters
are easily changed through configuration files. Combined with previous work in the areas of navigation and localization, this yields a very robust autonomy suite.
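A minimal sketch, assuming a simple weighted-command scheme, of how behavior fusion followed by fuzzy obstacle avoidance might look. The membership function, weights, and gains are illustrative assumptions rather than the system's actual rules, which the abstract does not specify.

# Hypothetical behavior fusion with a fuzzy obstacle-avoidance filter.
import math

def fuzzy_near(distance_m, near=0.5, far=2.0):
    """Membership in 'obstacle is near': 1 closer than near, 0 beyond far."""
    if distance_m <= near:
        return 1.0
    if distance_m >= far:
        return 0.0
    return (far - distance_m) / (far - near)

def fuse_behaviors(commands, weights):
    """commands: list of (speed, turn_rate) proposals from individual behaviors."""
    total = sum(weights)
    speed = sum(w * c[0] for w, c in zip(weights, commands)) / total
    turn = sum(w * c[1] for w, c in zip(weights, commands)) / total
    return speed, turn

def apply_obstacle_avoidance(speed, turn, obstacle_dist, obstacle_bearing):
    near = fuzzy_near(obstacle_dist)
    safe_speed = speed * (1.0 - 0.8 * near)                        # slow when near
    safe_turn = turn - near * math.copysign(0.6, obstacle_bearing)  # steer away
    return safe_speed, safe_turn

# Waypoint-seeking weighted heavily over a weak exploratory wander behavior.
speed, turn = fuse_behaviors([(0.8, 0.1), (0.3, -0.4)], weights=[0.9, 0.1])
print(apply_obstacle_avoidance(speed, turn, obstacle_dist=0.8, obstacle_bearing=0.3))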
Sensors commonly mounted on small unmanned ground vehicles (UGVs) include visible light and thermal cameras,
scanning LIDAR, and ranging sonar. Data from these sensors is vital to emerging autonomous robotic behaviors.
However, sensor data from any given sensor can become noisy or erroneous under a range of conditions, reducing the
reliability of autonomous operations. We seek to increase this reliability through data fusion. Data fusion includes
characterizing the strengths and weaknesses of each sensor modality and combining their data so that the fused result is more accurate than that of any single sensor. We describe data fusion efforts applied to
two autonomous behaviors: leader-follower and human presence detection. The behaviors are implemented and tested
in a variety of realistic conditions.
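As a hypothetical example of the kind of fusion described, the sketch below combines per-sensor detection confidences for human presence detection, discounting each modality by an assumed reliability (for instance, a visible-light detector at night). The fusion rule and all numbers are illustrative, not the behavior's actual implementation.

# Hypothetical confidence-weighted fusion for human presence detection.
def fuse_detections(detections):
    """detections: list of (confidence in [0,1], reliability in [0,1]) per sensor.

    Treats each sensor as independent evidence, discounted by how reliable
    that modality currently is under the prevailing conditions.
    """
    p_no_human = 1.0
    for confidence, reliability in detections:
        p_no_human *= 1.0 - confidence * reliability
    return 1.0 - p_no_human

# Visible camera is weak at night (low reliability); thermal carries the vote.
fused = fuse_detections([(0.3, 0.2),    # visible-light detector
                         (0.85, 0.9)])  # thermal detector
print(round(fused, 3))                  # ~0.78: presence likely despite weak camera cue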
Many envisioned applications of mobile robotic systems require the robot to navigate in complex urban environments. This need is particularly critical if the robot is to perform as part of a synergistic team with human forces in military operations. Historically, the development of autonomous navigation for mobile robots has targeted either outdoor or indoor scenarios, but not both, which is not how humans operate. This paper describes efforts to fuse component technologies into a complete navigation system, allowing a robot to seamlessly transition between outdoor and indoor environments. Under the Joint Robotics Program's Technology Transfer project, empirical evaluations of various localization approaches were conducted to assess their maturity levels and performance metrics in different exterior/interior settings. The methodologies compared include Markov localization, the global positioning system (GPS), Kalman filtering, and fuzzy logic. Characterization of these technologies highlighted their best features, which were then fused into an adaptive solution. The final integrated system is described, including its design, experimental results, and a formal demonstration to attendees of the Unmanned Systems Capabilities Conference II in San Diego in December 2005.
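For a concrete sense of one of the compared methodologies, here is a minimal one-dimensional Kalman filter sketch that propagates wheel odometry in the predict step and corrects with intermittent GPS fixes. The noise values and update schedule are assumptions for illustration only and do not reflect the evaluated systems.

# Minimal 1-D Kalman filter: odometry prediction corrected by sparse GPS fixes.
class Kalman1D:
    def __init__(self, x0=0.0, p0=1.0, q=0.05, r_gps=4.0):
        self.x, self.p = x0, p0      # state estimate and its variance
        self.q, self.r = q, r_gps    # process noise and GPS measurement noise

    def predict(self, odom_delta):
        self.x += odom_delta         # propagate with odometry
        self.p += self.q             # uncertainty grows between fixes

    def update(self, gps_pos):
        k = self.p / (self.p + self.r)       # Kalman gain
        self.x += k * (gps_pos - self.x)     # blend prediction and GPS fix
        self.p *= (1.0 - k)

kf = Kalman1D()
for step in range(5):
    kf.predict(odom_delta=1.0)               # wheel odometry reports +1 m per step
    if step % 2 == 0:                         # GPS only intermittently valid
        kf.update(gps_pos=float(step + 1) + 0.5)
print(round(kf.x, 2), round(kf.p, 2))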