A number of committees worldwide, and the regulatory agencies they support, are actively delivering and updating performance standards for vision systems (Enhanced, Synthetic, and Combined) as they apply to both fixed-wing and, more recently, rotorcraft operations in low visibility. We provide an overview of each committee's present and past work, as well as an update on recent activities and future goals.
We describe the SIM-100, a PC-based simulator for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing aircraft or rotorcraft, adding realistic sensor imagery to the simulator cockpit. Multiple infrared bands (short-wave, mid-wave, extended mid-wave, and long-wave) as well as active millimeter-wave radar (MMWR) systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including random and fixed-pattern sensor noise, dead pixels, blooming, and B-C scope transformation (for MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (field of view, resolution, etc.) to pilot familiarization, operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.
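To make the sensor-effects modeling concrete, the sketch below shows one common way such degradations can be applied to a clean rendered frame. It is an illustrative Python example only; the parameter values and structure are assumptions, not SIM-100 internals.

```python
# Illustrative sketch only: degrade a clean rendered frame with sensor
# effects of the kind listed above (random noise, fixed-pattern noise,
# dead pixels). All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def apply_sensor_effects(frame, read_noise_sigma=0.02,
                         fpn_sigma=0.01, dead_pixel_fraction=1e-4):
    """frame: 2-D float array in [0, 1] representing a clean rendered image."""
    rows, cols = frame.shape
    # Fixed-pattern noise: a per-pixel offset pattern that is constant from
    # frame to frame (generated once per call here for brevity).
    fpn = rng.normal(0.0, fpn_sigma, size=(rows, cols))
    # Temporal (random) noise: changes every frame.
    temporal = rng.normal(0.0, read_noise_sigma, size=(rows, cols))
    out = frame + fpn + temporal
    # Dead pixels: a sparse set of pixels stuck at zero.
    dead = rng.random((rows, cols)) < dead_pixel_fraction
    out[dead] = 0.0
    return np.clip(out, 0.0, 1.0)

clean = rng.random((480, 640))   # stand-in for a rendered IR frame
noisy = apply_sensor_effects(clean)
```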
We present an Integrated Multisensor Synthetic Imaging System (IMSIS) developed for low-visibility, low-level operations, tailored to Army rotorcraft. IMSIS makes optimal use of a variety of image-information sources: FLIR, millimeter-wave radar, and synthetic imagery are all presented to the flight crew in real time on a fused display. Synthetic imagery is validated in real time by a 3D terrain-sensing radar, to ensure that the a priori stored database is valid and to eliminate any possible aircraft positioning errors with respect to the database. Extensive human factors evaluations were performed on the fused display concept. All pilots agreed that IMSIS would be a valuable asset in reduced-visibility conditions, and they rated the validated SVS display as nearly as flyable as a good FLIR display. The pilots also indicated that the ability to select and fuse terrain information sources was an important feature.
IMSIS increases situational awareness at night and in all weather conditions, considerably reduces pilot workload compared to monitoring each sensor separately, and enhances low-level flight safety by updating the terrain in real time. While specifically designed for helicopter low-level flight and navigation, IMSIS can also aid hover, touchdown, and landing for both fixed- and rotary-wing platforms, as well as navigation in non-airborne domains.
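As a rough illustration of the fused-display idea (not the IMSIS algorithm itself), the following Python sketch blends co-registered FLIR, radar, and synthetic frames with operator-selectable weights; the weights and the assumption of pre-registered, normalized frames are hypothetical.

```python
# A minimal sketch of weighted display fusion. Assumes all three frames are
# already warped to a common geometry and normalized to [0, 1].
import numpy as np

def fuse_frames(flir, radar, synthetic, w_flir=0.5, w_radar=0.2, w_syn=0.3):
    weights = np.array([w_flir, w_radar, w_syn], dtype=float)
    weights /= weights.sum()                      # keep output in range
    fused = weights[0] * flir + weights[1] * radar + weights[2] * synthetic
    return np.clip(fused, 0.0, 1.0)
```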
Two topics are discussed in this paper. The first is the Integrated Multi-sensor Synthetic Imagery System
(IMSIS), being developed under an Army SBIR contract. The system updates on-board, pre-stored, terrain
elevation data with 3D terrain elevation sensor data (such as radar). The system also merges 2D image
contrast sensor data (such as infrared imagery) with the updated 3D terrain elevation data to render a
synthetic image of the terrain on the rotorcraft pilot's display. The second topic is the testing of a new flight path marker that shows the pilot the predicted location of the aircraft with respect to the synthetic terrain (at a distance of 100 m), as well as the predicted height above the terrain, the desired height above the terrain, and the point on the terrain the aircraft is expected to fly over. The Altitude and ground Track Predicting Flight Path
Marker (ATP-FPM) symbol takes advantage of knowledge of terrain elevations ahead of the aircraft from a
synthetic vision system, such as IMSIS. In simulation, the maximum low altitude error and maximum ground
track error were both reduced by a factor of 2 with the ATP-FPM compared to the traditional instantaneous
flight path marker. Pilot-to-pilot variations in performance were reduced and workload decreased with the
ATP-FPM.
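The look-ahead computation behind a symbol of this kind can be illustrated with a simple sketch. The following Python code is an assumption-laden approximation of the idea, not the published ATP-FPM implementation: it projects the ground track ahead by the 100 m look-ahead distance mentioned above, looks up the terrain elevation there in a synthetic-vision database, and reports the predicted height above that terrain. The function names and DEM interface are hypothetical.

```python
# Simplified look-ahead prediction for an ATP-FPM-style symbol.
import numpy as np

LOOK_AHEAD_M = 100.0

def atp_fpm_point(north, east, alt_msl, track_rad, dem_lookup):
    """Return the look-ahead ground point and predicted height above terrain.

    north, east : current horizontal position (m)
    alt_msl     : current altitude above mean sea level (m)
    track_rad   : ground track angle, radians from north
    dem_lookup  : callable (north, east) -> terrain elevation MSL (m)
    """
    n_ahead = north + LOOK_AHEAD_M * np.cos(track_rad)
    e_ahead = east + LOOK_AHEAD_M * np.sin(track_rad)
    terrain_elev = dem_lookup(n_ahead, e_ahead)
    predicted_agl = alt_msl - terrain_elev
    return (n_ahead, e_ahead), predicted_agl

# Example with a flat 300 m terrain database stand-in:
point, agl = atp_fpm_point(0.0, 0.0, 450.0, np.radians(90.0), lambda n, e: 300.0)
```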
The utility of Near-Infrared (NIR) sensors for Enhanced Vision System (EVS) applications has been identified and well documented. In particular, such sensors are well suited to detecting runway approach lighting, and often outperform the pilot's vision for this task.
We present the results of field tests of very low-cost NIR sensors, based on sensitive visible-light cameras, for this application; the cost/benefit tradeoffs of these sensors are so favorable that they may well form the core of a basic EVS system, or an effective enhancement to EVS systems based on other primary vision sensors.
We also present useful processing techniques for imagery from these sensors, whether they operate alongside other cooperative sensors or as a standalone system.
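One basic processing step for such imagery, isolating bright, point-like approach lights against the background, can be sketched as follows. The thresholds, blob sizes, and use of scipy are illustrative assumptions rather than the techniques reported in the paper.

```python
# Hedged sketch: detect bright, point-like lights by thresholding well above
# the background and keeping only small connected blobs.
import numpy as np
from scipy import ndimage

def detect_point_lights(frame, k_sigma=4.0, max_blob_px=50):
    """frame: 2-D float array. Returns (row, col) centroids of candidate lights."""
    background = np.median(frame)
    spread = frame.std()
    mask = frame > background + k_sigma * spread
    labels, n = ndimage.label(mask)
    centroids = []
    for i in range(1, n + 1):
        blob = labels == i
        if blob.sum() <= max_blob_px:          # reject large glare regions
            centroids.append(ndimage.center_of_mass(blob))
    return centroids
```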
The 1997 Final Report of the 'White House Commission on Aviation Safety and Security' challenged industrial and government concerns to reduce aviation accident rates by a factor of five within 10 years. In the report, the commission encourages NASA, FAA and others 'to expand their cooperative efforts in aviation safety research and development'. As a result of this publication, NASA has since undertaken a number of initiatives aimed at meeting the stated goal. Among these, the NASA Aviation Safety Program was initiated to encourage and assist in the development of technologies for the improvement of aviation safety. Among the technologies being considered are certain sensor technologies that may enable commercial and general aviation pilots to 'see to land' at night or in poor visibility conditions. Infrared sensors have potential applicability in this field, and this paper describes a system, based on such sensors, that is being deployed on the NASA Langley Research Center B757 ARIES research aircraft. The system includes two infrared sensors operating in different spectral bands, and a visible-band color CCD camera for documentation purposes. The sensors are mounted in an aerodynamic package in a forward position on the underside of the aircraft. Support equipment in the aircraft cabin collects and processes all relevant sensor data. Display of sensor images is achieved in real time on the aircraft's Head Up Display (HUD), or other display devices.
KEYWORDS: Video, Sensors, Image sensors, Data storage, Heads up displays, Analog electronics, Clocks, Digital recording, Digital video recorders, Calibration
We describe a sophisticated system for the collection of numerous streams of image and aircraft state data from an airborne platform. The system collects and stores 7 different sources of analog video data; 3 separate sources of digital video data at aggregate rates of up to 32 Megabytes per second, to a removable tape device of 100 Gigabytes or 50 minutes' capacity; low-bandwidth aircraft state information from inertial sources; and other ancillary data. Data from all sources are timestamped with a common time source for synchronization. The task of accurately timestamping multiple disparate data sources is a challenging one, and it is discussed in some detail. Although the technology that can be applied to this kind of effort advances and changes rapidly, certain design paradigms remain valid independent of the specific implementation hardware. General principles of design and operation, as well as system specifics, are described; it is hoped that this record will be a useful reference for future efforts of this kind.
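The common-timestamping idea can be illustrated with a minimal sketch: every sample from every stream is stamped against a single monotonic reference clock at capture time, so streams recorded at very different rates can be aligned later. The Python example below is a toy model of that principle, not the flight system's software; the stream names are hypothetical.

```python
# Toy model of common-clock timestamping for heterogeneous data streams.
import time
from collections import defaultdict

class TimestampedRecorder:
    def __init__(self):
        self.t0 = time.monotonic()           # single reference clock
        self.streams = defaultdict(list)

    def record(self, stream_name, payload):
        stamp = time.monotonic() - self.t0   # seconds since recorder start
        self.streams[stream_name].append((stamp, payload))

rec = TimestampedRecorder()
rec.record("ins_state", {"lat": 37.1, "lon": -76.3, "alt_ft": 1500})
rec.record("analog_video_1", b"<frame bytes>")
```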
The use of various imaging sensors which can penetrate obscuring visual phenomena (such as fog, snow, smoke, etc.) has been proposed to enable aircraft landings in low visibility conditions. In this paper, we examine the use of two particular sensors, infrared and millimeter wave imaging radar, to aid in the landing task. Several image processing strategies are discussed and demonstrated which could improve the efficacy of an operational concept in which an image is presented to the pilot on a Head-Up Display. The possible strategies include the use of aircraft navigation information to help improve image quality, warping of the images to be geometrically conformal with the pilot's eye-point, use of a priori knowledge about the landing geometry to aid with sensor registration and processing, and fusion of multiple image sources.
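Of the strategies listed, warping a sensor image to be conformal with the pilot's eye-point can be sketched compactly. The example below, which assumes OpenCV is available and uses placeholder point correspondences, fits a homography from four sensor-image points to their desired eye-point locations and warps the frame accordingly; it illustrates the general technique rather than the paper's specific method.

```python
# Illustrative eye-point conformal warp via a planar homography.
import numpy as np
import cv2

def warp_to_eyepoint(sensor_frame, sensor_pts, eyepoint_pts, out_size):
    """sensor_pts, eyepoint_pts: 4x2 arrays of corresponding points.
    out_size: (width, height) of the output image."""
    H = cv2.getPerspectiveTransform(np.float32(sensor_pts),
                                    np.float32(eyepoint_pts))
    return cv2.warpPerspective(sensor_frame, H, out_size)
```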