KEYWORDS: LIDAR, Doppler effect, Beam steering, Telescopes, Signal to noise ratio, Wavefront distortions, Space operations, Solar system, Signal attenuation, Scattering
Transparent electro-optic (EO) devices can potentially be used for efficient, high-speed, and low-mass non-mechanical beam steering. Transmission through EO devices will affect the wavefront quality and aberration of both the transmitted and received laser light. The consequences of wavefront distortions on coherent detection are not fully understood. Therefore, we have tested the performance of a coherent detection lidar using a liquid crystal polarization grating (LCPG) beam steering device. We will discuss the impact of LCPG beam steering devices on the performance of NASA’s Navigation Doppler Lidar, built to provide altitude and vector velocity data to aerial and space vehicles.
The operation of a coherent Doppler lidar, developed by NASA for missions to planetary bodies, is analyzed and its projected performance is described. The lidar transmits three laser beams at different but fixed directions and measures line-of-sight range and velocity along each beam. The three line-of-sight measurements are then combined in order to determine the three components of the vehicle velocity vector and its altitude relative to the ground. Operating from over five kilometers altitude, the NDL provides velocity and range data with precisions of a few cm/sec and a few meters, respectively, depending on the vehicle dynamics. This paper explains the sources of measurement error and analyzes the impacts of vehicle dynamics on the lidar performance.
This paper describes a coherent Doppler lidar developed by NASA to address a need for a high-performance, compact, and cost-effective velocity and altitude sensor onboard its landing vehicles. Future robotic and manned missions to solar system bodies require precise ground-relative velocity vector and altitude data to execute complex descent maneuvers and soft landing at a pre-designated site. This lidar sensor, referred to as Navigation Doppler Lidar (NDL), transmits three laser beams at different pointing angles toward the ground and measures range and velocity along each beam using a frequency modulated continuous wave (FMCW) technique. The three line-of-sight measurements are then combined in order to determine the three components of the vehicle velocity vector and its altitude relative to the ground with about 2 cm/sec and 2 meters precision, respectively, dominated by the vehicle motion. The NDL can also benefit terrestrial aerial vehicles that cannot rely on GPS for position and velocity data. The NDL offers a viable option for enabling aircraft operation in areas where the GPS signal can be blocked or jammed by intentional or unintentional interference. A modified version of the NDL incorporating a beam steering device can produce 3-dimensional range and Doppler images that are critical for safe and efficient operation of autonomous ground vehicles. This paper describes the design of the NDL and its capabilities as demonstrated through extensive ground tests and flight tests onboard helicopters and autonomous rocket-powered vehicles. Then, the utilization of the NDL technologies for terrestrial vehicles will be discussed.
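As a rough illustration of the FMCW ranging principle mentioned above, the sketch below converts the up- and down-ramp beat frequencies of a triangular chirp into line-of-sight range and velocity. The waveform constants (wavelength, bandwidth, chirp duration) are hypothetical placeholders, not the NDL's actual design values.

```python
import numpy as np

C = 3.0e8              # speed of light, m/s
WAVELENGTH = 1.55e-6   # assumed telecom-band laser wavelength, m
BANDWIDTH = 1.0e9      # assumed chirp bandwidth, Hz
T_CHIRP = 1.0e-3       # assumed duration of one chirp ramp, s

def range_velocity_from_beats(f_beat_up, f_beat_down):
    """Recover line-of-sight range and velocity from the up- and down-ramp
    beat frequencies of a triangular FMCW chirp. The range-induced beat is
    the average of the two measurements and the Doppler shift is half their
    difference (sign convention: positive velocity = closing)."""
    f_range = 0.5 * (f_beat_up + f_beat_down)          # Hz, range term
    f_doppler = 0.5 * (f_beat_down - f_beat_up)        # Hz, Doppler term
    rng = f_range * C * T_CHIRP / (2.0 * BANDWIDTH)    # R = f_R * c * T / (2B)
    vel = f_doppler * WAVELENGTH / 2.0                 # v = f_D * lambda / 2
    return rng, vel
```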
A coherent Doppler lidar has been developed to address NASA’s need for a high-performance, compact, and cost-effective velocity and altitude sensor onboard its landing vehicles. Future robotic and manned missions to solar system bodies require precise ground-relative velocity vector and altitude data to execute complex descent maneuvers and safe, soft landing at a pre-designated site. This lidar sensor, referred to as a Navigation Doppler Lidar (NDL), meets the required performance of the landing missions while complying with vehicle size, mass, and power constraints. Operating from up to four kilometers altitude, the NDL obtains velocity and range measurements with precisions reaching 2 cm/sec and 2 meters, respectively, dominated by the vehicle motion. Terrestrial aerial vehicles will also benefit from NDL data products as enhancement or replacement to GPS systems when GPS is unavailable or redundancy is needed. The NDL offers a viable option to aircraft navigation in areas where the GPS signal can be blocked or jammed by intentional or unintentional interference. The NDL transmits three laser beams at different pointing angles toward the ground to measure range and velocity along each beam using a frequency modulated continuous wave (FMCW) technique. The three line-of-sight measurements are then combined in order to determine the three components of the vehicle velocity vector and its altitude relative to the ground. This paper describes the performance and capabilities that the NDL demonstrated through extensive ground tests, helicopter flight tests, and onboard an autonomous rocket-powered test vehicle while operating in closed-loop with a guidance, navigation, and control (GN&C) system.
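The step of combining the three line-of-sight measurements into a velocity vector amounts to solving a small linear system, as in this minimal sketch; the beam pointing directions shown are assumed values for illustration, not the NDL's actual geometry.

```python
import numpy as np

# Hypothetical beam unit vectors (body frame, rows point toward the ground).
_raw = np.array([
    [ 0.35,  0.00, -0.94],
    [-0.17,  0.30, -0.94],
    [-0.17, -0.30, -0.94],
])
BEAM_DIRECTIONS = _raw / np.linalg.norm(_raw, axis=1, keepdims=True)

def velocity_vector_from_los(v_los):
    """Solve A v = v_los for the 3-component velocity vector, where each row
    of A is a beam unit vector and v_los holds the measured line-of-sight
    velocities (positive toward the ground)."""
    v, *_ = np.linalg.lstsq(BEAM_DIRECTIONS, np.asarray(v_los, float), rcond=None)
    return v
```

With exactly three beams the system is fully determined; a least-squares solve is used so the same sketch would also accommodate redundant beams.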
KEYWORDS: LIDAR, Sensors, Space operations, Doppler effect, Solar system, Mars, Imaging systems, 3D image processing, Navigation systems, Algorithm development
NASA has been pursuing flash lidar technology for autonomous, safe landing on solar system bodies and for automated rendezvous and docking. During the final stages of landing, from about 1 km to 500 m above the ground, the flash lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes. The onboard flight computer can then use the 3-D map of terrain to guide the vehicle to a safe location. As an automated rendezvous and docking sensor, the flash lidar can provide relative range, velocity, and bearing from an approaching spacecraft to another spacecraft or a space station. NASA Langley Research Center has developed and demonstrated a flash lidar sensor system capable of generating 16k-pixel range images with 7 cm precision, at a 20 Hz frame rate, from a maximum slant range of 1800 m from the target area. This paper describes the lidar instrument and presents the results of recent flight tests onboard a rocket-propelled free-flyer vehicle (Morpheus) built by NASA Johnson Space Center. The flights were conducted at a simulated lunar terrain site, consisting of realistic hazard features and designated landing areas, built at NASA Kennedy Space Center specifically for this demonstration test. This paper also provides an overview of the plan for continued advancement of the flash lidar technology aimed at enhancing its performance to meet both landing and automated rendezvous and docking applications.
KEYWORDS: Sensors, LIDAR, Data modeling, Detection and tracking algorithms, Statistical modeling, Navigation systems, Algorithm development, Evolutionary algorithms, Monte Carlo methods, 3D image processing
NASA’s Autonomous Landing and Hazard Avoidance Technologies (ALHAT) project is currently developing the critical technologies to safely and precisely navigate and land crew, cargo and robotic spacecraft vehicles on and around planetary bodies. One key element of this project is a high-fidelity Flash Lidar sensor that can generate three-dimensional (3-D) images of the planetary surface. These images are processed with hazard detection and avoidance and hazard relative navigation algorithms, and are subsequently used by the Guidance, Navigation and Control subsystem to generate an optimal navigation solution. A complex, high-fidelity model of the Flash Lidar was developed in order to evaluate the performance of the sensor and its interaction with the interfacing ALHAT components on vehicles with different configurations and under different flight trajectories. The model contains a parameterized, general approach to Flash Lidar detection and reflects physical attributes such as range and electronic noise sources, and laser pulse temporal and spatial profiles. It also provides the realistic interaction of the laser pulse with terrain features that include varying albedo, boulders, craters, slopes, and shadows. This paper gives a description of the Flash Lidar model and presents results from the Lidar operating under different scenarios.
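The following toy sketch captures, in highly simplified form, the kind of per-pixel physics such a model includes: a delayed return pulse whose amplitude falls with albedo and range, additive electronic noise, and a range estimate recovered from the waveform. All parameter values are illustrative assumptions, not those of the ALHAT Flash Lidar model.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def simulate_pixel_return(range_m, albedo, pulse_width_s=5e-9,
                          fs=2e9, noise_sigma=0.02, seed=0):
    """Toy single-pixel flash-lidar return: a Gaussian temporal pulse delayed
    by the round-trip time, with amplitude scaling as albedo / R^2, plus
    additive electronic noise. Range is then re-estimated from the peak."""
    gen = np.random.default_rng(seed)
    t = np.arange(0.0, 15e-6, 1.0 / fs)               # receive window, s
    t_delay = 2.0 * range_m / C                       # round-trip delay
    signal = (albedo / range_m**2) * 1e6 * np.exp(
        -0.5 * ((t - t_delay) / pulse_width_s) ** 2)
    waveform = signal + gen.normal(0.0, noise_sigma, t.size)
    est_range = 0.5 * C * t[np.argmax(waveform)]      # peak -> range estimate
    return waveform, est_range
```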
Landing mission concepts that are being developed for exploration of solar system bodies are increasingly ambitious in
their implementations and objectives. Most of these missions require accurate position and velocity data during their
descent phase in order to ensure safe, soft landing at the pre-designated sites. Data from the vehicle’s Inertial
Measurement Unit will not be sufficient due to significant drift error after extended travel time in space. Therefore, an
onboard sensor is required to provide the necessary data for landing in the GPS-deprived environment of space. For this
reason, NASA Langley Research Center has been developing an advanced Doppler lidar sensor capable of providing
accurate and reliable data suitable for operation in the highly constrained environment of space. The Doppler lidar
transmits three laser beams in different directions toward the ground. The signal from each beam provides the platform
velocity and range to the ground along the laser line-of-sight (LOS). The six LOS measurements are then combined in
order to determine the three components of the vehicle velocity vector, and to accurately measure altitude and attitude
angles relative to the local ground. These measurements are used by an autonomous Guidance, Navigation, and Control
system to accurately navigate the vehicle from a few kilometers above the ground to the designated location and to
execute a gentle touchdown. A prototype version of our lidar sensor has been completed for a closed-loop demonstration
onboard a rocket-powered terrestrial free-flyer vehicle.
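As a sketch of how the range portion of the line-of-sight measurements can yield altitude and attitude relative to the local ground, the three ground intersection points can be fit with a plane, as below. The nadir axis and sign conventions are assumptions for illustration, not the sensor's actual processing chain.

```python
import numpy as np

def altitude_and_tilt(beam_dirs, ranges):
    """Estimate altitude above the local ground plane and vehicle tilt from
    three beam unit vectors (body frame) and the measured ranges. The three
    ground intersection points define a plane; altitude is the perpendicular
    distance from the vehicle to that plane, and tilt is the angle between
    the plane normal and an assumed body-frame nadir axis."""
    pts = np.asarray(ranges, float)[:, None] * np.asarray(beam_dirs, float)
    normal = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    normal /= np.linalg.norm(normal)
    altitude = abs(np.dot(normal, pts[0]))        # distance from origin to plane
    nadir = np.array([0.0, 0.0, -1.0])            # assumed body-frame nadir
    tilt_deg = np.degrees(np.arccos(abs(np.dot(normal, nadir))))
    return altitude, tilt_deg
```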
The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high precision distance to the
ground, and approach velocity can enable safe landing of robotic and manned vehicles with a high degree of precision.
Currently, NASA is developing novel lidar sensors aimed at the needs of future planetary landing missions. These lidar sensors are a 3-Dimensional Imaging Flash Lidar, a Doppler Lidar, and a Laser Altimeter. The Flash Lidar is capable of generating elevation maps of the terrain to indicate hazardous features such as rocks, craters, and steep slopes. The elevation maps, which are collected during the approach phase of a landing vehicle from about 1 km above the ground, can be used to determine the most suitable safe landing site. The Doppler Lidar provides highly accurate ground relative velocity and distance data, thus enabling precision navigation to the landing site. Our Doppler lidar utilizes three laser beams that are pointed in different directions to measure line-of-sight velocities and ranges to the ground from altitudes of over 2 km. Starting at altitudes of about 20 km and throughout the landing trajectory, the Laser Altimeter can provide very accurate ground relative altitude measurements that are used to improve the vehicle position knowledge obtained from the vehicle's navigation system. Between altitudes of approximately 15 km and 10 km, either the Laser Altimeter or the Flash Lidar can be used to generate contour maps of the terrain, identifying known surface features such as craters to perform Terrain Relative Navigation, thus further reducing the vehicle's relative position error. This paper describes the operational capabilities of each lidar sensor and provides a status of their development.
KEYWORDS: Sensors, Velocity measurements, Doppler effect, LIDAR, Navigation systems, Signal processing, Receivers, Global Positioning System, Signal to noise ratio, Control systems
An all fiber Navigation Doppler Lidar (NDL) system is under development at NASA Langley Research Center
(LaRC) for precision descent and landing applications on planetary bodies. The sensor produces high-resolution
line-of-sight range, altitude above ground, ground relative attitude, and high-precision velocity vector measurements.
Previous helicopter flight test results demonstrated the NDL measurement concepts, including measurement
precision, accuracies, and operational range. This paper discusses the results obtained from a recent campaign to
test the improved sensor hardware, and various signal processing algorithms applicable to real-time processing. The
NDL was mounted in an instrumentation pod aboard an Erickson Air-Crane helicopter and flown over various
terrains. The sensor was one of several sensors tested in this field test by NASA's Autonomous Landing and Hazard
Avoidance Technology (ALHAT) project.
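One of the simpler signal processing steps of the kind evaluated above is estimating the Doppler shift from the periodogram peak of a block of digitized heterodyne samples and converting it to a line-of-sight velocity. The sketch below shows that idea with an assumed wavelength; it omits the averaging and peak interpolation a real-time implementation would use, and is not necessarily one of the algorithms tested in the campaign.

```python
import numpy as np

WAVELENGTH = 1.55e-6   # assumed laser wavelength, m

def los_velocity_from_samples(samples, fs):
    """Estimate the Doppler shift of a coherent-detection signal block by
    locating the periodogram peak, then convert it to a line-of-sight
    velocity via v = f_D * lambda / 2."""
    windowed = np.asarray(samples, float) * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    f_doppler = freqs[np.argmax(spectrum)]
    return 0.5 * WAVELENGTH * f_doppler
```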
A novel method for enhancement of the spatial resolution of 3-dimensional Flash LIDAR images is being proposed for
generation of elevation maps of terrain from a moving platform. NASA recognizes the Flash LIDAR technology as
an important tool for enabling safe and precision landing in future unmanned and crewed lunar and planetary
missions. The ability of the Flash LIDAR to generate 3-dimensional maps of the landing site area during the final
stages of the descent phase for detection of hazardous terrain features such as craters, rocks, and steep slopes is
under study as part of the Autonomous Landing and Hazard Avoidance (ALHAT) project. Since single frames
of existing Flash LIDAR systems are not sufficient to build a map of the entire landing site with acceptable spatial
resolution and precision, a super-resolution approach utilizing multiple frames has been developed to overcome the
instrument's limitations. Performance of the super-resolution algorithm has been analyzed through a series of
simulation runs obtained from a high fidelity Flash LIDAR model and a high resolution synthetic lunar elevation
map. For each simulation run, a sequence of Flash LIDAR frames is recorded and processed as the spacecraft
descends toward the landing site. Simulation runs having different trajectory profiles and varying LIDAR look
angles of the terrain are also analyzed. The results show that adequate levels of accuracy and precision are achieved
for detecting hazardous terrain features and identifying safe areas of the landing site.
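For readers unfamiliar with multi-frame super-resolution, the sketch below shows a generic shift-and-add scheme applied to co-registered range frames. It is only an illustration of the general idea, not the algorithm developed for ALHAT, and the sub-pixel registration shifts are assumed to be known.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale=2):
    """Generic shift-and-add super-resolution: each low-resolution frame is
    placed onto a finer grid at its known sub-pixel shift, and accumulated
    samples are averaged.

    frames : list of (H, W) range images
    shifts : list of (dy, dx) offsets in low-resolution pixel units
    scale  : upsampling factor of the high-resolution grid"""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        hi_y = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hi_x = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hi_y, hi_x), frame)
        np.add.at(hits, (hi_y, hi_x), 1.0)
    return np.where(hits > 0, acc / np.maximum(hits, 1.0), np.nan)
```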
KEYWORDS: LIDAR, Space operations, Sensors, Detection and tracking algorithms, Data modeling, Super resolution, Image processing, 3D image processing, Image resolution, Signal processing
In this paper a new image processing technique for flash LIDAR data is presented as a potential tool to enable
safe and precise spacecraft landings in future robotic or crewed lunar and planetary missions. Flash LIDARs
can generate, in real-time, range data that can be interpreted as a 3-dimensional (3-D) image and transformed
into a corresponding digital elevation map (DEM). The NASA Autonomous Landing and Hazard Avoidance
(ALHAT) project is capitalizing on this new technology by developing, testing and analyzing flash LIDARs
to detect hazardous terrain features such as craters, rocks, and slopes during the descent phase of spacecraft
landings. Using a flash LIDAR for this application looks very promising; however, through theoretical and
simulation analysis the ALHAT team has determined that a single frame, or mosaic, of flash LIDAR data may
not be sufficient to build a landing site DEM with acceptable spatial resolution, precision, size, or for a mosaic,
in time, to meet current system requirements. One way to overcome this potential limitation is by enhancing
the flash LIDAR output images. We propose a new super-resolution algorithm applicable to flash LIDAR range
data that will create a DEM with sufficient accuracy, precision and size to meet current ALHAT requirements.
The performance of our super-resolution algorithm is analyzed by processing data generated during a series of
simulation runs by a high fidelity model of a flash LIDAR imaging a high resolution synthetic lunar elevation
map. The flash LIDAR model is attached to a simulated spacecraft by a gimbal that points the LIDAR to a
target landing site. For each simulation run, a sequence of flash LIDAR frames is recorded and processed as
the spacecraft descends toward the landing site. Each run has a different trajectory profile with varying LIDAR
look angles of the terrain. We process the output LIDAR frames using our SR algorithm and the results show
that the achieved level of accuracy and precision of the SR generated landing site DEM is more than adequate
for detecting hazardous terrain features and identifying safe areas.
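Hazard detection on the resulting DEM typically reduces to thresholding local slope and roughness. The sketch below is a generic example of that idea with illustrative thresholds; it is not the ALHAT hazard detection algorithms or requirements.

```python
import numpy as np

def hazard_map(dem, post_m, max_slope_deg=10.0, max_rough_m=0.3):
    """Flag hazardous DEM cells from local slope and roughness.
    dem    : 2-D elevation array (meters)
    post_m : grid spacing (meters)
    Thresholds are illustrative assumptions."""
    gy, gx = np.gradient(dem, post_m)                     # elevation gradients
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    # Roughness: deviation from a 3x3 local mean built from shifted views.
    padded = np.pad(dem, 1, mode="edge")
    local_mean = sum(padded[i:i + dem.shape[0], j:j + dem.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    roughness = np.abs(dem - local_mean)
    return (slope_deg > max_slope_deg) | (roughness > max_rough_m)
```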
Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in realtime and displayed on monitors on-board the aircraft. With proper processing, the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, achieving this goal requires several different stages of processing, including enhancement, registration, and fusion, as well as specialized processing hardware for realtime performance. We are using a realtime implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests.
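The registration and fusion stages named above can be summarized in a few lines: warp one enhanced image with an affine transform, then combine the aligned images as a weighted sum. The sketch below uses scipy for the warp and assumes the transform parameters and weights are supplied externally; it is not the DSP implementation described in the paper.

```python
import numpy as np
from scipy import ndimage

def register_and_fuse(reference, moving, affine_2x2, offset, weights=(0.5, 0.5)):
    """Warp 'moving' onto 'reference' with a given affine transform, then
    combine the two as a weighted sum. The 2x2 matrix and offset follow the
    scipy convention (output coordinates mapped back to input coordinates);
    Retinex enhancement is assumed to have already been applied to both."""
    aligned = ndimage.affine_transform(moving, affine_2x2, offset=offset,
                                       order=1, mode="nearest")
    w0, w1 = weights
    return w0 * reference + w1 * aligned
```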
Advances in space robotics technology hinge to a large extent upon the development and deployment of sophisticated new vision-based methods for automated in-space mission operations and scientific survey. To this end, we have developed a new concept for automated terrain analysis that is based upon a generic image enhancement platform: multi-scale retinex (MSR) and visual servo (VS) processing. This pre-conditioning with the MSR and the VS produces a "canonical" visual representation that is largely independent of lighting variations and exposure errors. Enhanced imagery is then processed with a biologically inspired two-channel edge detection process, followed by a smoothness-based criterion for image segmentation. Landing sites can be automatically determined by examining the results of the smoothness-based segmentation, which shows those areas in the image that surpass a minimum degree of smoothness. Though the MSR has proven to be a very strong enhancement engine, the other elements of the approach (the VS, terrain map generation, and smoothness-based segmentation) are in early stages of development. Experimental results on data from the Mars Global Surveyor show that the imagery can be processed to automatically obtain smooth landing sites. In this paper, we describe the method used to obtain these landing sites, and also examine the smoothness criteria in terms of the imager and scene characteristics. Several examples of applying this method to simulated and real imagery are shown.
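The smoothness test can be illustrated with a simple local-variance criterion, as below. The window size and threshold are assumptions for illustration; the paper's actual segmentation operates on the output of the two-channel edge detection stage rather than raw intensity variance.

```python
import numpy as np

def smooth_regions(image, window=15, var_thresh=25.0):
    """Mark pixels whose local intensity variance over a window x window
    neighborhood falls below a threshold, i.e. candidate smooth areas."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    mean = np.zeros(image.shape, dtype=float)
    mean_sq = np.zeros(image.shape, dtype=float)
    for dy in range(window):                   # box sums over shifted views
        for dx in range(window):
            view = padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
            mean += view
            mean_sq += view ** 2
    mean /= window ** 2
    mean_sq /= window ** 2
    local_var = mean_sq - mean ** 2
    return local_var < var_thresh
```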
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology
Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were
acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using
the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with
visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has
been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer),
orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging.
The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the
degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends
relating to degree of atmospheric visibility attenuation, and its impact on the limits of enhancement performance
for the various image classes. Overall results support the idea that in most cases that do not involve extreme
reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for
very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for
establishing performance parameters.
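The Visual Contrast Measure itself is internal to the VS process and its formula is not reproduced here. As a loose stand-in, a generic RMS contrast ratio of the kind sketched below conveys how a pre/post-enhancement visibility gain can be quantified; it is not the metric used in the paper.

```python
import numpy as np

def rms_contrast(image):
    """Generic global RMS contrast (standard deviation over mean intensity)."""
    img = np.asarray(image, float)
    return img.std() / max(img.mean(), 1e-9)

def contrast_gain(original, enhanced):
    """Ratio of post- to pre-enhancement contrast, one simple way to express
    the visibility improvement achieved by an enhancement process."""
    return rms_contrast(enhanced) / max(rms_contrast(original), 1e-9)
```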
Aerial imagery of the Earth is an invaluable tool for the assessment of ground features, especially during times of disaster. Researchers at NASA's Langley Research Center have developed techniques which have proven to be useful for such imagery. Aerial imagery from various sources, including Langley's Boeing 757 Aries aircraft, has been studied extensively. This paper discusses these studies and demonstrates that better-than-observer imagery can be obtained even when visibility is severely compromised. A real-time, multi-spectral experimental system will be described and numerous examples will be shown.
The Multiscale Retinex With Color Restoration (MSRCR) is a non-linear image enhancement algorithm that provides simultaneous dynamic range compression, color constancy, and rendition. The overall impact is to brighten up areas of poor contrast/lightness but not at the expense of saturating areas of good contrast/brightness. The downside is that with the poor signal-to-noise ratio that most image acquisition devices have in dark regions, noise can also be greatly enhanced, thus affecting overall image quality. In this paper, we will discuss the impact of the MSRCR on the overall quality of an enhanced image as a function of the strength of shadows in an image, and as a function of the root-mean-square (RMS) signal-to-noise ratio (SNR) of the image.
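For context, the core multiscale retinex underlying the MSRCR combines log ratios of each pixel to its Gaussian-blurred surround at several scales, roughly as sketched below. The surround scales are commonly used defaults rather than the paper's settings, and the MSRCR's color restoration and gain/offset steps are omitted.

```python
import numpy as np
from scipy import ndimage

def multiscale_retinex(channel, sigmas=(15, 80, 250), weights=None):
    """Core multiscale retinex for one color channel: a weighted sum over
    scales of log(pixel) - log(Gaussian surround)."""
    img = np.asarray(channel, float) + 1.0            # avoid log(0)
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(img)
    for w, sigma in zip(weights, sigmas):
        surround = ndimage.gaussian_filter(img, sigma)
        out += w * (np.log(img) - np.log(surround))
    return out
```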
KEYWORDS: Digital signal processing, Cameras, Image processing, Signal processing, Short wave infrared radiation, Visibility, Image enhancement, Sensors, Long wavelength infrared, Video
Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these
conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At
NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office
and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines
image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This
system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function
of the system is to enhance and fuse the sensor data in order to increase the information content and quality
of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For
image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for
improving low-contrast range imagery typically seen during poor visibility conditions. In general,
real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a
single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In
this paper we give an overview of the EVS and its performance requirements for real-time enhancement and
fusion and we discuss our current real-time Retinex implementations on DSPs.
KEYWORDS: Unmanned aerial vehicles, Video, Cameras, Global Positioning System, Image processing, Data acquisition, Light sources and illumination, Improvised explosive devices, Data centers, Nose
In recent years, small unmanned aerial vehicles (UAVs) have been used
for more than the thrill they bring to model airplane enthusiasts.
Their flexibility and low cost have made them a viable option for
low-altitude reconnaissance. In a recent effort, we acquired video data
from a small UAV during several passes over the same flight path. The
objective of the exercise was to determine if objects had been added to
the terrain along the flight path between flight passes. Several
issues accrue to this simple-sounding problem: (1) lighting variations
may cause false detection of objects because of changes in shadow
orientation and strength between passes; (2) variations in the flight
path due to wind-speed, and heading change may cause misalignment of
gross features making the task of detecting changes between the frames
very difficult; and (3) changes in the aircraft orientation and altitude
lead to a change in size of the features from frame-to-frame making a
comparison difficult. In this paper, we discuss our efforts to perform
this change detection, and the lessons that we learned from this
exercise.
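A minimal baseline for the change detection task, assuming the passes have already been co-registered, is normalized frame differencing as sketched below. This is only a starting point and does not address the shadow, misalignment, and scale issues listed above, nor is it the approach evaluated in the paper.

```python
import numpy as np

def change_mask(pass_a, pass_b, thresh=3.0):
    """Flag changed pixels between two co-registered passes. Each frame is
    normalized to zero mean and unit variance to soften global illumination
    differences; pixels whose absolute difference exceeds a multiple of the
    difference image's standard deviation are marked as changes."""
    a = (pass_a - pass_a.mean()) / (pass_a.std() + 1e-9)
    b = (pass_b - pass_b.mean()) / (pass_b.std() + 1e-9)
    diff = np.abs(a - b)
    return diff > thresh * diff.std()
```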
Current still image and video systems are typically of limited use in poor visibility conditions such as in rain, fog, smoke, and haze. These conditions severely limit the range and effectiveness of imaging systems because of the severe reduction in contrast. The NASA Langley Research Center’s Visual Information Processing Group has developed an image enhancement technology based on the concept of a visual servo that has direct applications to the problem of poor visibility conditions. This technology has been used in cases of severe image turbidity in air as well as underwater with dramatic results. Use of this technology could result in greatly improved performance of perimeter surveillance systems, military, security, and law enforcement operations, port security, both on land and below water, and air and sea rescue services, resulting in improved public safety.
The current X-ray systems used by airport security personnel for the detection of contraband, and objects such as knives and guns that can impact the security of a flight, have limited effect because of the limited display quality of the X-ray images. Since the displayed images do not possess optimal contrast and sharpness, it is possible for the security personnel to miss potentially hazardous objects. This problem is also common to other disciplines such as medical X-rays, and can be mitigated, to a large extent, by the use of state-of-the-art image processing techniques to enhance the contrast and sharpness of the displayed image. The NASA Langley Research Center's Visual Information Processing Group has developed an image enhancement technology that has direct applications to the problem of inadequate display quality. Airport security X-ray imaging systems would benefit considerably by using this novel technology, making the task of the personnel who have to interpret the X-ray images considerably easier, faster, and more reliable. This improvement would translate into more accurate screening as well as minimizing the screening time delays to airline passengers. This technology, Retinex, has been optimized for consumer applications but has been applied to medical X-rays on a very preliminary basis. The resultant technology could be incorporated into a new breed of commercial x-ray imaging systems which would be transparent to the screener yet allow them to see subtle detail much more easily, reducing the amount of time needed for screening while greatly increasing the effectiveness of contraband detection and thus improving public safety.
Classical segmentation algorithms subdivide an image into its
constituent components based upon some metric that defines commonality
between pixels. Often, these metrics incorporate some measure of
"activity" in the scene, e.g. the amount of detail that is in a region.
The Multiscale Retinex with Color Restoration (MSRCR) is a general
purpose, non-linear image enhancement algorithm that significantly
affects the brightness, contrast and sharpness within an image. In this
paper, we will analyze the impact the MSRCR has on segmentation results
and performance.
KEYWORDS: Digital signal processing, Image processing, Video, Image enhancement, Field programmable gate arrays, Signal processing, Cameras, Video processing, Detection and tracking algorithms, Data processing
The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations
of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast
enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex
computations, thus to achieve real-time performance using current technologies requires specialized hardware
and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation
of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame-grabber card, Retinex processed, and displayed on a standard
monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans on using alternative architectures.
Noise is the primary visibility limit in the process of non-linear image
enhancement, and is no longer a statistically stable additive noise in
the post-enhancement image. Therefore novel approaches are needed to
both assess and reduce spatially variable noise at this stage in overall
image processing. Here we will examine the use of edge pattern analysis
both for automatic assessment of spatially variable noise and as a
foundation for new noise reduction methods.
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate
at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is
obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
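The second registration method described above, control points plus regression, reduces to a least-squares fit of an affine transform, roughly as sketched below; the exact regression formulation used in the paper may differ.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform from corresponding control points.
    src_pts, dst_pts : (N, 2) arrays of (x, y) points, N >= 3.
    Returns a 2x3 matrix M such that dst ~ M @ [x, y, 1]."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    design = np.hstack([src, np.ones((src.shape[0], 1))])   # rows [x, y, 1]
    # One call solves both regressions (for destination x and y).
    coeffs, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return coeffs.T                                          # shape (2, 3)
```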
A new approach to sensor fusion and enhancement is presented. The retinex image enhancement algorithm is used to jointly enhance and fuse data from long wave infrared, short wave infrared and visible wavelength sensors. This joint optimization results in fused data which contains more information than any of the individual data streams. This is especially true in turbid weather conditions, where the long wave infrared sensor would conventionally be the only source of usable information. However, the retinex algorithm can be used to pull out the details from the other data streams as well, resulting in greater overall information. The fusion uses the multiscale nature of the algorithm to both enhance and weight the contributions of the different data streams forming a single output data stream.
Mass memory systems based on rewriteable optical disk media are expected to play an important role in meeting the data system requirements for future NASA spaceflight missions. NASA has established a program to develop a high performance (high rate, large capacity) optical disk recorder focused on use aboard unmanned Earth orbiting platforms. An expandable, adaptable system concept is proposed based on disk drive modules and a modular controller. Drive performance goals are 10 gigabyte capacity, 300 megabit per second transfer rate, 10^-12 corrected bit error rate, and 150 millisecond access time. This performance is achieved by writing eight data tracks in parallel on both sides of a 14 inch optical disk using two independent heads. System goals are 160 gigabyte capacity, 1.2 gigabits per second data rate with concurrent I/O, 250 millisecond access time, and two to five year operating life on orbit. The system can be configured to meet various applications. This versatility is provided by the controller. The controller provides command processing, multiple drive synchronization, data buffering, basic file management, error processing, and status reporting. Technology developments, design concepts, current status including a computer model of the system and a controller breadboard, and future plans for the drive and controller are presented.