Star cameras represent a well-known class of attitude determination sensors, currently achieving excellent
accuracy within arc-seconds. However, their size, mass, power, and cost make current commercial versions
unacceptable for use on nano-satellites. Here, the concept of developing a small star camera with very modest
accuracy requirements for future nano-satellite missions is studied. A small commercial CMOS sensor with
minimal commercial optics is presented. The CMOS imager has an active array area of 5.7 × 4.3 mm, a
focal length of 6 mm, and an aperture ratio of 1.4. This camera's field of view is approximately 50 × 40 degrees,
and it can capture stars brighter than magnitude 3 with acquisition times of 100 ms. The accuracy of attitude
determination methods using data collected by this camera was tested by taking photos of the night sky under
terrestrial conditions. The camera attitude was determined using offline image processing and star field attitude
determination algorithms. Preliminary attitude accuracy results are presented.
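The optics figures quoted above can be cross-checked with simple pinhole geometry: the full field of view along one sensor axis is twice the arctangent of half the active-array dimension over the focal length. A minimal sketch of that consistency check, using only the values stated in the abstract:

```python
import math

def fov_deg(sensor_dim_mm: float, focal_length_mm: float) -> float:
    """Full field of view (degrees) of a pinhole camera along one sensor axis."""
    return 2.0 * math.degrees(math.atan(sensor_dim_mm / (2.0 * focal_length_mm)))

# Active array of 5.7 x 4.3 mm behind a 6 mm focal length lens:
h = fov_deg(5.7, 6.0)   # horizontal FOV, about 51 degrees
v = fov_deg(4.3, 6.0)   # vertical FOV, about 39 degrees
```

Both values agree with the "approximately 50 × 40 degrees" quoted for the camera.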
The idea that Linear Covariance techniques can be used to predict the accuracy of attitude determination systems
and assist in their design is investigated. Using each sensor's estimated parameter accuracy, the total standard
deviation of the attitude determination resulting from these uncertainties can be calculated as a simple
root-sum-square of the attitude standard deviations contributed by the respective uncertainties. Generalized
MATLAB M-functions using this technique are written to provide a tool for estimating the attitude determination
accuracy of a small spacecraft and for identifying major contributors to the attitude determination uncertainty.
This tool is applied to a satellite dynamics truth model developed to quantify the effects of sensor uncertainties
on this particular spacecraft's attitude determination accuracy. This study determines the standard deviation of
the attitude determination as a function of the sensor uncertainties.
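The root-sum-square combination described above is straightforward when the error sources are independent: the variances add, so the total 1-sigma attitude error is the square root of the sum of squared per-source contributions. A minimal sketch (the per-source values are illustrative; the abstract gives no numbers):

```python
import math

def rss_attitude_sigma(sigmas):
    """Root-sum-square of per-source attitude standard deviations.

    Assumes the error sources are independent, so their variances add.
    `sigmas` holds the 1-sigma attitude error each uncertainty contributes.
    """
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical per-source contributions in arc-seconds:
total = rss_attitude_sigma([30.0, 40.0])  # -> 50.0 arc-seconds
```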
KEYWORDS: LIDAR, Interference (communication), Sensors, Systems modeling, Receivers, Signal to noise ratio, Optical amplifiers, Imaging systems, Error analysis, Data modeling
The development of an experimental full-waveform LADAR system has been enhanced with the assistance of the
LadarSIM system simulation software. The Eyesafe LADAR Test-bed (ELT) was designed as a raster scanning,
single-beam, energy-detection LADAR with the capability of digitizing and recording the return pulse waveform
at up to 2 GHz for 3D off-line image formation research in the laboratory. To assist in the design phase, the
full-waveform LADAR simulation in LadarSIM was used to simulate the expected return waveforms for various
system design parameters, target characteristics, and target ranges. Once the design was finalized and the ELT
constructed, the measured specifications of the system and experimental data captured from the operational
sensor were used to validate the behavior of the system as predicted during the design phase.
This paper presents the methodology used and lessons learned from this "design, build, validate" process.
Simulated results from the design phase are presented, and these are compared to simulated results using measured
system parameters and operational sensor data. The advantages of this simulation-based process are also
presented.
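The simulated return waveforms mentioned above can be illustrated with a toy model: a single Gaussian return pulse centered at the two-way time of flight and sampled at the ELT's 2 GHz digitization rate. All parameter values here are illustrative stand-ins; the actual LadarSIM radiometry model is far more detailed.

```python
import math

def return_waveform(target_range_m, pulse_fwhm_ns=2.0, peak=1.0,
                    sample_rate_hz=2e9, window_ns=40.0):
    """Sample an idealized Gaussian return pulse, digitized at `sample_rate_hz`.

    A toy stand-in for a full-waveform return: one Gaussian pulse centered
    at the two-way time of flight (parameter values are illustrative).
    """
    c = 299_792_458.0                       # speed of light, m/s
    t0 = 2.0 * target_range_m / c           # two-way time of flight, s
    sigma = pulse_fwhm_ns * 1e-9 / 2.3548   # FWHM -> Gaussian sigma
    dt = 1.0 / sample_rate_hz
    n = round(window_ns * 1e-9 * sample_rate_hz)
    # Center the sampling window on the expected return time.
    start = t0 - 0.5 * n * dt
    return [peak * math.exp(-((start + i * dt - t0) ** 2) / (2 * sigma ** 2))
            for i in range(n)]

w = return_waveform(1000.0)   # 80 samples around the return from a 1 km target
```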
A new experimental full-waveform LADAR system has been developed that fuses a pixel-aligned color imager within
the same optical path. The Eye-safe LADAR Test-bed (ELT) consists of a single beam energy-detection LADAR that
raster scans within the same field of view as an aperture-sharing color camera. The LADAR includes a pulsed 1.54 μm
Erbium-doped fiber laser, a high-bandwidth receiver, a fine steering mirror for raster scanning, and a ball-joint
gimbal mirror for steering over a wide field of regard. The system has a 6 inch aperture, and the LADAR has a pulse
rate of up to 100 kHz. The color imager is folded into the optical path via a cold mirror. A novel feature of the ELT is its
ability to capture LADAR and color data that are registered temporally and spatially. This allows immediate direct
association of LADAR-derived 3D point coordinates with pixel coordinates of the color imagery. The mapping allows
accurate pointing of the instrument at targets of interest and immediate insight into the nature and source of the LADAR
phenomenology observed. The system is deployed on a custom van designed to enable experimentation with a variety of
objects.
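Because the LADAR and color camera share one optical path, associating a LADAR-derived 3D point with a color pixel reduces, to first order, to a pinhole projection through shared intrinsics. A minimal sketch under that assumption (the focal length and principal point values are hypothetical, not ELT calibration data):

```python
def project_to_pixel(point_xyz, focal_px, cx, cy):
    """Pinhole projection of a LADAR-frame 3D point into color-image pixels.

    Assumes the color camera and LADAR share the same optical axis, as the
    aperture-sharing design intends; `focal_px`, `cx`, `cy` are illustrative
    camera intrinsics in pixels.
    """
    x, y, z = point_xyz
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (cx + focal_px * x / z, cy + focal_px * y / z)

# A point 10 m out, offset 1.0 m right and 0.5 m down in the sensor frame:
u, v = project_to_pixel((1.0, 0.5, 10.0), focal_px=1200.0, cx=640.0, cy=480.0)
```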
In this work we examine the dynamic implications of active and attentive scanning for LADAR-based automatic
target/object recognition and show that a dynamically constrained, scanner-based ATR system's ability to
identify objects in real-time is improved through attentive scanning. By actively and attentively scanning only
salient regions of an image at the density required for recognition, the amount of time it takes to find a target
object in a random scene is reduced. A LADAR scanner's attention is guided by identifying areas-of-interest using
a visual saliency algorithm on electro-optical images of a scene to be scanned. Identified areas-of-interest are
inspected in order of decreasing saliency by scanning the most salient area and saccading to the next most salient
area until the object-of-interest is recognized. No ATR algorithms are used; instead, an object is considered to
be recognized when a threshold density of pixels-on-target is reached.
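The inspection loop described above, visiting areas-of-interest in order of decreasing saliency and stopping when the pixels-on-target density crosses the recognition threshold, can be sketched as follows. The region names, saliency scores, and densities are hypothetical stand-ins for the saliency-map output and scan results:

```python
def attentive_scan(regions, density_threshold):
    """Inspect areas-of-interest in order of decreasing saliency.

    `regions` maps region name -> (saliency, pixels_on_target_density),
    a hypothetical stand-in for the saliency map and scan results.
    Returns the first region whose density crosses the recognition
    threshold, plus how many scans it took to get there.
    """
    scans = 0
    for name, (_sal, density) in sorted(regions.items(),
                                        key=lambda kv: kv[1][0], reverse=True):
        scans += 1
        if density >= density_threshold:
            return name, scans
    return None, scans

regions = {"tree": (0.9, 0.10), "vehicle": (0.7, 0.85), "rock": (0.3, 0.05)}
hit, n = attentive_scan(regions, density_threshold=0.5)  # -> ("vehicle", 2)
```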
The autonomous close-in maneuvering necessary for the rendezvous and docking of two spacecraft requires a relative navigation sensor system that can determine the relative position and orientation (pose) of the controlled spacecraft with respect to the target spacecraft. Lidar imaging systems offer the potential for accurately measuring the relative six degree-of-freedom positions and orientations and the associated rates.
In this paper, we present simulation results generated using a high-fidelity modeling program. A simulated lidar system is used to capture close-proximity range images of a model target spacecraft, producing 3-D point cloud data. Each sequentially gathered point cloud is compared with the previous point cloud using a real-time, correspondence-less point-plane variant of the Iterative Closest Points (ICP) algorithm. The resulting range and pose estimates are used in turn to prime the next time-step iteration of the ICP algorithm. Results from detailed point-plane simulations are presented, and the implications for real-time implementation are discussed.
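For background, the closed-form rigid-alignment step at the heart of classical ICP can be shown compactly in 2D with known correspondences: center both point sets, recover the rotation from the summed cross- and dot-products, then solve for the translation. This is the textbook point-to-point step, not the paper's correspondence-less point-plane variant, which this sketch does not reproduce:

```python
import math

def align_2d(src, dst):
    """Least-squares rigid alignment (rotation + translation) of paired 2D points.

    The classical point-to-point ICP step, shown in 2D for brevity.
    """
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Optimal rotation angle from cross- and dot-products of centered pairs.
    s_cross = s_dot = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 1), (2, 2), (1, 1)]       # src rotated 90 degrees, shifted by (2, 1)
theta, t = align_2d(src, dst)        # theta -> pi/2, t -> (2.0, 1.0)
```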
This paper presents a generic simulation model for a ladar scanner with up to three scan elements, each having a steering, stabilization, and/or pattern-scanning role. Of interest is the development of algorithms that automatically generate commands to the scan elements given beam-steering objectives out of the ladar aperture and the base motion of the sensor platform. First, a straightforward single-element body-fixed beam-steering methodology is presented. Then a unique multi-element redirective and reflective space-fixed beam-steering methodology is explained. It is shown that standard direction cosine matrix decomposition methods fail when using two orthogonal, space-fixed rotations, thus demanding the development of a new algorithm for beam steering. Finally, a related steering control methodology is presented that uses two separate optical elements mathematically combined to determine the necessary scan element commands. Limits, restrictions, and results of this methodology are presented.
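The single-element body-fixed case mentioned first is simple enough to sketch: given a desired unit beam direction in the sensor base frame, the azimuth and elevation commands follow directly from two arctangents. The frame convention (x forward, z down) is an assumption for illustration; the multi-element space-fixed steering requires the paper's new algorithm and is not reproduced here:

```python
import math

def az_el_commands(direction):
    """Azimuth/elevation commands pointing a body-fixed boresight along
    `direction`, a unit vector in the sensor base frame (x forward, z down).

    Sketches only the straightforward single-element body-fixed case.
    """
    x, y, z = direction
    az = math.atan2(y, x)                    # rotate about the body z axis
    el = math.atan2(-z, math.hypot(x, y))    # then elevate in the azimuth plane
    return az, el

az, el = az_el_commands((1.0, 1.0, 0.0))     # az -> 45 degrees, el -> 0
```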
USU LadarSIM Release 2.0 is a ladar simulator that has the ability to feed high-level mission scripts into a processor
that automatically generates scan commands during flight simulations. The scan generation depends on specified flight
trajectories and scenes consisting of terrain and targets. The scenes and trajectories can consist of either simulated or
actual data. The first modeling step produces an outline of scan footprints in xyz space. Once mission goals have been
analyzed and it is determined that the scan footprints are appropriately distributed or placed, specific scans can then be
chosen for the generation of complete radiometry-based range images and point clouds. The simulation is capable of
quickly modeling ray-trace geometry associated with (1) various focal plane arrays and scanner configurations and (2)
various scenes and trajectories associated with particular maneuvers or missions.
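The first modeling step, outlining scan footprints in xyz space, reduces in the simplest case to intersecting each scan ray with the terrain. A minimal sketch for flat terrain (the platform altitude and beam angle are illustrative; LadarSIM traces against full terrain and target geometry):

```python
import math

def footprint_point(sensor_pos, ray_dir, ground_z=0.0):
    """Intersect a scan ray with flat terrain at height `ground_z`.

    A minimal stand-in for the ray-trace geometry step: `sensor_pos` comes
    from the flight trajectory and `ray_dir` from scan-command generation.
    """
    px, py, pz = sensor_pos
    dx, dy, dz = ray_dir
    if dz == 0:
        return None                       # ray parallel to the ground
    t = (ground_z - pz) / dz
    if t < 0:
        return None                       # terrain is behind the sensor
    return (px + t * dx, py + t * dy, ground_z)

# Platform at 1000 m altitude, beam 30 degrees off nadir in the x-z plane:
d = (math.sin(math.radians(30.0)), 0.0, -math.cos(math.radians(30.0)))
p = footprint_point((0.0, 0.0, 1000.0), d)   # x offset -> 1000 * tan(30 deg)
```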
In recent years, NASA's interest in autonomous rendezvous and docking operations with impaired or non-cooperative spacecraft has grown extensively. In order to maneuver and dock, a servicing spacecraft must be able to determine the relative 6 degree-of-freedom (6 DOF) motion between the vehicle and the target spacecraft. One method to determine the relative 6 DOF position and attitude is through lidar imaging. A flash lidar sensor system can capture close-proximity range images of the target spacecraft, producing 3-D point cloud data sets. These sequentially collected point-cloud data sets can be compared to a point cloud image of the target at a known location using a point correspondence-less variant of the Iterative Closest Points (ICP) algorithm to determine the relative 6 DOF displacements. Simulation experiments indicate that the MSE, angular error, mean, and standard deviations for position and orientation estimates did not vary as a function of position and attitude. Furthermore, the computational times required by this algorithm were comparable to those of previously reported point-to-point and point-to-plane ICP variants.
KEYWORDS: Control systems, Robotics, Global Positioning System, Actuators, Mobile robots, Vehicle control, Sensors, Kinematics, System integration, Feedback loops
A systematic approach to ground vehicle automation is presented, combining low-level controls, trajectory generation, and closed-loop path correction in an integrated system. Development of cooperative robotics for precision agriculture at Utah State University required the automation of a full-scale motorized vehicle. The Triton Predator 8-wheeled skid-steering all-terrain vehicle was selected for the project based on its ability to maneuver precisely and the simplicity of controlling the hydrostatic drivetrain. Low-level control was achieved by fitting an actuator on the engine throttle, actuators for the left and right drive controls, encoders on the left and right drive shafts to measure wheel speeds, and a signal pick-off on the alternator for measuring engine speed. Closed-loop control maintains a desired engine speed and tracks left and right wheel speed commands. A trajectory generator produces the wheel speed commands needed to steer the vehicle through a predetermined set of map coordinates. A planar trajectory through the points is computed by fitting a 2D cubic spline over each path segment while enforcing initial and final orientation constraints at segment endpoints. Acceleration and velocity profiles are computed for each trajectory segment, with the velocity over each segment dependent on turning radius. Left and right wheel speed setpoints are obtained by combining velocity and path curvature for each low-level timestep. The path correction algorithm uses GPS position and compass orientation information to adjust the wheel speed setpoints according to the 'crosstrack' and 'downtrack' errors and heading error. Nonlinear models of the engine and the skid-steering vehicle/ground interaction were developed for testing the integrated system in simulation. These tests led to several key design improvements which assisted final implementation on the vehicle.
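The step of combining velocity and path curvature into left and right wheel speed setpoints can be sketched with ideal differential-drive kinematics. The sign convention and track width are assumptions for illustration; real skid steering also involves track slip, which this ignores:

```python
def wheel_speed_setpoints(v, curvature, track_width):
    """Left/right wheel speed setpoints for a skid-steered vehicle.

    Combines commanded path velocity `v` (m/s) and signed path curvature
    (1/m, positive = left turn); `track_width` is the effective distance
    between left and right wheel tracks. Ideal kinematics only, no slip.
    """
    omega = v * curvature                     # yaw rate along the path
    v_left = v - omega * track_width / 2.0    # inner track slows in a left turn
    v_right = v + omega * track_width / 2.0   # outer track speeds up
    return v_left, v_right

# 2 m/s through a 10 m radius left turn, 1.2 m track width:
vl, vr = wheel_speed_setpoints(2.0, 1.0 / 10.0, 1.2)   # approx (1.88, 2.12)
```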
KEYWORDS: Neural networks, Control systems, Systems modeling, Complex systems, Antennas, Servomechanisms, Analog electronics, Nonlinear control, Computing systems, Intelligence systems
A control design technique known as the Model Reference Neural Network (MRNN) method has recently been developed. In this method, neural network controllers are trained so that the controlled system response mimics that of a desired reference model. Since the controller can be trained using experimental test data consisting of command and response state data, it is equally applicable to linear and nonlinear systems. The MRNN procedure was experimentally evaluated by applying it to several systems which demonstrated nonlinear behavior typically found in servosystems, including significant stick-slip friction, backlash, and positionally dependent gravitational torques. The performance of the MRNN was then compared to both PID and linear model reference controllers. Experimental results indicate that the accuracy of the MRNN controller typically equals or exceeds that of the linear model reference controllers.
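The "desired reference model" in a model-reference scheme is typically a simple dynamic system whose response to the command defines the target behavior the trained controller must reproduce. A minimal sketch using a discrete first-order model; the model structure and time constant are illustrative choices, not taken from the paper:

```python
def reference_response(commands, tau_s=0.5, dt=0.01):
    """Desired response of a first-order reference model y' = (r - y) / tau.

    In a model-reference scheme, the controller is trained so the closed-loop
    plant tracks a response like this one. The first-order form and its time
    constant are illustrative assumptions.
    """
    alpha = dt / tau_s
    y, out = 0.0, []
    for r in commands:
        y += alpha * (r - y)      # forward-Euler step of the reference model
        out.append(y)
    return out

step = reference_response([1.0] * 200)   # unit step command held for 2 s
```

The trained network's job is then to drive the plant so its measured response matches this trajectory sample for sample.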