This paper describes the visualization of 3D Doppler radar data together with global, high-resolution terrain. This is the first time such data have been displayed together in a real-time environment. Associated data such as buildings and maps are displayed along with the weather data and the terrain. Requirements for effective 3D visualization for weather forecasting are identified. The application presented in this paper meets most of these requirements. In particular, the application provides end-to-end real-time capability, integrated browsing and analysis, and integration of relevant data in a combined visualization. The last capability will grow in importance as researchers develop sophisticated models of storm development that yield rules for how storms behave in the presence of hills or mountains and other features.
There are several significant and related automation trends in the evolution of the tactical battlefield, necessary to support greatly increased mobility of our land forces. One relates to the increased automation and distributed functionality of the nerve center or tactical operation center (TOC), with the introduction of intelligent software agents. The anticipated dynamics of the future battlefield will require greatly increased mobility, information flow, information assimilation, and decision making by these centers. The second relates to the digitization of battlefield platforms. This digitization greatly reduces the uncertainty concerning these platforms and enables automated information exchange between these platforms and their TOC. The third is the rapid development of robotic or physical agents for numerous hazardous battlefield tasks. This paper addresses battlefield visualization as a means to exploit the potential synergy and unification of these disparate developments. Battlefield visualization programs are currently focused on effectively representing the physical environment to support planning, mission rehearsal, and situational awareness. As intelligent agents are developed, battlefield visualization must be enhanced to include the state, behavior, collaboration, and results of these agents. An initial representation of software and physical agents within a single battlefield visualization is presented. The major challenges to attaining this level of automation, in particular human interaction and trust, will be addressed.
Temporal and spatial analysis has been applied to a sequence of cloud top pressure (CTP) images and cloud optical thickness (TAU) images stored in the International Satellite Cloud Climatology Project D1 database located at the NASA Goddard Institute for Space Studies. Each pixel in the D1 data set has a resolution of 2.5 degrees, or 280 kilometers. These images were collected in consecutive three-hour intervals for the entire months of April 1989 and April 1994. The primary objective of this project was to develop a sequence of storm tracks from the satellite images that could be compared with tracks developed from sea level historical records. Composite images were created by projecting ahead in time and substituting the first available valid pixel for missing data, and a variety of CTP and TAU cut-off values were used to identify regions of interest. Region correspondences were determined from one time frame to another, yielding the coordinates of storm centers. These tracks were compared to storm tracks computed from sea level pressure data obtained from the NMC by first matching the results in time and then in spatial distance.
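The region-correspondence step — linking storm centers from one time frame to the next — can be sketched as a nearest-neighbor matcher. This is a hypothetical simplification, not the authors' implementation; the distance threshold and greedy matching order are assumptions:

```python
import numpy as np

def track_storms(frames, max_move_km=500.0):
    """Link storm-center detections across consecutive frames into tracks.

    `frames` is a list of (N_i, 2) arrays of storm-center coordinates
    (km) for consecutive 3-hour intervals; a track is extended by the
    nearest center in the next frame within `max_move_km`.
    """
    tracks = [[tuple(c)] for c in frames[0]]
    for frame in frames[1:]:
        unused = [tuple(c) for c in frame]
        for track in tracks:
            if not unused:
                break
            last = np.array(track[-1])
            d = [np.linalg.norm(np.array(c) - last) for c in unused]
            j = int(np.argmin(d))
            if d[j] <= max_move_km:
                track.append(unused.pop(j))
        # unmatched centers start new tracks
        tracks.extend([c] for c in unused)
    return tracks
```

Matching by spatial distance after time alignment mirrors the comparison against the NMC sea level pressure tracks described above.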
In this paper 'real-time 3D data' refers to volumetric data that are acquired and used as they are produced. Large scale, real-time data are difficult to store and analyze, either visually or by some other means, within the time frames required. Yet this is often quite important to do when decision-makers must receive and quickly act on new information. An example is weather forecasting, where forecasters must act on information received on severe storm development and movement. To meet the real-time requirements, crude heuristics are often used to gather information from the original data. This is in spite of the fact that better and better real-time data are becoming available, the full use of which could significantly improve decisions. The work reported here addresses these issues by providing comprehensive data acquisition, analysis, and storage components with time budgets for the data management of each component. These components are put into a global geospatial hierarchical structure. The volumetric data are placed into this global structure, and it is shown how levels of detail can be derived and used within this structure. A volumetric visualization procedure is developed that conforms to the hierarchical structure and uses the levels of detail. These general methods are focused on the specific case of the VGIS global hierarchical structure and rendering system. The real-time data considered are from collections of time-dependent 3D Doppler radars, although the methods described here apply more generally to time-dependent volumetric data. This paper reports on the design and construction of the above hierarchical structures and volumetric visualizations. It also reports results for the specific application of 3D Doppler radar displayed over photo-textured terrain height fields.
Results are presented for display of time-dependent fields as the user visually navigates and explores the geospatial database.
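Deriving levels of detail within a hierarchical volumetric structure can be illustrated, in highly simplified form, by averaging 2×2×2 blocks to produce each coarser level. This sketch assumes power-of-two dimensions and is not the VGIS implementation:

```python
import numpy as np

def lod_pyramid(volume, levels=3):
    """Build coarser levels of detail by averaging 2x2x2 blocks of a
    3D volume, a minimal stand-in for deriving LODs inside a global
    geospatial hierarchy."""
    pyramid = [np.asarray(volume, dtype=float)]
    for _ in range(levels - 1):
        v = pyramid[-1]
        z, y, x = (s // 2 for s in v.shape)
        # group each 2x2x2 block into its own axes, then average them
        v = v[:2 * z, :2 * y, :2 * x].reshape(z, 2, y, 2, x, 2).mean(axis=(1, 3, 5))
        pyramid.append(v)
    return pyramid
```

A renderer can then select the pyramid level whose resolution matches the viewer's distance, keeping frame times within the per-component time budgets discussed above.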
We have developed a semi-automated procedure for generating correctly located 3D tree objects from overhead imagery. Cross-platform software partitions arbitrarily large, geocorrected and geolocated imagery into manageable sub-images. The user manually selects tree areas from one or more of these sub-images. Tree group blobs are then narrowed to lines using a special thinning algorithm which retains the topology of the blobs and also stores the thickness of the parent blob. Maxima along these thinned tree groups are found and used as individual tree locations within the tree group. Magnitudes of the local maxima are used to scale the radii of the tree objects. Grossly overlapping trees are culled based on a comparison of tree-tree distance to combined radii. Tree color is randomly selected based on the distribution of sample tree pixels, and height is estimated from tree radius. The final tree objects are then inserted into a terrain database which can be navigated by VGIS, a high-resolution global terrain visualization system developed at Georgia Tech.
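The maxima-finding step can be approximated by picking local maxima of a canopy-thickness map (e.g. a distance transform of the tree-group blobs), with the maximum value scaling the tree radius. This is a hypothetical simplification of the paper's thinning-based method:

```python
import numpy as np

def tree_candidates(thickness, min_radius=1.0):
    """Pick individual tree locations from a 2D map of tree-group
    'thickness'. Local maxima stand in for the maxima found along the
    thinned skeleton; the maximum value scales the tree radius."""
    H, W = thickness.shape
    trees = []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            v = thickness[y, x]
            # keep a pixel that dominates its 3x3 neighborhood
            if v >= min_radius and v == thickness[y - 1:y + 2, x - 1:x + 2].max():
                trees.append((x, y, float(v)))  # (col, row, radius)
    return trees
```

The culling pass described above would then discard candidate pairs whose center distance is less than the sum of their radii.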
A new proposition, that 3D information can be locally reconstructed from 2D information, is presented.
The Western European Union Satellite Center coordinates the development of an operational end-to-end production, management, and delivery chain for highly complex and sensitive Digital Geographic Information (DGI). The chain converts satellite imagery, cartography, collateral data, and imagery analysis results into user-friendly raster and vector DGI products suitable for assisting civilian and military decision-making processes.
SGI Vizserver enables remote visualization in a manner transparent to the user application by producing rendered output at geographically remote locations while utilizing the powerful pipeline and expansive memory of an Onyx2 Infinite Reality machine located at some centralized place. Since the communication of visualization imagery typically requires enormous bandwidth, Vizserver offers two built-in options for compression which provide high-quality images at interactive frame rates for local-area networks with bandwidths of 100 Mbps. However, these built-in compressors are not well suited to truly remote users who are separated from the server by great distances and connected through low- and very-low-bandwidth links. In this paper, we propose two external compression algorithms that connect to Vizserver via an API to achieve 1) greater flexibility in terms of the user's control over distortion and bandwidth performance, and 2) better overall performance for truly remote users. Of all the techniques considered, we find that a simple frame-differencing scheme is best suited to very-low-bandwidth operation in that it achieves visually lossless performance at a frame rate higher than that of the built-in options, which saturate the network and incur substantial amounts of frame dropping.
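A minimal frame-differencing codec along these lines — transmit the compressed difference against the previous frame — can be sketched as follows. The use of zlib and 16-bit deltas is an assumption for illustration, not the paper's implementation:

```python
import zlib
import numpy as np

def encode_frame(frame, prev=None):
    """Send the zlib-compressed difference against the previous frame
    (or the raw uint8 frame when there is no previous one). Unchanged
    pixels yield long runs of zeros, which compress very well."""
    if prev is None:
        return zlib.compress(frame.tobytes())
    delta = frame.astype(np.int16) - prev.astype(np.int16)
    return zlib.compress(delta.tobytes())

def decode_frame(payload, prev=None):
    """Invert encode_frame, reconstructing the uint8 frame."""
    if prev is None:
        return np.frombuffer(zlib.decompress(payload), dtype=np.uint8)
    delta = np.frombuffer(zlib.decompress(payload), dtype=np.int16)
    return (prev.astype(np.int16) + delta).astype(np.uint8)
```

Because successive rendered frames of a slowly changing scene differ in few pixels, the delta payloads stay small enough for very-low-bandwidth links.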
We describe a simulation which has been developed over the last couple of years for weapons trainers, in which a gunner can pan across a terrain scenario containing moving military vehicles. The terrain imagery is obtained by stitching together a panorama from photographs. Associated with this imagery are a wireframe model of the ground and a range image incorporating both the ground and other objects in the scene. These data allow military vehicles to be rendered at the correct scale and aspect for insertion in the scene, with occlusion by photographic scene objects as appropriate. Controls for contrast, brightness, and focus adjustment which are part of the weapons systems are included in the simulation, as well as rudimentary environmental effects. Achieving real-time performance with smooth user interaction on standard PC hardware posed some challenges. Our methodology assigns some of the processing steps and partial result storage to the graphics card, and others to the system CPU and memory. The methodology also makes frequent use of precomputed and previous-frame partial results, so that most frames do not require all the processing steps to be applied to the whole visible area. We present an overview of the frame construction methodology, and discuss in more detail the scheme for background precomputation of focus processing results.
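The reuse of precomputed partial results can be illustrated by a per-tile cache: a tile (e.g. a focus-processed background region) is processed only the first time it is requested, and later pans reuse the stored result. A generic sketch, not the authors' scheme:

```python
def make_tile_cache(process):
    """Return a getter that caches the result of `process` per tile id,
    so panning back over a tile reuses the precomputed result instead
    of reprocessing it."""
    cache = {}
    def get(tile_id, data):
        if tile_id not in cache:
            cache[tile_id] = process(data)  # expensive step runs once
        return cache[tile_id]
    return get
```

In a real renderer the cache would be bounded and invalidated when focus or contrast settings change.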
The automatic storage and extraction of high-level information characteristics of 3D entities is important to a geoinformation visualization system. To address this problem, an effort has been made to develop a 3D visualization system integrated with a database, focusing on depicting large information spaces. Instead of using separate serial files, we explore the issues in integrating a database with the visualization system, making the database the kernel of the system, specialized in the storage and management of all types of data. Although a database management system does not have the analytical and visualization capabilities of the visualization system, it plays an integral part in our visualization system due to its data management capabilities, and helps to provide the user with a single data model.
CTHRU is a high-performance visualization tool used to visualize temporal 3D datasets in an immersion environment. It was developed by Mississippi State University to visualize ocean models. This paper describes the design, development, and implementation of an enhanced version of CTHRU that adds new collaborative capabilities. The CTHRU-C software package allows remote users to view and manipulate temporal datasets in real time. Developing a multi-collaborative package with homogeneous software on both sides runs a risk of message duplication, which would lead to every message being repeatedly sent over the network; since the options work in a toggled manner, this would wreak havoc on the systems. To solve this problem a new mutual exclusion algorithm was developed. The paper also describes the network design scheme, including propagation techniques. To implement a multi-user environment, one master is needed to which everyone connects. The master would farm out all connections. This was implemented in CTHRU-C as a floating master. The software is designed around OpenGL, CAVERNsoft, and CAVElibs to interface to the immersion environment. The tool was successfully tested on local machines at the Center of Higher Learning and Mississippi State University. The paper also addresses the guidelines that should be taken into consideration while designing similar packages if one of the design objectives is to add a collaborative mode in the future.
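One simple defense against the message-duplication problem described above is to tag each message with a (sender, sequence) pair and drop repeats, so a toggled option is never applied twice. This is a hypothetical illustration, not the mutual exclusion algorithm the paper develops:

```python
def make_deduper():
    """Return an accept() predicate that admits each (sender, seq)
    message exactly once, discarding re-forwarded duplicates."""
    seen = set()
    def accept(sender, seq):
        if (sender, seq) in seen:
            return False  # duplicate: do not re-apply or re-forward
        seen.add((sender, seq))
        return True
    return accept
```

Each peer would run such a filter before applying or propagating a state-change message received from the floating master.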
There is an ongoing need for more sensitive sensors and instrumentation to monitor parameters of human physiological functions. Often it is desired that this measurement process be as non-intrusive as possible, thereby requiring lightweight, high performance sensors and data acquisition systems. In the current application it is desired to measure and extract as much information from a single non-intrusive sensor as possible to avoid encumbering mission personnel with motion-inhibiting harnesses. One particular signal processing task of interest is extraction of respiration information by analysis of heart rate. Thus, signal-processing algorithms are described which perform this task using communication, non-uniform sampling, and spectrum analysis techniques.
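Extracting respiration from heart rate exploits respiratory sinus arrhythmia: the beat-to-beat (RR) interval series is modulated at the breathing frequency. A minimal sketch — uniform resampling of the non-uniform RR series followed by a spectral peak search — where the resampling rate and the 0.1–0.5 Hz search band are assumptions:

```python
import numpy as np

def respiration_rate_from_rr(beat_times, fs=4.0):
    """Estimate breathing rate (Hz) from heartbeat times.

    The RR-interval series is sampled non-uniformly (once per beat),
    so it is resampled to a uniform rate `fs` by interpolation before
    taking the spectral peak in a plausible breathing band.
    """
    beat_times = np.asarray(beat_times, dtype=float)
    rr = np.diff(beat_times)                     # non-uniform RR series
    t = np.arange(beat_times[1], beat_times[-1], 1.0 / fs)
    rr_u = np.interp(t, beat_times[1:], rr)      # uniform resampling
    rr_u -= rr_u.mean()                          # remove DC before FFT
    spec = np.abs(np.fft.rfft(rr_u))
    freqs = np.fft.rfftfreq(len(rr_u), 1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)       # ~6-30 breaths/min
    return freqs[band][np.argmax(spec[band])]
```
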
Regions of the human body placed in the neighborhood of arteries or veins are periodically deflected by the passage of dilatation pulses and by body movements due to breathing. Remote, non-invasive measurement of the deflection is performed by means of optical techniques. The skin in the studied region of the body is illuminated by a laser beam. The backscattered speckle pattern is recorded by a TV camera. The digitized images are numerically processed in order to determine the contrast of the light intensity distribution in each frame, which is a decreasing function of the instantaneous angular velocity of the reflected beam. The plot of time-integrated contrast vs. recording time is shown to closely resemble direct records of the angular deflection of the illuminated region. The breathing pattern is also plotted on the same graphs.
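The per-frame contrast of the light intensity distribution is conventionally the ratio of the standard deviation to the mean of the recorded intensity; a minimal computation of that quantity (a standard definition, not necessarily the authors' exact processing):

```python
import numpy as np

def speckle_contrast(frame):
    """Speckle contrast K = std/mean of the intensity in one frame.
    Faster motion of the illuminated surface blurs the speckle during
    the exposure, lowering K."""
    frame = np.asarray(frame, dtype=float)
    return frame.std() / frame.mean()
```

Plotting K for each frame against recording time yields the time-integrated contrast curve described above.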
An acoustic sensor attached to a person's neck can extract heart and breath sounds, as well as voice and other physiology related to their health and performance. Soldiers, firefighters, law enforcement, and rescue personnel, as well as people at home or in health care facilities, can benefit from being remotely monitored. ARL's acoustic sensor, when worn around a person's neck, picks up the carotid artery and breath sounds very well by matching the sensor's acoustic impedance to that of the body via a gel pad, while airborne noise is minimized by an impedance mismatch. Although the physiological sounds have high SNR, the acoustic sensor also responds to motion-induced artifacts that obscure the meaningful physiology. Complicating signal extraction, these interfering signals are usually covariant with the heart sounds, in that as a person walks faster the heart tends to beat faster, and motion noises tend to contain low-frequency components similar to the heart sounds. A noise-canceling configuration developed by ARL uses two acoustic sensors on the front sides of the neck as physiology sensors, and two additional acoustic sensors on the back sides of the neck as noise references. Breath and heart sounds, which occur with near symmetry and simultaneously at the two front sensors, will correlate well. The motion noise present on all four sensors will be used to cancel the noise on the two physiology sensors. This report will compare heart rate variability derived from both the acoustic array and from ECG data taken simultaneously during a treadmill test. Acoustically derived breath rate and volume approximations will be introduced as well. A miniature 3-axis accelerometer on the same neckband provides additional noise references to validate footfall and motion activity.
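Reference-sensor cancellation of this kind is commonly realized with an LMS adaptive filter: the part of the front-sensor signal predictable from the back-sensor reference is estimated and subtracted. A generic sketch, not ARL's implementation; the tap count and step size are assumptions:

```python
import numpy as np

def lms_cancel(primary, reference, taps=8, mu=0.01):
    """LMS adaptive noise canceller.

    `primary` is the physiology sensor (signal + motion noise);
    `reference` is the noise-only sensor. The filter learns to predict
    the noise from the reference and outputs the residual.
    """
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(len(primary)):
        x = reference[max(0, n - taps + 1):n + 1][::-1]
        x = np.pad(x, (0, taps - len(x)))  # zero-pad the warm-up frames
        y = w @ x                # estimate of the noise in this sample
        e = primary[n] - y       # cleaned output sample
        w += 2 * mu * e * x      # LMS weight update
        out[n] = e
    return out
```

With two front and two back sensors, one such filter per front/back pair would run in parallel, as in the four-sensor configuration described above.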
We present tissue modulated Raman spectroscopy as a technique for noninvasively measuring the concentration of blood analytes in vivo. We present preliminary data used to determine the best methods for analyzing our data. These experiments provide additional proof that we are indeed able to obtain the spectra of human blood in vivo and noninvasively. We discuss differences between our spectra and spectra of bulk blood in vitro. We also discuss the variations between individuals and the impact of those variations on our noninvasive blood glucose measurements.
Geoffrey S. F. Ling M.D., Ronald G. Riechers Sr., Krishna M. Pasala, Jeremy Blanchard, Masako Nozaki, Anthony Ramage, William Jackson, Michael Rosner, Patricia Garcia-Pinto, et al.
Proceedings Volume Visualization of Temporal and Spatial Data for Civilian and Defense Applications, (2001) https://doi.org/10.1117/12.438118
A novel method for identifying pneumothorax is presented. This method is based on a novel device that uses electromagnetic waves in the microwave radio frequency (RF) region and a modified algorithm previously used for the estimation of the angle of arrival of radar signals. In this study, we apply this radio frequency triage tool (RAFT) to the clinical condition of pneumothorax, which is a collapsed lung. In anesthetized pigs, RAFT can detect changes in the RF signature from a lung that is 20 percent or more collapsed. These results are compared to chest x-ray. Both methods are equivalent in their ability to detect pneumothorax in pigs.
The Far Forward Life Support System (FFLSS) is intended for US Army use in far forward, battlefield situations. The primary patient population is young, otherwise healthy, adult males. The FFLSS must provide stabilizing medical care in the far forward environment. The device must be easily operated, highly mobile, compact and rugged, and provide automated, definitive support for a minimum of one hour. This project designed, fabricated, and tested a prototype FFLSS.
This paper describes the development of a comprehensive human modeling environment, the Virtual Human, which will be used initially to model the human respiratory system for purposes of predicting pulmonary disease or injury using lung sounds. The details of the computational environment, including the development of a Virtual Human Thorax, a database for storing models, model parameters, and experimental data, and a Virtual Human web interface are outlined. Preliminary progress in developing this environment will be presented. A separate paper at the conference describes the modeling of sound generation using computational fluid dynamics and the modeling of sound propagation in the human respiratory system.
Kara L. Kruse, Paul T. Williams, Glenn O. Allgood, Richard C. Ward, Shaun S. Gleason, Michael J. Paulus, Nancy B. Munro, Gnanamanika Mahinthakumar, Chandrasegaran Narasimhan, et al.
Proceedings Volume Visualization of Temporal and Spatial Data for Civilian and Defense Applications, (2001) https://doi.org/10.1117/12.438122
Fundamental to the understanding of the various transport processes within the respiratory system, airway fluid dynamics plays an important role in biomedical research. When air flows through the respiratory tract, it is constantly changing direction through a complex system of curved and bifurcating tubes. As a result, numerical simulations of the airflow through this tracheobronchial system must be capable of resolving such fluid dynamic phenomena as flow separation, recirculation, secondary flows due to centrifugal instabilities, and shear stress variation along the airway surface. Anatomic complexities within the tracheobronchial tree, such as sharp carinal regions at asymmetric bifurcations, have motivated the application of the incompressible Computational Fluid Dynamics code PHI3D to the modeling of airflow. Developed at ORNL, PHI3D implements the new Continuity Constraint Method. Using a finite-element methodology, complex geometries can be easily simulated with PHI3D using unstructured grids. A time-accurate integration scheme allows the simulation of both transient and steady-state flows. A realistic geometry model of the central airways for the fluid flow studies was obtained from pig lungs using a new high resolution x-ray computed tomography system developed at ORNL for generating 3D images of the internal structure of laboratory animals.
In this work we present a photophysical and photochemical investigation of sensitizers used in phototherapy. Using water-soluble zinc phthalocyanines as an example, the influence of structure on the spectral characteristics and the state of the dye in solution is investigated. The quantum chemistry method of intermediate neglect of differential overlap with spectroscopic parameterization is used to study the spectroscopic-luminescent and physical characteristics of other sensitizers: psoralen, its isomers, and its methoxy-substituted derivatives. Efficient intersystem conversion, which causes significant population of triplets, is observed for the examined compounds. Effects of isomerism and methoxy substitution on the energy level diagram, photoelectron spectrum, and dipole moments are demonstrated.
The capabilities of the Georgia Tech Virtual Geographic Information System (GTVGIS) have been extended recently to take full advantage of the internal client-server structure that we have used in our stand-alone visualization capabilities. Research is underway in the creation of two additional client-server modes that will allow GTVGIS capabilities to be accessed by laptop client systems with high quality rendering capability, and by inexpensive, lightweight laptop and possibly handheld computing platforms that need only support standard web browsers. Interfacing to large, remote databases over IP protocols is necessary for these new GTVGIS modes. This paper describes each mode of GTVGIS and the capabilities and requirements for hardware and software for each of the modes.