This PDF file contains the front matter associated with SPIE Proceedings Volume 8060, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and Graphics Processing Units (GPUs). The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of subdomains that are each managed by a separate core/accelerator (CPU/GPU) pair. Four components at the core of HCT enable the envisioned distributed computing approach to large-scale dynamical system simulation: (a) the ability to partition the problem according to the one-to-one mapping (i.e., the spatial subdivision discussed above) during pre-processing; (b) a protocol for passing data between any two co-processors; (c) algorithms for element proximity computation; and (d) the ability to carry out post-processing in a distributed fashion. In this contribution, components (a) and (b) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics (task (c) above) is shown to achieve a two-order-of-magnitude gain in efficiency on the GPU when compared to traditional sequential implementations.
Note: Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not imply its endorsement, recommendation, or favoring by the United States Army. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Army, and shall not be used for advertising or product endorsement purposes.
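As a concrete illustration of component (a), the sketch below bins bodies into a uniform grid of subdomains, each owned by one CPU core/GPU pair. This is a minimal Python sketch under assumed names (assign_subdomains, grid dimensions); it is not the paper's implementation.

    # Hypothetical sketch of HCT component (a): partitioning bodies into
    # spatial subdomains, each mapped one-to-one to a CPU core/GPU pair.
    from collections import defaultdict

    def assign_subdomains(positions, dmin, dmax, grid_dims):
        """Bin body positions into a uniform grid of subdomains."""
        nx, ny, nz = grid_dims
        subdomains = defaultdict(list)
        for body_id, (x, y, z) in enumerate(positions):
            i = min(int((x - dmin[0]) / (dmax[0] - dmin[0]) * nx), nx - 1)
            j = min(int((y - dmin[1]) / (dmax[1] - dmin[1]) * ny), ny - 1)
            k = min(int((z - dmin[2]) / (dmax[2] - dmin[2]) * nz), nz - 1)
            # the CPU/GPU pair owning cell (i, j, k) manages this body
            subdomains[(i, j, k)].append(body_id)
        return subdomains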
As multithreaded and reconfigurable logic architectures play an increasing role in high-performance computing (HPC), the scientific community is in need of new programming models for efficiently mapping existing applications to the new parallel platforms. In this paper, we show how we can effectively exploit tightly coupled fine-grained parallelism in architectures such as GPUs and FPGAs to speed up applications described by uniform recurrence equations. We introduce the concept of rolling partial-prefix sums to dynamically keep track of and resolve multiple dependencies without having to evaluate intermediary values. Rolling partial-prefix sums are applicable to the low-latency evaluation of dynamic programming problems expressed as uniform or affine recurrence equations. To assess our approach, we consider two common problems in computational biology: hidden Markov models (HMMER) for protein motif finding and the Smith-Waterman algorithm. We present a platform-independent, linear-time solution to HMMER, which is traditionally solved in bilinear time, and a platform-independent, sub-linear-time solution to Smith-Waterman, which is normally solved in linear time.
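For reference, the Smith-Waterman recurrence that such accelerators target has the uniform dependency structure shown in this naive quadratic-time sketch (linear gap penalty; scoring parameters illustrative). This is the baseline recurrence the paper's rolling partial-prefix sums reorganize, not the authors' technique itself.

    # Standard Smith-Waterman local-alignment recurrence, shown for reference.
    def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
        H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        best = 0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i][j] = max(0,
                              H[i - 1][j - 1] + s,  # substitution
                              H[i - 1][j] + gap,    # deletion
                              H[i][j - 1] + gap)    # insertion
                best = max(best, H[i][j])
        return best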
The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU and with an excellent FLOPS-to-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations, and would seem a natural fit for the processing power of the GPU. Our work is a GPU-accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers.
The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph-theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.
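The division of labor in an iterative solver can be pictured as below: a conjugate-gradient loop in which the dominant sparse matrix-vector products are the GPU-friendly work, while the CPU handles orchestration and scalar reductions. A minimal sketch using SciPy as a stand-in for the GPU kernels; it is not the authors' code.

    # Illustrative hybrid split for an iterative sparse solver (CG).
    import numpy as np
    from scipy.sparse import csr_matrix  # A is a csr_matrix

    def cg(A, b, tol=1e-8, max_iter=1000):
        b = np.asarray(b, dtype=float)
        x = np.zeros_like(b)
        r = b - A @ x              # residual (sparse mat-vec: GPU-friendly)
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p             # dominant cost, mapped to the GPU in practice
            alpha = rs / (p @ Ap)  # scalar reduction, fine on the CPU
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if rs_new ** 0.5 < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x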
We describe Jacket, a software platform for the rapid development of general-purpose GPU (GPGPU) computing applications within the MATLAB computing environment, C, and C++. Jacket provides thousands of GPU-tuned function syntaxes within MATLAB, C, and C++, including linear algebra, convolutions, reductions, and FFTs, as well as signal, image, statistics, and graphics libraries. Additionally, Jacket includes a compiler that translates MATLAB and C++ code to CUDA PTX assembly and OpenGL shaders on demand at runtime. A facility is also included to compile a domain-specific version of the MATLAB language to CUDA assembly at build time. Jacket includes the first parallel GPU FOR-loop construct and the first profiler for comparative analysis of CPU and GPU execution times. Jacket provides full GPU compute capability on CUDA hardware and limited, image-processing-focused compute on OpenGL/ES (2.0 and up) devices for mobile and embedded applications.
The modern battlespace is populated with a variety of sensors and sensing modalities. The design and tasking
of a given sensor is therefore increasingly dependent on the performance of other sensors in the mix. The
volume of sensor data is also forcing an increased reliance on sensor data exploitation and content analysis
algorithms (e.g., detecting, labeling, and tracking objects). Effective development and use of interconnected
and algorithmic (i.e., limited human role) sensing processes depends on sensor performance models (e.g., for
offline optimization over design and employment options and for online sensor management and data fusion).
Such models exist in varying forms and fidelities. This paper develops a framework for defining model roles
and describes an assessment process for quantifying fidelity and related properties of models. A key element
of the framework is the explicit treatment of the Operating Conditions (OCs, i.e., the target, environment, and sensor properties that affect exploitation performance) that are available for model development, testing data,
and model users. The assessment methodology is a comparison of model and reference performance, but is made
non-trivial by reference limitations (availability for OC distributions of interest) and differences in reference and
model OC representations. A software design of the assessment process is also described. Future papers will
report assessment results for specific models.
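One way to make the comparison step concrete: bin paired model and reference performance observations by OC and summarize the disagreement per bin. A hedged Python sketch with hypothetical field names; the actual assessment process is richer (OC representation mismatches, reference coverage limits).

    # Hypothetical sketch: mean absolute model-vs-reference error per OC bin.
    from collections import defaultdict

    def fidelity_by_oc(records):
        """records: iterable of (oc_bin, model_pd, reference_pd)."""
        errs = defaultdict(list)
        for oc_bin, model_pd, reference_pd in records:
            errs[oc_bin].append(abs(model_pd - reference_pd))
        # OC bins absent from the reference data simply do not appear,
        # mirroring the coverage limitation noted above.
        return {oc: sum(v) / len(v) for oc, v in errs.items()}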
Multi-frame algorithms for the removal of atmospheric turbulence have proven effective under ideal conditions where
the scene remains static; however, movement of the camera across a scene often introduces undesirable effects that
degrade the quality of processed imagery to the point where it becomes unusable. This paper discusses the development
of two solutions to this problem, each with different computational costs and levels of efficacy. We discuss a solution to
this problem that uses robust registration methods to align a window of input images to each other and processes them to
obtain a single improved frame, repeating the sequence of realignment and processing each time a new frame arrives.
While this approach produces high quality results, the associated computational cost precludes real-time implementation,
even on accelerated platforms. An alternative solution involves measuring scene movement through lightweight
registration and quantification. Registration results are used to make a global determination of "safe" approaches to
processing in order to avoid degraded results. This particular method is computationally inexpensive, though at some cost in efficacy. We discuss the performance of both of these modifications against the original, uncompensated algorithm in terms of computational cost and quality of output imagery. Additionally, we briefly discuss future goals that aim to minimize additional computation while maximizing processing efficacy.
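The lightweight-registration gate can be as simple as measuring the dominant inter-frame shift and suppressing multi-frame processing when it exceeds a threshold. A minimal sketch using FFT phase correlation; the threshold, function names, and the choice of phase correlation itself are assumptions, not the paper's method.

    # Lightweight global-shift measurement used to gate "safe" processing.
    import numpy as np

    def global_shift(prev, curr):
        F = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
        corr = np.fft.ifft2(F / (np.abs(F) + 1e-12))  # phase correlation
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        if dy > prev.shape[0] // 2: dy -= prev.shape[0]  # unwrap large shifts
        if dx > prev.shape[1] // 2: dx -= prev.shape[1]
        return dy, dx

    def safe_to_process(prev, curr, max_shift=2.0):
        dy, dx = global_shift(prev, curr)
        return (dy * dy + dx * dx) ** 0.5 <= max_shift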
The continuing miniaturization and parallelization of computer hardware has facilitated the development of mobile and
field-deployable systems that can accommodate terascale processing within once prohibitively small size and weight
constraints. General-purpose Graphics Processing Units (GPUs) are prominent examples of such terascale devices.
Unfortunately, the added computational capability of these devices often comes at the cost of larger demands on power,
an already strained resource in these systems. This study explores power versus performance issues for a workload that
can take advantage of GPU capability and is targeted to run in field-deployable environments, i.e., Synthetic Aperture
Radar (SAR). Specifically, we focus on the Image Formation (IF) computational phase of SAR, often the most compute
intensive, and evaluate two different state-of-the-art GPU implementations of this IF method. Using real and simulated
data sets, we evaluate performance tradeoffs for single- and double-precision versions of these implementations in terms
of time-to-solution, image output quality, and total energy consumption. We employ fine-grain direct-measurement
techniques to capture isolated power utilization and energy consumption of the GPU device, and use general and radar-specific metrics to evaluate image output quality. We show that double-precision IF can provide a slight image-quality improvement in low-reflectivity areas of SAR images, but note that the added quality may not be worth the higher power
and energy costs associated with higher precision operations.
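For instance, the total energy for the IF phase follows directly from the sampled power trace. A small sketch (trapezoidal integration; uniform sampling is an assumption):

    # Integrate sampled GPU power over the image-formation phase.
    def energy_joules(power_watts, dt_seconds):
        """power_watts: regularly spaced power samples for the IF phase."""
        e = 0.0
        for p0, p1 in zip(power_watts, power_watts[1:]):
            e += 0.5 * (p0 + p1) * dt_seconds  # trapezoidal rule
        return e

Since E = P_avg * t, a double-precision run that both draws more power and runs longer loses on both factors, which is the tradeoff the study quantifies.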
A radar system created using an embedded computer system needs testing. The way to test an embedded computer
system is different from the debugging approaches used on desktop computers. One way to test a radar system is to feed
it artificial inputs and analyze the outputs of the radar. More often, not all of the building blocks of the radar system are
available to test. This will require the engineer to test parts of the radar system using a "black box" approach. A
common way to test software code on a desktop simulation is to use breakpoints so that it pauses after each cycle
through its calculations. The outputs are compared against the values that are expected. This requires the engineer to
use valid test scenarios. We will present a hardware-in-the-loop simulator that allows the embedded system to think it is
operating with real-world inputs and outputs. From the embedded system's point of view, it is operating in real-time.
The hardware in the loop simulation is based on our Desktop PC Simulation (PCS) testbed. In the past, PCS was used
for ground-based radars. This embedded simulation, called Embedded PCS, allows a rapid simulated evaluation of
ground-based radar performance in a laboratory environment.
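The core of such a simulator is a loop that paces the world model against wall-clock time so that, from the embedded system's point of view, I/O arrives in real time. A hypothetical sketch; the simulator and I/O interfaces are placeholders, not Embedded PCS APIs.

    # Minimal hardware-in-the-loop pacing loop (interfaces are hypothetical).
    import time

    def hil_loop(simulator, embedded_io, dt=0.001):
        next_tick = time.perf_counter()
        while simulator.running:
            embedded_io.write(simulator.sensor_outputs())  # artificial inputs
            commands = embedded_io.read()                  # radar's responses
            simulator.step(commands, dt)                   # advance world model
            next_tick += dt
            time.sleep(max(0.0, next_tick - time.perf_counter()))  # hold real time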
The U.S. Air Force is consistently evolving to support current and future operations through the planning and execution
of intelligence, surveillance and reconnaissance (ISR) missions. However, it is a challenge to maintain a precise
awareness of current and emerging ISR capabilities to properly prepare for future conflicts. We present a decision-support tool for acquisition managers to empirically compare ISR capabilities and approaches to employing them,
thereby enabling the DoD to acquire ISR platforms and sensors that provide the greatest return on investment. We have
developed an analysis environment to perform modeling and simulation-based experiments to objectively compare
alternatives. First, the analyst specifies an operational scenario for an area of operations by providing terrain and threat
information; a set of nominated collections; sensor and platform capabilities; and processing, exploitation, and
dissemination (PED) capacities. Next, the analyst selects and configures ISR collection strategies to generate collection
plans. The analyst then defines customizable measures of effectiveness or performance to compute during the
experiment. Finally, the analyst empirically compares the efficacy of each solution and generates concise reports to
document their conclusions, providing traceable evidence for acquisition decisions. Our capability demonstrates the
utility of using a workbench environment for analysts to design and run experiments. Crafting impartial metrics enables
the acquisition manager to focus on evaluating solutions based on specific military needs. Finally, the metric and
collection plan visualizations provide an intuitive understanding of the suitability of particular solutions. This facilitates
a more agile acquisition strategy that handles rapidly changing technology in response to current military needs.
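As an illustration of a customizable measure of effectiveness, the fraction of nominated collections a plan satisfies, weighted by priority, can be computed as below. The data structures are hypothetical; the workbench supports analyst-defined metrics of this general shape.

    # Hypothetical priority-weighted collection MoE.
    def collection_moe(nominations, plan):
        """nominations: [(target_id, priority)]; plan: set of collected target_ids."""
        total = sum(p for _, p in nominations)
        satisfied = sum(p for t, p in nominations if t in plan)
        return satisfied / total if total else 0.0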
Low-driving-voltage, high-speed electro-optic (EO) modulators are of great interest due to their wide variety of applications, including broadband communication, RF-photonic links, millimeter-wave imaging, and phased-array radars. In this paper we propose the symmetric design, analysis, and optimization of a novel, high-speed, ultra-low-driving-voltage traveling-wave EO modulator based on a dual RF-photonic slot waveguide. Preliminary simulation results demonstrate that a DC electro-optic response and a half-wave voltage-length product VπL of 0.1-0.2 V·cm can be achieved for this design. The electro-optic response demonstrates that the proposed device is capable of ultra-high-speed operation covering the entire RF spectrum.
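To put the quoted figure of merit in perspective, the half-wave voltage for a device of electrode length L follows from the product VπL; assuming, hypothetically, a 1 cm interaction length:

    V_\pi = \frac{V_\pi L}{L} \quad\Longrightarrow\quad V_\pi \approx \frac{0.1\ \mathrm{V\cdot cm}}{1\ \mathrm{cm}} = 0.1\ \mathrm{V},

so sub-volt drive voltages would follow for centimeter-scale electrodes.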
Modeling and simulation has been established as a cost-effective means of supporting the development of requirements,
exploring doctrinal alternatives, assessing system performance, and performing design trade-off analysis. The Army's
constructive simulation for the evaluation of equipment effectiveness in small combat unit operations is currently limited
to representation of situation awareness without inclusion of the many uncertainties associated with real world combat
environments. The goal of this research is to provide an ability to model situation awareness and decision process
uncertainties in order to improve evaluation of the impact of battlefield equipment on ground soldier and small combat
unit decision processes. Our Army Probabilistic Inference and Decision Engine (Army-PRIDE) system provides this required uncertainty modeling through the application of two critical techniques that allow Bayesian network technology to be applied to real-time applications: an Object-Oriented Bayesian Network methodology and an Object-Oriented Inference technique. In this research, we implement decision process and situation awareness models for a reference scenario
using Army-PRIDE and demonstrate its ability to model a variety of uncertainty elements, including: confidence of
source, information completeness, and information loss. We also demonstrate that Army-PRIDE improves the realism of
the current constructive simulation's decision processes through Monte Carlo simulation.
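As a toy illustration of one of the uncertainty elements (confidence of source), a single Bayes-rule update shows how a source's error profile tempers the belief a report induces. This is a didactic sketch, not Army-PRIDE's object-oriented machinery; all numbers are hypothetical.

    # Bayes-rule update of belief in an event given a report from a source.
    def update_belief(prior, p_report_if_true, p_report_if_false):
        """P(event | report) for a source with the given error profile."""
        num = p_report_if_true * prior
        den = num + p_report_if_false * (1.0 - prior)
        return num / den

    # A low-reliability source (0.6 / 0.4) barely moves a 0.5 prior:
    # update_belief(0.5, 0.6, 0.4) -> 0.6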
Simulating cyber warfare is critical to the preparation of decision-makers for the challenges posed by cyber attacks.
Simulation is the only means we have to prepare decision-makers for the inevitable cyber attacks upon the
information they will need for decision-making and to develop cyber warfare strategies and tactics. Currently, there
is no theory regarding the strategies that should be used to achieve objectives in offensive or defensive cyber warfare,
and cyber warfare occurs too rarely to use real-world experience to develop effective strategies. To simulate cyber
warfare by affecting the information used for decision-making, we modify the information content of the rings that are
compromised in a decision-making context. The number of rings affected and the value of the information that is
altered (i.e., the closeness of the ring to the center) is determined by the expertise of the decision-maker and the
learning outcome(s) for the simulation exercise. We determine which information rings are compromised using the
probability that the simulated cyber defenses that protect each ring can be compromised. These probabilities are
based upon prior cyber attack activity in the simulation exercise as well as similar real-world cyber attacks. To
determine which information in a compromised "ring" to alter, the simulation environment maintains a record of the
cyber attacks that have succeeded in the simulation environment as well as the decision-making context. These two
pieces of information are used to compute an estimate of the likelihood that the cyber attack can alter, destroy, or
falsify each piece of information in a compromised ring. The unpredictability of information alteration in our
approach adds greater realism to the cyber event.
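The ring mechanics described above can be sketched as follows: rings are ordered from outermost to center, each with a compromise probability estimated from prior attack activity, and an attack penetrates inward until a defense holds. The names and the penetrate-inward policy are illustrative assumptions, not the simulation environment's actual logic.

    # Hypothetical sketch of per-ring compromise.
    import random

    def attack_rings(rings, p_compromise):
        """rings ordered outermost-to-center; p_compromise[i] is estimated
        from prior attack activity in the exercise and real-world data."""
        fallen = []
        for ring, p in zip(rings, p_compromise):
            if random.random() >= p:
                break            # this ring's defenses held
            fallen.append(ring)  # continue toward the center
        return fallen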
This paper suggests a new technique that can be used for cyber warfare simulation, the ring approach for modeling
context-dependent information value, and our means for considering information value when assigning cyber
resources to information protection tasks. The first section of the paper introduces the cyber warfare simulation
challenge and the reasons for its importance. The second section contains background information related to our
research. The third section contains a discussion of the information ring technique and its use for simulating cyber
attacks. The fourth section contains a summary and suggestions for research.
Simulation environments serve many purposes, but they are only as good as their content. One of the most
challenging and pressing areas that call for improved content is the simulation of bot armies (botnets) and their
effects upon networks and computer systems. Botnets are a new type of malware, one that is more powerful and potentially more dangerous than any other. A botnet's power derives from several capabilities, including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, it can also be extremely large, comprising tens of thousands, if not millions, of compromised computers, or as small as a few thousand targeted systems. In all botnets, their members can surreptitiously communicate with each other and their command and
control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated,
difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and
potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army
activities, with the goal of including bot army simulations within military simulation environments. In this paper,
we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired
MSEIR infection spread model coupled with the jump-diffusion infection spread model to portray botnet
propagation.
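To make the proposed propagation model concrete, here is one plausible discretization: a deterministic MSEIR compartment step (M: passively immune, S: susceptible, E: exposed, I: infected, R: recovered) perturbed by a diffusion term and Poisson-style jumps on new infections. All rates and the specific coupling are illustrative assumptions, not the paper's calibrated model.

    # Euler step of an MSEIR model with jump-diffusion on new infections.
    import random

    def mseir_jump_step(M, S, E, I, R, dt, delta=0.05, beta=0.4, sigma=0.1,
                        gamma=0.07, vol=0.02, jump_rate=0.1, jump_size=0.05):
        N = M + S + E + I + R
        new_exposed = beta * S * I / N * dt
        new_exposed += vol * random.gauss(0.0, dt ** 0.5) * S  # diffusion term
        if random.random() < jump_rate * dt:   # jump, e.g. a bot-herder push
            new_exposed += jump_size * S
        new_exposed = max(0.0, min(new_exposed, S))
        dM = -delta * M * dt                   # loss of passive immunity
        dS = delta * M * dt - new_exposed
        dE = new_exposed - sigma * E * dt      # incubation E -> I
        dI = sigma * E * dt - gamma * I * dt   # recovery/cleanup I -> R
        dR = gamma * I * dt
        return M + dM, S + dS, E + dE, I + dI, R + dR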
The National Operational Environment Model (NOEM) is a strategic analysis/assessment tool that provides insight into
the complex state space (as a system) that is today's modern operational environment. The NOEM supports baseline
forecasts by generating plausible futures based on the current state. It supports what-if analysis by forecasting
ramifications of potential "Blue" actions on the environment. The NOEM also supports sensitivity analysis by identifying, in support of the Commander, possible pressure (leverage) points that resolve forecasted instabilities, and by ranking sensitivities in a list for each leverage point and response. The NOEM can be used to assist Decision Makers, Analysts, and Researchers in understanding the inner workings of a region or nation state and the consequences of implementing specific policies, and it provides the ability to plug in new operational environment theories/models as they mature.
The NOEM is built upon an open-source, license-free set of capabilities, and aims to provide support for pluggable
modules that make up a given model. The NOEM currently has an extensive number of modules (e.g. economic,
security & social well-being pieces such as critical infrastructure) completed along with a number of tools to exercise
them. The focus this year is on modeling the social and behavioral aspects of a populace within their environment,
primarily the formation of various interest groups, their beliefs, their requirements, their grievances, their affinities, and
the likelihood of a wide range of their actions, depending on their perceived level of security and happiness. As such,
several research efforts are currently underway to model human behavior from a group perspective, in the pursuit of
eventual integration and balance of populace needs/demands within their respective operational environment and the
capacity to meet those demands. In this paper we will provide an overview of the NOEM, the need for and a description
of its main components. We will also provide a detailed discussion of the model and sample use cases.
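The pluggable-module aim suggests an engine/module contract along these lines; the interface below is a hypothetical sketch in the spirit of the description, not the NOEM's actual API.

    # Hypothetical pluggable-module contract for a NOEM-style engine.
    class Module:
        name = "base"
        def step(self, state: dict, dt: float) -> dict:
            """Return partial state updates; the engine merges them."""
            raise NotImplementedError

    class SecurityModule(Module):
        name = "security"
        def step(self, state, dt):
            unrest = state.get("grievance", 0.0) * (1.0 - state.get("security", 1.0))
            return {"instability": state.get("instability", 0.0) + unrest * dt}

    def run(modules, state, dt, steps):
        for _ in range(steps):
            for m in modules:
                state.update(m.step(state, dt))
        return state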
The Department of Defense (DOD) is exercising a risk-based process for verifying, validating and accrediting models
and simulations (M&S) used in system acquisition. Test and laboratory facilities can potentially have even greater negative consequences to a program than M&S if there are errors present in the test and analysis results, since
test results are usually considered closer to the "truth" than M&S results. This paper will discuss how the risk-based
M&S verification, validation and accreditation (VV&A) process is being applied to test and laboratory facilities, issues
associated with this different application of the process, and thoughts on the broader applicability of risk-based VV&A
beyond the current application.
To bring reality into models and simulations (M&S), the Department of Defense (DOD) combines constructive M&S
with real equipment operated by humans in field environments. When such a live, virtual, and constructive distributed
environment (LVC-DE) is assembled, there exist ample opportunities for success or failure depending on many issues.
Each M&S tool, along with the means used to connect it to the others, must be examined independently. The combined
M&S, the interfaces, and the data they exchange must be tested to confirm that the entire system is interoperable and is
achieving its intended goals. Verification and Validation (V&V) is responsible for systematically investigating, creating,
and documenting the artifacts needed to assess the credibility of such an LVC-DE. The ultimate goal of V&V is to evaluate the capability, the accuracy, and the usability of the LVC-DE.
The Battlespace Modeling and Simulation V&V Branch has extensive experience performing V&V of LVC-DEs. In a
recent project, the task consisted of conducting V&V of the LVC-DE, the supporting infrastructure, and the legacy M&S
tools. From a V&V perspective, many things were done correctly; however, several adjustments were necessary to
improve the credibility of the LVC-DE. This paper will discuss lessons learned during the implementation and provide
recommendations for future LVC-DE applications.
The anti-ship missile (ASM) threat faced by ships will become more diverse and more difficult to counter. Intelligence, rules-of-engagement constraints, and the fast reaction times needed for an effective softkill solution require specific tools to design Electronic Warfare (EW) systems and to integrate them onboard ship.
SAGEM provides a decoy launcher system [1] and its associated Naval Electronic Warfare Simulation tool (NEWS) to permit softkill effectiveness analysis for anti-ship missile defence.
The NEWS tool generates a virtual environment for missile-ship engagements and simulates countermeasures over a wide spectrum: RF, IR, and EO. It integrates the EW Command & Control (EWC2) process implemented in the decoy launcher system and performs Monte-Carlo batch processing to evaluate softkill effectiveness in different engagement situations.
NEWS is designed to allow immediate EWC2 process integration from simulation to the real decoy launcher system. By design, it allows the final operator to program, test, and integrate their own EWC2 module and EW library onboard, so the intelligence of each user is protected and the evolution of the threat can be taken into account through EW library updates.
The objectives of the NEWS tool also include defining a methodology for trial definition and trial data reduction. Growth potential would permit the design of new concepts for EWC2 programmability and real-time effectiveness estimation in EW systems. The tool can also be used for operator training purposes.
This paper presents the architecture design, the softkill programmability facility concept, and the flexibility for onboard integration on ship. The concept of this operationally focused simulation, which is to use only one tool for design, development, trial validation, and operational use, will be demonstrated.
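At its core, the Monte-Carlo batch evaluation amounts to estimating a survival probability over many randomized engagements. A minimal sketch; engagement() is a hypothetical placeholder for the full RF/IR/EO missile-ship simulation with the EWC2 process in the loop.

    # Monte-Carlo estimate of softkill effectiveness.
    import random

    def softkill_effectiveness(engagement, n_runs=10000, seed=1):
        rng = random.Random(seed)
        survived = sum(1 for _ in range(n_runs) if engagement(rng))
        return survived / n_runs  # fraction of engagements the ship survives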
On the Windows platform, Snort can only raise an alarm about an attack; it cannot drive the firewall in real time. This paper therefore proposes a method for building a distributed Internet security defense system based on Windows IPSec and IDS. The method combines the two by adding keywords and new rules and by creating a new IP security policy. In addition, an encryption algorithm named Twofish is applied to encrypt data, which effectively protects the host. Finally, an attack experiment is presented.
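The linkage described, from IDS alert to firewall policy, can be pictured as a small watcher over Snort's alert output that installs a blocking IP security policy per attacker address. A hedged sketch: the alert-line format and apply_ipsec_block(), standing in for the Windows IPSec policy update, are illustrative assumptions.

    # Hypothetical Snort-alert-to-IPSec linkage.
    import re

    ALERT_SRC = re.compile(r"(\d+\.\d+\.\d+\.\d+):\d+ ->")

    def watch_alerts(lines, apply_ipsec_block):
        blocked = set()
        for line in lines:
            m = ALERT_SRC.search(line)
            if m and m.group(1) not in blocked:
                blocked.add(m.group(1))
                apply_ipsec_block(m.group(1))  # install a block filter in real time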
LADAR (Laser Detection and Ranging) is widely used for reconnaissance and target detection, mounted on various moving vehicles in the defense field. During the design and development of a LADAR system, system simulation is typically performed to assess its performance and to provide test data for real applications. In order to generate simulated LADAR data with a high degree of reliability and accuracy, it is necessary to derive a precise geometric model of the sensor and to calculate, using that model, the locations where the rays (laser pulses) are reflected. As tens of thousands of laser beams are transmitted to the targets every second during real operation, a LADAR simulator must perform a tremendous amount of geometric computation to determine the intersections between the rays and the targets. In this study, we present an attempt to develop an efficient method for such geometric computation in LADAR simulation. In the computational process, we first search for candidate facets that are likely to intersect a ray, then determine the actual intersecting facet, and finally compute the intersection. To reduce the computational time, we employ an incremental algorithm and parallel processing based on a CUDA-enabled GPU. We expect that our proposed approaches will enable LADAR simulator software to run in near real time.
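The per-facet test at the heart of that pipeline is ray-triangle intersection. Below is the standard Möller-Trumbore formulation, written in Python for clarity; in the setting described it would run per candidate facet inside CUDA kernels (whether the authors use this exact formulation is an assumption).

    # Möller-Trumbore ray-triangle intersection.
    import numpy as np

    def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(d, e2)
        det = e1 @ p
        if abs(det) < eps:
            return None                    # ray parallel to facet plane
        inv = 1.0 / det
        t_vec = orig - v0
        u = (t_vec @ p) * inv
        if u < 0.0 or u > 1.0:
            return None                    # outside first barycentric bound
        q = np.cross(t_vec, e1)
        v = (d @ q) * inv
        if v < 0.0 or u + v > 1.0:
            return None                    # outside second barycentric bound
        t = (e2 @ q) * inv
        return orig + t * d if t > eps else None  # intersection point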