Military operations invariably involve devices at the edge, such as sensors, drones, and soldiers' handsets. In edge environments, good network connectivity cannot be assumed due to Denied, Degraded, Intermittent, or Low-bandwidth (DDIL) conditions. A DDIL environment poses unique challenges for deploying AI applications at the edge, particularly in the execution of Machine Learning Operations (MLOps). In this paper, we present a framework to address these challenges by considering three important dimensions: (i) the ML model lifecycle activities, (ii) the specific DDIL-induced challenges at the edge, and (iii) the application stack. We discuss three realistic use cases in detail to explain how this approach can be used to identify the underlying design patterns. We believe that use of this framework can lead to responsive and reliable AI deployment under varying operational conditions.
We consider a scenario in multi-domain operations where users in one domain need to perform searches on information hosted by a provider in another domain. In many scenarios, information cannot be shared openly across domains, and users may want to obfuscate their searches to prevent the search provider from learning their intent. Where search privacy is important, a large language model can help implement an obfuscation approach that relies on generating decoy queries to mask the real query. In this paper, we consider alternative approaches to using large language models for search privacy, compare their strengths and weaknesses, and discuss their effectiveness.
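To make the decoy approach concrete, the following sketch (our illustration, not necessarily the paper's exact scheme) generates k decoy queries with a language model, shuffles in the real query, and submits the whole batch; llm(prompt) and search(query) are assumed helper functions standing in for any text-generation backend and search API.

    import random

    def obfuscate_query(real_query, llm, k=4):
        """Return a shuffled batch containing k decoys plus the real query."""
        prompt = (f"Write {k} plausible search queries on unrelated topics, "
                  f"one per line, similar in length and style to: {real_query}")
        decoys = [q.strip() for q in llm(prompt).splitlines() if q.strip()][:k]
        batch = decoys + [real_query]
        random.shuffle(batch)          # the provider sees all queries as peers
        return batch

    def private_search(real_query, llm, search):
        results = {q: search(q) for q in obfuscate_query(real_query, llm)}
        return results[real_query]     # the client keeps only the real answer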
KEYWORDS: Analytics, Network architectures, Space reconnaissance, Sensors, Information technology, Failure analysis, Databases, Data storage, Data processing, Computer architecture
By extending Software Defined Networking (SDN), the Distributed Analytics and Information Sciences International Technology Alliance (DAIS ITA) https://dais-ita.org/pub has introduced a new architecture called Software Defined Coalitions (SDC) to share communication, computation, storage, database, sensor and other resources among coalition forces. Reinforcement learning (RL) has been shown to be effective for managing SDC. Due to link failures or operational requirements, SDC may become fragmented and later reconnect. This paper shows how data and knowledge acquired in the disconnected SDC domains during fragmentation can be used via transfer learning (TL) to significantly enhance the RL after fragmentation ends. Thus, the combined RL-TL technique enables efficient management and control of SDC despite fragmentation. The technique also enhances the robustness of the SDC architecture for supporting distributed analytics services.
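As a rough illustration of how fragment-time knowledge might seed post-fragmentation learning (a sketch under our own assumptions, not the DAIS ITA algorithm), the tabular case reduces to warm-starting the global Q-table from an experience-weighted average of the Q-tables learned in each disconnected domain:

    import numpy as np

    def transfer_q(fragment_qs, fragment_steps):
        """fragment_qs: per-fragment (S, A) Q-tables learned while disconnected;
        fragment_steps: how much experience each fragment accumulated."""
        w = np.asarray(fragment_steps, dtype=float)
        w /= w.sum()
        return sum(wi * q for wi, q in zip(w, fragment_qs))

    # The merged table seeds ordinary Q-learning on the reconnected SDC,
    # e.g. q = transfer_q([q_a, q_b], [steps_a, steps_b]), instead of
    # restarting from an all-zero table.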
Network structure represents a vital component in wide-ranging aspects of Multi-Domain Operations (MDO). One specific type of network that holds promise for understanding the behavior of complex environments such as MDO consists of networks whose nodes are connected by both positive and negative ties. Positive ties are edges that promote nodes to become similar to each other, or homophilous, while negative ties are edges that promote nodes to become dissimilar. Such a model of influence among nodes can be used to explain various phenomena within a society, to model peer influence and the spread of memes, or to model incidents of violence. In this paper, we propose a Positive-Negative tie network model to analyze terrorism incidents in India, and investigate the role of this network in general network classification and situation understanding contexts.
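A minimal simulation of such influence dynamics might look as follows; the linear update rule, the state range, and the rate are illustrative assumptions rather than the paper's model. An entry A[i, j] = +1 pulls node i toward node j, and -1 pushes it away:

    import numpy as np

    def influence_step(x, A, rate=0.1):
        """x: (n,) node states in [-1, 1]; A: (n, n) signed adjacency matrix
        with entries +1 (positive tie), -1 (negative tie), or 0 (no edge)."""
        dx = rate * (A * (x[None, :] - x[:, None])).sum(axis=1)
        return np.clip(x + dx, -1.0, 1.0)   # positive ties attract, negative repel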
One of the key factors affecting any multi-domain operation concerns the influence of unorganized militias, which may often counter a more advanced adversary by means of terrorist incidents. In order to ensure the achievement of strategic objectives, the actions and influence of such violent activities need to be taken into account. However, in many cases, full information about the incidents that may have affected civilians and non-government organizations is hard to determine. In asymmetric warfare, or when planning a multi-domain operation, the identity of the perpetrators themselves may not be known. In order to support a coalition commander's mandate, AI/ML techniques can be used to provide the missing details about incidents in the field that may only be partially understood or analyzed. In this paper, we examine the goal of predicting the identity of the perpetrator of a terrorist incident using AI/ML techniques on historical data, and discuss how well the AI/ML models can help clean the data available to the commander for analysis.
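A minimal sketch of the prediction step, assuming scikit-learn and illustrative feature matrices (the specific features and model choice are not taken from the paper): train a classifier on incidents with known perpetrators, then impute the missing labels together with a confidence score.

    from sklearn.ensemble import RandomForestClassifier

    def impute_perpetrators(X_known, y_known, X_unknown):
        """Fit on incidents with a known perpetrator group, then predict the
        group for unattributed incidents along with a confidence score."""
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_known, y_known)
        return clf.predict(X_unknown), clf.predict_proba(X_unknown).max(axis=1)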
In multi-domain operations, different domains receive different modalities of input signals, and as a result end up training different models for the same decision-making task. The input modalities may overlap, so models created in one domain may be partially reusable for tasks conducted in other domains. In order to share the knowledge embedded in models trained independently in each individual domain, we propose the concept of hybrid policy-based ensembles, in which the heterogeneous models from different domains are combined into an ensemble whose operation is controlled by policies specifying which subset of the models ought to be used for an operation. We show how these policies can be expressed based on properties of the training datasets, and discuss the performance of these hybrid policy-based ensembles on a dataset used for training network intrusion detection models.
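A sketch of the inference path of such an ensemble, assuming scikit-learn-style models with a .predict() method; the metadata fields and the policy predicate are illustrative stand-ins for the paper's policy language:

    from collections import Counter

    def ensemble_predict(x, models, metadata, policy):
        """metadata[i] describes model i's training data, e.g. {"domain": "radar"};
        policy(x, meta) -> bool decides whether that model votes on input x."""
        votes = [m.predict([x])[0]
                 for m, meta in zip(models, metadata) if policy(x, meta)]
        if not votes:
            votes = [m.predict([x])[0] for m in models]   # fall back to all models
        return Counter(votes).most_common(1)[0][0]        # majority vote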
KEYWORDS: Data modeling, Artificial intelligence, Process modeling, Ecosystems, Machine learning, Data archive systems, Roads, Data processing, Systems modeling, Video
Machine Learning systems rely on data for training, input, and ongoing feedback and validation. Data in the field can come from varied sources, often anonymous or unknown to the ultimate users of the data. Whenever data is sourced and used, its consumers need assurance that the data accuracy is as described, that the data has been obtained legitimately, and they need to understand the terms under which the data is made available so that they can honour them. Similarly, suppliers of data require assurances that their data is being used legitimately by authorised parties, in accordance with their terms, and that usage is appropriately recompensed. Furthermore, both parties may want to agree on a specific set of quality of service (QoS) metrics, which can be used to negotiate service quality based on cost, and then receive affirmation that data is being supplied within those agreed QoS levels. Here we present a conceptual architecture which enables data sharing agreements to be encoded and computationally enforced, remuneration to be made when required, and a trusted audit trail to be produced for later analysis or reproduction of the environment. Our architecture uses blockchain-based distributed ledger technology, which can facilitate transactions in situations where parties do not have an established trust relationship or centralised command and control structures. We explore techniques to promote faith in the accuracy of the supplied data, and to let data users determine trade-offs between data quality and cost. Our system is exemplified through a case study using multiple data sources from different parties to monitor traffic levels in urban locations.
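As one small piece of the architecture, the trusted audit trail can be illustrated by a toy hash chain; a real deployment would use a distributed ledger platform rather than this single-writer sketch, and all field names here are assumptions:

    import hashlib, json, time

    def append_record(chain, supplier, consumer, terms, qos, payment):
        """Append a tamper-evident data-sharing record to an in-memory chain."""
        prev = chain[-1]["hash"] if chain else "0" * 64
        record = {"prev": prev, "time": time.time(), "supplier": supplier,
                  "consumer": consumer, "terms": terms, "qos": qos,
                  "payment": payment}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)
        return record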
KEYWORDS: Artificial intelligence, Machine learning, Data modeling, Chemical elements, Data acquisition, Defense and security, Systems modeling, Distance measurement
Real-life AI-based solutions usually consist of a complex chain of processing elements, which may include a mixture of machine learning based approaches and traditional programmed knowledge. The solution uses this chain of processing elements to convert input information into an output decision. When information is provided to a specific solution, the impact of the information on the decision can be measured quantitatively as a Value of Information (VoI) metric. In prior work, we have considered how the VoI metric can be defined for a single AI-based processing element. To be useful in real-life solution instances, the VoI metric needs to be enhanced to handle a complex chain of processors, and extended to cover complete AI-based solutions as well as supporting elements that may not necessarily use AI. In this paper, we propose a definition of VoI that can be used across AI-based as well as non-AI-based processing, and show how the construct can be used to analyze and understand the impact of a piece of information on a chain of processing elements.
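One concrete way to operationalize a chain-level VoI, offered purely as an illustration of the construct rather than the paper's definition, is the marginal change in decision quality when a piece of information is added to the chain's input:

    def run_chain(info_set, stages):
        """stages: ordered processing elements, ML-based or programmed."""
        x = info_set
        for stage in stages:
            x = stage(x)
        return x

    def value_of_information(item, base_info, stages, quality):
        """quality(decision) -> float scores the chain's final output."""
        return (quality(run_chain(base_info + [item], stages))
                - quality(run_chain(base_info, stages)))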
Autonomous systems are expected to have a major impact in future coalition operations. These systems are enabled by a variety of Artificial Intelligence (AI) learning algorithms that contextualize and adapt in varying, possibly unforeseen situations to assist humans in achieving complex tasks. Moreover, these systems will be required to operate in dynamic and challenging environments that impose certain constraints such as task formation and collaboration, ad-hoc resource availability, rapidly changing environmental conditions and the requirement to abide by mission objectives. Therefore, such systems require the capability to adapt and evolve so that they can behave autonomously at the edge of the network in new situations. Crucially, autonomous systems have to understand the bounds in which they can operate based on their own capability and the constraints of the environment. Policies are typically used by systems to define their behavior and constraints, and often these policies are manually configured and managed by humans. AI-enabled systems will require novel approaches to rapidly learn, create, augment, and model emerging policies to support real-time decision making. Recent work has shown that such policy models can be generated through approaches ranging from symbolic learning to shallow and deep learning for different classes of problems. Motivated by this observation, in this paper we propose to apply recent advances in explainable-AI to develop an approach which is agnostic to the learning algorithm, thus enabling seamless policy generation in the coalition environment.
Tactical edge environments are highly distributed, with a large number of sensing, computational, and communication nodes spread across large geographical regions and governments, situated in unique operational environments. In such settings, a large number of observations and actions may occur across a large number of nodes, but each node may hold only a small amount of this data locally. Further, there may be technical as well as policy constraints on aggregating all observations at a single node. Learning from all of the data could uncover critical correlations and insights, but this is not possible without access to all the data. Recently proposed federated averaging approaches allow a single model to be learned from data spread across multiple nodes and achieve good results on image classification tasks. However, this still assumes a sizable amount of data on each node and a small number of nodes. This paper investigates the properties of federated averaging for neural networks relative to batch sizes and numbers of nodes. Experimental results on a human activity dataset find that (1) accuracy indeed drops as the number of nodes increases, but only slightly; however, (2) accuracy is highly sensitive to the batch size only in the federated averaging case.
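The aggregation step at the heart of federated averaging is compact enough to show directly. A framework-agnostic sketch with numpy, weighting each node's locally trained weights by its example count:

    import numpy as np

    def federated_average(local_weights, local_counts):
        """local_weights: one list of numpy arrays (layers) per node;
        local_counts: number of local training examples per node."""
        w = np.asarray(local_counts, dtype=float)
        w /= w.sum()
        return [sum(wi * layer for wi, layer in zip(w, layers))
                for layers in zip(*local_weights)]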
A Federated Learning approach consists of creating an AI model from multiple data sources, without moving large amounts of data to a central environment. Federated learning can be very useful in a tactical coalition environment, where data can be collected individually by each of the coalition partners, but network connectivity is inadequate to move the data to a central environment. However, the data collected is often dirty and imperfect. The data can be imbalanced, and in some cases, some classes can be completely missing from some coalition partners. Under these conditions, traditional approaches to federated learning can result in models that are highly inaccurate. In this paper, we propose approaches that can produce good machine learning models even in environments where the data may be highly skewed, and study their performance in different environments.
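One simple mitigation for label skew, shown here as an illustration rather than the paper's proposal, is to rebalance each partner's local data by oversampling rare classes before every training round:

    import random
    from collections import defaultdict

    def rebalance(examples):
        """examples: list of (features, label); returns a class-balanced
        resample, oversampling rare classes with replacement."""
        by_label = defaultdict(list)
        for x, y in examples:
            by_label[y].append((x, y))
        target = max(len(v) for v in by_label.values())
        balanced = []
        for items in by_label.values():
            balanced.extend(random.choices(items, k=target))
        random.shuffle(balanced)
        return balanced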
When training data for machine learning is obtained from many different sources, not all of which may be trusted, it is difficult to determine which training data to accept and which to reject. A policy-based approach for data curation, where the policies are generated after examining the properties of the offered data, can provide a way to accept only selected data for creating a machine learning model. In this paper, we discuss the challenges associated with generating policies that can manage training data from different sources. An efficient policy generation scheme needs to account for the order in which information is received, determine the trustworthiness of each partner, quickly assess which data subsets can add value to a complex model, and address several other issues. After providing an overview of the challenges, we propose approaches to solve them and study the properties of those approaches.
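The "does this data subset add value" check can be sketched as follows; train() and evaluate() are assumed helpers (fit a model, score it on a trusted holdout set), and in practice this signal would be combined with the trustworthiness and ordering considerations above:

    def accept_offer(accepted, offered, train, evaluate, margin=0.0):
        """Accept the offered data subset only if retraining with it improves
        the score on a trusted holdout set by more than `margin`."""
        baseline = evaluate(train(accepted))
        candidate = evaluate(train(accepted + offered))
        return candidate - baseline > margin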
KEYWORDS: Artificial intelligence, Data modeling, Performance modeling, Machine learning, Process modeling, Data processing, Safety, Sensors, Data communications
The Army is evolving its warfighting concepts to militarily compete, penetrate, dis-integrate, and exploit adversaries as part of a Multi-Domain Operations (MDO) Joint Force. Artificial Intelligence/Machine Learning (AI/ML) is critical to the Army's vision for AI-enabled capabilities to achieve MDO, but carries significant challenges and risks. The Army faces rapidly changing, never-before-seen situations, where pre-existing training data will quickly become ineffective; tactical training data is noisy, incomplete, and erroneous; and adversaries will employ deception. This is especially challenging at the Tactical Edge, which operates in complex urban settings that are dynamic, distributed, resource-constrained, fast-paced, contested, and often physically and virtually isolated. Federated machine learning is collaborative training in which training data is not exchanged, in order to overcome constraints on training data sharing (policy, security, coalition constraints) and/or the insufficient network capacity prevalent at the Tactical Edge. We describe the applicability of federated machine learning to MDO using a motivating scenario and identify when it is advantageous to use. The attributes and design inputs for the deployment of AI/ML (the learn-infer-act process), the factors that impact learning-inference processes, and the operational factors impacting the deployment of machine learning are identified. We propose strategies for six AI/ML deployment regimes at the intersection of total uncertainty (model and environmental) and the required operational timeliness, and map AI/ML techniques to address these challenges and requirements. Scientific research questions that must be answered to fill critical knowledge gaps are identified, and ongoing research approaches to answer them are highlighted.
Machine learning approaches like deep neural networks have proven to be very successful in many domains. However, they require training on huge volumes of data. While these approaches work very well in a few selected domains where a large corpus of training data exists, they shift the bottleneck in the development of machine learning applications to the data acquisition phase and are difficult to use in domains where training data is hard to acquire. For sensor fusion applications in coalition operations, good training data suitable for real-life applications is hard to obtain, and the training data sets that are available are limited in size. For these domains, we need to explore approaches to machine learning that can work with small amounts of data. In this paper, we look at current and emerging approaches that allow us to build machine learning models when access to training data is limited. The approaches examined include statistical machine learning, transfer learning, synthetic data generation, semi-supervised learning and one-shot learning.
AI (Artificial Intelligence)-based algorithms have great potential for inter-operation of coalition ISR (intelligence, surveillance, and reconnaissance) systems, but rely on realistic data for training and validation. Getting such data for coalition scenarios is hampered by military regulations and is a significant hurdle in conducting basic research. We discuss an approach whereby training data can be obtained by means of scenario-driven simulations, which result in traces for network devices, ISR sensors and other infrastructure components. This generated data can be used for both training and comparison of different AI based algorithms. Coupling the synthetic data generator with a data curation system further increases its applicability.
Coalition operations of the future will see an increased use of autonomous vehicles, mules and UAVs in different kinds of contexts. Because of the scalability and dynamicity of operations at the tactical edge, such vehicles, along with the supporting infrastructure at base-camps and other forward operating bases, would need to support an increased degree of autonomy. In this paper, we look at one specific scenario where a surveillance mission needs to be performed using resources borrowed from multiple coalition partners. In such an environment, experts who can define security and other types of policies for devices are hard to find. One way to address this problem is to use generative policies – an approach where the devices generate policies for their operations themselves, without requiring human involvement, as the configuration of the system changes. We show how access control policies can be created automatically by the different devices involved in the mission, with only high-level guidance provided by humans. The generative policy architecture can enable rapid reconfiguration of the security policies needed to address dynamic changes arising from features such as auto-scaling. It can also support improved security in coalition contexts by enabling solutions to use approaches like moving target defense. In this paper, we discuss a general architecture that allows the generative policy approach to be used in many different situations, a simulation implementation of the architecture, and lessons learnt from implementing the simulation.
In this paper, we discuss the problem of distributed learning for coalition operations. We consider a scenario where different coalition forces are running learning systems independently but want to merge the insights obtained from all the learning systems to share knowledge and use a single model combining all of their individual models. We consider the challenges involved in such fusion of models, and propose an algorithm that can find the right fused model in an efficient manner.
It is envisioned that the success of future military operations depends on better integration, organizationally and operationally, among allies, coalition members, inter-agency partners, and so forth. However, this leads to a challenging and complex environment where the heterogeneity and dynamism of the operating environment intertwine with the evolving situational factors that affect the decision-making life cycle of the war fighter. Therefore, the users in such environments need secure, accessible, and resilient information infrastructures where policy-based mechanisms adapt the behaviours of the systems to meet end user goals. By specifying and enforcing a policy-based model and framework for operations and security which accommodates heterogeneous coalitions, high levels of agility can be enabled to allow rapid assembly and restructuring of system and information resources. However, current prevalent policy models (e.g., the rule based event-condition-action model and its variants) are not sufficient to deal with the highly dynamic and plausibly non-deterministic nature of these environments. Therefore, to address the above challenges, in this paper we present a new approach for policies which enables managed systems to take more autonomic decisions regarding their operations.
In order to address the unique requirements of sensor information fusion in a tactical coalition environment, we propose a new architecture based on the concept of invocations. An invocation is a combination of software code and a piece of data, both managed using techniques from Information-Centric Networking. This paper discusses the limitations of current approaches, presents an invocation-oriented architecture, illustrates how it works with an example scenario, and provides reasons for its suitability in a coalition environment.
At the current time, interfaces between humans and machines use only a limited subset of the senses that humans are capable of. The interaction between humans and computers can become much more intuitive and effective if we are able to use more senses and create other modes of communication between them. New machine learning technologies can make this type of interaction a reality. In this paper, we present a framework for holistic communication between humans and machines that uses all of the senses, and discuss how a subset of this capability can allow machines to talk to humans to indicate their health for tasks such as predictive maintenance.
A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR) since they allow an estimation of the regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.
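A toy instance of the reported qualitative relationship, with an assumed logistic form and illustrative coefficients rather than the paper's fitted model:

    import math

    def incident_probability(avg_memberships, base_rate=0.5, beta=0.8):
        """Probability of an incident as group-membership diversity rises;
        decreasing in avg_memberships by construction."""
        logit = math.log(base_rate / (1 - base_rate)) - beta * avg_memberships
        return 1 / (1 + math.exp(-logit))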
In order to obtain modularity and reconfigurability for sensor information fusion services in modern battle-spaces, dynamic service composition and dynamic topology determination are needed. In the current state of the art, such information fusion services are composed manually and programmatically. In this paper, we consider an approach towards more automation by assuming that the topology of a solution is provided, and automatically choosing the types of algorithms to be used at each step. This includes the use of contextual information and techniques such as multi-armed bandits for managing the exploration-exploitation tradeoff.
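For the algorithm-selection step, a standard UCB1 bandit suffices to illustrate the exploration-exploitation machinery; arms correspond to candidate fusion algorithms, and the reward (e.g., a fusion quality score in [0, 1]) is assumed observable after each invocation:

    import math

    class UCB1:
        def __init__(self, n_arms):
            self.counts = [0] * n_arms
            self.values = [0.0] * n_arms

        def select(self):
            for arm, count in enumerate(self.counts):
                if count == 0:
                    return arm               # try every algorithm once first
            total = sum(self.counts)
            return max(range(len(self.counts)),
                       key=lambda a: self.values[a] +
                           math.sqrt(2 * math.log(total) / self.counts[a]))

        def update(self, arm, reward):
            self.counts[arm] += 1
            n = self.counts[arm]
            self.values[arm] += (reward - self.values[arm]) / n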
KEYWORDS: Network security, Computer security, Data communications, Data modeling, Databases, Analytical research, Systems modeling, Intelligence systems, Information security, Distributed computing, Communication and information technologies
To support dynamic communities of interests in coalition operations, new architectures for efficient sharing of ISR assets are needed. The use of blockchain technology in wired business environments, such as digital currency systems, offers an interesting solution by creating a way to maintain a distributed shared ledger without requiring a single trusted authority. In this paper, we discuss how a blockchain-based system can be modified to provide a solution for dynamic asset sharing amongst coalition members, enabling the creation of a logically centralized asset management system by a seamless policy-compliant federation of different coalition systems. We discuss the use of blockchain for three different types of assets in a coalition context, showing how blockchain can offer a suitable solution for sharing assets in those environments. We also discuss the limitations in the current implementations of blockchain which need to be overcome for the technology to become more effective in a decentralized tactical edge environment.
Behavioral Analytics (BA) relies on digital breadcrumbs to build user profiles and create clusters of entities that exhibit a large degree of similarity. The prevailing assumption is that an entity will assimilate the group behavior of the cluster it belongs to. Our understanding of BA and its application in different domains continues to evolve and is a direct result of the growing interest in Machine Learning research. When trying to detect security threats, we use BA techniques to identify anomalies, defined in this paper as deviations from the group behavior. Early research papers in this field reveal a high number of false positives, where a security alert is triggered by deviation from the cluster's learned behavior even when the activity is still within the norm of what the system defines as acceptable behavior. Further, domain-specific security policies tend to be narrow and inadequately represent what an entity can do. Hence, they (a) limit the amount of useful data during the learning phase, and (b) lead to violations of policy during the execution phase. In this paper, we propose a framework for future research on the role of policies and behavior security in a coalition setting, with emphasis on anomaly detection and an individual's deviation from group activities.
Over the last 70 years there has been a major shift in the threats to global peace. While the 1950s and 1960s were characterised by the cold war and the arms race, many security threats are now characterised by group behaviours that are disruptive, subversive or extreme. In many cases such groups are loosely and chaotically organised, but their ideals are sociologically and psychologically embedded in group members to the extent that the group represents a major threat. As a result, insights into how human groups form, emerge and change are critical, but surprisingly limited insights into the mutability of human groups exist. In this paper we argue that important clues to understanding the mutability of groups come from examining the evolutionary origins of human behaviour. In particular, groups have been instrumental in human evolution, used as a basis to derive survival advantage, leaving all humans with a basic disposition to navigate the world through social networking and managing their presence in a group. From this analysis we present five critical features of social groups that govern mutability, relating to social norms, individual standing, status rivalry, ingroup bias and cooperation. We argue that understanding how these five dimensions interact and evolve can provide new insights into group mutation and evolution. Importantly, these features lend themselves to digital modelling. Therefore, computational simulation can support generative exploration of groups and the discovery of latent factors, relevant to both internal and external group modelling. Finally, we consider the role of online social media in relation to understanding the mutability of groups. Social media can play an active role in supporting collective behaviour, and analysis of social media in the context of the five dimensions of group mutability provides a fresh basis to interpret the forces affecting groups.
Video surveillance applications are examples of complex distributed coalition tasks. Real-time capture and analysis of image sensor data is one of the most important tasks in a number of military critical decision making scenarios. In complex battlefield situations, there is a need to coordinate the operation of distributed image sensors and the analysis of their data as transmitted over a heterogeneous wireless network where bandwidth, power, and computational capabilities are constrained. There is also a need to automate decision making based on the results of the analysis of video data. Declarative Networking is a promising technology for controlling complex video surveillance applications in this sort of environment. This paper presents a flexible and extensible architecture for deploying distributed video surveillance applications using the declarative networking paradigm, which allows us to dynamically connect and manage distributed image sensors and deploy various modules for the analysis of video data to satisfy a variety of video surveillance requirements. With declarative computing, it becomes possible for us not only to express the program control structure in a declarative fashion, but also to simplify the management of distributed video surveillance applications.
The management of sensor networks in coalition settings has been treated in a piecemeal fashion in the current literature, without taking a comprehensive look at the complete life cycle of coalition networks and determining the different aspects of network management that need to be taken into account for the management of sensor networks in those contexts. In this paper, we provide a holistic approach towards managing sensor networks encountered in the context of coalition operations. We describe how the sensor networks in a coalition ought to be managed at various stages of the life cycle, and the different operations that need to be taken into account for managing various aspects of the networks. In particular, we look at the FCAPS model for network management, and assess the applicability of the FCAPS model to the different aspects of sensor network management in a coalition setting.
Mobile Ad-Hoc Networks (MANETs), which do not rely on pre-existing infrastructure and which can adapt rapidly to changes in their environment, are coming into increasingly wide use in military applications. At the same time, the large computing power and memory available today, even for small mobile devices, allow us to build extremely large, sophisticated and complex networks. Such networks, however, and the software controlling them, are potentially vulnerable to catastrophic failures because of their size and complexity. Biological networks have many of these same characteristics and are potentially subject to the same problems. But in successful organisms, these biological networks do in fact function well, so that the organism can survive. In this paper, we present a MANET architecture based on a feature, called homeostasis, widely observed in biological networks but not ordinarily seen in computer networks. This feature allows the network to switch to an alternate mode of operation under stress or attack and then return to the original mode of operation after the problem has been resolved. We explore the potential benefits of such an architecture, principally in terms of the ability to survive radical changes in its environment, using an illustrative example.
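The homeostatic mode switch itself can be sketched in a few lines; the stress indicator, thresholds, and hysteresis band are illustrative assumptions:

    def homeostat(stress, mode, high=0.8, low=0.3):
        """Switch modes with hysteresis: enter the stressed mode above `high`,
        return to normal operation only once stress falls below `low`."""
        if mode == "normal" and stress > high:
            return "stressed"    # e.g. throttle traffic, switch to defensive routing
        if mode == "stressed" and stress < low:
            return "normal"      # problem resolved: restore original operation
        return mode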
The ability of a sensor device is affected significantly by the surroundings and environment in which it is placed. In almost all sensor modalities, some directions are better observed by a sensor than others. Furthermore, the exact impact on the sensing ability of the device depends on the position assigned to the sensor. While the problem of determining good coverage schemes for sensors in a field has many good solutions, few approaches are known to address the challenges arising due to location-specific distortion. In this paper, we look at the problem of incorporating terrain-specific challenges in sensor coverage, and propose a geometric solution to address them.
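A sketch of the geometric test implied above: a point is covered if some sensor sees it within a direction-dependent range, where the per-direction range function stands in for location-specific distortion (its form is an assumption):

    import math

    def covered(point, sensors, range_of):
        """range_of(sensor, bearing) -> max sensing distance in that direction,
        encoding terrain- and placement-specific distortion."""
        px, py = point
        for s in sensors:
            sx, sy = s["pos"]
            d = math.hypot(px - sx, py - sy)
            bearing = math.atan2(py - sy, px - sx)   # direction from sensor to point
            if d <= range_of(s, bearing):
                return True
        return False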
KEYWORDS: Sensors, Control systems, Intelligence systems, Sensor networks, Receivers, Information security, Network security, Defense and security, Computing systems, Data fusion
A light-weight messaging fabric can be used to interconnect several different types of sensors using existing products. However, existing commercial solutions for interconnecting sensors do not provide an easy method to enforce communication flow policies among the different sensors, nor do they provide an easy interface for auto-configuring sensor flows to enforce messaging policies. In this paper, we describe an approach that can add the features of self-configuration and policy based security controls to a sensor network built atop a message queue infrastructure. We describe the architecture for providing policy control and self-configuration network management functions to the sensor messaging fabric.
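A sketch of the policy check the fabric could apply before wiring a publisher's flow to a subscriber; the rule format and wildcard convention are illustrative assumptions:

    def flow_allowed(rules, publisher, subscriber, topic):
        """rules: e.g. [{"from": "acoustic1", "to": "*", "topic": "detections"}];
        "*" is a wildcard. The flow is denied unless some rule allows it."""
        return any(r["from"] in (publisher, "*") and
                   r["to"] in (subscriber, "*") and
                   r["topic"] in (topic, "*")
                   for r in rules)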
One of the challenges in military wireless sensor networks is the determination of an information collection infrastructure that minimizes battery power consumption while being highly resilient against sensor and link failures. In our previous work we have proposed a heuristic for constructing an information flow graph in wireless sensor networks based on the mammalian circulatory system, with the goal of minimizing the energy consumption. In this paper we focus mainly on the resilience benefits that can be achieved when constructing such information flow graphs. We analyze the resilience of circulatory graphs constructed on top of regular as well as random topologies. We assume two modes of failure, random and targeted attacks, and we compare the resilience of the circulatory graphs against tree graphs.
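The resilience comparison can be sketched with networkx: remove nodes at random, or highest-degree first for targeted attacks, and measure what fraction of surviving sensors can still reach the sink. The failure counts and attack model are illustrative:

    import random
    import networkx as nx

    def surviving_fraction(graph, sink, n_failures, targeted=False):
        """Fraction of surviving sensors still connected to the sink after
        removing n_failures nodes (undirected networkx graph assumed)."""
        g = graph.copy()
        candidates = [n for n in g if n != sink]
        if targeted:
            candidates.sort(key=g.degree, reverse=True)   # attack hubs first
        else:
            random.shuffle(candidates)
        g.remove_nodes_from(candidates[:n_failures])
        reachable = nx.node_connected_component(g, sink)
        return (len(reachable) - 1) / max(len(g) - 1, 1)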
One of the main goals of sensor networks is to provide accurate information about a sensing field for an extended period of time. This requires collecting measurements from as many sensors as possible to have a better view of the sensor surroundings. However, due to energy limitations and to prolong the network lifetime, the number of active sensors should be kept to a minimum. To resolve this conflict of interest, sensor selection schemes are used. In this paper, we survey different schemes that are used to select sensors. Based on the purpose of selection, we classify the schemes into (1) coverage schemes, (2) target tracking and localization schemes, (3) single mission assignment schemes and (4) multiple missions assignment schemes. We also look at solutions to relevant problems from other areas and consider their applicability to sensor networks. Finally, we take a look at the open research problems in this field.
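As a concrete example from the first category, coverage schemes often reduce to set cover, for which greedy selection is the standard approximation; this sketch is illustrative rather than drawn from any one surveyed scheme:

    def greedy_select(targets, coverage):
        """coverage: dict mapping each sensor to the set of targets it covers;
        returns a small set of active sensors covering every coverable target."""
        uncovered, chosen = set(targets), []
        while uncovered:
            best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
            if not coverage[best] & uncovered:
                break                    # remaining targets cannot be covered
            chosen.append(best)
            uncovered -= coverage[best]
        return chosen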
Ad-hoc sensor networks need to create their own network after deployment. Various schemes have been suggested for sensors to create a better coverage pattern than if they are randomly deployed. A better coverage pattern translates into a geometry of having disks cover an area completely and even redundantly. In this paper, we present two coverage arrangements which turn out to be equivalent to grid lattice arrangements and analyze their efficacy.
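The efficacy comparison can be made concrete with the classic densities of the two lattice arrangements (a worked check, assuming unit-disk sensing of radius r; lower density means less redundant overlap):

    import math

    def coverage_density(r, lattice):
        """Ratio of total disk area to area covered; the minimum possible is 1."""
        if lattice == "square":
            cell = (r * math.sqrt(2)) ** 2        # spacing r*sqrt(2), square cell
        elif lattice == "triangular":
            s = r * math.sqrt(3)                  # spacing r*sqrt(3)
            cell = s * s * math.sqrt(3) / 2       # hexagonal Voronoi cell area
        else:
            raise ValueError(lattice)
        return math.pi * r * r / cell

    # coverage_density(1, "square")     -> pi/2             ~ 1.571
    # coverage_density(1, "triangular") -> 2*pi/(3*sqrt(3)) ~ 1.209 (less overlap)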
A simulation environment is very useful in analyzing sensor networks, but a sensor simulation environment that can scale to a very large number of elements is hard to obtain using traditional or customized simulation systems. One possible approach to obtaining scalable simulation is to use commercially available messaging systems. Such messaging systems, e.g. the IBM WebSphere MQ system or OSMQ, are designed to sustain very high message transfer rates and large numbers of interacting message queue end-points. However, the communication abstractions offered by message queue systems are very different from those required by sensor networks. In this paper, we describe an approach to map the communication abstractions of sensor network simulation systems to those of underlying message queue systems. We describe how issues related to message localization, propagation delays and error rates can be effectively handled, and how a highly scalable infrastructure for message simulation can be deployed.
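A sketch of the mapping for one abstraction, local broadcast: the sender fans out to per-node inbox queues, applying simulated propagation delay and loss before enqueueing. The delay and error models are illustrative, and a commercial message queue system would replace the in-memory queue.Queue assumed here:

    import random

    def deliver(msg, sender, receivers, dist, speed=3e8, loss_rate=0.01):
        """Map a sensor-network local broadcast onto per-node message queues."""
        for r in receivers:
            if random.random() < loss_rate:
                continue                        # simulated transmission error
            delay = dist(sender, r) / speed     # simulated propagation delay
            r.inbox.put((msg, delay))           # r.inbox: a per-node queue.Queue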
KEYWORDS: Network architectures, Asynchronous transfer mode, Local area networks, Video, Databases, Switches, Environmental monitoring, Roads, Broadband telecommunications, Complex systems
The advent of high speed networks has stimulated the development and deployment of many new distributed applications, such as multiparty video conferencing. At the same time, the networking community is rapidly moving towards new high speed networking architectures that offer advanced features such as guaranteed bandwidth and connection performance guarantees. The performance of many applications would improve significantly if the features offered by these new networks were utilized. While it is desirable to use the features of the new protocols provided by the emerging high speed networks, these protocols have not yet reached the same degree of stability and maturity as existing protocols. As new networks with advanced features are deployed, schemes that take advantage of the advanced network capabilities are necessary to migrate existing applications to the new networking interfaces. In this paper, several application migration paths are outlined. The concept of a Bandwidth Server, which provides transparent application migration, is introduced; transparent migration does not require an application to be rewritten, recompiled, or relinked. We explore the design of a simple and efficient Bandwidth Server that allows TCP/IP applications, written using the well-known socket interface, to execute across a B-ISDN network.