In a defense landscape driven by increasing automation, larger operation scale, higher op-tempo, and tighter integrations across multiple domains, how do emerging advances in computing technology empower future defense concepts and operations? The paper overviews the notion of a multi-domain operations (MDO) effect loop as an organizing principle for military operations and information-driven decision processes. It then highlights recent advances in artificial intelligence, information theory, distributed sensing, and network optimization that significantly enhance the capabilities of different loop components, as illustrated by notional defense scenarios.
The paper presents a research agenda on supporting machine intelligence at the tactical network edge, and overviews early results in that space developed under the Internet of Battlefield Things Collaborative Research Alliance (IoBT CRA), a collaboration between the US Army Research Labs and a consortium of academia and industry led by the University of Illinois. It is becoming evident that the use of artificial intelligence and machine learning components in future military operations is inevitable. Yet, at present, the dependability limitations and failure modes of these components in a complex multi-domain battle environment are poorly understood. Most civilian research investigates solutions that exceed the SWaP (Size, Weight, and Power) limitations of tactical edge devices, and/or require communication with a central back-end. Resilience to adversarial inputs is not well developed. The need for extensive labeling to train the machine slows down agility and adaptation. Cooperation among resource-limited devices to attain reliable intelligent functions is not a central theme. Recent research emerging from the IoBT CRA fills these gaps. The paper reviews the field and presents the most interesting early accomplishments of the Alliance, aiming to bridge the aforementioned capability gaps for future military operations.
The paper describes a vision for dependable application of machine learning-based inferencing on resource-constrained edge devices. Sophisticated deep learning techniques impose a prohibitive overhead, both in terms of energy consumption and sustainable processing throughput, on such resource-constrained edge devices (e.g., audio or video sensors). To overcome these limitations, we propose a “cognitive edge” paradigm, whereby (a) an edge device first autonomously uses statistical analysis to identify potential collaborative IoT nodes, and (b) the IoT nodes then perform real-time sharing of various intermediate state to improve their individual execution of machine intelligence tasks. We provide an example of such collaborative inferencing for an exemplar network of video sensors, showing how such collaboration can significantly improve accuracy, reduce latency, and decrease communication bandwidth compared to non-collaborative baselines. We also identify various challenges in realizing such a cognitive edge, including the need to ensure that the inferencing tasks do not suffer catastrophically in the presence of malfunctioning peer devices. We then introduce the soon-to-be deployed Cognitive IoT testbed at SMU, explaining the various features that enable empirical testing of novel edge-based ML algorithms.
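As a minimal sketch of the collaboration idea, the code below (all function names hypothetical; this is not the paper's actual implementation) shows one way an edge node could use statistical analysis to identify collaborative peers from correlated prediction histories, and then fuse shared class-probability vectors with its own:

```python
import numpy as np

def find_peers(history, threshold=0.8):
    """Identify collaborative peers of node 0 by correlating recent
    prediction histories (rows: nodes, columns: past outputs)."""
    corr = np.corrcoef(history)
    return [j for j in range(1, history.shape[0]) if corr[0, j] >= threshold]

def collaborative_inference(local_probs, peer_probs_list):
    """Fuse a node's local class probabilities with vectors shared by
    peers (simple averaging; a real system would weight by estimated
    peer reliability to guard against malfunctioning devices)."""
    stacked = np.vstack([local_probs] + peer_probs_list)
    fused = stacked.mean(axis=0)
    return int(np.argmax(fused)), fused
```

Sharing compact probability vectors rather than raw video frames is what keeps the communication bandwidth low in this scheme.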
The proliferation of real-time information on social media opens up unprecedented opportunities for situation awareness that arise from extracting unfolding physical events from their social media footprints. The paper describes experiences with a new social media analysis toolkit for detecting and tracking such physical events. A key advantage of the explored analysis algorithms is that they require no prior training, and as such can operate out-of-the-box on new languages, dialects, jargon, and application domains (where by "new", we mean new to the machine), including detection of protests, natural disasters, acts of terror, accidents, and other disruptions. By running the toolkit over a period of time, patterns and anomalies are also detected that offer additional insights and understanding. Through analysis of contemporary political, military, and natural disaster events, the work explores the limits of the training-free approach and demonstrates promise and applicability.
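One simple way to realize training-free detection, sketched below under assumed simplifications (token-level burst detection; none of these names come from the actual toolkit), is to flag tokens whose frequency in the current time window spikes relative to a historical baseline. Because only raw token counts are used, the approach is agnostic to language, dialect, and jargon:

```python
from collections import Counter

def bursty_tokens(current_window, baseline_window, min_count=5, ratio=3.0):
    """Flag tokens whose rate in the current window exceeds their
    smoothed baseline rate by a given ratio; returns tokens sorted by
    burst strength, strongest first."""
    cur = Counter(t for post in current_window for t in post.split())
    base = Counter(t for post in baseline_window for t in post.split())
    n_cur = sum(cur.values()) or 1
    n_base = sum(base.values()) or 1
    flagged = {}
    for tok, c in cur.items():
        if c < min_count:
            continue
        rate_now = c / n_cur
        rate_then = (base.get(tok, 0) + 1) / n_base  # +1 smoothing for unseen tokens
        if rate_now / rate_then >= ratio:
            flagged[tok] = rate_now / rate_then
    return sorted(flagged, key=flagged.get, reverse=True)
```

Common function words burst rarely because their baseline rate is high, so they filter themselves out without any language-specific stop-word list.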
The paper discusses challenges in exploiting geotagged social media posts (such as Instagram images) for purposes of target (event) tracking. The argument for social media exploitation for tracking lies in the observation that physical events, such as protests, acts of terror, or natural disasters, elicit a response on social media in the neighborhood of the event. However, the density of social media posts is proportional to the local population density. Hence, inferred event locations based on the ensuing distribution of posts are skewed by disparities in population density around the true event location. The paper describes an unsupervised approach to neutralize the effect of uneven population density. Evaluation using Instagram footprints of recent events shows that the approach leads to a much more accurate estimation of real event trajectories.
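A minimal sketch of one unsupervised density correction (an illustration under assumed simplifications, not the paper's actual algorithm) is to divide a spatial histogram of event-window posts by a background histogram built from posts unrelated to the event, so that dense cities no longer dominate the location estimate:

```python
import numpy as np

def density_corrected_location(event_posts, background_posts, bins=50, eps=1.0):
    """Estimate an event location from geotagged posts (N x 2 arrays of
    coordinates) while neutralizing population-density bias: take the
    cell where the event-to-background post ratio peaks."""
    xs = np.concatenate([event_posts[:, 0], background_posts[:, 0]])
    ys = np.concatenate([event_posts[:, 1], background_posts[:, 1]])
    rng = [[xs.min(), xs.max()], [ys.min(), ys.max()]]
    ev, xe, ye = np.histogram2d(event_posts[:, 0], event_posts[:, 1],
                                bins=bins, range=rng)
    bg, _, _ = np.histogram2d(background_posts[:, 0], background_posts[:, 1],
                              bins=bins, range=rng)
    ratio = ev / (bg + eps)  # eps regularizes empty background cells
    i, j = np.unravel_index(np.argmax(ratio), ratio.shape)
    # Return the center of the highest-ratio cell.
    return (0.5 * (xe[i] + xe[i + 1]), 0.5 * (ye[j] + ye[j + 1]))
```

Without the division by `bg`, the argmax would simply land on the densest population center rather than on the true event site.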
This paper describes characteristics of information flow on social channels, as a function of content type and relations
among individual sources, distilled from analysis of Twitter data as well as human subject survey results. The working
hypothesis is that individuals who propagate content on social media act (e.g., decide whether to relay information or
not) in accordance with their understanding of the content, as well as their own beliefs and trust relations. Hence, the
resulting aggregate content propagation pattern encodes the collective content interpretation of the underlying group, as
well as their relations. Analysis algorithms are described to recover such relations from the observed propagation
patterns as well as improve our understanding of the content itself in a language agnostic manner simply from its
propagation characteristics. An example is to measure the degree of community polarization around contentious topics,
identify the factions involved, and recognize their individual views on issues. The analysis is independent of the
language of discourse itself, making it valuable for multilingual media, where the number of languages used may render
language-specific analysis less scalable.
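A minimal, language-agnostic sketch of the polarization idea (hypothetical; the paper's own algorithms are not reproduced here) is to factor a user-by-message relay matrix and split users into factions by the sign of the leading component, using only propagation behavior and never the message text:

```python
import numpy as np

def detect_factions(propagation):
    """Split users into two factions from a binary propagation matrix
    (rows: users, cols: messages; 1 if the user relayed the message).
    Faction labels come from the sign of the leading left singular
    vector of the column-centered matrix; a polarization score near 1
    suggests two well-separated relay communities."""
    X = propagation - propagation.mean(axis=0, keepdims=True)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    labels = (U[:, 0] >= 0).astype(int)
    polarization = S[0] ** 2 / (S ** 2).sum()
    return labels, polarization
```

Each faction's view on a topic can then be summarized by which messages its members relayed most, again without parsing the language of discourse.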
KEYWORDS: Signal processing, Social networks, Sensors, Data processing, Web 2.0 technologies, Relays, Algorithm development, Error analysis, Reliability, Data fusion
Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.
KEYWORDS: Statistical analysis, Web 2.0 technologies, Analytical research, Chemical analysis, Prototyping, Reliability, Data modeling, Video, Social networks
Millions of people exchange user-generated information through online social media (SM) services. The prevalence of
SM use globally and its growing significance to the evolution of events has attracted the attention of the Army and other
agencies charged with protecting national security interests. The information exchanged in SM sites and the networks of
people who interact with these online communities can provide value to Army intelligence efforts. SM could facilitate
the Military Decision Making Process by providing ongoing assessment of military actions from a local citizen
perspective. Despite potential value, there are significant technological barriers to leveraging SM. SM collection and
analysis are difficult in the dynamic SM environment and deception is a real concern. This paper introduces a credibility
analysis approach and prototype fact-finding technology called the “Apollo Fact-finder” that mitigates the problem of
inaccurate or falsified SM data. Apollo groups data into sets (or claims), corroborating specific observations, then
iteratively assesses both claim and source credibility resulting in a ranking of claims by likelihood of occurrence. These
credibility analysis approaches are discussed in the context of a conflict event, the Syrian civil war, and applied to tweets
collected in the aftermath of the Syrian chemical weapons crisis.
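The iterative claim/source credibility loop can be illustrated with a deliberately simplified fact-finder sketch (hypothetical; not the Apollo Fact-finder's actual model): claim belief is driven by the credibility of the sources asserting it, and source credibility is in turn driven by the believability of the claims the source makes:

```python
import numpy as np

def iterative_fact_finding(assertions, iterations=20):
    """Simplified iterative credibility analysis. `assertions` is a
    binary source x claim matrix (1 = source asserted the claim).
    Alternates between updating claim beliefs from source credibilities
    and vice versa, then ranks claims by likelihood of occurrence."""
    cred = np.ones(assertions.shape[0])
    for _ in range(iterations):
        belief = assertions.T @ cred
        belief /= belief.max()  # normalize to keep values bounded
        cred = assertions @ belief
        cred /= cred.max()
    ranking = np.argsort(-belief)  # highest-belief claims first
    return belief, cred, ranking
```

Claims corroborated by many credible, independent sources rise in the ranking, while claims pushed only by low-credibility sources sink, which is what mitigates inaccurate or falsified SM data.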
KEYWORDS: Social networks, Data modeling, Detection and tracking algorithms, Visualization, Analytical research, Information fusion, Intelligence systems
Information that propagates through social networks can carry a lot of false claims. For example, rumors
on certain topics can propagate rapidly, leading to a large number of nodes reporting the same (incorrect)
observations. In this paper, we describe an approach for finding the rumor source and assessing the likelihood
that a piece of information is in fact a rumor, in the absence of data provenance information. We model the social
network as a directed graph, where vertices represent individuals and directed edges represent information flow
(e.g., who follows whom on Twitter). A number of monitor nodes are injected into the network whose job is to
report data they receive. Our algorithm identifies rumors and their sources by observing which of the monitors
received the given piece of information and which did not. We show that, with a sufficient number of monitor
nodes, it is possible to recognize most rumors and their sources with high accuracy.
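A minimal sketch of the monitor-based idea (an illustration under assumed simplifications, not the paper's actual algorithm) is to score each candidate source by how well the set of monitors it can reach through the information-flow graph matches the set of monitors that actually reported the information:

```python
from collections import deque

def reachable(adj, start):
    """Nodes reachable from `start` along directed information-flow
    edges (e.g., follower links on Twitter)."""
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def rank_sources(adj, monitors, received):
    """Rank candidate sources by agreement between the monitors they can
    reach and the monitors that actually received the information."""
    scores = {}
    for s in adj:
        r = reachable(adj, s)
        hits = sum((m in r) == (m in received) for m in monitors)
        scores[s] = hits / len(monitors)
    return sorted(scores, key=scores.get, reverse=True)
```

With more monitors, the reachability signature of the true source becomes harder to mimic, which is why accuracy improves with the number of monitor nodes.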
Provenance is the information about the origin of the data inputs and the data manipulations applied to obtain a
final result. With the huge amount of information input and potential processing available in sensor networks,
provenance is crucial for understanding the creation, manipulation and quality of data and processes. Thus
maintaining provenance in a sensor network has substantial advantages. In our paper, we will concentrate on
showing how provenance improves the outcome of a multi-modal sensor network with fusion. To make the ideas
more concrete and to show what maintaining provenance provides, we will use a sensor network composed of
binary proximity sensors and cameras to monitor intrusions as an example. Provenance provides improvements
in many aspects such as sensing energy consumption, network lifetime, result accuracy, and node failure rate. We
will illustrate the improvements in accuracy of the position of the intruder in a target localization network by
simulations.
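As a concrete illustration of how provenance can improve localization accuracy (a sketch with a hypothetical report schema, not the paper's simulation model), suppose each camera report carries a position estimate plus a variance derived from its provenance record (e.g., calibration age, past failures). Inverse-variance weighting then lets reliable sensors dominate the fused intruder position:

```python
def fuse_position(reports):
    """Fuse intruder position estimates from multiple sensors.
    Each report is (x, y, variance), where the variance is derived
    from the sensor's provenance record; reports with better
    provenance (lower variance) receive higher weight."""
    wsum = xsum = ysum = 0.0
    for x, y, var in reports:
        w = 1.0 / var  # inverse-variance weighting
        wsum += w
        xsum += w * x
        ysum += w * y
    return xsum / wsum, ysum / wsum
```

Without provenance, all reports would be weighted equally and a single poorly calibrated camera could pull the fused estimate far from the intruder's true position.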