Vector Symbolic Architecture (VSA), also known as Hyperdimensional Computing, has transformative potential for advancing cognitive processing capabilities at the network edge. This paper presents a technology integration experiment demonstrating how the VSA paradigm offers robust solutions for generation-after-next AI deployment at the network edge. Specifically, we show how VSA effectively models and integrates the cognitive processes required to perform intelligence, surveillance, and reconnaissance (ISR). The experiment integrates functions across the observe, orient, decide and act (OODA) loop, including the processing of sensed data via both a neuromorphic event-based camera and a standard CMOS frame-rate camera; declarative knowledge-based reasoning in a semantic vector space; action planning using VSA cognitive maps; access to procedural knowledge via large language models (LLMs); and efficient communication between agents via highly compact binary vector representations. In contrast to previous ‘point solutions’ showing the effectiveness of VSA for individual OODA tasks, this work takes a ‘whole system’ approach, demonstrating the power of VSA as a uniform integration technology.
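To make the above concrete, the following is a minimal sketch of the two core VSA operations such an experiment relies on: binding role-filler pairs with elementwise XOR and bundling them by bitwise majority vote over dense binary hypervectors. The dimensionality, role names, and fillers are illustrative assumptions, not the paper's actual encoding.

```python
# Minimal sketch of core VSA operations on binary hypervectors.
# The dimension, roles, and fillers are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimension makes random hypervectors quasi-orthogonal

def random_hv():
    """Random dense binary hypervector."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Bind role to filler with elementwise XOR (self-inverse)."""
    return a ^ b

def bundle(*hvs):
    """Superpose hypervectors with a bitwise majority vote (odd count avoids ties)."""
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def hamming(a, b):
    """Normalised Hamming distance: ~0.5 for unrelated hypervectors."""
    return np.mean(a != b)

# Encode a toy ISR observation as one compact binary vector.
WHAT, WHERE, STATUS = random_hv(), random_hv(), random_hv()
vehicle, bridge, moving = random_hv(), random_hv(), random_hv()
obs = bundle(bind(WHAT, vehicle), bind(WHERE, bridge), bind(STATUS, moving))

# Unbinding the WHAT role recovers a noisy but identifiable copy of 'vehicle'.
print(hamming(bind(WHAT, obs), vehicle))  # ~0.25, well below 0.5
print(hamming(bind(WHAT, obs), bridge))   # ~0.50, unrelated
```

Because `obs` is a single D-bit vector regardless of how many role-filler pairs it bundles, this is also the mechanism behind the compact inter-agent messages mentioned above.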
Vector Symbolic Architecture (VSA), also known as Hyperdimensional Computing (HDC), has transformative potential for advancing cognitive processing capabilities at the network edge. This paper examines how this paradigm offers robust solutions for AI and autonomy within a future command, control, communications, computers, cyber, intelligence, surveillance and reconnaissance (C5ISR) enterprise by effectively modelling the cognitive processes required to perform observe, orient, decide and act (OODA) loop processing. The paper summarises the theoretical underpinnings, operational efficiencies, and synergy between VSA and current AI methodologies, such as neural-symbolic integration and learning. It also addresses major research challenges and opportunities for future exploration, underscoring the potential for VSA to facilitate intelligent decision-making and maintain information superiority in complex environments. The paper is intended to serve as a cornerstone for researchers and practitioners seeking to harness the power of VSA in next-generation AI applications, especially in scenarios that demand rapid, adaptive, and autonomous responses.
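The symbolic reasoning referred to above typically depends on a 'cleanup' (item) memory that snaps noisy unbinding results back to known symbols. Below is a minimal sketch under that assumption; the symbol names and dimensionality are hypothetical, not drawn from the paper.

```python
# Minimal sketch of a VSA cleanup (item) memory: noisy hypervectors are
# snapped to the nearest stored symbol by Hamming distance. Symbol names
# and dimension are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
D = 10_000

symbols = {name: rng.integers(0, 2, D, dtype=np.uint8)
           for name in ("vehicle", "bridge", "convoy", "moving")}

def cleanup(query):
    """Return the stored symbol closest to the (possibly noisy) query."""
    return min(symbols, key=lambda n: np.mean(symbols[n] != query))

# Simulate a noisy unbinding result: flip 30% of the bits of 'convoy'.
noisy = symbols["convoy"] ^ (rng.random(D) < 0.30).astype(np.uint8)
print(cleanup(noisy))  # -> 'convoy' with overwhelming probability
```

The tolerance to 30% bit corruption illustrates the robustness that makes high-dimensional representations attractive for noisy, contested edge environments.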
Distortion caused by atmospheric turbulence during long-range imaging can result in low-quality images and videos, which in turn greatly increases the difficulty of any post-acquisition task such as tracking or classification. Mitigating such distortions is therefore important if subsequent processing steps are to succeed. We make use of the EDVR network, initially designed for video restoration and super-resolution, to mitigate the effects of turbulence. This paper presents two modifications to the training and architecture of EDVR that improve its applicability to turbulence mitigation: the replacement of the deformable convolution layers present in the original EDVR architecture, and the addition of a perceptual loss. The paper also analyses common image quality assessment metrics and evaluates their suitability for comparing turbulence mitigation approaches. In this context, traditional metrics such as Peak Signal-to-Noise Ratio can be misleading, as they can reward undesirable attributes, such as increased contrast, rather than high-frequency detail. We argue that the downstream applications in which turbulence-mitigated imagery is used should be the real markers of quality for any turbulence mitigation technique. To aid in this, we also present a new turbulence classification dataset that can be used to measure classification performance before and after turbulence mitigation.
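For orientation, the sketch below shows a generic VGG-based perceptual loss of the kind the abstract adds to EDVR training (a Johnson-style formulation; the layer choice, weighting, and normalisation handling here are assumptions, not the paper's exact configuration).

```python
# Generic VGG16 perceptual loss sketch (Johnson-style); layer choice and
# weighting are assumptions, not the paper's exact configuration.
import torch.nn.functional as F
import torchvision

# Frozen VGG16 truncated at relu3_3 (feature index 15) as a fixed extractor
# (older torchvision versions use pretrained=True instead of weights=...).
_vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(restored, ground_truth):
    """MSE between deep features rather than raw pixels, so the loss
    favours high-frequency structure over blurry pixel-wise averages.
    Inputs: (N, 3, H, W) tensors, assumed normalised to ImageNet stats."""
    return F.mse_loss(_vgg(restored), _vgg(ground_truth))

# Typically combined with a pixel loss during training, e.g.:
#   loss = F.l1_loss(restored, gt) + 0.1 * perceptual_loss(restored, gt)
```

Comparing in feature space rather than pixel space is precisely what sidesteps the PSNR pathology described above, since pixel-wise MSE cannot distinguish recovered detail from boosted contrast.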
Despite the highly promising advances in Machine Learning (ML) and Deep Learning (DL) in recent years, DL is computationally expensive and requires significant hardware acceleration to be effective. Moreover, the miniaturisation of electronic devices demands small form-factor processing units with a reduced SWaP (Size, Weight and Power) profile. A fundamentally new processing paradigm is therefore needed to address both issues. In this context, neuromorphic (NM) engineering provides an attractive alternative: the analogue/digital implementation of brain-inspired neural networks. NM systems propagate spikes as their means of processing data, with information encoded in the timing and rate of the spikes generated by each neuron of a so-called spiking neural network (SNN). The key advantages of SNNs are reduced computational requirements, more efficient and faster processing, and much lower power consumption. This paper reports on the current state of the art in NM systems and describes three application scenarios for SNN-based processing in security and defence: target detection and tracking, semantic segmentation, and control.
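As a concrete illustration of the spike-based processing described above, here is a minimal discrete-time leaky integrate-and-fire neuron, the basic unit of an SNN; all constants are illustrative, and real NM hardware realises this dynamic in analogue or digital circuits rather than software.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron, the basic
# processing unit of an SNN. All constants are illustrative.
import numpy as np

def lif(input_current, leak=0.9, threshold=1.0):
    """Integrate input, leak membrane potential, and emit binary spikes."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of input current
        if v >= threshold:        # fire when the threshold is crossed
            spikes.append(1)
            v = 0.0               # reset the membrane after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive yields a regular spike train whose rate encodes the
# input magnitude -- the rate/timing code referred to in the text.
print(lif(np.full(20, 0.35)))
```

The efficiency claims above stem from this event-driven style: between spikes a neuron does no work, so sparse activity translates directly into low power.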
We present a new approach to imaging based solely on the time of flight (ToF) of photons arriving from the entire imaged scene, combined with a novel machine learning algorithm for image reconstruction: a spiking convolutional neural network (SCNN) named Spike-SPI (Spiking Single-Pixel Imager). The approach uses a single-point detector and the corresponding time-counting electronics, which report the arrival times of photons as spikes distributed over time. These data are transformed into a temporal histogram containing the number of photons per arrival time. The SCNN converts the 1D temporal histograms into a 3D image (a 2D image with a depth map) by exploiting the feature extraction capabilities of convolutional neural networks (CNNs), the high-dimensional compressed latent space representations of a variational encoder-decoder network structure, and the asynchronous processing capabilities of spiking neural networks (SNNs). We analyse the performance of the proposed SCNN, demonstrating the state-of-the-art feature extraction capabilities of CNNs and the low-latency asynchronous processing of SNNs, which together offer both higher throughput and higher accuracy in image reconstruction from ToF data than standard ANNs. Spike-SPI increases spatial accuracy by 15% over the ANN, measured by Intersection over Union (IoU) for the objects in the scene, while also delivering a 100% increase over the ANN in reconstruction signal-to-noise ratio (RSNR), from ~3 dB to ~6 dB. These results are consistent across a range of instrument response function (IRF) widths and photon counts, highlighting the robustness of the new network structure. Moreover, the asynchronous nature of the spiking neurons allows higher throughput and lower computational overhead, benefiting from the operational sparsity of the single-point sensor.
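The first stage described above, converting asynchronous photon arrival events into the 1D temporal histogram fed to the SCNN, reduces to a binning step like the following sketch; the bin width, time window, and simulated event counts are illustrative assumptions, not the paper's acquisition parameters.

```python
# Sketch of the histogramming stage: binning asynchronous photon arrival
# times into the 1D temporal histogram fed to the SCNN. Bin width, window,
# and event counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

# Simulated ToF events: a surface return at ~12 ns plus uniform background.
signal = rng.normal(loc=12e-9, scale=0.3e-9, size=400)
background = rng.uniform(0.0, 50e-9, size=100)
arrival_times = np.concatenate([signal, background])

# Photons-per-arrival-time histogram over a 50 ns window with 256 ps bins.
bins = np.arange(0.0, 50e-9, 256e-12)
hist, edges = np.histogram(arrival_times, bins=bins)

# The peak bin gives the round-trip time, hence depth = c * t / 2.
t_peak = edges[np.argmax(hist)]
print(f"depth ~ {3e8 * t_peak / 2:.2f} m")  # ~1.8 m for this toy scene
```

In Spike-SPI these histograms are the network input for the full scene reconstruction; the toy peak-picking above only illustrates why the histogram encodes depth at all.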