Traditional approaches using deep neural networks for classification, while unquestionably successful, struggle with more general intelligence tasks such as “on the fly” learning as demonstrated by biological systems. Organisms possess myriad sensory organs for interacting with their environment. By the time these diverse sensory signals reach the brain, however, they have all been converted into a spiking information representation, over which the brain itself operates. In a similar manner, the many machine learning (ML) algorithms in use today compute on equally diverse data modalities, but without a consistent information representation for their respective outputs, these algorithms are frequently used independently of one another. Consequently, there is growing interest in information representations that unify these algorithms, with the larger goal of designing ML modules that may be arbitrarily arranged to solve larger-scale ML problems, analogous to digital circuit design today. One promising information representation is the “symbol,” expressed as a high-dimensional vector thousands of elements long. Hyperdimensional computing (HDC) is an algebra for the creation, manipulation, and measurement of correlations among such symbols, expressed as hypervectors. Toward this goal, an external plexiform layer (EPL) network, an echo state network (ESN), and a modern Hopfield network were adapted to implement the mathematical operations of complex phasor-based HDC. Further, since symbol error correction is an important consideration for computing with networks of ML modules, a task-agnostic minimum query similarity for complete symbol error correction was measured as a function of hypervector length. Based on these results, problem-independent similarity thresholds have been established within which HDC equations should be designed. Lastly, these ANNs were tested against several tasks representative of online and “plug & play” ML among expeditionary robots.
For all criteria considered, the modern Hopfield network was the most capable ANN evaluated for use with complex phasor based HDC, providing 100% symbol recovery in a single time step for nearly all parameter settings.
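The core complex phasor HDC operations referenced above can be sketched briefly. In this minimal illustration, symbols are hypervectors of unit-magnitude complex elements; binding is elementwise multiplication (phases add), bundling is a normalized elementwise sum, and similarity is the normalized real inner product. The vector length and function names below are illustrative assumptions, not the EPL, ESN, or Hopfield implementations evaluated in the work:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector length; thousands of elements, per the HDC paradigm

def random_phasor(d=D):
    """Random complex phasor hypervector: unit-magnitude elements, i.i.d. phases."""
    return np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, d))

def bind(a, b):
    """Binding: elementwise multiplication (phases add); dissimilar to both inputs."""
    return a * b

def bundle(*vs):
    """Bundling: elementwise sum, renormalized to unit phasors; similar to each input."""
    s = np.sum(vs, axis=0)
    return s / np.abs(s)

def sim(a, b):
    """Similarity: real part of the normalized inner product, in [-1, 1]."""
    return float(np.real(np.vdot(a, b)) / len(a))

x, y = random_phasor(), random_phasor()
print(sim(x, y))                         # near 0: random symbols are quasi-orthogonal
print(sim(bundle(x, y), x))              # well above 0: bundling preserves similarity
print(sim(bind(x, y) * np.conj(y), x))   # 1.0: unbinding with the conjugate recovers x
```

Because binding with a unit phasor is exactly inverted by its complex conjugate, query symbols can be recovered from composite hypervectors, which is what makes a minimum query similarity for complete symbol recovery a meaningful quantity to measure.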
Hyperdimensional computing (HDC) is a type of machine learning algorithm that is not based on the ubiquitous artificial neural network (ANN) paradigm. Instead of neurons and synapses, HDC implements online learning by manipulating very large vectors to represent correlations among symbols, measured by a similarity metric. This approach readily affords one-shot learning, transfer learning, and native error correction, which are standing challenges for traditional ANNs. Further, implementations using binary {0, 1} vectors are particularly attractive for size, weight, and power (SWaP) constrained systems, particularly disposable robotics. This paper is the first to identify and formalize a method to completely clone trained hyperdimensional behavior vectors: using shift maps, d-1 unique clones can be made from a parent vector of length d. Additionally, expeditionary robots with extraneous sensors were trained via HDC to solve a maze even when up to 75% of the sensors fed irrelevant data to the robot. Lastly, we demonstrate the resiliency of this encoding method to random bit flips and show how different network topologies contribute to dynamic reprogramming of HDC robots. HDC is presented here not to replace ANNs but to encourage integration of these complementary ML paradigms.
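The shift-map cloning idea can be sketched as follows, assuming binary {0, 1} hypervectors and a cyclic rotation as the shift map; the concrete representation, vector length, and similarity measure here are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 1000  # hypervector length

# A random binary {0,1} vector stands in for a trained parent behavior vector.
parent = rng.integers(0, 2, d)

# Shift map: cyclic rotation by k positions. For k = 1 .. d-1 this yields
# d-1 clones, each at chance-level similarity to the parent (and to the
# other clones) while carrying the same information content.
clones = [np.roll(parent, k) for k in range(1, d)]

def hamming_sim(a, b):
    """Fraction of matching elements; 0.5 is chance level for random binary vectors."""
    return float(np.mean(a == b))

print(hamming_sim(parent, clones[0]))   # near 0.5: clone is quasi-orthogonal to parent
print(len({tuple(c) for c in clones}))  # d-1 = 999 distinct clones
```

Because each clone is quasi-orthogonal to the parent, the clones behave as independent symbols under HDC similarity queries even though they are deterministic permutations of a single trained vector.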