Memristor arrays have been widely used to accelerate neural network algorithms for edge intelligence. Although previous studies have revealed the potential of memristive Long Short-Term Memory (LSTM) networks for sequential information processing, their large weight matrices are difficult to map onto existing memristor arrays due to fabrication limitations. Here we demonstrate several gate-variants that further simplify the Gated Recurrent Unit (GRU) network, itself a simplified version of LSTM, by removing specific weight matrices, thereby lowering the requirements for weight mapping in memristor arrays. We then performed software validation and offline device-inference validation of these new networks on a variety of datasets derived by segmenting the Mixed National Institute of Standards and Technology (MNIST) dataset, testing their effectiveness in classifying sequential datasets of different sizes and their robustness to device imperfections. Ultimately, we show that these GRU gate-variants maintain the performance of the original network on the different segmented MNIST datasets while reducing the required memristor array size by more than 50% on most of them. This work will further advance the edge application of memristive Recurrent Neural Networks (RNNs) under limited resources.
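The idea of a gate-variant GRU can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the standard GRU step is shown alongside a hypothetical variant in which the input weight matrices of the two gates (`Wz`, `Wr`) are removed, so the gates depend only on the recurrent state. Which matrices the paper actually removes is not specified in the abstract; the point here is only how dropping a weight matrix shrinks the parameter count that must be mapped onto a memristor array.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One step of a standard GRU cell (parameters in dict p)."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h + p["bz"])          # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h + p["br"])          # reset gate
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_tilde

def gru_variant_step(x, h, p):
    """Hypothetical gate-variant: the gates' input weight matrices
    (Wz, Wr) are removed, so z and r are driven by the hidden state
    alone. This is an illustrative assumption, not the paper's exact
    set of variants."""
    z = sigmoid(p["Uz"] @ h + p["bz"])
    r = sigmoid(p["Ur"] @ h + p["br"])
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_tilde

def make_params(d, n, rng, drop_gate_inputs=False):
    """Random parameters for input dim d and hidden dim n."""
    p = {
        "Uz": rng.standard_normal((n, n)), "bz": np.zeros(n),
        "Ur": rng.standard_normal((n, n)), "br": np.zeros(n),
        "Wh": rng.standard_normal((n, d)),
        "Uh": rng.standard_normal((n, n)), "bh": np.zeros(n),
    }
    if not drop_gate_inputs:
        p["Wz"] = rng.standard_normal((n, d))
        p["Wr"] = rng.standard_normal((n, d))
    return p

def param_count(p):
    """Total number of weights that would occupy memristor cells."""
    return sum(v.size for v in p.values())
```

Removing the two input-side gate matrices saves `2 * n * d` weights per cell; the abstract's reported >50% reduction in array size presumably reflects the cumulative effect across the variants and network configurations studied.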