Optical reservoir computing (ORC) offers high computational speed, low power consumption, and fast training, which has made it a competitive candidate for time series analysis in recent years. Conventional ORC employs single-dimensional encoding for computation, which limits input resolution and introduces extraneous information through interactions between optical dimensions during propagation, thus constraining performance. Here, we propose complex-value encoding-based optoelectronic reservoir computing (CE-ORC), in which both the amplitude and the phase of the input optical field are modulated to improve input resolution and prevent extraneous information from influencing the computation. In addition, scale factors in the amplitude encoding can fine-tune the optical reservoir dynamics for better performance. We built a CE-ORC processing unit with an iteration rate of up to ∼1.2 kHz using high-speed communication interfaces and field programmable gate arrays (FPGAs) and demonstrated the excellent performance of CE-ORC on two time series prediction tasks. Compared with conventional ORC on the Mackey–Glass task, CE-ORC reduced the normalized mean square error by ∼75%. Furthermore, we applied this method to a weather time series analysis and effectively predicted temperature and humidity over a 24 h horizon.
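The complex-value encoding idea above can be sketched numerically. The following is a minimal, purely illustrative echo-state-style simulation, not the authors' optoelectronic implementation: the `encode` function, the scale factor `a`, the reservoir size, and the magnitude/phase nonlinearity are all assumptions chosen only to show how amplitude and phase can jointly carry the input through a complex-valued reservoir update.

```python
import cmath
import math
import random

random.seed(0)

N = 20  # reservoir size (illustrative choice)

def encode(u, a=0.8):
    """Hypothetical complex-value encoding: amplitude carries a scaled copy
    of the input; phase maps the input into [-pi, pi]. The scale factor `a`
    stands in for the amplitude scale factors mentioned in the abstract."""
    return a * abs(u) * cmath.exp(1j * math.pi * u)

# Fixed random complex reservoir and input weights (echo-state style).
W = [[0.1 * (random.uniform(-1, 1) + 1j * random.uniform(-1, 1))
      for _ in range(N)] for _ in range(N)]
w_in = [random.uniform(-1, 1) + 1j * random.uniform(-1, 1) for _ in range(N)]

def step(state, u):
    """One reservoir update: saturate the magnitude, keep the phase."""
    x = encode(u)
    new_state = []
    for i in range(N):
        pre = w_in[i] * x + sum(W[i][j] * state[j] for j in range(N))
        new_state.append(math.tanh(abs(pre)) * cmath.exp(1j * cmath.phase(pre)))
    return new_state

state = [0j] * N
for u in [0.1, 0.5, -0.3, 0.8]:
    state = step(state, u)
```

In a full reservoir computer, a linear readout would then be trained on such states; here the point is only that each state element is a complex number whose amplitude and phase both depend on the input, rather than a single real value.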
As an emerging machine learning paradigm, the optical diffractive deep neural network (OD2NN) has been intensively studied recently for its advantages in speed and power efficiency. However, training an OD2NN with the traditional back-propagation (BP) method is time-consuming. Here, we introduce biologically plausible, feedback-free training methods to accelerate the training of the hybrid OD2NN. Direct feedback alignment (DFA), error-sign-based DFA (sDFA), and direct random target projection (DRTP) are each applied and evaluated in the training of the hybrid OD2NN. For a hybrid OD2NN with 20 diffractive layers, accelerations of about 160× (DFA; CPU), 30× (DFA; GPU), 170× (sDFA; CPU), 32× (sDFA; GPU), 158× (DRTP; CPU), and 32× (DRTP; GPU) are achieved without significant loss of accuracy, compared with BP training on a CPU or GPU.
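The core of DFA, which the abstract contrasts with BP, is that the output error is sent to each hidden layer through a fixed random matrix instead of through the transposes of the downstream weights. A minimal sketch on an ordinary (non-optical) two-layer network follows; the network sizes, learning rate, and toy regression target are all illustrative assumptions, not the paper's setup.

```python
import math
import random

random.seed(1)
n_in, n_hid, n_out = 3, 8, 1
lr = 0.05

def mat(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

W1, W2 = mat(n_hid, n_in), mat(n_out, n_hid)
B = mat(n_hid, n_out)  # fixed random feedback matrix: the defining piece of DFA

def forward(x):
    h = [math.tanh(sum(W1[i][j] * x[j] for j in range(n_in))) for i in range(n_hid)]
    y = [sum(W2[k][i] * h[i] for i in range(n_hid)) for k in range(n_out)]
    return h, y

def dfa_step(x, target):
    h, y = forward(x)
    e = [y[k] - target[k] for k in range(n_out)]  # output error
    # Project the output error to the hidden layer through fixed random B,
    # instead of back-propagating through W2 (no weight transport).
    d = [sum(B[i][k] * e[k] for k in range(n_out)) * (1 - h[i] ** 2)
         for i in range(n_hid)]
    for k in range(n_out):
        for i in range(n_hid):
            W2[k][i] -= lr * e[k] * h[i]
    for i in range(n_hid):
        for j in range(n_in):
            W1[i][j] -= lr * d[i] * x[j]
    return sum(ek * ek for ek in e)

# Toy regression: learn y = x0 - x1 on random inputs.
losses = []
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(n_in)]
    losses.append(dfa_step(x, [x[0] - x[1]]))
```

Because each layer's update needs only the output error and a fixed random projection, all layer updates can be computed in parallel as soon as the forward pass finishes, which is the source of the speedups reported above; sDFA replaces `e` with its sign, and DRTP replaces it with a target-derived signal.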