The recent explosive growth in compute demand, fueled mainly by the rise of artificial intelligence (AI) and deep neural networks (DNNs), is driving the search for a novel computing paradigm that can overcome the fundamental barriers imposed by conventional electronic computing architectures. Photonic neural networks (PNNs) implemented on silicon photonic integration platforms stand out as a promising candidate for neural network (NN) hardware, offering the potential for energy-efficient and ultra-fast computation by exploiting the unique primitives of light, i.e., THz bandwidth, low power and low latency. Thus far, several demonstrations have revealed the huge potential of PNNs to perform both linear and non-linear NN operations with unparalleled speed and energy-consumption metrics. Transforming this potential into a tangible reality for Deep Learning (DL) applications requires, however, a deep understanding of the basic PNN principles, requirements and challenges across all constituent architectural, technological and training aspects. In this paper, we review state-of-the-art photonic linear processors and discuss their challenges and prospective solutions for future photonic-assisted machine learning engines. Additionally, recent experimental results using SiGe EAMs in an Xbar layout are presented, validating light's credentials to perform ultra-fast linear operations with unparalleled accuracy. Finally, we provide a holistic overview of the optics-informed NN training framework that incorporates the physical properties of photonic building blocks into the training process in order to improve NN classification accuracy and effectively elevate neuromorphic photonic hardware into high-performance DL computational settings.
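To illustrate the optics-informed training idea mentioned above, the following is a minimal sketch of a linear layer whose forward pass injects a hardware-like non-ideality (additive Gaussian noise as a stand-in for thermal/shot noise) so that training produces weights that remain accurate on noisy photonic hardware. The noise level, layer sizes and class name are illustrative assumptions, not values or APIs from the papers reviewed here.

```python
# Hypothetical "optics-informed" layer: inject noise during training so the
# learned weights tolerate photonic hardware non-idealities.
import torch
import torch.nn as nn

class NoisyPhotonicLinear(nn.Linear):
    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std  # assumed relative noise level

    def forward(self, x):
        y = super().forward(x)
        if self.training:  # noise injected only while training
            y = y + self.noise_std * torch.randn_like(y)
        return y

# Toy usage: a two-layer classifier trained with the noisy layers.
model = nn.Sequential(NoisyPhotonicLinear(16, 32), nn.ReLU(),
                      NoisyPhotonicLinear(32, 4))
x, target = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = nn.CrossEntropyLoss()(model(x), target)
loss.backward()  # gradients now reflect the modelled hardware noise
```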
Photonic Neural Networks (PNNs) implemented on silicon photonic (SiPho) platforms stand out as a promising candidate for neural network hardware, offering the potential for energy-efficient and ultra-fast computation by exploiting the unique primitives of light, i.e., THz bandwidth, low power and low latency. In this paper, we review state-of-the-art photonic linear processors, discuss their challenges and propose solutions for future photonic-assisted machine learning engines. Additionally, we present experimental results on the recently introduced SiPho 4x4 coherent crossbar (Xbar) architecture, which departs from existing Singular Value Decomposition (SVD)-based schemes while offering single-time-step programming complexity. The Xbar architecture utilizes silicon-germanium (SiGe) Electro-Absorption Modulators (EAMs) as its computing cells and Thermo-Optic (TO) Phase Shifters (PS) to provide the sign information at every weight-matrix node. To experimentally evaluate our Xbar architecture, we performed 10,024 arbitrary linear transformations on the SiPho processor, with the respective fidelity values converging to 100%. Subsequently, we focus on the execution of the non-linear part of the NN by demonstrating a programmable analog optoelectronic circuit that can be configured to provide a plethora of non-linear activation functions, including tanh, sigmoid, ReLU and inverted ReLU, at a 2 GHz update rate. Finally, we provide a holistic overview of optics-informed neural networks, aimed at improving the classification accuracy and performance of optics-specific Deep Learning (DL) computational tasks by leveraging the synergy of optical physics and DL.
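As a functional illustration of how a coherent crossbar node can encode a signed weight, the sketch below uses an amplitude cell (EAM-like attenuation in [0, 1]) for |w| and a 0/π phase shift for the sign, and evaluates a fidelity metric taken here as the squared cosine similarity between target and emulated outputs. Both the node model and the fidelity definition are assumptions for illustration; the exact experimental definitions may differ.

```python
# Functional emulation of a signed photonic crossbar matrix-vector product.
import numpy as np

rng = np.random.default_rng(0)

def xbar_matvec(W, x):
    """Emulate a signed matrix-vector product with |W| <= 1 per node."""
    amplitude = np.abs(W)                 # EAM-like transmission (0..1)
    sign = np.exp(1j * np.pi * (W < 0))   # thermo-optic phase: 0 or pi
    return np.real((amplitude * sign) @ x)

def fidelity(y_target, y_measured):
    num = np.abs(np.dot(y_target, y_measured)) ** 2
    return num / (np.dot(y_target, y_target) * np.dot(y_measured, y_measured))

W = rng.uniform(-1, 1, (4, 4))            # arbitrary 4x4 signed weights
x = rng.uniform(0, 1, 4)                  # non-negative optical inputs
print(fidelity(W @ x, xbar_matvec(W, x))) # ~1.0 for an ideal crossbar
```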
Rapid developments in computer science have led to an increasing demand for efficient computing systems. Linear photonic systems have emerged as a favorable candidate for workload-demanding architectures, owing to their small footprint and low energy consumption. Mach-Zehnder Interferometers (MZIs) serve as the foundational building block of several photonic circuits and have been widely used as modulators, switches and variable power splitters. However, combining MZIs to realize multiport splitters remains a challenge, since the exponential increase in the number of devices and the consequent increase in losses limit the performance of MZI-based multiport devices. To overcome such limitations, incorporating alternative low-loss integration platforms combined with a generalized MZI design could allow the realization of a robust variable power splitter. In this work, we present for the first time a 4×4 Generalized Mach-Zehnder Interferometer (GMZI) implemented on a Si3N4 photonic integration platform and experimentally demonstrate its operation as a variable power splitter. We developed an analytical model to describe the operation of the 4×4 GMZI, allowing us to evaluate the impact of several parameters on the overall performance of the device and to investigate its tolerance to fabrication imperfections and design alterations. Its experimental evaluation as a variable power splitter reveals a controlled imbalance of up to 10 dB across multiple output ports of the device, validating the theoretically derived principles of operation.
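A minimal transfer-matrix sketch of the GMZI idea follows: two 4×4 multiport couplers, idealized here as unitary DFT matrices (a common textbook approximation), sandwich an array of tunable phase shifters, and sweeping the phases redistributes power among the output ports. The DFT-coupler model and phase conventions are assumptions; the fabricated couplers may differ.

```python
# Idealized 4x4 GMZI: coupler -> phase-shifter array -> coupler.
import numpy as np

def gmzi_output_powers(phases, n=4):
    C = np.fft.fft(np.eye(n)) / np.sqrt(n)        # ideal NxN multiport coupler
    P = np.diag(np.exp(1j * np.asarray(phases)))  # tunable phase-shifter array
    T = C @ P @ C                                 # full GMZI transfer matrix
    field_in = np.zeros(n, dtype=complex)
    field_in[0] = 1.0                             # light injected in port 0
    return np.abs(T @ field_in) ** 2              # output power distribution

print(gmzi_output_powers([0, 0, 0, 0]))       # all power routed to a single port
print(gmzi_output_powers([0, np.pi, 0, 0]))   # one pi shift -> even 4-way split
```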
The explosive volume growth of deep-learning (DL) applications has triggered a new era in computing, with neuromorphic photonic platforms promising to merge ultra-high-speed and energy-efficiency credentials with brain-inspired computing primitives. The transfer of deep neural networks (DNNs) onto silicon photonic (SiPho) architectures requires, however, an analog computing engine that can perform tiled matrix multiplication (TMM) at line rate to support DL applications with a large number of trainable parameters, similar to the approach followed by state-of-the-art electronic graphics processing units. Herein, we demonstrate an analog SiPho computing engine that relies on a coherent architecture and can perform optical TMM at the record-high speed of 50 GHz. Its potential to support DL applications in which the number of trainable parameters exceeds the available hardware dimensions is highlighted through a photonic DNN that can reliably detect distributed denial-of-service attacks within a data center with a Cohen's kappa score-based accuracy of 0.636.
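To make the tiling concept concrete, the sketch below splits a weight matrix larger than a 4×4 photonic fan-in into 4×4 tiles, runs each tile's partial product on a stand-in for the photonic engine, and accumulates the partial results. The tile size and zero-padding strategy are illustrative assumptions, not details of the demonstrated engine.

```python
# Tiled matrix-vector multiplication over a small (emulated) photonic engine.
import numpy as np

TILE = 4  # assumed hardware dimension of the photonic engine

def photonic_tile_matvec(W_tile, x_tile):
    """Stand-in for one pass through the 4x4 photonic engine."""
    return W_tile @ x_tile

def tiled_matvec(W, x, tile=TILE):
    rows, cols = W.shape
    pad_r, pad_c = -rows % tile, -cols % tile
    Wp = np.pad(W, ((0, pad_r), (0, pad_c)))   # pad to multiples of the tile size
    xp = np.pad(x, (0, pad_c))
    y = np.zeros(Wp.shape[0])
    for i in range(0, Wp.shape[0], tile):
        for j in range(0, Wp.shape[1], tile):
            y[i:i + tile] += photonic_tile_matvec(Wp[i:i + tile, j:j + tile],
                                                  xp[j:j + tile])
    return y[:rows]

W, x = np.random.rand(10, 7), np.random.rand(7)
assert np.allclose(tiled_matvec(W, x), W @ x)  # tiling reproduces the full product
```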
The emergence of demanding machine learning and AI workloads in modern computational systems and Data Centers (DCs) has fueled a drive towards custom hardware designed to accelerate Multiply-Accumulate (MAC) operations. In this context, neuromorphic photonics has recently attracted attention as a promising technological candidate that can transfer photonics' low-power, high-bandwidth credentials into neuromorphic hardware implementations. However, the deployment of such systems necessitates progress both in the underlying constituent building blocks and in the development of deep learning training models that can take into account the physical properties of the employed photonic components and compensate for their non-ideal performance. Herein, we present an overview of our progress in photonic neuromorphic computing based on coherent layouts, which exploit the phase of the light traversing the photonic circuitry both for sign representation and for matrix manipulation. Our approach breaks through the direct trade-off between insertion loss and modulation bandwidth of state-of-the-art coherent architectures and allows high-speed operation within reasonable energy envelopes. We present a silicon-integrated coherent linear neuron (COLN) that relies on electro-absorption modulators (EAMs) both for its on-chip data generation and for weighting, demonstrating a record-high 32 GMAC/sec/axon compute line rate and an experimentally obtained accuracy of 95.91% on the MNIST classification task. Moreover, we present our progress on component-specific neuromorphic circuitry training, considering both the photonic link's thermal noise and its channel response. Finally, we present our roadmap for scaling our architecture, using a novel optical crossbar design, towards a 32×32 layout that can offer >32 GMAC/sec/axon of computational power at ~0.09 pJ/MAC.
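The channel-response part of component-specific training can be sketched as follows: the symbol stream driving each axon is filtered by an assumed low-pass link response (a simple 3-tap FIR here) before accumulation, so the training loop sees the inter-symbol interference the hardware would introduce. The tap values and function names are illustrative, not measured responses from the COLN link.

```python
# Hypothetical band-limited link model applied to one weighted axon stream.
import numpy as np

link_taps = np.array([0.15, 0.7, 0.15])  # assumed normalized link impulse response

def link_channel(symbols):
    """Apply the finite-bandwidth link response to a symbol stream."""
    return np.convolve(symbols, link_taps, mode="same")

def coln_axon_output(weight, symbol_stream):
    """One axon: weight the stream, then pass it through the link."""
    return link_channel(weight * symbol_stream)

stream = np.random.choice([0.0, 1.0], size=16)  # OOK-like input symbols
print(coln_axon_output(0.8, stream))            # weighted, band-limited samples
```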
Optical Random Access Memories (RAMs) have been conceived as high-bandwidth alternatives to their electronic counterparts, raising expectations for ultra-fast operation that can resolve the ns-long electronic RAM access bottleneck. In addition, with electronic Address Look-Up tables still operating at speeds of only up to 1 GHz, the constant increase in optical switch I/O data rates will incur severe latency and energy overheads during forwarding operations. In this invited paper, we present an overview of our recent research, introducing an all-optical RAM cell that performs both Write and Read functionalities at 10 Gb/s, representing a 100% speed increase over state-of-the-art optical/electrical RAM demonstrations. Moreover, we present an all-optical Ternary-CAM (T-CAM) cell that also operates at 10 Gb/s, doubling the speed of the fastest optical/electrical CAMs to date. To achieve this, we utilized a monolithically integrated InP optical Flip-Flop and a Semiconductor Optical Amplifier-Mach-Zehnder Interferometer (SOA-MZI) operating as an Access Gate for the RAM and as an XOR gate for the T-CAM. These two demonstrations pave the way towards the vision of integrated photonic look-up memory architectures that can relieve memory bottlenecks.
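For reference, the matching logic that the SOA-MZI XOR gate implements optically in a ternary CAM can be written in a few lines: each stored bit is 0, 1 or 'X' (don't care), and a search word matches when every non-'X' bit XORs to zero with the incoming bit. This is a purely functional model, not a model of the photonic implementation.

```python
# Functional ternary-CAM match: XOR comparison with don't-care support.
def tcam_match(stored, search):
    return all(s == 'X' or int(s) ^ int(b) == 0 for s, b in zip(stored, search))

print(tcam_match("10X1", "1001"))  # True: third bit is a don't-care
print(tcam_match("10X1", "1101"))  # False: second bit mismatches
```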
KEYWORDS: Eye, Signal attenuation, Signal processing, Data conversion, Modulators, Optical filters, Bandpass filters, Amplitude modulation, Information science, Network architectures
The 5G-induced paradigm shift from traditional macro-cell networks towards ultra-dense deployment of small cells imposes stringent bandwidth and latency requirements on the underlying network infrastructure. While state-of-the-art TDM-PONs, e.g., 10G-EPON, have already transformed fronthaul networks from circuit-switched point-to-point links into packet-based architectures of shared point-to-multipoint links, the 5G Ethernet-based fronthaul brings new latency requirements for inherently bursty traffic. This is expected to promote the deployment of a whole new class of optical devices that can operate on burst-mode traffic while realizing routing functionalities within a low-latency and low-energy envelope, avoiding in this way the latency burden of a complete optoelectronic Ethernet routing process and acting as a fast optical gateway for signals requiring ultra-low latency. Wavelength conversion can offer a reliable option for ultra-fast routing in access and fronthaul networks, provided, however, that it can simultaneously offer packet power-level equalization, to account for differences in optical path losses, and compliance with the NRZ format typically used in optical fronthauling. In this paper, we demonstrate an optical Burst-Mode Wavelength Converter using a Differentially-Biased SOA-MZI that operates in the deeply saturated regime to provide optical output power equalization for different input signal powers. The device has been experimentally validated with 10 Gb/s NRZ optical packets, providing error-free operation over an input packet peak-power dynamic range of more than 9 dB.
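The equalization mechanism can be illustrated with a textbook saturable-gain model: with G = G0 / (1 + Pin/Psat), the output power flattens once the input drives the amplifier well beyond its saturation power. The gain and saturation values below are illustrative assumptions, not parameters of the demonstrated SOA-MZI.

```python
# Toy saturable-gain model showing output power equalization over a 9 dB input sweep.
import numpy as np

G0, P_sat = 100.0, 0.05e-3                   # assumed small-signal gain and saturation power (W)

def soa_output_power(p_in):
    return p_in * G0 / (1 + p_in / P_sat)    # simple saturable-gain model

p_in = np.logspace(-4, -4 + 0.9, 8)          # a 9 dB input power sweep (W)
p_out_dbm = 10 * np.log10(soa_output_power(p_in) * 1e3)
print(p_out_dbm.max() - p_out_dbm.min())     # residual output spread (dB), well below 9 dB
```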