Paper
9 January 2008

Implementation of neural network hardware based on a floating point operation in an FPGA
Jeong-Seob Kim, Seul Jung
Proceedings Volume 6794, ICMIT 2007: Mechatronics, MEMS, and Smart Materials; 679451 (2008) https://doi.org/10.1117/12.784122
Event: ICMIT 2007: Mechatronics, MEMS, and Smart Materials, 2007, Gifu, Japan
Abstract
This paper presents a hardware design and implementation of a radial basis function (RBF) neural network (NN) in a hardware description language. Due to its nonlinear characteristics, the network is difficult to realize on a system restricted to integer arithmetic. Nonlinear functions such as sigmoid or exponential functions require floating-point operations. The exponential function is therefore designed on the basis of the 32-bit single-precision floating-point format. In addition, the back-propagation algorithm for updating the network weights is also implemented in hardware. Most operations are performed in a floating-point arithmetic unit and executed sequentially according to the instruction order stored in ROM. The NN is implemented and tested on an Altera Cyclone II EP2C70F672C8 FPGA for nonlinear classification tasks.
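As a rough illustration of the computation the paper maps onto the FPGA, the C sketch below evaluates an RBF network forward pass entirely in IEEE 754 32-bit single precision, the same number format as the paper's floating-point unit. This is not the authors' HDL design: the network sizes, centers, widths, and weights are placeholder assumptions, and the library call expf() stands in for the hardware exponential unit described in the paper.

/* Minimal software sketch (assumed sizes and weights, not the authors' design)
 * of an RBF forward pass in IEEE 754 single precision. */
#include <math.h>
#include <stdio.h>

#define N_IN      2   /* number of inputs        (illustrative) */
#define N_HIDDEN  4   /* Gaussian RBF neurons    (illustrative) */
#define N_OUT     1   /* number of outputs       (illustrative) */

/* Gaussian basis: phi(x) = exp(-||x - c||^2 / (2 * sigma^2)).
 * expf() plays the role of the paper's hardware exponential unit. */
static float rbf(const float x[N_IN], const float c[N_IN], float sigma)
{
    float d2 = 0.0f;
    for (int i = 0; i < N_IN; ++i) {
        float d = x[i] - c[i];
        d2 += d * d;                 /* squared distance to the center */
    }
    return expf(-d2 / (2.0f * sigma * sigma));
}

int main(void)
{
    /* Placeholder centers, width, and output weights. */
    float centers[N_HIDDEN][N_IN] = {
        {0.f, 0.f}, {0.f, 1.f}, {1.f, 0.f}, {1.f, 1.f}
    };
    float sigma = 0.5f;
    float w[N_OUT][N_HIDDEN] = { { 0.3f, -0.7f, -0.7f, 0.3f } };

    float x[N_IN] = { 0.9f, 0.1f };  /* sample input */

    for (int o = 0; o < N_OUT; ++o) {
        float y = 0.0f;
        for (int h = 0; h < N_HIDDEN; ++h)
            y += w[o][h] * rbf(x, centers[h], sigma);  /* weighted sum of basis outputs */
        printf("y[%d] = %f\n", o, y);
    }
    return 0;
}

In the hardware described by the paper, the same sequence of single-precision multiplies, adds, and exponentials would be issued in order from ROM to the floating-point arithmetic unit rather than executed by a CPU.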
© (2008) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jeong-Seob Kim and Seul Jung "Implementation of neural network hardware based on a floating point operation in an FPGA", Proc. SPIE 6794, ICMIT 2007: Mechatronics, MEMS, and Smart Materials, 679451 (9 January 2008); https://doi.org/10.1117/12.784122
KEYWORDS
Neural networks, Field programmable gate arrays, Intelligence systems, MATLAB, Control systems, Digital signal processing, Complex systems