

Analysis of Hardware Implementations to Accelerate Convolutional and Recurrent Neuronal Networks

Authors:
Florian Kaestner
Osvaldo Navarro Guzman
Benedikt Janssen
Javier Hoffmann
Michael Huebner

Keywords: FPGA; Recurrent; Convolutional; Neural Network; ASIC

Abstract:
Hardware platforms such as FPGAs and ASICs have turned out to be a viable alternative to GPUs for the implementation of deep learning algorithms, especially in applications with strict power and performance constraints. FPGAs offer greater flexibility, while ASICs can provide better energy efficiency and higher performance. Deep learning is a subset of machine learning algorithms that has a major impact on modern technology. Among these algorithms, Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have been of particular interest due to their accuracy in comparison to other methods. In this article, we conduct an analysis of hardware implementations of these two popular network types. Different types of neural networks offer different opportunities for an optimized hardware implementation due to their specific characteristics. We therefore split the analysis into two parts, discussing CNN and RNN implementations separately. Our contribution is an in-depth view of several hardware approaches and a comparison of their architectural characteristics, from which we derive hints for their implementation.

Pages: 308 to 323

Copyright: Copyright (c) to authors, 2017. Used with permission.

Publication date: December 31, 2017

Published in: International Journal On Advances in Software, volume 10, numbers 3 and 4, 2017

ISSN: 1942-2628