FPGA Implementations of Feed Forward Neural Network by using Floating Point Hardware Accelerators

Gabriele Maria Lozito, Antonino Laudani, Francesco Riganti Fulginei, Alessandro Salvini

DOI: 10.15598/aeee.v12i1.831

Abstract

This paper presents an analysis of different solutions for implementing a neural network architecture on an FPGA using floating-point accelerators. In particular, two implementations are investigated: a high-level solution that realizes the neural network on a soft-processor design, with different strategies for enhancing the performance of the computation; and a low-level solution, built as a cascade of floating-point arithmetic elements. The two architectures are compared in terms of both execution time and FPGA resource usage.
