FPGA implementations of feed forward neural network by using floating point hardware accelerators / Lozito G.M.; Laudani A.; Riganti-Fulginei F.; Salvini A. - In: ADVANCES IN ELECTRICAL AND ELECTRONIC ENGINEERING. - ISSN 1336-1376. - Electronic. - 12:(2014), pp. 30-39. [10.15598/aeee.v12i1.831]
FPGA implementations of feed forward neural network by using floating point hardware accelerators
Lozito G. M.; Laudani A.; Riganti-Fulginei F.; Salvini A.
2014
Abstract
This paper documents research on different solutions for implementing a neural network architecture on an FPGA using floating-point accelerators. In particular, two implementations are investigated: a high-level solution that builds the neural network on a soft processor design, with different strategies for enhancing performance; and a low-level solution realized as a cascade of floating-point arithmetic elements. The architectures are compared in terms of both execution time and FPGA resources employed. © 2014 Advances in Electrical and Electronic Engineering.
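For context, the sketch below (not taken from the paper) shows the kind of feed-forward inference routine that the high-level, soft-processor solution would execute in single-precision floating point. The layer sizes, sigmoid activation, and all identifiers are illustrative assumptions, since the abstract does not specify the network topology or activation function; every multiply-accumulate and expf call is a floating-point operation that a hardware floating-point accelerator would take over from software emulation.

```c
#include <math.h>
#include <stddef.h>

/* Hypothetical layer sizes, for illustration only; the paper's actual
 * network topology is not given in the abstract. */
#define N_IN   3
#define N_HID  5
#define N_OUT  1

/* Single-precision sigmoid activation (a common choice for MLPs). */
static float sigmoidf(float x)
{
    return 1.0f / (1.0f + expf(-x));
}

/* One dense layer: out = sigmoid(W * in + b).
 * Each multiply-accumulate is a floating-point operation that a soft
 * processor dispatches either to software routines or to an FPU
 * accelerator, which dominates the execution time of the forward pass. */
static void dense_layer(const float *w, const float *b,
                        const float *in, float *out,
                        size_t n_in, size_t n_out)
{
    for (size_t o = 0; o < n_out; o++) {
        float acc = b[o];
        for (size_t i = 0; i < n_in; i++) {
            acc += w[o * n_in + i] * in[i];
        }
        out[o] = sigmoidf(acc);
    }
}

/* Forward pass of a small two-layer feed-forward network. */
void mlp_forward(const float w_hid[N_HID * N_IN], const float b_hid[N_HID],
                 const float w_out[N_OUT * N_HID], const float b_out[N_OUT],
                 const float x[N_IN], float y[N_OUT])
{
    float hid[N_HID];
    dense_layer(w_hid, b_hid, x, hid, N_IN, N_HID);
    dense_layer(w_out, b_out, hid, y, N_HID, N_OUT);
}
```

The low-level solution described in the abstract would instead map these multiply-accumulate and activation operations directly onto a cascade of floating-point adder and multiplier elements in the FPGA fabric, trading logic resources for execution time.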