
Author: Zaman, Kh Shahriya
Author: Reaz, Mamun Bin Ibne
Author: Md Ali, Sawal Hamid
Author: Bakar, Ahmad Ashrif A
Author: Chowdhury, Muhammad Enamul Hoque
Available date: 2023-04-17T06:57:41Z
Publication Date: 2022
Publication Name: IEEE Transactions on Neural Networks and Learning Systems
Resource: Scopus
URI: http://dx.doi.org/10.1109/TNNLS.2021.3082304
URI: http://hdl.handle.net/10576/41939
Abstract: The staggering innovations and emergence of numerous deep learning (DL) applications have forced researchers to reconsider hardware architecture to accommodate fast and efficient application-specific computations. Applications such as object detection, image recognition, speech translation, music synthesis, and image generation can be performed with high accuracy using DL, at the expense of substantial computational resources. Furthermore, the desire to adopt Industry 4.0 and smart technologies within the Internet of Things infrastructure has initiated several studies to enable on-chip DL capabilities for resource-constrained devices. Specialized DL processors reduce dependence on cloud servers, improve privacy, lessen latency, and mitigate bandwidth congestion. As we reach the limits of transistor scaling, researchers are exploring various application-specific hardware architectures to meet the performance and efficiency requirements of DL tasks. Over the past few years, several software optimizations and hardware innovations have been proposed to perform these computations efficiently. In this article, we review several DL accelerators, as well as technologies with emerging devices, to highlight their architectural features on application-specific integrated circuit (IC) and field-programmable gate array (FPGA) platforms. Finally, we discuss design considerations for DL hardware in portable applications, along with deductions about future trends and potential research directions for further innovation in DL accelerator architectures. By compiling this review, we expect to help aspiring researchers widen their knowledge of custom hardware architectures for DL. © 2012 IEEE.
Sponsor: This work was supported in part by the Research University Grant, Universiti Kebangsaan Malaysia, under Grant DPK-2021-001, Grant DIP-2020-004, and Grant MI-2020-002; and in part by the Qatar National Research Foundation (QNRF) under Grant NPRP12s-0227-190164.
Language: en
Publisher: Institute of Electrical and Electronics Engineers Inc.
Subject: Application-specific integrated circuit (ASIC); deep learning (DL); deep neural network (DNN); energy-efficient architectures; field-programmable gate array (FPGA); hardware accelerator; machine learning (ML); neural network hardware; review
Title: Custom Hardware Architectures for Deep Learning on Portable Devices: A Review
Type: Article
Pagination: 6068-6088
Issue Number: 11
Volume Number: 33


Files in this item


There are no files associated with this item.
