Simple item view

Author: Chen J.
Author: Li K.
Author: Bilal K.
Author: Zhou X.
Author: Li K.
Author: Yu P.S.
Available date: 2020-04-02T11:08:05Z
Publication date: 2019
Publication name: IEEE Transactions on Parallel and Distributed Systems
Source: Scopus
ISSN: 1045-9219
URI: http://dx.doi.org/10.1109/TPDS.2018.2877359
URI: http://hdl.handle.net/10576/13787
Abstract: Benefiting from large-scale training datasets and complex training networks, Convolutional Neural Networks (CNNs) are widely applied in various fields with high accuracy. However, the training process of CNNs is very time-consuming: large numbers of training samples and iterative operations are required to obtain high-quality weight parameters. In this paper, we focus on the time-consuming training process of large-scale CNNs and propose a Bi-layered Parallel Training (BPT-CNN) architecture for distributed computing environments. BPT-CNN consists of two main components: (a) an outer-layer parallel training of multiple CNN subnetworks on separate data subsets, and (b) an inner-layer parallel training of each subnetwork. In the outer-layer parallelism, we address critical issues of distributed and parallel computing, including data communication, synchronization, and workload balance. A heterogeneous-aware Incremental Data Partitioning and Allocation (IDPA) strategy is proposed, in which large-scale training datasets are partitioned and allocated to the computing nodes in batches according to their computing power. To minimize synchronization waiting during the global weight update process, an Asynchronous Global Weight Update (AGWU) strategy is proposed. In the inner-layer parallelism, we further accelerate the training process for each CNN subnetwork on each machine, where the computation steps of the convolutional layer and the local weight training are parallelized based on task parallelism. We introduce task decomposition and scheduling strategies with the objectives of thread-level load balancing and minimum waiting time for critical paths. Extensive experimental results indicate that the proposed BPT-CNN effectively improves the training performance of CNNs while maintaining accuracy.
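The abstract names two concrete mechanisms that a short sketch can make concrete: IDPA assigns training samples to heterogeneous nodes in proportion to their computing power, and AGWU updates the global weights as each worker's result arrives rather than waiting at a synchronization barrier. The Python sketch below is a minimal illustration of both ideas, not the paper's implementation; the names (partition_batch, AsyncWeightServer, node_power) are hypothetical.

    from typing import List
    import threading

    def partition_batch(batch_size: int, node_power: List[float]) -> List[int]:
        """IDPA-style split: divide one batch of samples across heterogeneous
        nodes in proportion to each node's measured computing power."""
        total = sum(node_power)
        shares = [int(batch_size * p / total) for p in node_power]
        # Hand out the remainder left by integer truncation, one sample at a time.
        for i in range(batch_size - sum(shares)):
            shares[i % len(shares)] += 1
        return shares

    class AsyncWeightServer:
        """AGWU-style rule: apply each worker's gradients to the global
        weights as soon as they arrive, so no worker blocks on stragglers."""
        def __init__(self, weights: List[float], lr: float = 0.01):
            self.weights = list(weights)
            self.lr = lr
            self._lock = threading.Lock()

        def push_update(self, gradients: List[float]) -> None:
            with self._lock:  # serialize concurrent updates from worker threads
                self.weights = [w - self.lr * g
                                for w, g in zip(self.weights, gradients)]

        def pull_weights(self) -> List[float]:
            with self._lock:
                return list(self.weights)

    # Example: 1000 samples split over three nodes with relative power 4:2:1.
    print(partition_batch(1000, [4.0, 2.0, 1.0]))  # -> [572, 286, 142]

Proportional allocation reduces the idle time caused by slow nodes finishing their shards late, while the asynchronous update trades some gradient staleness for higher throughput, consistent with the goals the abstract states for IDPA and AGWU.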
Sponsor: This research is partially funded by the National Key R&D Program of China (Grant No. 2016YFB0200201), the National Outstanding Youth Science Program of the National Natural Science Foundation of China (Grant No. 61625202), the International Postdoctoral Exchange Fellowship Program (Grant No. 2018024), and a project funded by the China Postdoctoral Science Foundation (Grant No. 2018T110829). This work is also supported in part by NSF grants IIS-1526499, IIS-1763325, and CNS-1626432, and by NSFC grant 61672313.
Language: en
Publisher: IEEE Computer Society
Subject: bi-layered parallel computing
Big data
convolutional neural networks
deep learning
distributed computing
Title: A Bi-layered parallel training architecture for large-scale convolutional neural networks
Type: Article
Pages: 965-976
Issue number: 5
Volume number: 30


Files in this item

No files associated with this item.
