Simple item record

Author: Shah Jahan, Muhammad
Author: Khan, Habib Ullah
Author: Akbar, Shahzad
Author: Umar Farooq, Muhammad
Author: Gul, Sarah
Author: Amjad, Anam
Available date: 2022-12-27T10:57:17Z
Publication date: 2021-05-03
Publication name: Scientific Programming
Identifier: http://dx.doi.org/10.1155/2021/6641832
Citation: Shah Jahan, M., Khan, H. U., Akbar, S., Umar Farooq, M., Gul, S., & Amjad, A. (2021). Bidirectional Language Modeling: A Systematic Literature Review. Scientific Programming, 2021.
ISSN: 1058-9244
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85106388570&origin=inward
URI: http://hdl.handle.net/10576/37684
Abstract: In transfer learning, two major activities, pretraining and fine-tuning, are carried out to perform downstream tasks. The advent of the transformer architecture and bidirectional language models, e.g., Bidirectional Encoder Representations from Transformers (BERT), enables transfer learning. BERT overcomes the limitations of unidirectional language models by removing the dependency on recurrent neural networks (RNNs), and its attention mechanism reads the input from both directions to better capture sentence context. The performance of downstream tasks in transfer learning depends on various factors, such as dataset size, step size, and the number of selected parameters. In the state of the art, various research studies have produced efficient results by contributing to the pretraining phase; however, a comprehensive investigation and analysis of these studies is not yet available. Therefore, this article presents a systematic literature review (SLR) investigating thirty-one (31) influential research studies published during 2018-2020. The paper makes the following contributions: (1) thirty-one (31) models inspired by BERT are extracted; (2) every model is compared with RoBERTa (a replicated BERT model) trained with a large dataset and batch size but a small step size. It is concluded that seven (7) of the thirty-one (31) models in this SLR outperform RoBERTa; three of these were trained on a larger dataset, while the other four were trained on a smaller one. Among these seven models, six share both the feedforward network (FFN) and attention across layers. The remaining twenty-four (24) models are also studied in this SLR under different parameter settings. Furthermore, it is concluded that a pretrained model with a large dataset, more hidden layers and attention heads, a small step size, and parameter sharing produces better results. This SLR will help researchers pick a suitable model based on their requirements.
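
To make the pretrain-then-fine-tune workflow described in the abstract concrete, here is a minimal sketch assuming the Hugging Face transformers and PyTorch libraries; the model name, toy input, and hyperparameters are illustrative and not taken from the reviewed studies.

    # Load a pretrained bidirectional encoder and adapt it to a downstream
    # binary-classification task (the fine-tuning stage of transfer learning).
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    # BERT's self-attention sees the whole sentence at once, so every token
    # attends to context on both its left and its right.
    inputs = tokenizer("Bidirectional context helps disambiguation.",
                       return_tensors="pt")
    labels = torch.tensor([1])

    # One fine-tuning step: forward pass, task loss, gradient update.
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    outputs = model(**inputs, labels=labels)
    outputs.loss.backward()
    optimizer.step()

In practice, fine-tuning loops over a labeled downstream dataset for a few epochs; the single step shown here only illustrates how pretrained weights are reused and updated for a new task.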
Language: en
Publisher: Hindawi
Subject: Recurrent neural network (RNN)
Subject: Computational linguistics
Subject: Feedforward neural networks
Title: Bidirectional Language Modeling: A Systematic Literature Review
Type: Article
Volume: 2021
ESSN: 1875-919X

