This thesis presents a comprehensive exploration of extractive summarization techniques that pair a Transformer model with a Bi-directional Long Short-Term Memory (BiLSTM) model. Three distinct approaches, differing only in the first stage, were investigated: fine-tuned BERT, DistilBERT, and RoBERTa. The approaches were compared and evaluated by their ROUGE scores, and the impact of each encoder on overall performance was analyzed. The study found that using BERT in the first stage gave promising results, while DistilBERT offered an efficient trade-off between speed and resource utilization. RoBERTa achieved the best fine-tuning results but faced challenges during BiLSTM training (the second stage), indicating room for further research. The work was limited by computational constraints, which prevented the use of the larger training datasets typically used for fine-tuning transformer models. Despite these limitations, the approaches demonstrated the potential and efficacy of transformer models for text summarization tasks.
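
The two-stage architecture described above can be sketched roughly as follows. This is a minimal, illustrative PyTorch + Hugging Face `transformers` sketch, assuming per-token binary labels (whether a token belongs to the extractive summary, as the repository name suggests); the class name, default checkpoint, hidden sizes, and classification head are assumptions for illustration, not the exact configuration used in the thesis.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class TransformerBiLSTMTagger(nn.Module):
    """Stage 1: pretrained Transformer encoder; Stage 2: BiLSTM + per-token classifier."""

    def __init__(self, encoder_name="bert-base-uncased", lstm_hidden=256, num_labels=2):
        super().__init__()
        # Stage 1: contextual token embeddings from BERT / DistilBERT / RoBERTa
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # Stage 2: BiLSTM over the encoder's token embeddings
        self.bilstm = nn.LSTM(
            input_size=self.encoder.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        # Per-token head, e.g. label 1 = "token belongs to the extractive summary"
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        token_embeddings = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                      # (batch, seq_len, hidden_size)
        lstm_out, _ = self.bilstm(token_embeddings)
        return self.classifier(lstm_out)         # (batch, seq_len, num_labels) logits


if __name__ == "__main__":
    # Swap the checkpoint for "distilbert-base-uncased" or "roberta-base"
    # to obtain the other two variants compared in the thesis.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = TransformerBiLSTMTagger("bert-base-uncased")
    batch = tokenizer(["An example document to summarize."], return_tensors="pt")
    logits = model(batch["input_ids"], batch["attention_mask"])
    print(logits.shape)
```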
shadi433/Two-stage-Model-for-Extractive-Summarization-as-Token-Classification