An Efficient Deep Learning Approach for Text Classification of Social Media Text

Authors

  • Rabia Rehman Department of Computer Science, University of Southern Punjab, Multan
  • Hadeesa Muskan National University of Science and Technology, Balochistan

Keywords:

Text Classification, Distilled Transformer, Transfer Learning, Data Augmentation

Abstract

Text classification, the task of assigning text to predefined categories, is a core problem in natural language processing (NLP) with applications such as spam detection, sentiment analysis, and topic identification. While state-of-the-art models perform well, their reliance on large annotated datasets and substantial computational resources is a significant limitation, particularly in low-resource environments. This paper presents a resource-efficient framework for classifying social media text that combines knowledge distillation, adapter modules, and data augmentation techniques such as back-translation and synonym replacement. By building on pre-trained transformer models, the approach transfers knowledge from rich language resources to the target task. Experimental results show that a Distilled Transformer model achieves an accuracy of 86.5%, comparable to the full BERT model's 88.7%, while reducing training time, memory usage, and inference latency. This combination of data augmentation and transfer learning makes the method suitable for deployment on edge devices and in resource-constrained environments.
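The abstract mentions synonym replacement as one of the data augmentation techniques. The sketch below illustrates the general idea with a toy, hardcoded synonym table; the paper does not specify its synonym source (real pipelines often draw on a lexicon such as WordNet), so the table, function name, and parameters here are illustrative assumptions, not the authors' implementation.

```python
import random

# Toy synonym table standing in for a real lexicon (an assumption;
# the paper does not state which synonym source it uses).
SYNONYMS = {
    "good": ["great", "fine", "nice"],
    "bad": ["poor", "awful", "terrible"],
    "movie": ["film", "picture"],
}

def synonym_replace(text: str, p: float = 0.5, seed: int = 0) -> str:
    """Return an augmented copy of `text` in which each word found in
    the synonym table is swapped for a random synonym with probability p."""
    rng = random.Random(seed)  # seeded for reproducible augmentation
    out = []
    for word in text.split():
        if word in SYNONYMS and rng.random() < p:
            out.append(rng.choice(SYNONYMS[word]))
        else:
            out.append(word)
    return " ".join(out)
```

Each augmented sentence preserves the original word count and label, which is why this kind of augmentation can enlarge a small training set without extra annotation effort.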

Published

2025-09-06

How to Cite

Rabia Rehman, & Hadeesa Muskan. (2025). An Efficient Deep Learning Approach for Text Classification of social media text. Dialogue Social Science Review (DSSR), 3(9), 1–14. Retrieved from http://www.dialoguessr.com/index.php/2/article/view/942
