Large Language Models (LLMs) have revolutionized natural language processing (NLP); however, their effectiveness remains limited for low-resource languages and dialects due to data scarcity. One such underrepresented variety is the Saudi dialect, a widely spoken yet linguistically distinct variant of Arabic. NLP models trained on Modern Standard Arabic (MSA) often struggle with dialectal variations, leading to suboptimal performance in real-world applications. This study aims to enhance LLM performance for the Saudi dialect by leveraging the MADAR dataset, applying data augmentation techniques, and fine-tuning a state-of-the-art LLM. Experimental results demonstrate the model’s effectiveness in Saudi dialect classification, achieving 91% accuracy, with precision, recall, and F1-scores all exceeding 0.90 across different dialectal variations. These findings underscore the potential of LLMs in handling dialectal Arabic and their applicability in tasks such as social media monitoring and automatic translation. Future research can further improve performance by refining fine-tuning strategies, integrating additional linguistic features, and expanding training datasets. Ultimately, this work contributes to democratizing NLP technologies for low-resource languages and dialects, bridging the gap in linguistic inclusivity within AI applications.
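The reported evaluation metrics (accuracy, plus per-class precision, recall, and F1) can be computed directly from model predictions. The sketch below, using only the Python standard library, shows one way to do so; the city-style labels and predictions are hypothetical stand-ins for MADAR-style dialect classes, purely for illustration.

```python
def per_class_metrics(y_true, y_pred, labels):
    """Precision, recall, and F1 for each dialect label (one-vs-rest)."""
    metrics = {}
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        metrics[label] = {"precision": precision, "recall": recall, "f1": f1}
    return metrics

# Hypothetical gold labels and predictions (illustration only).
y_true = ["RIY", "RIY", "JED", "JED", "RIY", "JED"]
y_pred = ["RIY", "JED", "JED", "JED", "RIY", "RIY"]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
metrics = per_class_metrics(y_true, y_pred, ["RIY", "JED"])
print(accuracy)   # 4 of 6 correct
print(metrics)
```

In practice one would use a library routine such as scikit-learn's classification report, but the manual version makes explicit what "precision, recall, and F1 exceeding 0.90 across dialectal variations" means: each score is computed per dialect class, not only as a single aggregate.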
Index Terms: Datasets; Data augmentation; Performance enhancement; Large language models; Natural language processing; Effectiveness; Language; Text categorization; Accuracy; Computer science; Sentiment analysis; Social networks; Classification; Machine translation; Phonology; Information technology; Dialects; Social media; Case studies; Arabic language; Language modeling; Democratization; Scarcity; Languages