Review of Aspect-Based Sentiment Analysis Based on Data Augmentation and Pre-Trained Models

Authors

  • Lizhu Ye
  • Md Gapar Md Johar
  • Mohammed Hazim Alkawaz

DOI:

https://doi.org/10.62051/ijcsit.v3n2.34

Keywords:

Aspect-based sentiment analysis, Data augmentation, Pre-trained language model, BERT, RoBERTa

Abstract

This paper systematically reviews aspect-based sentiment analysis techniques that integrate data augmentation with pre-trained language models. Aspect-based sentiment analysis aims to identify the sentiment polarity expressed toward specific aspects in a text. Traditional methods face challenges such as data sparsity and limited model generalization, and data augmentation and pre-trained language models offer ways to address both: augmentation alleviates data sparsity, while pre-trained language models provide powerful feature extraction and transfer learning capabilities. The paper elaborates the task definition of aspect-based sentiment analysis, focusing on methods based on data augmentation and on pre-trained language models, including data augmentation strategies as well as approaches built on BERT, RoBERTa, BART, and XLNet, and explores how combining data augmentation with pre-trained models improves the performance of aspect-level sentiment analysis. Finally, it points out remaining challenges and opportunities in this field, such as the diversity of data augmentation techniques, optimization of pre-trained models, multimodal sentiment analysis, and interpretability and credibility, which call for further exploration.
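To make the augmentation idea concrete, the following is a minimal, self-contained sketch of one common strategy the review surveys: synonym replacement that deliberately leaves the aspect term untouched, so the original (aspect, sentiment) label remains valid for the augmented sentence. The tiny `SYNONYMS` table and the function names are illustrative assumptions, not methods taken from the paper; a real pipeline would draw substitutes from WordNet or a paraphrase model.

```python
import random

# Tiny illustrative synonym table (an assumption for this sketch only;
# real systems typically use WordNet or a paraphrase model).
SYNONYMS = {
    "great": ["excellent", "superb"],
    "slow": ["sluggish", "laggy"],
    "tasty": ["delicious", "flavorful"],
}

def augment(sentence: str, aspect: str, rng: random.Random) -> str:
    """Synonym-replacement augmentation that never rewrites the aspect
    term, so the (aspect, sentiment) label of the original example
    still holds for the augmented copy."""
    out = []
    for token in sentence.split():
        word = token.lower().strip(".,!?")
        # Skip the aspect term itself; replace other known words.
        if word != aspect.lower() and word in SYNONYMS:
            out.append(rng.choice(SYNONYMS[word]))
        else:
            out.append(token)
    return " ".join(out)

rng = random.Random(0)
print(augment("The battery is great but the screen is slow", "battery", rng))
```

Keeping the aspect term fixed is the key design point: replacing it would silently change which entity the sentiment label refers to, corrupting the training signal rather than enriching it.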

References

Akram, A., & Sabir, A. (2023). Fine-tuning BERT for Aspect Extraction in Multi-domain ABSA. 47(9).

Bensoltane, R., & Zaki, T. (2024). Neural multi-task learning for end-to-end Arabic aspect-based sentiment analysis. Computer Speech & Language, 101683.

Cao, D. H., Quang, V. D., & Ngoc, H. T. (2022). Aspect-category-opinion-sentiment extraction using generative transformer model. 2022 RIVF International Conference on Computing and Communication Technologies (RIVF).

Chauhan, G. S., Meena, Y. K., Gopalani, D., & Nahta, R. (2022). A mixed unsupervised method for aspect extraction using BERT. Multimedia Tools and Applications, 81(22), 31881-31906.

Chen, D. Z., Faulkner, A., & Badyal, S. (2022). Unsupervised data augmentation for aspect-based sentiment analysis. Proceedings of the 29th International Conference on Computational Linguistics.

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Feng, Z., Zhou, H., Zhu, Z., & Mao, K. (2022). Tailored text augmentation for sentiment analysis. Expert Systems with Applications, 205, 117605.

Goud, A., & Garg, B. (2023). A novel framework for aspect-based sentiment analysis using a hybrid BERT (HybBERT) model. Multimedia Tools and Applications, 1-33.

Ismet, H. T., Mustaqim, T., & Purwitasari, D. (2022). Aspect-based sentiment analysis of product review using memory network. 9(1), 73-83.

Kumar, A., & Sharan, A. (2020). Deep learning-based frameworks for aspect-based sentiment analysis. 139-158.

Kumar, B., Badiger, V. S., & Jacintha, A. D. (2024). Sentiment Analysis for Products Review based on NLP using Lexicon-Based Approach and Roberta. 2024 International Conference on Intelligent and Innovative Technologies in Computing, Electrical and Electronics (IITCEE).

Lee, C., Lee, H., Kim, K., Kim, S., & Lee, J. (2024). An Efficient Fine-tuning of Generative Language Model for Aspect-Based Sentiment Analysis. 2024 IEEE International Conference on Consumer Electronics (ICCE).

Lengkeek, M., van der Knaap, F., & Frasincar, F. (2023). Leveraging hierarchical language models for aspect-based sentiment analysis on financial data. Information Processing & Management, 60(5), 103435.

Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., & Zettlemoyer, L. (2019). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.

Li, J., Yu, J., & Xia, R. (2022). Generative cross-domain data augmentation for aspect and opinion co-extraction. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Published

19-07-2024

Section

Articles

How to Cite

Ye, L., Md Johar, M. G., & Hazim Alkawaz, M. (2024). Review of Aspect-Based Sentiment Analysis Based on Data Augmentation and Pre-Trained Models. International Journal of Computer Science and Information Technology, 3(2), 314-330. https://doi.org/10.62051/ijcsit.v3n2.34