Review of the Development of Input Word Prediction
DOI: https://doi.org/10.62051/46z0gm92

Keywords: input method; word prediction; natural language processing technology; artificial intelligence; development

Abstract
As one of the important applications of human-computer interaction, input word prediction has made great progress in recent years. This paper reviews the development of input word prediction technology, from early statistical-model-based methods to today's deep-learning-based approaches, tracing its trajectory and the representative algorithms of each stage. It then analyzes the problems and challenges that input-method word prediction currently faces, such as language model design, personalized user needs, and multilingual input, and discusses future trends, including the integration of multimodal information and of emerging techniques such as reinforcement learning. Finally, the paper looks ahead to the broad application prospects of input word prediction in many fields and to its potential contribution to improving input efficiency, enhancing user experience, and advancing natural language processing technology.
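For readers unfamiliar with the task, the following minimal sketch (illustrative only, not taken from the paper) shows the kind of early statistical method the review starts from: a bigram frequency model that ranks candidate next words from a toy corpus. The corpus, function name, and candidate count are assumptions made for illustration.

    from collections import Counter, defaultdict

    # Toy corpus standing in for a user's typing history (assumption for illustration).
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count bigram frequencies: counts[w1][w2] = number of times w2 follows w1.
    counts = defaultdict(Counter)
    for w1, w2 in zip(corpus, corpus[1:]):
        counts[w1][w2] += 1

    def predict_next(word, k=3):
        """Return up to k candidate next words ranked by bigram frequency."""
        return [w for w, _ in counts[word].most_common(k)]

    print(predict_next("the"))  # e.g. ['cat', 'mat', 'fish']

Deep-learning-based predictors replace these raw counts with learned probabilities from recurrent or Transformer language models, but the interface, given the context so far, return a ranked list of candidate next words, remains the same.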
License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.