Research on Incremental Learning Methods Based on Sample and Category Prototype Playback
DOI:
https://doi.org/10.62051/ijcsit.v2n2.04

Keywords:
Incremental Learning; Catastrophic Forgetting; Sample Playback; Category Prototype Playback; SMOTE Algorithm Based on Global Sample Rationality

Abstract
Machine learning has been successfully applied to computer vision, natural language processing, and other fields. However, most machine learning models deployed in production have fixed categories and parameters, so they can only handle the categories that appear in the training set and cannot incrementally learn new categories that arise in practical applications. Incremental learning addresses this limitation. This paper therefore proposes an incremental learning method based on sample and category prototype replay, which aims to alleviate forgetting in incremental learning while maintaining high accuracy and low computational complexity. Our method consists of two stages: sample playback and category prototype playback. In the sample playback stage, we record and store recent historical data; to further reduce class imbalance during playback, we propose a SMOTE algorithm based on global sample rationality, enabling the model to learn the latest trends and changes. In the category prototype playback stage, we account for how representative different samples are, weighting their importance according to their abundance in the dataset, which makes prototype computation more accurate. Experimental results show that our method performs well on both forgetting and prediction while keeping computational complexity low.
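The two stages described in the abstract can be illustrated with a minimal sketch. The function names, the fixed-seed interface, and the optional per-sample weights below are illustrative assumptions, not the paper's actual implementation: the paper's "global sample rationality" weighting scheme is not specified here, so `weights` is only a placeholder for sample-importance factors; `smote_oversample` is classic SMOTE interpolation toward a minority-class nearest neighbour.

```python
import math
import random

def smote_oversample(minority, n_new, k=3, seed=0):
    """Classic SMOTE: synthesize new minority-class samples by
    interpolating a chosen sample toward one of its k nearest
    minority-class neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # nearest neighbours within the minority class (excluding x itself)
        others = sorted(
            (p for p in minority if p is not x),
            key=lambda p: math.dist(x, p),
        )[:k]
        neighbour = rng.choice(others)
        lam = rng.random()  # interpolation weight in [0, 1]
        synthetic.append(tuple(a + lam * (b - a) for a, b in zip(x, neighbour)))
    return synthetic

def class_prototype(features, weights=None):
    """Prototype of a class: the (optionally weighted) mean of its
    feature vectors. Uniform weights reduce to the plain mean."""
    if weights is None:
        weights = [1.0] * len(features)
    total = sum(weights)
    dims = len(features[0])
    return tuple(
        sum(w * f[d] for w, f in zip(weights, features)) / total
        for d in range(dims)
    )
```

Each synthetic point is a convex combination of two real minority samples, so the oversampled replay buffer stays inside the minority class's region of feature space; the prototype is then replayed in place of raw exemplars for old classes.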
References
Liu C, Zhang Y, Ding Z, et al. Active Incremental Learning for Health State Assessment of Dynamic Systems With Unknown Scenarios[J]. IEEE Transactions on Industrial Informatics, 2023, 19(2): 1863-1873.
Zhou H, Yin H, Zhao D, et al. Incremental Learning and Conditional Drift Adaptation for Nonstationary Industrial Process Fault Diagnosis[J]. IEEE Transactions on Industrial Informatics, 2023, 19(4): 5935-5944. DOI: 10.1109/TII.2022.3179423.
Kim D, Han B. On the Stability-Plasticity Dilemma of Class-Incremental Learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 20196-20204.
Smith J, et al. Incremental Bayesian Classifier for Streaming Data with Concept Drift[J]. Journal of Machine Learning, 2022, 45(3): 45-60.
Belilovsky E, Caccia M, Lin M, et al. Online continual learning with maximally interfered retrieval[J]. Advances in Neural Information Processing Systems, 2019, 32: 11849-11860.
Kirkpatrick J, Pascanu R, Rabinowitz N, et al. Overcoming catastrophic forgetting in neural networks[J]. Proceedings of the National Academy of Sciences, 2017, 114(13): 3521-3526.
Smith J S, Seymour Z, Chiu H P. Incremental learning with differentiable architecture and forgetting search[C]//2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022: 1-8.
Yan S, Xie J, He X. DER: Dynamically expandable representation for class incremental learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 3014-3023.
Lee A, Gomes H M, Zhang Y. Balancing the Stability-Plasticity Dilemma with Online Stability Tuning for Continual Learning[C]//2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022: 1-8.
Lopez-Paz D, Ranzato M. Gradient episodic memory for continual learning[C]//Proceedings of the Annual Conference on Neural Information Processing Systems. Long Beach, USA, 2017: 6467-6476.
Verwimp E, De Lange M, Tuytelaars T. Rehearsal revealed: The limits and merits of revisiting samples in continual learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 9385-9394.
Shen G, Zhang S, Chen X, et al. Generative feature replay with orthogonal weight modification for continual learning[C]//2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021: 1-8.
Iscen A, Zhang J, Lazebnik S, et al. Memory-efficient incremental learning through feature adaptation[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVI. Springer International Publishing, 2020: 699-715.
Pellegrini L, Graffieti G, Lomonaco V, et al. Latent replay for real-time continual learning[C]//2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020: 10203-10209.
Zhu F, Zhang X Y, Wang C, et al. Prototype augmentation and self-supervision for incremental learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 5871-5880.
Yang H M, Zhang X Y, Yin F, et al. Convolutional prototype network for open set recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 44(5): 2358-2370.
Zhu K, Zhai W, Cao Y, et al. Self-sustaining representation expansion for non-exemplar class-incremental learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022: 9286-9295.
Yang G, Li M, Li Y, et al. An Improved Incremental Learning Based on Gradient Vector Field[J]. IEEE Access, 2021, 9: 33955-33967.
Dong Y, Zhang Y, Wang X. A novel incremental learning framework based on class prototype reconstruction and transfer learning[J]. Applied Sciences, 2020, 10(8): 2871.
Bang J, Hui J. Rainbow Memory: Continual Learning with a Memory of Diverse Samples[J]. Journal of Computer Research and Development, 2021, 58(7): 1337-1348.
Mai Z D, Li R W. Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning[J]. Journal of Computer Research and Development, 2021, 58(4): 811-820.
Zhu W, He J, You Z Y. Class-incremental learning with prototype container[J]. Journal of Huazhong University of Science and Technology (Natural Science Edition), 2021, 49(1): 103-109.
Li X H, Xu Y N, Li F Y, et al. Learning without Forgetting[J]. Computational Intelligence, 2018, 34(4): 970-987.
Li T Y, Zhao W J, Liu Y F. Experience Replay for Continual Learning[J]. Acta Electronica Sinica, 2021, 49(2): 238-243.
Yoon J, Yang E, Lee J. Lifelong Learning with Dynamically Expandable Networks[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2019, 33: 6140-6147.
License
Copyright (c) 2024 Jiamin Zhi, Yong Liu

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.







