Research on Integrating Artificial Intelligence in User Feedback Loops for Dynamically Adjusting Video Game Difficulty
DOI: https://doi.org/10.62051/ijcsit.v8n1.02

Keywords: Dynamic Difficulty Adjustment, Player Modeling, Implicit Feedback, Machine Learning, Unity Integration

Abstract
Dynamic Difficulty Adjustment (DDA) remains one of the most promising mechanisms for optimizing player engagement in interactive digital entertainment. This research presents an end-to-end framework for integrating artificial intelligence into the player feedback loop to adjust game difficulty dynamically. Existing DDA approaches typically require manual configuration to control in-game resources, making such systems costly to deploy. In this research, a functional prototype is deployed and tested in “Gulltovia”, a mobile card game launched on the App Store. We design a data pipeline that parses raw player event logs from the game, computes session-level metrics such as fail count, adjusted playtime, and button interaction frequency, and feeds these as inputs to a hybrid classification system that combines rule-based thresholds with a machine learning model. The trained model is exported to ONNX and executed inside the Unity engine, enabling real-time predictions and conservative difficulty adjustments gated by prediction confidence.
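The abstract describes three stages: aggregating raw event logs into session-level metrics, classifying the session with rules backed by a trained model, and only adjusting difficulty when the model is confident. The sketch below illustrates that flow in Python under stated assumptions: the event schema, the feature set, the thresholds (10 fails, 600 seconds, a 0.8 confidence bar), and the gradient-boosting model are all hypothetical stand-ins, not the paper's actual implementation.

```python
# Minimal sketch of the pipeline described in the abstract. All field names,
# thresholds, and the model choice are illustrative assumptions.
from dataclasses import dataclass

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier


@dataclass
class SessionMetrics:
    fail_count: int            # failed attempts in the session
    adjusted_playtime: float   # active play seconds, idle time removed
    button_rate: float         # button interactions per minute of play


def compute_metrics(events: list[dict]) -> SessionMetrics:
    """Parse a session's raw event log into the three features."""
    fails = sum(1 for e in events if e["type"] == "fail")
    active = sum(e.get("duration", 0.0) for e in events if e["type"] == "play")
    presses = sum(1 for e in events if e["type"] == "button")
    rate = presses / (active / 60.0) if active > 0 else 0.0
    return SessionMetrics(fails, active, rate)


def decide(m: SessionMetrics, model) -> str:
    """Hybrid decision: hard rule thresholds first, ML model as fallback.

    Difficulty only moves when the model's top class probability clears
    a conservative bar (0.8 here, an assumed value).
    """
    if m.fail_count >= 10:                               # clearly struggling
        return "decrease"
    if m.fail_count == 0 and m.adjusted_playtime > 600:  # clearly coasting
        return "increase"
    x = np.array([[m.fail_count, m.adjusted_playtime, m.button_rate]])
    proba = model.predict_proba(x)[0]
    label = str(model.classes_[int(np.argmax(proba))])
    return label if proba.max() >= 0.8 else "keep"


# Toy usage with synthetic sessions; a real deployment would train offline,
# export the model to ONNX, and run it inside Unity as the paper describes.
X = np.random.rand(200, 3) * np.array([12, 900, 120])
y = np.random.choice(["decrease", "keep", "increase"], size=200)
clf = GradientBoostingClassifier().fit(X, y)
print(decide(SessionMetrics(fail_count=4, adjusted_playtime=450.0,
                            button_rate=38.0), clf))
```

In the deployed system the trained model is exported to ONNX and evaluated inside Unity at runtime; the confidence gate is what keeps adjustments conservative when predictions are ambiguous.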
License
Copyright (c) 2026 International Journal of Computer Science and Information Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.