Analyzing Virtual Reality Teaching Behaviors Based on Multimodal Data
DOI: https://doi.org/10.62051/ijcsit.v2n2.34

Keywords: Virtual Reality, Multimodal Data, Behavior Analysis, Data Collection

Abstract
The paper begins with an introduction to the background and research objectives, as well as the scope and limitations of the study. It focuses on multimodal data in virtual reality teaching, discussing the different types of multimodal data and how they can be collected and analyzed. It also explores the integration of multimodal data in virtual reality teaching and the factors that need to be taken into account.
License
Copyright (c) 2024 Jianping Hu

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.