USING RECOGNIZED EMOTION AS IMPLICIT FEEDBACK FOR A RECOMMENDER SYSTEM
DOI: https://doi.org/10.26906/SUNZ.2023.3.115
Keywords: deep neural network with immersion layers, extended reality, immersiveness, 3D convolutional neural network
Abstract
Topicality. Due to the growing digitalization of art, there is a need to improve immersiveness during user interaction with extended reality art systems. Research methods. Deep neural network with immersion layers, 3D convolutional neural network. The purpose of the article. To improve the selection of the most relevant videos by using recognized user emotions as implicit feedback in the recommender system of virtual art compositions. The results obtained. A system was developed that classifies the user's emotions from video, computes an emotional score from the classification, and uses the resulting value as implicit feedback for a recommender system that selects the most relevant videos for creating virtual art compositions. The combination of the presented methods improves the personalization of recommendations and increases immersiveness during user interaction with virtual art compositions. Conclusion. The approach developed in this work can be used to improve immersiveness and the personalization of recommendations during user interaction with extended reality art systems.
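The abstract outlines a three-step pipeline: recognize the viewer's emotion from video with a 3D convolutional network, collapse the prediction into a scalar emotional score, and feed that score to the recommender as implicit feedback. Below is a minimal PyTorch sketch of that pipeline; the layer sizes, the seven-class emotion set, the valence weights, and the confidence formula c_ui = 1 + alpha * r_ui (in the style of Hu, Koren and Volinsky's implicit-feedback collaborative filtering) are illustrative assumptions, not the authors' exact model.

# Sketch of the pipeline described in the abstract; all hyperparameters,
# the emotion taxonomy, and the valence weights are assumed for illustration.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

class Emotion3DCNN(nn.Module):
    """3D CNN that classifies the user's emotion from a short video clip."""
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),  # (B, 3, T, H, W) -> (B, 32, T, H, W)
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                     # downsample space, keep time
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                             # downsample time and space
            nn.AdaptiveAvgPool3d(1),                     # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        x = self.features(clip).flatten(1)               # (B, 64)
        return self.classifier(x)                        # logits over emotion classes

# Hypothetical valence weights mapping each emotion to a value in [-1, 1].
VALENCE = torch.tensor([-0.8, -0.7, -0.6, 1.0, 0.1, -0.5, 0.6])

def emotional_score(model: nn.Module, clip: torch.Tensor) -> float:
    """Expected valence of the recognized emotion: the implicit-feedback signal."""
    with torch.no_grad():
        probs = torch.softmax(model(clip), dim=1)        # (B, num_classes)
    return float((probs * VALENCE).sum(dim=1).mean())

# Example: score one 16-frame RGB clip and convert it into an ALS-style
# confidence value c_ui = 1 + alpha * r_ui for the recommender.
model = Emotion3DCNN().eval()
clip = torch.rand(1, 3, 16, 112, 112)                    # stand-in for a real clip
r_ui = max(emotional_score(model, clip), 0.0)            # keep feedback non-negative
alpha = 40.0                                             # confidence scaling (assumed)
c_ui = 1.0 + alpha * r_ui
print(f"emotional score {r_ui:.3f} -> recommender confidence {c_ui:.2f}")

Mapping class probabilities to an expected valence keeps the feedback signal continuous, so mildly positive reactions still nudge the ranking rather than being discarded by a hard per-class threshold.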