He has built a 3D eye-tracking dataset collected while users navigated a VR museum. Based on this dataset, he proposed several methods for predicting users' subsequent visual attention. He is now working on improving these prediction methods and on understanding the mechanisms of visual behaviour in VR.
The dataset can be found here: https://github.com/YunzhanZHOU/EDVAM
Virtual Reality, Human-Computer Interaction, Visual Attention, Eye Tracking, User Interface.
- (2019). Towards personalized virtual reality touring through cross-object user interfaces. In Personalized Human-Computer Interaction. 201.
- Li, Xiangdong, Chen, Wenqian, Zhou, Yunzhan, Athalye, Surabhi, Chin, Wai Kit Daniel, Goh Wei Kit, Russell, Setiawan, Vincent & Hansen, Preben (2019). Mobile Phone-Based Device for Personalised Tutorials of 3D Printer Assembly. In Human-Computer Interaction. Recognition and Interaction Technologies. 11567: 37.
- Li, Zhaoxing, Shi, Lei, Cristea, Alexandra I. & Zhou, Yunzhan (2021). A Survey of Collaborative Reinforcement Learning: Interactive Methods and Design Patterns. In ACM Designing Interactive Systems (DIS). Virtual: Association for Computing Machinery, 1579-1590.
- Zhou, Yunzhan, Feng, Tian, Shuai, Shihui, Li, Xiangdong, Sun, Lingyun & Duh, Henry B.L. (2019). An Eye-Tracking Dataset for Visual Attention Modelling in a Virtual Museum Context. In The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry. 1.
- Zhou, Yunzhan, Feng, Tian, Shuai, Shihui, Li, Xiangdong, Sun, Lingyun & Duh, Henry Been-Lirn (2022). EDVAM: a 3D eye-tracking dataset for visual attention modeling in a virtual museum. Frontiers of Information Technology & Electronic Engineering 23(1): 101.
- Sun, Lingyun, Zhou, Yunzhan, Hansen, Preben, Geng, Weidong & Li, Xiangdong (2018). Cross-objects user interfaces for video interaction in virtual reality museum context. Multimedia Tools and Applications 77(21): 29013.