
PhD Project 2 – Yunzhan Zhou

Recognition of User Behaviour in a VR museum 

The project builds on the EDVAM dataset, aiming to understand and recognise user behaviour in a VR museum in order to improve the interactive user experience.

Research Outline and Goals

Virtual reality (VR) has drawn increasing attention in recent years, providing users with an immersive experience through head-mounted displays (HMDs). It has been applied in domains such as education, gaming, and entertainment, where the user experience is enhanced by VR hardware advances such as high resolution and a wide field of view, and by interaction methods such as hand tracking and motion tracking. However, the mechanism of 3D visual attention in VR remains under-studied: previous work on visual attention has been largely limited to two-dimensional (2D) media such as images and video. Our research aims to understand the 3D visual attention mechanism by learning how different factors affect users' visual attention in virtual reality environments, based on an eye-tracking dataset.

Pilot Study and Feasibility

We have built a 3D eye-tracking dataset collected while 63 participants navigated a VR museum, recording their real-time eye movements. Based on the collected data, we have proposed several deep learning models (CNN, LSTM, Seq2Seq) for predicting users' subsequent visual attention from historical eye-tracking records. For example, we devised an LSTM model to predict the spatial areas with which the user is about to interact; it achieved 79.87% accuracy, demonstrating the feasibility and effectiveness of visual attention modelling in VR environments.
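
To illustrate the kind of sequence modelling involved, the sketch below implements a single LSTM cell step in NumPy and runs it over a short sequence of gaze samples, then classifies the final hidden state into a candidate attention area with a softmax. This is a minimal illustration of the gating mechanism only, with untrained random weights and made-up dimensions; the actual model architecture, features, and training procedure used in the project are not shown here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    """One LSTM cell step.

    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias,
    stacked in gate order [input, forget, cell, output].
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b            # pre-activations for all four gates
    i = sigmoid(z[0:H])                   # input gate
    f = sigmoid(z[H:2*H])                 # forget gate
    g = np.tanh(z[2*H:3*H])               # candidate cell update
    o = sigmoid(z[3*H:4*H])               # output gate
    c = f * c_prev + i * g                # new cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H, n_areas, T = 6, 8, 5, 10            # feature dim, hidden size, areas, sequence length (illustrative)
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(0, 0.1, (n_areas, H))  # hidden state -> area logits

h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    x_t = rng.normal(size=D)              # stand-in for one eye-tracking sample
    h, c = lstm_cell_step(x_t, h, c, W, U, b)

logits = W_out @ h
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over candidate areas
predicted_area = int(np.argmax(probs))
```

In practice such a model would be trained end-to-end (e.g. in a deep learning framework) on labelled gaze sequences, but the per-step computation is as above.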

Methodology

Based on the dataset, we want to learn how factors such as the user's gender, eye movement history, body movement history, and surrounding 3D objects affect the user's visual attention when navigating a VR environment. We consider two possible methods to achieve this goal. The first is data visualisation. The second is an ablation study on the visual attention prediction models we have built, investigating the effect of feature selection on prediction accuracy.
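
The ablation idea can be sketched as follows: train a model on the full feature set, then retrain with one feature group removed and compare accuracy. The toy example below uses synthetic data and plain logistic regression purely to demonstrate the procedure; the feature groups (a "gaze" group and a second group), the data, and the classifier are all illustrative stand-ins, not the project's models or features.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 400 samples with two feature groups.
# Labels depend only on the "gaze" group, so ablating it should hurt accuracy.
n, d_gaze, d_other = 400, 4, 4
X_gaze = rng.normal(size=(n, d_gaze))
X_other = rng.normal(size=(n, d_other))
w_true = rng.normal(size=d_gaze)
y = (X_gaze @ w_true > 0).astype(float)
X = np.hstack([X_gaze, X_other])

def train_logreg(X, y, lr=0.5, steps=300):
    """Gradient-descent logistic regression (illustrative, not the project's model)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return float((((X @ w) > 0).astype(float) == y).mean())

# Full feature set.
w_full = train_logreg(X, y)
acc_full = accuracy(X, y, w_full)

# Ablation: zero out the gaze feature group and retrain.
X_ablate = X.copy()
X_ablate[:, :d_gaze] = 0.0
w_abl = train_logreg(X_ablate, y)
acc_abl = accuracy(X_ablate, y, w_abl)

# The accuracy drop (acc_full - acc_abl) quantifies that group's contribution.
```

The same loop, applied per feature group to the CNN/LSTM/Seq2Seq models on the EDVAM data, would yield one accuracy-drop score per factor (gender, eye movement history, body movement history, surrounding objects).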

