A key bottleneck in developing and deploying augmented reality (AR) systems is the inability to assess a user’s experience in real time. The problem sounds deceptively simple (why not just measure latency?), but in practice it is far more complex. The perceived quality of AR-generated virtual content depends on accurate light and shadow estimation, high-fidelity spatial mapping, and correct handling of static and dynamic occlusions, all of which interact in non-linear ways. Traditional image quality metrics detect signal degradation but not the contextual errors that arise when virtual objects are placed incorrectly in the physical world. Compounding the challenge, we still lack a clear understanding of how AR overlays influence human attention: subtle graphics can fade into the background, while overly vivid cues can monopolize focus and erode situational awareness.
The Duke I3T Lab has been approaching this challenge from multiple angles. In one line of work, we are developing methods to evaluate user attention allocation from AR-headset-captured gaze metrics, creating new machine-learning models, algorithms, and data-generation approaches for this purpose; a minimal illustrative sketch appears below. In another line of work, we are developing metrics for real-time assessment of AR platforms’ spatial mapping. More recently, we have begun investigating whether modern vision-language models (VLMs) can serve as a proxy for human perception of AR scenes. Together, these efforts will enable automated, human-centered AR quality assessment, inform better design and QA practices, drive adaptive resource-allocation algorithms, and ultimately deliver safer, more secure AR systems.
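As a loose illustration of the first line of work, the sketch below computes a simple dwell-time-based attention metric from gaze samples that have already been ray-cast against the scene and labeled with the object they landed on. This is a minimal sketch for intuition only, assuming hypothetical data structures and labels; it is not the modeling approach from the publications listed below.

```python
# Minimal sketch (hypothetical, not the lab's published method): estimating how much of a
# user's gaze time falls on AR-generated content versus the physical environment.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class GazeSample:
    t: float          # timestamp in seconds
    target: str       # label of the object the gaze ray hit, e.g. "virtual_arrow"
    is_virtual: bool  # True if the target is AR-generated content

def dwell_fractions(samples: list[GazeSample]) -> dict[str, float]:
    """Fraction of total gaze time spent on each target (piecewise-constant between samples)."""
    dwell = defaultdict(float)
    for prev, cur in zip(samples, samples[1:]):
        dwell[prev.target] += cur.t - prev.t
    total = sum(dwell.values()) or 1.0
    return {target: seconds / total for target, seconds in dwell.items()}

def virtual_share(samples: list[GazeSample]) -> float:
    """Share of gaze time on AR-generated content -- a crude proxy for overlay salience."""
    fractions = dwell_fractions(samples)
    virtual_targets = {s.target for s in samples if s.is_virtual}
    return sum(f for target, f in fractions.items() if target in virtual_targets)

if __name__ == "__main__":
    # Hypothetical 1.5-second gaze trace from an AR navigation scene.
    samples = [
        GazeSample(0.00, "virtual_arrow", True),
        GazeSample(0.25, "virtual_arrow", True),
        GazeSample(0.50, "doorway", False),
        GazeSample(0.90, "pedestrian", False),
        GazeSample(1.20, "virtual_arrow", True),
        GazeSample(1.50, "virtual_arrow", True),
    ]
    print(dwell_fractions(samples))
    print(f"time on virtual content: {virtual_share(samples):.0%}")
```

In practice, metrics like this would be computed over detected fixations rather than raw samples and combined with task context before any claim about situational awareness could be made; that is exactly where the learned models come in.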
Recent and selected publications
[ISMAR25] Z. Qu, T. Hu, C. Fronk, and M. Gorlatova. Will You Be Aware? Eye Tracking-Based Modeling of Situational Awareness in Augmented Reality. In Proc. IEEE ISMAR, Oct. 2025.
[GenAI-XR25] L. Duan, Y. Xiu, and M. Gorlatova. Advancing the Understanding and Evaluation of AR-Generated Scenes: When Vision-Language Models Shine and Stumble. In Proc. IEEE GenAI-XR Workshop, co-located with IEEE VR 2025, Mar. 2025.
[ISMAR24] Z. Qu, R. Byrne, and M. Gorlatova. “Looking” into Attention Patterns in Extended Reality: An Eye Tracking-Based Study. In Proc. IEEE ISMAR, Oct. 2024.
[IPSN22] G. Lan, T. Scargill, and M. Gorlatova. EyeSyn: Psychology-inspired Eye Movement Synthesis for Gaze-based Activity Recognition. In Proc. ACM/IEEE IPSN, May 2022. Selected media coverage: vice.com, hackster.io. Highlighted in the university-wide Duke Daily and in the NSF-wide Discoveries newsletters.
Many thanks to our sponsors!
This research is generously supported by the DARPA Young Faculty Award, awards from the National Science Foundation, and the Department of Defense-sponsored Center of Excellence in Advanced Computing and Software.