Spatial Computing for Intelligent Augmented Reality

Current augmented-reality (AR) deployments face multiple limitations that must be overcome for the technology to become truly pervasive. The Duke I3T Lab develops AR platforms that combine edge computing, edge-adapted machine-learning techniques, and the Internet of Things (IoT) to create intelligent, context-aware, and resource-efficient experiences. Our research improves core system components, such as visual-inertial SLAM (VI-SLAM), depth mapping, and a broad suite of semantic scene-understanding functions, while reducing their energy and latency costs. We showcase these edge- and IoT-enabled AR system capabilities in cognitive guidance, training support, and human-robot collaboration, and demonstrate how edge-based monitoring can harden the security of immersive AR experiences.

[Figure] A system diagram of an edge-supported AR platform: an architecture that employs edge and cloud computing to support advanced semantic context awareness in AR applications.
Selected recent publications

[Xiu25Detecting] Y. Xiu and M. Gorlatova. Detecting Visual Information Manipulation Attacks in Augmented Reality: A Multimodal Semantic Reasoning Approach. IEEE Transactions on Visualization and Computer Graphics, 2025. Presented at IEEE ISMAR 2025 (8% acceptance rate). IEEE ISMAR 2025 Best Paper Award.

[Xiu25ViDDAR] Y. Xiu, T. Scargill, and M. Gorlatova. ViDDAR: Vision Language Model-based Detrimental Content Detection for Augmented Reality. IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 5, May 2025. Presented at IEEE VR 2025. 

[IPSN24] L. Duan, Y. Chen, Z. Qu, M. McGrath, E. Ehmke, and M. Gorlatova. BiGuide: A Bi-Level Data Acquisition Guidance for Object Detection on Mobile Devices. In Proc. ACM/IEEE IPSN, Hong Kong, China, May 2024 (21.5% acceptance rate). ACM/IEEE IPSN 2024 Best Research Artifact Runner-up Award.

[INFOCOM23] Y. Chen, H. Inaltekin, and M. Gorlatova. AdaptSLAM: Edge-assisted Adaptive SLAM with Resource Constraints via Uncertainty Minimization. In Proc. IEEE INFOCOM, Hoboken, NJ, May 2023 (19.2% acceptance rate).

[IMWUT22] Y. Zhang, T. Scargill, A. Vaishnav, G. Premsankar, M. Di Francesco, and M. Gorlatova. InDepth: Real-time Depth Inpainting for Mobile Augmented Reality. In Proc. ACM IMWUT, Cambridge, UK, Sept. 2022.

Many thanks to our sponsors!

This research is supported by a National Science Foundation (NSF) CAREER Award, a DARPA Young Faculty Award, the NSF AI Institute ATHENA, and additional awards from the NSF, IBM, Cisco, and Meta.