Vision and Autonomous Systems Seminar
Instant Visual 3D Worlds through Split-Lohmann Displays

Date: September 16, 2024, 3:30pm – 4:30pm
Location: In Person, Newell-Simon 3305
Speakers: Yingsi Qin, Edward Lu, and Mosam Dabhi, PhD Candidates, Carnegie Mellon University

Abstract:
Split-Lohmann displays provide a novel approach to creating instant visual 3D worlds that support realistic eye accommodation. Unlike commercially available VR headsets, which show content at a fixed depth, the proposed display can optically place each pixel region at a different depth, instantly creating eye-tracking-free 3D worlds without time-multiplexing. This enables real-time streaming of 3D content over a large depth range at high spatial resolution, offering an exciting step toward a more immersive real-time 3D experience. We demonstrate the technology's capabilities through a lab prototype, showcasing high-quality visuals across a variety of static, dynamic, and interactive 3D scenes.

Speaker Bio:
Yingsi Qin is a PhD candidate in Electrical and Computer Engineering at Carnegie Mellon University, advised by Aswin C. Sankaranarayanan and Matthew P. O'Toole. Her research focuses on designing and building next-generation computational 3D displays for Virtual, Augmented, and Mixed Reality. This interdisciplinary work fuses computer vision, optics, signal processing, and machine learning. Yingsi received the Best Paper Award at SIGGRAPH 2023 and the Best Demo Award at ICCP 2023. She holds a B.S. in Computer Science from Columbia University and a B.A. in Physics from Colgate University. She was a research intern at Meta Reality Labs on the Display Systems Research team (2024) and at Snap Research on the Computational Imaging team (2020), and a software engineering intern at Google Search (2019).

The VASC Seminar is sponsored in part by Meta Reality Labs Pittsburgh.

Event Website: https://www.ri.cmu.edu/event/instant-visual-3d-worlds-through-split-lohmann-displays/