Holographic near-eye displays promise to deliver full 3D experiences in a novel and potentially very thin, compact optical layout, and they have made remarkable progress over the last few years. Unlike conventional displays, holographic displays are based on phase-only spatial light modulators (SLMs), which show 3D images by shaping a wave field such that the target image is formed through interference. However, holographic displays have remained confined to research settings because of their computational complexity, low image quality, and bulky optics. In this presentation, I will introduce recent research efforts from NVIDIA and Stanford researchers that address these problems with learning-based approaches. Examples include optimizing phase patterns in real time with a learned wave propagation model (Neural Holography), reducing speckle with partially coherent light sources (Speckle-free Holography), achieving higher contrast with an additional SLM and camera-in-the-loop optimization (Michelson Holography), and optimizing over high diffraction orders without optical filtering for compact holographic displays (Unfiltered Holography).
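The core idea behind phase-pattern optimization in approaches like Neural Holography can be illustrated as gradient descent through a differentiable wave propagation model. The sketch below is a minimal, hedged example using PyTorch with an idealized single-FFT far-field propagation model; the actual systems use calibrated or learned propagation models and camera-in-the-loop training, and the function names here (`propagate`, `optimize_phase`) are hypothetical, not from the cited works.

```python
import torch

def propagate(phase):
    # Idealized far-field model: a phase-only SLM modulates a
    # unit-amplitude plane wave, and the image-plane field is the
    # 2D Fourier transform of the SLM field. (Real systems replace
    # this with a calibrated or learned propagation model.)
    field = torch.exp(1j * phase)
    return torch.fft.fftshift(torch.fft.fft2(field, norm="ortho"))

def optimize_phase(target_amp, steps=200, lr=0.1):
    # Phase retrieval by gradient descent: backpropagate an
    # image-plane amplitude loss through the differentiable
    # propagation model down to the SLM phase pattern.
    torch.manual_seed(0)
    phase = (2 * torch.pi * torch.rand_like(target_amp)).requires_grad_()
    opt = torch.optim.Adam([phase], lr=lr)
    losses = []
    for _ in range(steps):
        opt.zero_grad()
        recon = propagate(phase).abs()
        # Scale-invariant loss: fit a global scale s before comparing
        # the reconstructed amplitude against the target amplitude.
        s = (recon * target_amp).sum() / (recon * recon).sum()
        loss = torch.mean((s * recon - target_amp) ** 2)
        loss.backward()
        opt.step()
        losses.append(loss.item())
    return phase.detach(), losses

# Toy target: a bright square on a dark background.
target = torch.zeros(64, 64)
target[24:40, 24:40] = 1.0
phase, losses = optimize_phase(target)
```

Camera-in-the-loop variants of this loop effectively replace the simulated `propagate` with measurements from the physical display, so the optimization also compensates for real-world aberrations that an idealized model misses.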