In virtual reality (VR) and augmented reality (AR) displays, the vergence-accommodation conflict (VAC) is a significant issue. True-3D display technologies have therefore been proposed to solve the VAC problem. Integral imaging (II) display, one of the most important true-3D display technologies, has received increasing research attention recently. Notably, an achromatic metalens array has realized a broadband metalens-array-based II (meta-II) display. However, previous micro-scale metalens arrays were incompatible with commercial micro-displays, and elemental image array (EIA) rendering has been slow. These device and algorithm problems have prevented meta-II from being used in practical video-rate near-eye displays (NEDs). This work demonstrates an II-based NED combining a commercial micro-display and a metalens array. We address both the hardware and software bottlenecks of a video-rate metalens-array II-based NED: the metalens array is fabricated by large-area nanoimprint technology, and a novel real-time rendering algorithm is proposed to generate the EIA. We also build a see-through prototype based on our meta-II NED, demonstrating the depth-of-field effect in AR and the 3D parallax effect in the real mode. This work verifies the feasibility of nanoimprint technology for the mass production of metalens samples and explores the potential of video-rate meta-II displays, which can be applied in the fields of VR/AR and 3D display.
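The abstract does not detail the real-time EIA rendering algorithm. As a generic illustration of what EIA generation involves, the sketch below projects a single 3D point through a lenslet (or metalens) array onto the display plane using a simple central-ray pinhole model; all geometry, parameter names, and units are illustrative assumptions, not the authors' method.

```python
import numpy as np

def render_eia_point(point, lens_pitch, gap, n_lenses, px_per_lens):
    """Project a 3D point (x, y, z) through an n x n lens array onto the
    elemental image array (EIA), lighting one pixel per elemental image.
    Pinhole-lens central-ray model; geometry is an illustrative assumption.
    """
    x, y, z = point
    eia = np.zeros((n_lenses * px_per_lens, n_lenses * px_per_lens))
    px_pitch = lens_pitch / px_per_lens  # display pixel size
    for i in range(n_lenses):
        for j in range(n_lenses):
            # lens center, with the array centered on the optical axis
            cx = (i - (n_lenses - 1) / 2) * lens_pitch
            cy = (j - (n_lenses - 1) / 2) * lens_pitch
            # central ray from the point through the lens center,
            # extended to the display plane at distance `gap`
            u = cx + (cx - x) * gap / z
            v = cy + (cy - y) * gap / z
            # pixel indices within this lens's elemental image
            pu = int(round((u - cx) / px_pitch + px_per_lens / 2))
            pv = int(round((v - cy) / px_pitch + px_per_lens / 2))
            if 0 <= pu < px_per_lens and 0 <= pv < px_per_lens:
                eia[i * px_per_lens + pu, j * px_per_lens + pv] = 1.0
    return eia
```

A full renderer repeats this mapping for every scene point (or, conversely, traces one ray per EIA pixel), which is why naive EIA rendering is slow and real-time algorithms are needed.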
The light field display (LFD) can provide realistic 3D content with a feasible implementation but suffers from low spatial resolution, requiring higher-resolution picture generation units. The mini-LED-backlit LCD is a strong competitor owing to its high contrast, small form factor, and mature fabrication. Removing the color filter array triples the resolution and light efficiency, yielding the field sequential color (FSC) LCD. This study reports an LFD prototype based on a 240-Hz FSC-LCD. The unique issue of FSC-LCDs, color breakup, is addressed by using three fields per frame rather than traditional four-field driving. The low-color-breakup driving is achieved by using multi-objective optimization (MOO) to create a training set for a lightweight neural network. The MOO ensures that color breakup and image distortion are simultaneously imperceptible, and the lightweight neural network enables real-time driving.
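The abstract does not specify the MOO formulation. As a generic illustration, multi-objective optimization over driving parameters keeps the solutions that are Pareto-optimal with respect to two costs, here a color-breakup cost and a distortion cost; the minimal sketch below implements only the Pareto filtering step on toy cost pairs, and all names and values are hypothetical.

```python
def pareto_front(costs):
    """Return indices of non-dominated points, minimizing both objectives.
    Point d dominates point c if d is <= c in both costs and < in at least one.
    `costs` is a list of (color_breakup_cost, distortion_cost) tuples (toy values).
    """
    front = []
    for i, c in enumerate(costs):
        dominated = any(
            d[0] <= c[0] and d[1] <= c[1] and (d[0] < c[0] or d[1] < c[1])
            for j, d in enumerate(costs)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

In a pipeline like the one described, candidates on the front (those trading off breakup against distortion) could serve as training labels for a lightweight network that then produces the driving fields in real time.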