
Full-color retinal-projection near-eye display using a multiplexing-encoding holographic method

Open Access

Abstract

We propose a novel method to construct an optical see-through retinal-projection near-eye display using the Maxwellian view and a holographic method. To provide a dynamic full-color virtual image, a single phase-only spatial light modulator (SLM) was employed in conjunction with a multiplexing-encoding holographic method. Holographic virtual images can be directly projected onto the retina using an optical see-through eyepiece. The virtual image is sufficiently clear when the crystal lens can focus at different depths; the presented method can resolve convergence and accommodation conflict during the use of near-eye displays. To verify the proposed method, a proof-of-concept prototype was developed to provide vivid virtual images alongside real-world ones.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Near-eye displays are among the most promising mobile displays for artificial intelligence computing, and have thus received significant attention from academia and industry [1–3]. Most commercial near-eye displays can only generate one virtual display depth, so virtual images are blurred when the crystalline lens focuses on an area away from this virtual plane. Many studies have reported visual confusion and fatigue in association with these displays [4,5], and these problems are exacerbated in augmented reality applications.

To solve these problems, virtual images can be generated using methods capable of more accurately reconstructing real-world objects. Multiple focal planes and varifocal planes can be used in near-eye displays to generate virtual three-dimensional (3D) objects at different depths. Multiple virtual planes can be generated using spatial-multiplexed [6,7] or time-multiplexed methods [8–10], and the virtual depth can be changed by adjusting dynamic optical or mechanical properties [11,12]. Multiple focal planes and varifocal planes can alleviate visual confusion and fatigue, but do not eliminate them. Light-field displays direct sufficient light into the user’s pupils to support correct focus, imitating the way light reaches the eye from the real world. A pinhole array [13–15] or micro-lens array [16–18] is often employed as an SLM to sample the light from pixels on the display panel and create the light field of a 3D scene within the exit pupil of the eyepiece. To make full use of the pixels, many optimization methods [19–22] have been applied to improve display performance according to the characteristics of the human visual system (HVS). In these methods, a limited number of pixels is used to provide depth, angular or wavefront information, which can lead to low-resolution virtual images. Methods that create holographic images [23,24] are also attractive, as such images contain both amplitude and phase information. Holographic methods and near-eye displays can be combined based on wave optics. The light wavefront is often controlled by SLMs [25–27], which can control all the light information of the virtual images. Nevertheless, holographic near-eye displays still suffer from low resolution. Moreover, due to étendue conservation [28], the product of the field of view and exit pupil in a near-eye display cannot exceed the product of the diffraction angle and effective aperture of the SLM. The étendue of SLMs commonly used in holographic displays (the product of the diffraction angle and effective aperture) is often very small, so a large field of view and a large eye box cannot be achieved simultaneously. This problem also limits the development of holographic near-eye displays.

A near-eye display with an accommodation-free virtual image could also resolve the convergence and accommodation conflict. Such a virtual image has a large depth of field, and it remains clear regardless of whether the eye fixates on a near or distant object. Most accommodation-free displays use the Maxwellian view, which is based on an experiment conducted by James Clerk Maxwell in 1868 [29]. Thin parallel beams, provided by a specially designed illumination system, are emitted from an SLM, such as a liquid crystal display (LCD), digital micromirror device (DMD) or liquid crystal on silicon (LCoS), converge at the center of the pupil via a lens system, and are directly projected onto the retina [30,31]. When the method was first introduced, it suffered from a small exit pupil, so it could not be tolerated by all users (depending on the interpupillary distance) and allowed little room for the eyes to rotate within their sockets before vignetting occurred. Many works have since extended the exit pupil by converging beams to multiple centers, and accommodation-free near-eye displays have received renewed attention [32–34]. As mentioned above, the holographic display method allows full control of light information but is limited by étendue conservation. Combining the holographic method with the Maxwellian view allows an accommodation-free virtual image to be directly projected onto the human retina by a near-eye display, thus resolving the problems of visual confusion and fatigue. Flexible control of virtual images in a holographic display with the Maxwellian view has also been demonstrated via wavefront modulation [35]. Different types of reconstruction, such as spherical-wave and plane-wave reconstruction, have been analyzed in holographic retinal near-eye displays [36]. Moreover, other methods of solving the convergence and accommodation conflict can also be introduced to provide more information to the human eyes [37]. The aim of holographic retinal displays is to generate a dynamic, high-resolution, full-color, accommodation-free virtual scene alongside a real-world one. However, to the best of our knowledge, only single-color static images have been demonstrated using this method. Spatial or time multiplexing would increase the complexity of the system by requiring additional display devices or a high-refresh-rate SLM.

In this study, a full-color retinal-projection near-eye display using a multiplexing-encoding dynamic holographic method was developed. A single phase-only SLM and three lasers (red, green and blue) are used to obtain dynamic full-color images. An optical see-through eyepiece projects the virtual image onto the human retina while allowing real-world light to be observed simultaneously. A proof-of-concept experiment is reported to demonstrate that the proposed method can provide a vivid virtual image.

2. Principle of a full-color retinal-projection near-eye display using a multiplexing-encoding dynamic holographic method

2.1 Overview of the system

Figure 1 shows a diagram of our proposed full-color retinal-projection near-eye display. Red (R), green (G) and blue (B) lasers are utilized to illuminate one phase-only SLM. Multiplexing encoding and complex amplitude modulation methods are used to obtain computer-generated holograms (CGHs). Holographic patterns can be generated and dynamically loaded onto the SLM to reconstruct a virtual image in the air. The phase-only SLM used here is a reflective opto-electronic modulator. If the optical axis were set perpendicular to the modulation plane of the SLM, a beam splitter would have to be inserted into the optical path because of this reflective geometry, which would reduce the power efficiency and make the system more complex. An SLM tilted by a small angle does not introduce any obvious noise, as shown in previous work [38]; because of these advantages, we adopted the tilted structure shown in Fig. 1. Spatial filtering and other optical techniques can be used to remove unwanted light and control the position of the image. As mentioned above, a holographic image with a small field of view cannot be easily viewed by the human eye because of the limitations associated with étendue conservation. An eyepiece, which can be immersive or optically transparent, is therefore used to project the holographic image onto the human retina. As shown in Fig. 1, the eyepiece consists of a see-through optical combiner and an optical lens that converges the light arising from the holographic image. Via the see-through eyepiece, a full-color accommodation-free holographic image can be observed alongside the real-world scene.


Fig. 1. Schematic diagram of a full-color retinal-projection near-eye display using multiplexing-encoding dynamic holographic method.


2.2 Multiplexing-encoding dynamic holographic method

To realize a high-quality color holographic display, we employed multiplexing encoding and complex amplitude modulation methods to obtain CGHs [39,40]. Complex amplitude information cannot be directly loaded onto a commercial SLM, as it can only manipulate one degree of freedom of light. To overcome this issue, the kinoform technique is commonly applied, where the target scene is treated as a diffuser and its amplitude on the hologram plane is assumed to be constant; consequently, some information is lost. Complex amplitude modulation is an effective method for high-quality holographic reconstruction, and is used in this work. The complex amplitude distribution of the target image propagated to the hologram plane is given by E(x,y) = E0(x,y)exp[iφ(x,y)], where E0(x,y) is the amplitude, φ(x,y) is the phase, and i is the imaginary unit. This complex amplitude can be recorded as a pure-phase hologram based on the bleaching method, as follows:

$${E_h}({x,y} )= \textrm{exp} \{{i\beta {E_0}({x,y} )\cos [{\varphi ({x,y} )- {\varphi_R}({x,y} )} ]} \},$$
where φR(x,y) is the phase distribution of a reference beam and β is a coefficient. We select a tilted planar wave with unit amplitude as the reference beam, where φR(x,y) = ky sin ϕ and ϕ is the tilt incidence angle. The hologram can then be expanded as
$$E_h\left( {x,y} \right) = \sum\limits_{m = -\infty }^\infty {J_m\left[ {\beta E_0\left( {x,y} \right)} \right]i^m\textrm{exp} \left\{ {-im\left[ {\varphi \left( {x,y} \right)-\varphi _R\left( {x,y} \right)} \right]} \right\}} ,$$
where Jm(·) is the mth-order Bessel function of the first kind. Because J−1(u) varies linearly with −u when u is small, the −1 order reproduces the original complex amplitude up to a constant factor,
$${E_{ - 1}}({x,y} )\approx i\beta {E_0}({x,y} )\textrm{exp} [{i\varphi ({x,y} )} ]\textrm{exp} [{ - iky\sin \phi } ].$$

The phase factor kysin ϕ indicates that the target image is reconstructed with a tilt angle of ϕ.
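As a concrete illustration, the single-channel encoding of Eq. (1) can be written in a few lines of NumPy. This is only a minimal sketch, assuming that the complex field E0(x,y)exp[iφ(x,y)] on the hologram plane has already been computed (for example by numerically propagating the target image); the function name, the value of β and the grid conventions are our own and are not taken from the paper.

import numpy as np

def encode_single_channel(E0, phi, wavelength, pixel_pitch, tilt_deg, beta=0.5):
    # E0, phi: (ny, nx) arrays holding the amplitude and phase of the target field on the hologram plane
    ny, nx = E0.shape
    k = 2.0 * np.pi / wavelength
    y = (np.arange(ny) - ny / 2.0) * pixel_pitch            # vertical coordinate on the SLM
    phi_R = k * np.sin(np.radians(tilt_deg)) * y[:, None]   # tilted plane-wave reference, phi_R = k*y*sin(tilt)
    return np.exp(1j * beta * E0 * np.cos(phi - phi_R))     # Eq. (1): pure-phase (bleached) hologram

The −1 diffraction order of this hologram then carries approximately the original complex amplitude with the tilt carrier, as in Eq. (3).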

A color image can be divided into R, G, and B components. Since these components are encoded in a single CGH with corresponding wavelengths, the crosstalk associated with illumination via three wavelengths should be considered. As shown in Fig. 2(a), the R, G, and B components of the target images are encoded based on off-axis holography, where the reference beams are planar waves with different incident angles along the horizontal direction. Hence, a CGH can be written as

$${H_c}({x,y} )= \textrm{exp} \left\{ {i\beta \sum\limits_{c = 1}^3 {{E_{0c}}({x,y} )\cos [{{\varphi_c}({x,y} )+ {k_c}x\sin {\theta_c} - {k_c}y\sin \phi } ]} } \right\},$$
where E0c(x,y) and φc(x,y) are the amplitude and phase distributions for each color component, respectively, θc is the corresponding incident angle, kc is the wave number for each wavelength, and indices 1–3 refer to R, G, and B, respectively.
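The full-color multiplexing of Eq. (4) extends the single-channel encoding by summing the three color components, each with its own wavelength and horizontal carrier angle, before taking the exponential. The sketch below follows the same assumptions as the previous one; the wavelengths and horizontal reference angles in the usage comment are those reported in Section 3, whereas the vertical tilt value and β are illustrative.

import numpy as np

def encode_color_cgh(E0_rgb, phi_rgb, wavelengths, thetas_deg, tilt_deg, pixel_pitch, beta=0.5):
    # E0_rgb, phi_rgb: lists of three (ny, nx) arrays for the R, G and B components
    ny, nx = E0_rgb[0].shape
    x = (np.arange(nx) - nx / 2.0) * pixel_pitch
    y = (np.arange(ny) - ny / 2.0) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    arg = np.zeros((ny, nx))
    for E0, phi, lam, theta in zip(E0_rgb, phi_rgb, wavelengths, thetas_deg):
        k = 2.0 * np.pi / lam
        # each channel gets its own horizontal carrier theta_c plus the common vertical tilt
        arg += E0 * np.cos(phi + k * X * np.sin(np.radians(theta)) - k * Y * np.sin(np.radians(tilt_deg)))
    return np.exp(1j * beta * arg)   # Eq. (4): one phase-only CGH carrying all three channels

# Example call with the wavelengths and reference angles used in the experiment (tilt_deg is illustrative):
# H = encode_color_cgh(E0_rgb, phi_rgb, [632e-9, 532e-9, 430e-9], [2.03, 3.0, 3.97], tilt_deg=1.0, pixel_pitch=8e-6)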


Fig. 2. Schematic of the CGH encoding and its reconstruction. (a) Encoding of the CGH; (b) reconstruction of the target image.


During reconstruction, the illumination beam for each component is incident from the angle shown in Fig. 2(b), and can be described as

$${E_{rcs}}({x,y} )= {H_c}({x,y} )\bullet R({x,y} )$$
$$= \textrm{exp} \left\{ {i\beta \sum\limits_{c = 1}^3 {{E_{0c}}({x,y} )\cos [{{\varphi_c}({x,y} )+ {k_c}x\sin {\theta_c} - {k_c}y\sin \phi } ]} } \right\} \bullet \sum\limits_{c = 1}^3 {\textrm{exp} ({ - i{k_c}x\sin {\theta_c}} )}$$
$$= \sum\limits_{c = 1}^3 {{J_0}[{\beta {E_{0c}}({x,y} )} ]\textrm{exp} ({ - i{k_c}x\sin {\theta_c}} )}$$
$$+ \sum\limits_{c = 1}^3 {{J_1}[{\beta {E_{0c}}({x,y} )} ]\textrm{exp} \{{i[{{\varphi_c}({x,y} )- {k_c}y\sin \phi } ]} \}}$$
$$+ \sum\limits_{c \ne cr} {\sum\limits_{cr = 1}^3 {{J_1}[{\beta {E_{0c}}({x,y} )} ]\textrm{exp} \{{i[{{\varphi_c}({x,y} )+ {k_c}x\sin {\theta_c} - {k_c}y\sin \phi - {k_{cr}}x\sin {\theta_{cr}}} ]} \}} }$$
$$+ {E_{others}}({x,y} ).$$

Equation (5c) represents the zero orders for R, G, and B, Eq. (5d) is the target color image, Eq. (5e) describes unwanted images in the first order, and Eothers(x,y) represents other orders in the reconstruction and background noise in the display. Theoretically, these terms are spatially separated from each other; however, because the maximum diffraction angle of a commercial SLM is only a few degrees, the target color image is not clearly distinguishable from the noise. We therefore applied “4f filtering” to remove the unwanted terms and avoid the zero order of the SLM.
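Numerically, this 4f filtering step can be pictured as a band-pass mask in the Fourier plane of the hologram: the desired first-order terms of Eq. (5d) carry only the vertical tilt frequency, whereas the zero orders of Eq. (5c) and the crosstalk terms of Eq. (5e) are offset horizontally by sinθc/λ and can be blocked. The function below is a minimal sketch with an idealized circular aperture; the aperture center and radius are assumptions rather than values from the paper.

import numpy as np

def four_f_bandpass(field, pixel_pitch, center_fx, center_fy, radius):
    # field: complex amplitude just after the SLM (hologram multiplied by the tilted illumination beams)
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    spectrum = np.fft.fft2(field)                                        # first lens: Fourier plane of the 4f system
    mask = (FX - center_fx) ** 2 + (FY - center_fy) ** 2 < radius ** 2   # circular band-pass aperture (BPF)
    return np.fft.ifft2(spectrum * mask)                                 # second lens: filtered output plane

In the physical set-up the same operation is performed optically by the 4f lens pair and the band-pass filter described in Section 3.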

2.3 Characteristics of the retinal-projection near-eye display method

In retinal-projection near-eye displays, thin parallel beams from the virtual image converge at the center of the pupil via the lens system and are directly projected onto the retina. Ideally, the size of the converged point should be infinitesimal, so that the accommodation function of the crystalline lens does not come into play. In actual systems, the light beams from pixels cannot be infinitely thin, and the size of the converged point can only be controlled to be far smaller than that of the human pupil. Thus, the virtual image has a large depth of field, and the image remains clear regardless of whether the eye fixates on a near or distant object.

Figure 3(a) shows the natural process of viewing real-world objects and the retinal-projection near-eye display. When we observe the real world, light from a 3D scene passes through the pupil and is focused on the retina via the accommodation function of the crystalline lens. The pupil diameter, Ee, serves as the entrance pupil when the human eye is treated as an ordinary lens system. For the retinal-projection near-eye display, the focal depth of the virtual image is located at a distance ld from the human eye, and the size of the converged beam of the virtual image is denoted EC. Supposing the focal length of the human eye is fe, the distance from the crystalline lens to the retina (the image distance) is le, and the diameter of the circle of confusion on the retina is ɛ, then, according to the Gauss formula, the relationship between the focal length and the object and image distances is given as

$$\frac{1}{{{l_e}}} + \frac{1}{{{l_d}}} = \frac{1}{{{f_e}}}.$$

Based on the circle of confusion, half of the depth of focus Δl’ can be formulated as

$$\Delta {l^{\prime}} = {{\varepsilon {l_e}} / {{E_C}}}.$$


Fig. 3. (a) Comparison between natural viewing and the Maxwellian-view near-eye display. (b) Characteristics of the retinal-projection near-eye display method.


Thus, the front depth of field, lf, and rear depth of field, lr, satisfy the following equations:

$$\frac{1}{{{l_d} - {l_f}}} + \frac{1}{{{l_e} + \Delta {l^{\prime}}}} = \frac{1}{{{f_e}}}, \frac{1}{{{l_d} + {l_r}}} + \frac{1}{{{l_e} - \Delta {l^{\prime}}}} = \frac{1}{{{f_e}}}.$$

According to Eqs. (6)–(8), the depth of field (the sum of lr and lf) is given by Eq. (9), which shows that the depth of field increases as the size of the converged beam decreases. Therefore, during the design of the retinal-projection near-eye display, the size of the converged beam should be made as small as possible to obtain a virtual image with a very large depth of field.

$${l_r} = \frac{{\varepsilon l_d^2 - \varepsilon {f_e}{l_d}}}{{{E_C}{f_e} + \varepsilon {l_d}}}, \,{l_f} = \frac{{\varepsilon l_d^2 - \varepsilon {f_e}{l_d}}}{{{E_C}{f_e} - \varepsilon {l_d}}}.$$
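To make the trend of Eq. (9) concrete, the short script below evaluates the depth of field for a few converged-beam sizes. Apart from the 1.5 m virtual-image distance taken from Section 3, every numerical value (eye focal length, circle-of-confusion diameter, beam sizes) is an illustrative assumption, since the paper does not quote these quantities.

# Worked example of Eq. (9); all values except l_d are assumptions used only for illustration.
f_e = 17e-3     # assumed effective focal length of the human eye, m
l_d = 1.5       # virtual-image distance used in the experiment, m
eps = 5e-6      # assumed acceptable circle-of-confusion diameter on the retina, m
for E_C in (4e-3, 2e-3, 1e-3):                      # assumed converged-beam sizes at the pupil, m
    num = eps * l_d**2 - eps * f_e * l_d
    l_r = num / (E_C * f_e + eps * l_d)
    l_f = num / (E_C * f_e - eps * l_d)
    print(f"E_C = {E_C*1e3:.0f} mm -> depth of field = {l_r + l_f:.2f} m")
# The smaller the converged beam, the larger the depth of field; once E_C*f_e <= eps*l_d the second
# denominator vanishes and the virtual image is effectively always in focus.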

In our near-eye display using a multiplexing-encoding holographic method, the size of the generated holographic image is y0 and the diffraction angle is θ. The focal length of the eyepiece is f, and either an immersive or optical see-through eyepiece can be treated as a convex lens. The size of the converged beam EH can be calculated using Eq. (10), and the field of view ω is given by Eq. (11).

$${E_H} = 2f\tan ({{\theta / 2}} )$$
$$\omega = 2\arctan ({{{{y_0}} / {2f}}} ).$$

The central position of the depth of field is also very important, and should be controlled during the design process. The distance li and lateral size yi of the virtual image can be obtained from the object distance lo and the holographic image size y0 as follows:

$${l_i} = \frac{{f{l_o}}}{{f - {l_o}}}$$
$${y_i} = \frac{{{l_i}{y_0}}}{{{l_o}}} = \frac{{f{y_0}}}{{f - {l_o}}}.$$
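As a cross-check of Eqs. (10)–(13), the short calculation below inserts the eyepiece and SLM parameters reported in Section 3 (effective focal length 30.6 mm, holographic image size 15.36 mm × 8.64 mm, maximum diffraction angle 3.36°, virtual image at about 1.5 m). The script itself is only an illustrative sketch, but the resulting field of view agrees with the 16.4° × 28.7° quoted in the experimental section.

import numpy as np

f = 30.6e-3                      # effective focal length of the see-through eyepiece, m (Section 3)
theta = np.radians(3.36)         # maximum full diffraction angle of the SLM (Section 3)
y0_h, y0_v = 15.36e-3, 8.64e-3   # holographic image size, equal to the SLM area, m (Section 3)
l_i = 1.5                        # virtual-image distance, m (Section 3)

E_H = 2 * f * np.tan(theta / 2)                       # Eq. (10): converged-beam size, ~1.8 mm (smaller than the pupil)
fov_paraxial = 2 * np.degrees(np.arctan(y0_h / (2 * f)))   # Eq. (11): ~28.2 deg for an image at the focal plane
l_o = f * l_i / (f + l_i)                             # Eq. (12) solved for the object distance, ~30.0 mm
y_i_h = l_i * y0_h / l_o                              # Eq. (13): horizontal virtual-image size, ~0.77 m
y_i_v = l_i * y0_v / l_o                              # vertical virtual-image size, ~0.43 m
fov_h = 2 * np.degrees(np.arctan(y_i_h / (2 * l_i)))  # ~28.7 deg
fov_v = 2 * np.degrees(np.arctan(y_i_v / (2 * l_i)))  # ~16.4 deg
print(f"E_H = {E_H*1e3:.2f} mm, FOV = {fov_v:.1f} x {fov_h:.1f} deg")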

3. Experimental set-up and results

Optical experiments were performed to test the proposed method. The experimental set-up is shown in Fig. 4. In the experiments, three laser diodes (LDs) with wavelengths of 632, 532 and 430 nm were used, and parallel beams were obtained via spatial filter elements and collimator lenses. The resolution of the SLM was 1,920 × 1,080 pixels, with a pixel pitch of 8 µm. CGHs were obtained using the multiplexing-encoding method, and a holographic image could be reconstructed in the air when the holograms were loaded onto the SLM under illumination from the three LDs. The imperfect pixel fill factor of the SLM causes multiple diffraction orders, as in a grating, which impair image quality. Moreover, full-color images are generated by combining the virtual images from the three color channels, and the multiple diffraction orders from the different channels can also affect the image quality. To solve this problem, a 4f lens system and a band-pass filter (BPF) were used to eliminate unwanted light, so that high-quality dynamic holographic scenes could be generated at the desired positions.


Fig. 4. Setup of full-color retinal-projection near-eye display using multiplexing-encoding dynamic holographic method.


A see-through eyepiece with a bird-bath structure, consisting of a beam splitter and a convex reflective mirror, was used to converge the light from the holographic image. Rays from the holographic image are transmitted through a half mirror inside a cube prism and reflected by the convex reflective mirror. The eyepiece material was China K9 glass, which has the same parameters as BK7 glass, and the side length of the cube prism was 25.4 mm. The radius of the reflective mirror was 75.8 mm, the effective focal length was 30.6 mm, and the eye relief distance was 20 mm from the eyepiece. The holographic virtual image was located around 1.5 m from the convergence point. For holographic images generated by the SLM, the reconstruction distance and image size can be chosen flexibly as long as the images remain within the viewing zone. In the experimental set-up, the holographic images were generated near the SLM plane, and their size was set equal to that of the SLM, 15.36 mm × 8.64 mm. Because the 4f system relays the image at unit magnification, the image formed near the focal plane of the eyepiece has the same size. According to the Gaussian lens formula, with the effective focal length of 30.6 mm and the virtual image located at around 1.5 m, the field of view is 16.4° × 28.7°, and the diagonal field of view is 32°. The structure and these parameters could be further optimized through optical system design. A Sony camera was employed to emulate the accommodation function of the human eye, and two toys were used as reference objects, located at distances of 0.5 and 2 m, respectively. The real-world 3D objects were imaged by the camera lens, which was focused at different positions. With the position of the CMOS sensor fixed, only one depth can be brought into sharp focus; objects at other depths are blurred by defocus. The light beam of the holographic image converges to a very small point, so the virtual image is projected directly onto the sensor almost independently of the camera focus.

Owing to the Maxwellian view, 2D images are of sufficiently high quality and 3D image content is not required. Two sample videos, a model of the Earth and a butterfly, were used as the dynamic virtual holographic content. Each video was separated into a series of 2D pictures, and holograms 1,920 × 1,080 pixels in size were generated for each 2D frame using the multiplexing-encoding dynamic holographic method. In our experiment, the maximum full diffraction angle of the SLM is 3.36°, and according to Eq. (5) the angle difference between adjacent color channels should be smaller than 1.12°, i.e., one third of the maximum full diffraction angle. Furthermore, an SLM tilted by a small angle was used in the experiment, which makes the system compact and increases the light efficiency; in this structure, the smaller the tilt angle, the better the performance. Thus, the maximum angle of the color illumination was set as small as 4°, and the angle difference between adjacent color channels was set to 0.97°. During the calculation and simulation process, the incident angles of the R, G and B reference beams were 2.03°, 3.0°, and 3.97°, respectively, as in the experimental set-up. The holograms of the 1,920 × 1,080-pixel 2D color images generated using our method can be computed in advance and loaded onto the SLM, and the calculation can also be conducted in real time when GPUs and parallel computation methods are used.

Figures 5(a)–5(b) show selected frames from the videos, and Figs. 5(c)–5(d) show the corresponding simulated color images. Images captured with the experimental set-up are shown in Figs. 5(e)–5(h), with the camera focused at depths of 0.5 m (2 diopters) and 2 m (0.5 diopters). When the camera focuses on a nearby location, the nearer real object is clear and the farther real object is blurred, and vice versa. The generated virtual holographic images are very clear at both focusing depths, and can be directly observed by the human eye alongside real-world images. In both the simulated and the captured holographic images, image details are reproduced accurately, but colors are not: there is a nonlinear amplitude mapping between the original video frames and the reconstructed ones, and a color difference between the actual light sources and ideal RGB primaries. This discrepancy can be corrected by colorimetric characterization, which is already applied in commercial display products. To further verify the presented method, two videos in which the sensor position of the camera is changed continuously, sweeping the focus over depths from 0.2 to 3 m, are provided in Visualization 1 and Visualization 2. The frame rate of the dynamic holographic video is determined by the spatial light modulator (SLM); the maximum frame rate of the SLM used in the experiment (Holoeye PLUTO 2) is 60 Hz. A high-frame-rate display is not the core contribution of this work, and the rate in our experiment was set to 25 Hz. Dynamic virtual holographic images can also be seen in these two videos, alongside real-world images. In the experimental videos, ghost images and stray light can be observed. They are generated by multiple reflections and transmissions at the surfaces of the eyepiece and can be analyzed by ray tracing; they can be eliminated or alleviated by optimizing the optical system in future work. The experimental results demonstrate that our method can produce a holographic scene with a very large depth of field that can be directly projected onto the retina and viewed alongside real-world images.


Fig. 5. (a-b) Selected 2D frames from the videos. (c-d) Simulated color 2D images using the multiplexing-encoding holographic method. (e-f) Captured images when the camera is focused at 2 m (0.5 diopters). (g-h) Captured images when the camera is focused at 0.5 m (2 diopters). Videos with a continuous change of the camera focus position (0.2–3 m) are shown in Visualization 1 and Visualization 2.


4. Conclusions

This paper describes how dynamic holographic virtual images were directly projected onto the human retina using the proposed near-eye display method with multiplexing-encoding holography. Full-color images can be generated using a single phase-only SLM with three differently colored lasers, and a see-through eyepiece projects the accommodation-free virtual images onto the retina alongside real-world images at all depths. The principles and characteristics of the method have been presented in detail. The experimental results demonstrate the validity of the presented method, and this work provides valuable insights into holographic and retinal-projection near-eye displays. Future work will aim to improve display performance and apply the proposed method to other optical see-through near-eye display structures to make them more compact.

Funding

National Key Research and Development Program of China (2020YFC1523103); National Natural Science Foundation of China (62002018, 61727808); A*STAR RIE2020 AME Programmatic Funding (A18A7b0058).

Disclosures

The authors declare no conflicts of interest.

References

1. O. Cakmakci and J. Rolland, “Head-Worn Displays: A Review,” J. Disp. Technol. 2(3), 199–216 (2006). [CrossRef]  

2. H. Hua, “Enabling focus cues in head-mounted displays,” Proc. IEEE 105(5), 805–824 (2017). [CrossRef]  

3. G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-Eye Display and Tracking Technologies for Virtual and Augmented Reality,” Comput. Graph. Forum 38(2), 493–519 (2019).

4. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” Journal of Vision 8(3), 33 (2008). [CrossRef]  

5. G. Kramida, “Resolving the Vergence-Accommodation Conflict in Head-Mounted Displays,” IEEE Trans. Visual. Comput. Graphics 22(7), 1912–1931 (2016). [CrossRef]  

6. J. Rolland, M. Krueger, and A. Goon, “Multifocal planes head-mounted displays,” Appl. Opt. 39(19), 3209–3215 (2000). [CrossRef]  

7. K. Akeley, S. J. Watt, A. R. Girshick, and M. S. Banks, “A stereo display prototype with multiple focal distances,” ACM Trans. Graph. 23(3), 804–813 (2004). [CrossRef]  

8. G. D. Love, D. M. Hoffman, P. J. W. Hands, J. Gao, A. K. Kirby, and M. S. Banks, “High-speed switchable lens enables the development of a volumetric stereoscopic display,” Opt. Express 17(18), 15716–15725 (2009). [CrossRef]  

9. S. Liu, H. Hua, and D. Cheng, “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Visual. Comput. Graphics 16(3), 381–393 (2009). [CrossRef]  

10. Q. Chen, Z. Peng, Y. Li, S. Liu, P. Zhou, J. Gu, J. Lu, L. Yao, M. Wang, and Y. Su, “Multi-plane augmented reality display based on cholesteric liquid crystal reflective films,” Opt. Express 27(9), 12039–12047 (2019). [CrossRef]  

11. N. Matsuda, A. Fix, and D. Lanman, “Focal surface displays,” ACM Trans. Graph. 36(4), 1–14 (2017). [CrossRef]  

12. X. Xia, Y. Guan, A. State, P. Chakravarthula, K. Rathinavel, T. J. Cham, and H. Fuchs, “Towards a Switchable AR/VR Near-eye Display with Accommodation-Vergence and Eyeglass Prescription Support,” IEEE Trans. Visual. Comput. Graphics 25(11), 3114–3124 (2019). [CrossRef]  

13. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 1–11 (2014). [CrossRef]  

14. K. Akşit, J. Kautz, and D. Luebke, “Slim near-eye display using pinhole aperture arrays,” Appl. Opt. 54(11), 3422–3427 (2015). [CrossRef]  

15. W. Song, Y. Wang, D. Cheng, and Y. Liu, “Light field head-mounted display with correct focus cue using micro structure array,” Chin. Opt. Lett. 12(6), 060010 (2014). [CrossRef]

16. C. Yao, D. Cheng, T. Yang, and Y. Wang, “Design of an optical see-through light-field near-eye display using a discrete lenslet array,” Opt. Express 26(14), 18292–18301 (2018). [CrossRef]  

17. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26(13), 17578–17590 (2018). [CrossRef]  

18. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

19. M. Xu and H. Hua, “Systematic method for modeling and characterizing multilayer light field displays,” Opt. Express 28(2), 1014–1036 (2020). [CrossRef]  

20. F. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 1–12 (2015). [CrossRef]

21. D. Chen, X. Sang, X. Yu, X. Zeng, S. Xie, and N. Guo, “Performance improvement of compressive light field display with the viewing-position-dependent weight distribution,” Opt. Express 24(26), 29781–29793 (2016). [CrossRef]  

22. M. Liu, C. Lu, H. Li, and X. Liu, “Near eye light field display based on human visual features,” Opt. Express 25(9), 9886 (2017). [CrossRef]  

23. Y. Pan, J. Liu, X. Li, and Y. Wang, “A review of dynamic holographic three-dimensional display: algorithms, devices, and systems,” IEEE Trans. Ind. Inf. 12(4), 1599–1610 (2016). [CrossRef]  

24. K. Wakunami, P. Y. Hsieh, R. Oi, T. Senoh, H. Sasaki, Y. Ichihashi, M. Okui, Y. P. Huang, and K. Yamamoto, “Projection-type see-through holographic three-dimensional display,” Nat. Commun. 7(1), 12954–7 (2016). [CrossRef]  

25. A. Maimone, A. Georgiou, and J. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017). [CrossRef]  

26. C. Chang, W. Cui, and L. Gao, “Foveated holographic near-eye 3D display,” Opt. Express 28(2), 1345–1356 (2020). [CrossRef]  

27. Q. Gao, J. Liu, J. Han, and X. Li, “Monocular 3D see-through head-mounted display via complex amplitude modulation,” Opt. Express 24(15), 17372–17383 (2016). [CrossRef]  

28. J. Chaves, Introduction to Nonimaging Optics (CRC Press, 2017), pp. 103–108.

29. G. Westheimer, “The Maxwellian View,” Vision Res. 6(11-12), 669–682 (1966). [CrossRef]  

30. M. Sugawara, M. Suzuki, and N. Miyauchi, “Late-News Paper: Retinal imaging laser (or LED) eyewear with focus-free and augmented reality,” SID Symposium Digest of Technical Papers 47(1), 164–167 (2016). [CrossRef]

31. Y. Ochiai, K. Otao, Y. Itoh, S. Imai, K. Takazawa, H. Osone, A. Mori, and I. Suzuki, “Make your own retinal projector: retinal near-eye displays via metamaterials,” in ACM SIGGRAPH 2018 Posters, 1–2 (2018).

32. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]  

33. J. Kim, Y. Jeong, M. Stengel, K. Aksit, R. Albert, B. Boudaoud, T. Greer, W. Lopes, Z. Majercik, P. Shirley, J. Spjut, M. McGuire, and D. Luebke, “Foveated AR: dynamically-foveated augmented reality display,” ACM Trans. Graph. 38(4), 1–15 (2019). [CrossRef]  

34. S. Kim and J. Park, “Optical see-through Maxwellian near-to-eye display with an enlarged eyebox,” Opt. Lett. 43(4), 767–770 (2018). [CrossRef]

35. Y. Takaki and N. Fujimoto, “Flexible retinal image formation by holographic Maxwellian-view display,” Opt. Express 26(18), 22985–22999 (2018). [CrossRef]  

36. J. S. Lee, Y. K. Kim, M. Y. Lee, and Y. H. Won, “Enhanced see-through near-eye display using time-division multiplexing of a Maxwellian-view and holographic display,” Opt. Express 27(2), 689–701 (2019). [CrossRef]  

37. Z. Wang, X. Zhang, G. Lv, Q. Feng, H. Ming, and A. Wang, “Hybrid holographic Maxwellian near-eye display based on spherical wave and plane wave reconstruction for augmented reality display,” Opt. Express 29(4), 4927–4935 (2021). [CrossRef]  

38. T. Kozacki, “Holographic display with tilted spatial light modulator,” Appl. Opt. 50(20), 3579–3588 (2011). [CrossRef]  

39. X. Li, J. Liu, J. Jia, Y. Pan, and Y. Wang, “3D dynamic holographic display by modulating complex amplitude experimentally,” Opt. Express 21(18), 20577–20587 (2013). [CrossRef]  

40. G. Xue, J. Liu, X. Li, J. Jia, Z. Zhang, B. Hu, and Y. Wang, “Multiplexing encoding method for full-color dynamic 3D holographic display,” Opt. Express 22(15), 18473–18482 (2014). [CrossRef]  

Supplementary Material (2)

Visualization 1: Captured augmented reality sample video 1 from a continuous change in the sensor position of the camera.
Visualization 2: Captured augmented reality sample video 2 from a continuous change in the sensor position of the camera.

