
Foveated integral imaging system for near-eye 3D displays

Open Access

Abstract

Integral imaging displays have been presented as the most effective solution for reducing the visual discomfort in three-dimensional (3D) images caused by the vergence-accommodation conflict (VAC). However, due to resolution degradation, it is still challenging to adapt the integral imaging system to near-eye display (NED) devices. In this paper, we propose a resolution-enhanced integral imaging NED that uses a foveated imaging system with two display panels and an optical combiner. We use a microdisplay combined with a lens array to provide integral imaging 3D images with relatively high pixel density in the central foveal area, while the peripheral area is covered by an additional display panel that offers background images with a wide field-of-view (FOV). By combining these two images with an optical combiner, the foveated integral imaging system produces highly resolution-enhanced integral imaging 3D images concentrated in the foveal area. The proposed NED system effectively provides integral imaging 3D images with approximately 4.5 times improved resolution in the foveal area through an optimally designed foveated imaging system.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Near-eye displays (NEDs) present virtual objects in the visual field of human eyes by using head-mounted display devices. This makes the interactions between viewers and display contents more natural than conventional displays. Recently, several studies have reported on NEDs as a promising method to enable augmented reality (AR) and virtual reality (VR). In such systems, depth cues such as motion parallax, accommodation, disparity, and size perspective make the viewers’ experience in virtual scenes more natural. Therefore, it is essential to optimize these factors to achieve a comfortable and realistic 3D environment. The most common method in current VR and AR devices is to present different images to the two eyes to construct binocular disparity [1–3]. However, this method has a significant drawback, the VAC [4,5], which occurs when there is a mismatch between the perceived distance of a virtual image and the focusing distance of the eyes to the image. To solve the VAC, several studies have suggested various methods including holographic displays [6,7], integral imaging displays [8–10], and multi-focal plane displays [11,12].

Among the possible solutions for the VAC, the integral imaging display has been considered the most feasible approach for NEDs due to the simplicity of the system. By combining a lens array and a display panel, the system creates a bundle of light rays that provides the depth of field (DOF) for comfortable 3D scenes without the VAC problem. For a more natural 3D effect, the subject of greatest interest in research on integral imaging NED systems has been the various methods of expanding the DOF by applying additional optical elements, such as focus-tunable lenses [13–15] and multi-layered display devices [16–18]. However, there are still very few cases of NED devices using integral imaging systems. This is because integral imaging systems have a critical drawback for NED applications: low spatial resolution. In integral imaging systems, resolution degradation as a tradeoff for depth information is a well-known issue [19]. To overcome this issue, there have been several efforts [20–23] to enhance the resolution using optical methods. For instance, switchable optical components have been employed to combine spatially shifted images through a time-multiplexing method [20,21]. By using fast-switching optics, the effective resolution for the viewer can be improved to a value between the resolutions of the two shifted images. Computational methods have also been reported that enhance the resolution by rearranging subpixels [22,23]. These methods can implement resolution enhancement in a relatively simple hardware system. Although the approaches introduced above have improved the resolution of integral imaging systems, the resulting resolutions are still lower than the original resolution of the displays in two-dimensional (2D) mode. Therefore, the goal of our work on integral imaging NEDs is to enhance the resolution of the 3D images beyond the original resolution of the 2D images with a simple optical system.

In this paper, we propose a resolution-enhanced integral imaging NED system using foveated imaging. Foveated imaging is a display technique that exploits the characteristics of the human eye, which perceives the central area of the visual field with the highest resolution and the peripheral area with a relatively lower resolution [24–26]. Therefore, two display panels are employed in the proposed system to present images in different areas of the visual field. One provides resolution-enhanced 3D images of the integral imaging in the central foveal area, while the other offers a wide FOV covering the peripheral area. Using the foveated imaging system, the resolution of the integral imaging display can be enhanced beyond that of the original 2D display, compensating for the resolution degradation that is inevitable in the integral imaging process. To implement this system, we performed several simulations of the human visual system, particularly regarding the size of the foveal area and the tolerance for the accommodation conflict between the two different displays.

2. Integral imaging in near-eye display

The VAC is one of the main issues still remaining in current NEDs. It occurs when the depth cues of 3D images perceived by the viewer differ from the focusing distance of the eyes. It can cause visual fatigue and eyestrain, which make viewers uncomfortable when watching 3D displays. Although it is well known that integral imaging is an effective method to overcome the VAC problem, applying integral imaging systems to NEDs is still a challenge due to their critical drawback: resolution degradation.

2.1 Tradeoff between spatial resolution and DOF in integral imaging

Integral imaging offers multiple perspectives by using a lens array to combine elemental images, each of which has its own perspective at a different direction [27]. The integrated perspectives make it possible to form the DOF for natural 3D images. However, as the elemental images from different perspectives are overlapped by magnifying each image with the lens array, the lateral resolution of the reconstructed 3D images is degraded. This means that the integral imaging system sacrifices resolution to provide the 3D effect. In other words, there is an inevitable tradeoff between the image resolution and the expressible depth range of 3D images. The relationship between these two parameters can be obtained by a ray-optical approximation of the integral imaging system. Although diffraction and defocus aberrations of the lens array can cause unwanted degradation in the performance of the integral imaging system [28,29], this analysis captures the tendency of the parameters with the mathematical definitions shown in Fig. 1(a).

Fig. 1. (a) The schematic structure of the integral imaging system in virtual mode. (b) Tradeoff relationship between the spatial resolution and the DOF in integral imaging system.

Figure 1(a) shows the schematic structure of the integral imaging system in virtual mode ($f > g$). Since the lateral pixel size of the reconstructed image is determined by the product of the original pixel size of the display panel and the magnification of the lens array, the spatial resolution of 3D images can be expressed by:

$${R_s} = \frac{1}{{{P_M}}} = \frac{{f - g}}{{f \cdot {P_d}}}$$
where ${R_s}$ is the spatial resolution, ${P_M}$ is the magnified pixel size of the image, ${P_d}$ is the pixel size of the display panel, f is the focal length of the lens array, and g is the gap distance between the display panel and the lens array. According to Eq. (1), the spatial resolution has an inverse relationship with the gap and the display pixel size.
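As a numerical check, Eq. (1) can be evaluated with the hardware parameters given later in Section 4 (3 mm lens-array focal length, 8 µm pixel pitch); the gap g used here is an illustrative assumption, not a value reported in the paper.

```python
# Spatial resolution of virtual-mode integral imaging, Eq. (1):
# R_s = 1 / P_M = (f - g) / (f * P_d)
f = 3.0e-3    # lens-array focal length [m] (Section 4)
P_d = 8.0e-6  # microdisplay pixel pitch [m] (Section 4)
g = 2.5e-3    # display-to-lens-array gap [m] -- assumed, must satisfy g < f

P_M = f * P_d / (f - g)  # magnified pixel size on the virtual image
R_s = 1.0 / P_M          # spatial resolution [pixels per meter]
print(f"P_M = {P_M * 1e6:.0f} um, R_s = {R_s:.0f} px/m")
```

As g approaches f, the magnification diverges and R_s falls, consistent with the inverse relationship noted above.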

In the case of the DOF, the expressible depth range of 3D images can be defined as the distance between the central depth plane (CDP) and the marginal depth plane where the focusing spot size equals the size of the magnified pixel. This is because the overlap of the magnified pixels becomes distorted and broken when the image is located farther from the CDP. Accordingly, the DOF is given by

$$\Delta z = 2{P_M}\frac{l}{D} = 2\frac{{{f^2}g}}{{D{{(f - g)}^2}}}{P_d}$$
where $\Delta z$ is the DOF of the system, l is the distance between the lens array and the CDP, and D is the diameter of the lens. The DOF is proportional to the gap and the display pixel size in the virtual mode ($f > g$). According to Eqs. (1) and (2), the spatial resolution and the DOF show contrasting tendencies for the same variables. This tradeoff relationship is illustrated in Fig. 1(b).
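With the same illustrative parameters (the 1 mm lens pitch from Section 4 as D, and an assumed gap g), both forms of Eq. (2) can be checked against each other; the CDP distance l = fg/(f − g) follows from the thin-lens equation in virtual mode.

```python
f = 3.0e-3    # lens-array focal length [m]
g = 2.5e-3    # display-to-lens gap [m] -- assumed, virtual mode (g < f)
P_d = 8.0e-6  # pixel pitch [m]
D = 1.0e-3    # lens diameter [m] (1 mm pitch, Section 4)

P_M = f * P_d / (f - g)       # magnified pixel size, from Eq. (1)
l = f * g / (f - g)           # CDP distance (thin-lens equation, virtual mode)
dz = 2 * P_M * l / D          # DOF, first form of Eq. (2)
dz_closed = 2 * f**2 * g * P_d / (D * (f - g)**2)  # closed form of Eq. (2)
print(f"DOF = {dz * 1e3:.2f} mm")  # both forms agree term by term
```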

In Fig. 1(b), compared to the original resolution, the resolution degradation ratio is approximately 1/3 to 1/4 depending on the targeted DOF. In an NED system, this becomes critical because the images are displayed at a much closer distance to the eyes. Therefore, we apply a foveated imaging system that provides 3D images within a narrow FOV to improve the number of pixels per degree, and employ an additional display to cover the large FOV.

2.2 Foveated integral imaging system

The foveated imaging system is based on the fact that the visual field is divided into the central foveal area and the surrounding peripheral area. The central foveal area is the eye-gazing area in which the viewer perceives images with the highest sensitivity. This means that the viewer judges the quality of an NED mainly by the resolution in the foveal area. In other words, the viewer can experience high-resolution 3D scenes simply by offering high-resolution integral imaging in the foveal area, even if the peripheral area is expressed with lower resolution. Accordingly, our proposed system provides integral imaging 3D images only in the foveal area, and normal images in the peripheral area, as shown in Fig. 2.

Fig. 2. Principle of the proposed foveated integral imaging NED system.

As shown in Fig. 2, we integrate the FHD microdisplay with a lens array for the foveal integral imaging display, and an additional display is employed for the peripheral area. The reproduced integral imaging 3D image is placed at the center of the visual field with a small FOV. Compared to the conventional integral imaging system, which provides 3D images over the entire FOV, this system keeps the FOV of the foveated integral imaging display narrow to enhance the pixel density inside that area. Accordingly, the spatial resolution of the foveated integral imaging display can be enhanced by narrowing the foveal FOV, as depicted in Fig. 3. As our system is designed to have a 95-degree FOV, the original spatial resolution of the FHD display is calculated as approximately 11.37 pixels per degree (ppd), and the degraded resolution after applying the integral imaging system is about 3.79 ppd for a reasonable range of DOF in this system. These values are simply calculated by dividing the number of pixels along one axis of the display panel by the corresponding FOV. As depicted in Fig. 3, the resolution degradation of integral imaging can be fully recovered by setting the foveal FOV under 32 degrees.
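The quoted ppd figures can be reproduced from the FHD pixel count and the 95° FOV. A minimal sketch, assuming a 1080-pixel axis (the count that reproduces the quoted 11.37 ppd) and taking the degraded 3.79 ppd value directly from the text:

```python
pixels = 1080    # FHD pixel count along the measured axis (reproduces 11.37 ppd)
fov_full = 95.0  # full system FOV [deg]

ppd_2d = pixels / fov_full  # ~11.37 ppd, original 2D-mode resolution
ppd_ii = 3.79               # degraded integral imaging resolution (paper's value)

# The degradation is recovered once the foveal FOV is narrow enough that the
# degraded display, concentrated into that FOV, matches the original 2D density:
fov_recover = ppd_ii * fov_full / ppd_2d  # ~31.7 deg, i.e. "under 32 degrees"
print(f"2D: {ppd_2d:.2f} ppd, recovery FOV: {fov_recover:.1f} deg")
```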

Fig. 3. Spatial resolution of foveated integral imaging display according to the FOV.

3. Design of the foveated integral imaging system

3.1 Tradeoff between foveated resolution and accommodation conflict

In our system, the size of the foveal area is another significant factor that affects the resolution enhancement of integral imaging. By narrowing the FOV of the integral imaging display to the small foveal area, the resolution can be improved because the pixel density is increased in the small area. For further enhancement, it seems beneficial to make the FOV even narrower. However, in a foveated NED system, this causes a disturbance to viewers, as the eyes can perceive the difference between the foveal and peripheral areas. This means there is a tradeoff between the resolution and the degree of visual fatigue around the boundary. In this paper, we refer to this visual disturbance between the two displays as the accommodation conflict, which should be eliminated to implement natural foveated imaging. Therefore, it is essential to set the minimum size of the foveal area that does not disturb human vision. To find this optimal condition, we conducted a simple test of human vision with three participants.

The experiment is designed to determine the sensitivity of the human visual system to depth differences at different eccentricities. Figure 4 depicts the reference and test images used in the test. The reference image consists of a fixation target and a pattern image placed at a horizontal eccentricity with respect to the target. The test image is displayed simultaneously, 5 degrees above the pattern of the reference image. In this test, observers are asked to identify the defocused one of the pattern images presented on different depth planes while keeping their eyes focused on the fixation target. The horizontal eccentricity of the pattern image pair in the visual field is set to 0, 4, 8, 12, or 16$^\circ $ with respect to the fixation target. During the test, the depth plane of a random pattern image is varied from 1 m to 0.3 m from the eyes while the other pattern image remains on the fixed depth plane of 1 m. If the observers perceive a difference between the two images, they are asked to decide which pattern is defocused. The data collected from each observer are the average values of 10 repeated trials at each eccentricity. As plotted in Fig. 4, the average eccentricity at which the eyes cannot perceive differences between images on different depth planes is determined to be approximately 10$^\circ $, which corresponds to an FOV of 20$^\circ $.

Fig. 4. Example image of the test with fixation target and stimulus target and average threshold for the accommodation conflict across all observers.

As described in the previous section, the tradeoff between the FOV of the foveal area and its spatial resolution is plotted in Fig. 5. To achieve a highly enhanced resolution, the FOV needs to be set as small as possible. Considering the result of the test on the eyes, the maximum resolution in our system is determined to be 18.9 ppd at a 1 m depth with a 19-degree FOV. Also, if the FOV of the foveal area exceeds 32$^\circ $, the system has no advantage, as the resolution becomes lower than the original 2D resolution. Therefore, we set the FOV range from 19$^\circ $ to 32$^\circ $ as the condition to achieve a reasonable resolution enhancement without the accommodation conflict in the foveated integral imaging NED system.
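The 18.9 ppd figure follows from concentrating the degraded 3.79 ppd (measured over the full 95° FOV) into the 19° foveal FOV; a quick check using only values quoted in the text:

```python
ppd_degraded = 3.79  # integral imaging resolution over the full FOV [ppd]
fov_full = 95.0      # full system FOV [deg]

for fov_fovea in (19.0, 32.0):
    ppd_fovea = ppd_degraded * fov_full / fov_fovea
    print(f"foveal FOV {fov_fovea:.0f} deg -> {ppd_fovea:.2f} ppd")
# At 19 deg the resolution reaches the quoted maximum of ~18.9 ppd; at 32 deg
# it falls back to ~11.25 ppd, roughly the original 2D resolution, so larger
# foveal FOVs give no enhancement.
```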

Fig. 5. The tradeoff between FOV and spatial resolution under the condition of conflict.

4. Experiment

4.1 System configuration

In an integral imaging system, it is beneficial for the 3D effect to use as many lenses as possible, because each lens provides one perspective image for the 3D image. Therefore, since the microdisplay has a fixed size, the lens array should have a small pitch with a high fill factor. In our system, we use a hexagonal array of 18 × 10 lenses with a 1 mm pitch (Fresnel Technologies) to align with the microdisplay (Sony, ECX335), which is 18 × 12 mm including the bezel. The fill factor of a hexagonal arrangement is higher than that of a rectangular arrangement, and it is optically inefficient to use a hexagonal solid lens with a smaller lens size. The focal length of the lens array is 3 mm and its thickness is 1.6 mm. The microdisplay has 1920 × 1080 resolution with a pixel pitch of 8 ${\mathrm{\mu} \mathrm{m}}$. Also, a 5.9-inch display panel with 1920 × 1080 resolution (LG, G Pro 2) is employed as the peripheral display. As the eyepiece lens, we use a doublet lens with a 5 cm focal length. An eyepiece lens with a smaller focal length could be used to reduce the size of the entire system, but we use the 5 cm focal length to achieve clear photography over a wide FOV for comparison between the two display areas. The overall optical setup is shown in Fig. 6.

Fig. 6. (a) System configuration, and the unfolded optical paths of (b) the integral imaging display, and (c) the peripheral display.

As shown in Fig. 6(a), the integral imaging 3D images displayed by the microdisplay have a small FOV in the central foveal area. The size of the integral imaging FOV is controlled by adjusting the position of the convex lens in front of the lens array. Accordingly, the pixel density of the image can be enhanced several times over the conventional integral imaging system within the smaller FOV. To support dynamic shifting of the foveal area, a fast-steering mirror is arranged on the opposite side of the microdisplay with the beam splitter in between. The peripheral display is placed in front of the eye to cover the wide FOV surrounding the foveal area. Figures 6(b) and 6(c) depict the unfolded optical paths of both displays. To combine the two images from the different display panels without significant disturbance to the eyes, the central depth plane (CDP) of the integral imaging display and the depth plane of the peripheral display are placed at the same depth. Thus, the distances between the optical components are set under the condition that the optical path distances of the microdisplay and the peripheral display are the same: ${a_1} + 2{a_2} + {b_2} = {b_1} + {b_2}$. In the following experiments, the optical path distance is set to 4.76 cm in order to obtain a depth of 1 meter, which is the most sensitive depth for human eyes. This value is simply obtained from the lens equation. As for the optical combiner, although a beam splitter is not the best option due to its bulky size, it is employed to clearly capture the combined images and analyze the quality of the foveated integral imaging results without unnecessary optical aberrations. The system volume can be reduced by replacing the beam splitter with other novel optical combining designs from several research groups [30,31]. The displayed images are captured by a camera placed after the eyepiece lens.
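The 4.76 cm figure can be verified with the thin-lens equation for the 5 cm eyepiece forming a virtual image at the 1 m viewing depth (sign convention: virtual image distances are negative).

```python
# Thin lens: 1/f = 1/d_o + 1/d_i, with d_i = -100 cm for a virtual image at 1 m.
f_eye = 5.0   # eyepiece focal length [cm]
d_i = -100.0  # virtual image distance [cm] (1 m viewing depth)

d_o = 1.0 / (1.0 / f_eye - 1.0 / d_i)  # object (optical path) distance
print(f"d_o = {d_o:.2f} cm")           # ~4.76 cm, matching the text
```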

4.2 Resolution-enhanced integral imaging result using foveated imaging system

In this section, we demonstrate the experimental results of the proposed design. The optical setup in the experiments is based on the layout shown in Fig. 6. For precise observation of the resolution enhancement, we first compare experimental results from two integral imaging methods: the proposed method, which uses the foveated imaging system, and the conventional method. Since our proposed system has a 95-degree FOV including the peripheral area, the conventional integral imaging system in this experiment is designed to have the same FOV so that the spatial resolutions of the two methods can be compared in the same optical environment.

Figure 7 shows the resolution difference in the 3D images of the two methods. As depicted in Figs. 7(a) and 7(b), the 3D image presented by the conventional integral imaging system has significantly degraded quality compared to the foveal 3D image of the proposed system. Specifically, comparing the close-up photos in Figs. 7(a) and 7(b), the conventional integral imaging result shows the pixelation and screen-door effect that occur due to low spatial resolution, while the foveated integral imaging result does not. Also, image processing has been used to smooth the boundary of the foveal area. Thus, the transition from the foveal area to the peripheral area is observed to be natural, with an inconspicuous boundary. For a more precise analysis of the resolution improvement, the spatial resolutions are measured from the modulation transfer function (MTF) using the USAF chart images. Figure 8 shows the resulting USAF chart images of the foveated and the conventional integral imaging displays. These images are captured by a Canon EOS 60D camera.

Fig. 7. Resolution comparison between (a) the conventional integral imaging, and (b) the resolution enhanced integral imaging using the foveated imaging system.

Fig. 8. The resulting USAF chart images of the foveated integral imaging display and the normal integral imaging display (left). MTF measurement results with the slanted edge method (right).

In this experiment, the FOV of the foveal area is set to 19$^\circ $ for maximum resolution enhancement without the accommodation conflict. The MTF graphs are measured using a slanted edge method [32] for three different displays: the foveated integral imaging display, the peripheral display, and the conventional integral imaging display. As shown in the MTF graphs, the normalized MTF of the foveal display is greater than 0.5 at 17.13 ppd, exceeding the resolution of the peripheral display, which is measured as 11.25 ppd. Compared to the conventional integral imaging display with a resolution of 3.82 ppd, the resolution is enhanced by approximately 4.5 times by employing the foveated integral imaging system. Considering that the peripheral display is a normal FHD display without integral imaging, the result shows that our proposed system overcomes the resolution degradation of integral imaging by providing a higher spatial resolution than the peripheral display. The measured MTF shows a slightly lower resolution than the value expected in the previous section, which is attributed to the additional aberrations introduced by the lens array. This decline in quality is expected to be reduced by adopting aberration-calibrated rendering of the integral imaging images, which will be our future work, together with the color matching issue.
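The slanted-edge method recovers the MTF from an edge image: differentiate the edge-spread function (ESF) to obtain the line-spread function (LSF), then take its normalized Fourier magnitude. Below is a minimal one-dimensional sketch on a synthetic Gaussian-blurred edge; a full ISO 12233 analysis additionally projects the 2-D edge along its slant angle to oversample the ESF, which this sketch omits.

```python
import math
import numpy as np

def mtf_from_esf(esf):
    """MTF from a 1-D edge-spread function (simplified slanted-edge idea)."""
    lsf = np.diff(esf)                # line-spread function = d(ESF)/dx
    lsf = lsf * np.hanning(lsf.size)  # window to limit spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]               # normalize to unity at DC

# Synthetic edge: an ideal step blurred by a Gaussian PSF (sigma in pixels).
sigma = 2.0
x = np.arange(-64, 64)
esf = np.array([0.5 * (1 + math.erf(xi / (sigma * math.sqrt(2)))) for xi in x])

mtf = mtf_from_esf(esf)
# For a Gaussian blur the MTF decays monotonically from 1 at DC.
```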

4.3 Depth of focus in integral imaging

For a quantitative evaluation of the 3D effect, we measure the DOF of the integral imaging results in the foveated area by using letters on different depth planes. As illustrated in Fig. 9, the integral imaging display shows the letter ‘B’ on various depth planes in the foveal area, while the letters ‘A’ and ‘C’ are displayed by the peripheral display on the fixed depth plane of 1 meter. As the camera focus is adjusted to the depth distance of the letter ‘B’ in the foveal area, the letters ‘A’ and ‘C’ in the peripheral area are out of focus when the depth planes of the two areas are not matched. By examining the focus distances of the camera in each case, the DOF of our proposed system is determined to be approximately 50 cm. In other words, the proposed system offers VAC-free 3D images within a certain depth range. Also, the defocus in the peripheral area is not perceived by viewers watching the foveal area, because the foveal area is large enough, as mentioned in the previous section.

Fig. 9. Resulting images focused at different depth distances: (a) 0.8 m, (b) 1 m, and (c) 1.3 m.

4.4 Image shifting of the foveal area

As discussed above, the resolution-enhanced integral imaging display is obtained in the small central area. Observers experience the high-resolution integral imaging display when they are looking at the center of the display. However, the human visual system is dynamic: the gaze direction can scan over quite a large area through eyeball rotation. Therefore, for a near-eye display design, eye-tracking techniques need to be integrated to support a dynamic foveated display system. To enable the eye-tracking function, a fast-steering mirror (FSM) is employed to shift the high-resolution integral imaging area in real time. The diameter of the employed FSM is 2 inches and its response time is within 15 ms. In our design, we could steer the high-resolution integral imaging area over up to 40$^\circ $ using the FSM with its ${\pm} 10^\circ $ steering range.
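The 40° figure follows from the mirror-doubling rule: rotating a mirror by θ deflects the reflected beam by 2θ, so the ±10° mechanical range yields ±20° of optical deflection.

```python
mech_half_range = 10.0                    # FSM mechanical range [deg], i.e. +/-10 deg
optical_half_range = 2 * mech_half_range  # a mirror doubles the angular deflection
total_scan = 2 * optical_half_range       # +/-20 deg optical -> 40 deg total steering
print(f"total steerable range: {total_scan:.0f} deg")
```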

Figure 10 shows the experimental results of shifting the foveal area with the FSM. To observe the image shift clearly, the depth distances of the two displays are set differently. The camera focuses on the foveated integral imaging display to differentiate each area through the defocus effect. After simple image processing to match the image contents in the foveal area with the background images, the resolution-enhanced integral imaging area is successfully displaced by steering the FSM, as shown in Figs. 10(a) and 10(c). Therefore, with the aid of an eye-tracking device, the proposed system can support a dynamic foveated display system.

Fig. 10. The experimental results of the shifted foveal area: high-resolution integral imaging area on (a) the left, (b) the center, and (c) the right.

5. Conclusion

In this research, we have implemented a resolution-enhanced integral imaging NED system using foveated imaging. The proposed system consists of an integral imaging display, a peripheral display, and an optical combiner. The integral imaging display is implemented with a microdisplay coupled with a lens array to offer 3D images in the foveal area, and the peripheral display provides a wide FOV. The foveated integral imaging display achieves a spatial resolution of 17.13 ppd in a FOV of 19$^\circ $, exceeding the resolution of the original 2D mode. Considering that eye acuity is about 60 ppd in the small foveal area, the current resolution of 17.13 ppd may not be enough to reach eye acuity. However, the advantage of the proposed system is that the resolution of integral imaging can be enhanced by 4.5 times regardless of the display panel specifications. This means that the system's 17.13 ppd resolution can be further improved by employing a higher-resolution display panel, which is continuously being developed. Therefore, among various integral imaging systems, our proposed system has great potential to approach eye acuity through improved display technology in the future. Also, based on multiple simulation results, the two displays are combined under optimized conditions to minimize the eye disturbance from the accommodation conflict. The FOV of the entire system is 95$^\circ $ with the peripheral display. In our design, the FOV of the foveal area is determined to be in the range from 19$^\circ $ to 32$^\circ $ to overcome the resolution degradation of integral imaging without the accommodation conflict.

The proposed system can be easily adapted to various NED devices to effectively provide comfortable and natural 3D scenes. We now reflect on the remaining issues and the direction of our future work. The key role of integral imaging in near-eye displays is to eliminate the VAC of 3D images. However, this advantage is achievable only within the DOF range, so a VAC issue remains if we wish to display images beyond that range. In this work, the proposed system offers about a 50 cm DOF, which is quite short compared to other works. This is because the DOF range tends to be proportional to the FOV in integral imaging systems. The resulting depth of the integral imaging system is determined by the lens array. Therefore, the DOF range can be extended by employing a focus-tunable optical system, which will be our future work. In our previous work [15], we developed a depth-adaptive integral imaging system using a focus-tunable liquid lens array. By applying the focus-tunable lens array to the proposed design, we expect the DOF range of the system to be further extended. We look forward to contributing breakthroughs in NEDs with our work in the future.

Funding

Ministry of Science and ICT, South Korea; National Research Foundation of Korea.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF2020R1F1A107408912).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Carmigniani, B. Furht, M. Anisetti, P. Ceravolo, E. Damiani, and M. Ivkovic, “Augmented reality technologies, systems and applications,” Multimed. Tools Appl. 51(1), 341–377 (2011). [CrossRef]  

2. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013). [CrossRef]

3. B. Lee, “Three-dimensional displays, past and present,” Phys. Today 66(4), 36–41 (2013). [CrossRef]

4. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vision 8(3), 33 (2008). [CrossRef]  

5. C. M. Schor and T. K. Tsuetaki, “Fatigue of accommodation and vergence modifies their mutual interactions,” Invest. Ophthalmol. Visual Sci. 28(8), 1250–1259 (1987).

6. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014). [CrossRef]  

7. H. J. Yeom, H. J. Kim, S. B. Kim, H. Zhang, B. Li, Y. M. Ji, and J. H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015). [CrossRef]  

8. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

9. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]  

10. F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(3), 60 (2015). [CrossRef]

11. S. Ravikumar, K. Akeley, and M. S. Banks, “Creating effective focus cues in multi-plane 3D displays,” Opt. Express 19(21), 20940–20952 (2011). [CrossRef]

12. J. P. Rolland, M. W. Krueger, and A. Goon, “Multifocal planes head-mounted displays,” Appl. Opt. 39(19), 3209–3215 (2000). [CrossRef]  

13. X. Shen and B. Javidi, “Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens,” Appl. Opt. 57(7), B184–B189 (2018). [CrossRef]  

14. X. Shen, Y. J. Wang, H. S. Chen, X. Xiao, Y. H. Lin, and B. Javidi, “Extended depth-of-focus 3D micro integral imaging display using a bifocal liquid crystal lens,” Opt. Lett. 40(4), 538–541 (2015). [CrossRef]  

15. D. Shin, C. Kim, G. Koo, and Y. H. Won, “Depth plane adaptive integral imaging system using a vari-focal liquid lens array for realizing augmented reality,” Opt. Express 28(4), 5602–5616 (2020). [CrossRef]  

16. B. Lee, S. Jung, S. W. Min, and J. H. Park, “Three-dimensional display by use of integral photography with dynamically variable image planes,” Opt. Lett. 26(19), 1481–1482 (2001). [CrossRef]  

17. S. W. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. 42(20), 4186–4195 (2003). [CrossRef]  

18. D. Q. Pham, N. Kim, K. C. Kwon, J. H. Jung, K. Hong, B. Lee, and J. H. Park, “Depth enhancement of integral imaging by using polymer-dispersed liquid-crystal films and a dual-depth configuration,” Opt. Lett. 35(18), 3135–3137 (2010). [CrossRef]  

19. S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), L71–L74 (2005). [CrossRef]  

20. T. Zhan, Y. H. Lee, and S. T. Wu, “High-resolution additive light field near-eye display by switchable pancharatnam-berry phase lenses,” Opt. Express 26(4), 4863–4872 (2018). [CrossRef]  

21. J.-Y. Wu, P.-Y. Chou, K.-E. Peng, Y.-P. Huang, H.-H. Lo, C.-C. Chang, and F.-M. Chuang, “Resolution enhanced light field near eye display using e-shifting method with birefringent plate,” J. Soc. Inf. Disp. 26(5), 269–279 (2018). [CrossRef]  

22. Y. Chen, X. Wang, J. Zhang, S. Yu, Q. Zhang, and B. Guo, “Resolution improvement of integral imaging based on time multiplexing sub-pixel coding method on common display panel,” Opt. Express 22(15), 17897–17907 (2014). [CrossRef]  

23. Z. Qin, P.-Y. Chou, J.-Y. Wu, C.-T. Huang, and Y.-P. Huang, “Resolution-enhanced light field displays by recombining subpixels across elemental images,” Opt. Lett. 44(10), 2438–2441 (2019). [CrossRef]  

24. S. Lee, J. Cho, B. Lee, Y. Jo, C. Jang, D. Kim, and B. Lee, “Foveated retinal optimization for see-through near-eye multi-layer displays,” IEEE Access 6, 2170–2180 (2018). [CrossRef]  

25. G. Tan, Y.-H. Lee, T. Zhan, J. Yang, S. Liu, D. Zhao, and S.-T. Wu, “Foveated imaging for near-eye displays,” Opt. Express 26(19), 25076–25085 (2018). [CrossRef]  

26. C. Yoo, J. Xiong, S. Moon, D. Yoo, C.-K. Lee, S.-T. Wu, and B. Lee, “Foveated display system based on a doublet geometric phase lens,” Opt. Express 28(16), 23690–23702 (2020). [CrossRef]  

27. F. C. Huang, D. P. Luebke, and G. Wetzstein, “The light field stereoscope,” in SIGGRAPH Emerging Technologies (2015), pp. 24.

28. X. Wang and H. Hua, “Theoretical analysis for integral imaging performance based on microscanning of a microlens array,” Opt. Lett. 33(5), 449–451 (2008). [CrossRef]  

29. Z. E. Ashari, Z. Kavehvash, and K. Mehrany, “Diffraction influence on the field of view and resolution of three-dimensional integral imaging,” J. Disp. Technol. 10(7), 553–559 (2014). [CrossRef]  

30. M. Geng, “Oculus research: optics & displays for future VR: from perceptual testbeds to building blocks,” Proc. SPIE 11764, 117640W (2021). [CrossRef]  

31. K. Yin, Z. He, Y. Li, and S. T. Wu, “Foveated imaging by polarization multiplexing for compact near-eye displays,” J. Soc. Inf. Disp. 30(1), 1–2 (2022). [CrossRef]  

32. P. D. Burns, “Slanted-edge MTF for digital camera and scanner analysis,” Proc. IS&T 2000 PICS Conference, 135–138 (2000).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (10)

Fig. 1. (a) The schematic structure of the integral imaging system in virtual mode. (b) Tradeoff relationship between the spatial resolution and the DOF in an integral imaging system.
Fig. 2. Principle of the proposed foveated integral imaging NED system.
Fig. 3. Spatial resolution of the foveated integral imaging display according to the FOV.
Fig. 4. Example image of the test with fixation target and stimulus target, and the average threshold for the accommodation conflict across all observers.
Fig. 5. The tradeoff between FOV and spatial resolution under the conflict condition.
Fig. 6. (a) System configuration, and the unfolded optical paths of (b) the integral imaging display and (c) the peripheral display.
Fig. 7. Resolution comparison between (a) the conventional integral imaging and (b) the resolution-enhanced integral imaging using the foveated imaging system.
Fig. 8. The resulting USAF chart images of the foveated integral imaging display and the normal integral imaging display (left). MTF measurement results with the slanted-edge method (right).
Fig. 9. Resulting images focused at different depth distances: (a) 0.8 m, (b) 1 m, and (c) 1.3 m.
Fig. 10. The experimental results of the shifted foveal area: high-resolution integral imaging area on (a) the left, (b) the center, and (c) the right.

Equations (2)

Equations on this page are rendered with MathJax.

$$R_s = \frac{1}{P_M} = \frac{f-g}{f P_d}$$
$$\Delta z = \frac{2 P_M l}{D} = \frac{2 f^2 g P_d}{D (f-g)^2}$$
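The two equations above couple spatial resolution and depth range through the magnified pixel pitch $P_M$: enlarging the elemental image raises $\Delta z$ but lowers $R_s$. A minimal numeric sketch of this tradeoff, with all parameter values assumed purely for illustration (not taken from the paper):

```python
# Hypothetical integral-imaging parameters (virtual mode, g < f).
f = 5.0e-3    # elemental lens focal length [m]
g = 4.0e-3    # gap between display panel and lens array [m]
P_d = 8.0e-6  # display pixel pitch [m]
D = 1.0e-3    # elemental lens aperture (lens pitch) [m]

# Central depth plane of the reconstructed image (virtual mode).
l = f * g / (f - g)

# Magnified pixel pitch at the central depth plane.
P_M = f * P_d / (f - g)

# Eq. (1): spatial resolution is the reciprocal of P_M.
R_s = 1.0 / P_M  # equals (f - g) / (f * P_d)

# Eq. (2): marginal depth range around the central depth plane.
dz = 2.0 * P_M * l / D  # equals 2 f^2 g P_d / (D (f - g)^2)

print(f"central depth plane l  = {l * 100:.2f} cm")
print(f"spatial resolution R_s = {R_s / 1e3:.1f} lines/mm")
print(f"depth range dz         = {dz * 1e3:.2f} mm")
```

With these assumed numbers, pushing the gap $g$ closer to $f$ increases both $P_M$ and $\Delta z$, which makes the resolution penalty of a deep reconstruction volume explicit.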