3D light-field display with an increased viewing angle and optimized viewpoint distribution based on a ladder compound lenticular lens unit

Abstract

Three-dimensional (3D) light-field displays (LFDs) suffer from a narrow viewing angle, limited depth range, and low spatial information capacity, which limit their broader application. Because the number of pixels used to construct 3D spatial information is limited, increasing the viewing angle reduces the viewpoint density, which degrades the 3D performance. A solution based on a holographic functional screen (HFS) and a ladder-compound lenticular lens unit (LC-LLU) is proposed to increase the viewing angle while optimizing the viewpoint utilization. The LC-LLU and HFS are used to create 160 non-uniformly distributed viewpoints with low crosstalk, which increases the viewpoint density in the middle viewing zone and provides clear monocular depth cues. The corresponding coding method is presented as well. The optimized compound lenticular lens array balances aberration suppression against display quality. Simulations and experiments show that the proposed 3D LFD can present natural 3D images with correct perspective and occlusion relationships within a 65° viewing angle.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Light-field displays (LFDs) have attracted considerable attention owing to the promising prospects of three-dimensional (3D) displays [1–4]. In LFDs, 3D spatial information can be redistributed optically, and the reconstructed 3D light-field information can provide natural depth cues with dense viewpoints. Researchers are committed to improving LFD technology to accurately reconstruct glasses-free, large-scale, and true-color 3D content [5]. Although LFDs possess the manifold merits of integral imaging (InIm) displays [6], several problems remain to be solved, such as the restricted viewing angle, low spatial information capacity (i.e., the viewpoint number), and limited depth range, before they can adapt to various application scenarios.

A wide viewing angle is a prerequisite for multi-viewer watching while providing as much spatial information of a 3D scene as possible. Some general and inspirational techniques for increasing the viewing angle of LFDs have been proposed over the past few years, such as illumination optimization [7–9] and head/eye-tracking [10]. However, because multiple light sources are used in the illumination optimization method, it is difficult to ensure smooth binocular parallax or uniform light intensity in the LFD system. Eye-tracking-based LFDs have difficulty accommodating multiple viewers. Lee et al. developed a 3D display with an increased viewing angle based on lens switching [11]; however, the mechanical movement of the lightproof mask is too complicated in practice. The multi-projection-based LFD is a common approach to achieving a wide viewing angle [12–14], and smooth motion parallax and high definition can be achieved with high-density projectors. However, a prototype with many projectors is complex, requires considerable calibration effort, and must transmit massive amounts of data. In our previous work, a composite lens array with a wide pitch was applied to an LFD system to increase the viewing angle and viewpoint number [15]. In that system, a holographic functional screen (HFS) [16,17] with diffusion characteristics was used to reduce the resolution degradation caused by the increased lens pitch. A holographic diffuser with optical modulation has been shown to produce a continuous light field [18].

For the previously mentioned InIm-based LFDs, a complex set of tradeoffs must be considered. Because the number of pixels used to construct spatial information is limited, when the number of viewpoints is fixed, increasing the viewing angle decreases the density of viewpoints, which reduces the depth range [2,19,20].

To increase the limited depth-of-field (DOF) of InIm-based LFDs, Yun et al. utilized a non-uniform camera array composed of large f-number (i.e., small aperture size for a fixed focal length) and small f-number cameras to pick up elemental images (EIs) with various parameters [21]. The non-uniform parameters reduced the diffraction caused by the small apertures, and the small Airy disk of the large apertures enhanced the visual quality of every 3D object, so both the lateral resolution and the depth range were improved. Shateri et al. introduced a non-uniform lens array in which different lenses have different focal lengths to improve the DOF of InIm [22]. Since a single lens with a fixed focal length limits the DOF, the overall DOF region of the non-uniform lens array is wider than that of a lens array consisting of uniform lenses. In these two systems, although more intermediate viewpoints are reconstructed at different depth planes using limited EIs, the utilization rate of vertical viewpoints is inefficient for full-parallax LFDs. In our previous work, a large LFD system with only horizontal parallax was constructed to balance the tradeoff between spatial resolution and viewpoint number [23]; the system employed a spaced micro-pinhole unit array and an HFS to enlarge the viewing angle and construct high-density viewpoints. However, the crosstalk is evident, and the display luminance is severely reduced.

The above studies focused on improving the utilization of viewpoints by optimizing the light-control structure. Moreover, observers are accustomed to staying in the middle of the viewing zone when watching a large-scale 3D display, so the central area of the viewing zone should be filled with more viewpoint information for a better viewing experience.

Here, a method for increasing the viewing angle and optimizing the viewpoint distribution of a 3D LFD based on a ladder-compound lenticular lens unit (LC-LLU) is presented. To verify the feasibility of the proposed method, a 54-inch display prototype is demonstrated in which 160 viewpoints are non-uniformly redistributed within a 65° viewing angle. With the specially designed LC-LLU, the maximal crosstalk of the system is 7.83%, which is below the typical level of commercial 3D displays (9.6% crosstalk) [24]. In addition, an HFS with specific modulation characteristics is adopted to recompose the light distribution. Compared with conventional 3D displays, the proposed display system improves the viewing angle and the maximal off-screen depth in the middle of the viewing zone. In the experiments, reconstructed fatigue-free 3D images can be perceived with a precise depth cue of 300 mm in the middle of the viewing angle range (±20°) and a depth of 280 mm in the remaining range.

2. Experimental configuration

2.1 Principle of the light-field display system

The basic structure of the proposed 3D display system mainly comprises a light-emitting diode (LED) panel, an LC-LLU array, and an HFS, as shown in Fig. 1(a). A 54-inch LED panel with 1280 × 720 resolution is used to display the EIs, which provide multiple viewpoint perspectives. The LED panel is installed in the focal plane of the LC-LLU array, and the LC-LLU is a composite light-controlling optical component based on a lenticular lens array and a rectangular aperture array. The size of each rectangular aperture is the same as the bottom area of a single lens. The LC-LLU and its corresponding EI unit can be considered an independent LFD unit. Thin masks are installed on both sides of each LFD unit to prevent the beams from scattering into adjacent units and causing unnecessary interference.

Fig. 1. (a) Schematic diagram of the proposed LFD. (b) Light-controlling principle of an LFD unit. Distribution of viewpoints for the human eye (c) in the middle of the viewing zone and (d) at the edge of the viewing zone.

The light-controlling principle of an LFD unit is illustrated in Fig. 1(b). The light rays are bundled in specific directions, and the intensities generated by the pixels on different rows of the EI unit present a ladder distribution after passing through the LC-LLU array. Owing to the modulation of the HFS, these light beams with the ladder distribution are diffused vertically at large angles. The viewpoint information from different rows is recomposed and interpolated together in the horizontal direction. Consequently, the viewpoint density in the middle viewing zone is higher than that at the edges because of the cross arrangement. In other words, the pixels in the vertical direction are modulated to construct viewpoints horizontally, so both the horizontal viewpoint number and the resolution are improved. Note that the viewpoint distribution is determined by the design of the HFS and the LC-LLU. Schematic diagrams of the intensity distributions in the middle and at the edge of the viewing zone are shown in Figs. 1(c) and 1(d), respectively.

2.2 Modulation and design of the holographic functional screen

The HFS is holographically printed with speckle patterns onto a suitable sensitive material; the operating principle and fabrication method were introduced in [16,18]. The diffraction design of the HFS is globally uniform to ensure that the diffusion angles of light rays incident from any direction are as close as possible. The specific diffusion angle is determined by the shape and size of the speckles, which are controlled by driving the shutter and the diffusion plate with a computer [17]. In the proposed display system, each incident core light ray is expanded to a certain angle in the horizontal direction through the necessary optical conversion provided by the HFS, as shown in Fig. 2(a). Subsequently, the discrete viewpoint information in the light field is distributed continuously such that a natural and realistic 3D sense is achieved. Moreover, the visual blind spots caused by the lightproof part of the LC-LLU are eliminated by the HFS with a specific horizontal diffusion angle. f denotes the focal length of a single lens in the LC-LLU array. According to the geometric relationship in Fig. 2(b), the horizontal diffusion angle of the HFS can be expressed as follows:

$$\phi_x = 2\arctan\left( \frac{M_l + a}{2D} \right), $$
where D denotes the distance between the LC-LLU array and the HFS, $M_l$ is the width of the LC-LLU, $a$ represents the horizontal pitch of a lenticular lens, and $W_p$ denotes the pixel pitch of the LED panel. Please refer to Supplement 1 for derivation details.
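
Equation (1) is simple enough to check numerically. The following is a minimal sketch assuming illustrative values for $M_l$ and $a$; only D = 250 mm matches the prototype parameter quoted in Section 3:

```python
# Minimal numerical check of Eq. (1); M_l and a below are illustrative
# assumptions, while D = 250 mm matches the prototype value in Section 3.
import math

def horizontal_diffusion_angle(M_l_mm, a_mm, D_mm):
    """Horizontal diffusion angle of the HFS in degrees, per Eq. (1)."""
    return 2.0 * math.degrees(math.atan((M_l_mm + a_mm) / (2.0 * D_mm)))

print(horizontal_diffusion_angle(M_l_mm=8.0, a_mm=2.0, D_mm=250.0))  # ~2.3 deg
```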

Fig. 2. (a) Schematic of the HFS, which recomposes light. (b) Top view of the light reconstitution process.

To ensure that enough pixels from different rows are integrated together for observers within a large vertical viewing angle, the vertical diffusion angle $\phi_y$ must be as large as possible.

From the above analysis, the HFS plays an important role in expanding each light ray and letting the rays connect or overlap mutually, and the design of the HFS is closely related to the structure of the LC-LLU. The next section describes the design of the LC-LLU in detail.

2.3 Optical design, fabrication and advantages of the LC-LLU

2.3.1 Optical design of the LC-LLU

For simplicity, the operating principle of each LC-LLU is presented based on four lenticular lenses arranged in a ladder pattern; the four lenses are $l_1$, $l_2$, $l_3$, and $l_4$ in Fig. 3(a). The corresponding EI with resolution M × N is illustrated in Fig. 3(b), and b is the height of a lenticular lens. The distance between adjacent lenses in the horizontal direction is $c$.

Fig. 3. Structure diagram of (a) LC-LLU and (b) EI unit. Light ray arrangements of (c) proposed LFD unit and (d) conventional 3D display based on parallax barrier.

The light ray arrangement of the proposed LFD unit is shown in Fig. 3(c). The horizontal pitch ${M_{l}}$ of the LC-LLU can be deduced according to the geometric relationship:

$$M_l = \frac{LMW_p}{L + f}, $$
where L represents the observation distance; the derivation details of Eq. (2) can be found in Supplement 1.

The total viewing angle $\theta $ is an important characteristic of the proposed LFD, and it can be calculated by the following equation:

$$\theta = 2\arctan\left( \frac{S_{total}}{2L} \right), $$
where $S_{total}$ is the total width of all the pixel information seen on the viewing plane, which can be expressed as
$$S_{total} = S_2 + w_2(N - 1), $$
where $S_2$ is the width of each row's viewing zone on the observing plane, formed by the pixels in each row of the EI after diffusion, and $w_2$ is the offset distance between adjacent rows of viewing zones. $S_2$ and $w_2$ can be respectively expressed as
$$\left\{ \begin{array}{l} S_2 = S_1 + \frac{2L\left( 2f\tan\frac{\phi_x}{2} + MW_p \right)}{2f - MW_p\tan\frac{\phi_x}{2}}\\ w_2 = w_1 + L\left[ \tan\left( \arctan\left( \frac{M_l + 2c}{2f} \right) + \frac{\phi_x}{2} \right) - \tan\left( \arctan\left( \frac{M_l}{2f} \right) + \frac{\phi_x}{2} \right) \right] \end{array} \right., $$
where $S_1$ is the width of the pixels from a row of the EI projected on the HFS, and $w_1$ is the offset distance between adjacent rows of viewing zones on the HFS. Please refer to Supplement 1 for derivation details. $S_1$ and $w_1$ can be expressed as
$$\left\{ \begin{array}{l} S_1 = M\Delta s_{before}\\ w_1 = c(f + D)/f \end{array} \right., $$
where $\Delta s_{before}$ is the width of each pixel projected on the HFS, and it can be expressed as
$$\Delta s_{before} = DW_p/f. $$

As shown in Fig. 3(c), $\Delta s_{after}$ is the width of each viewpoint on the observing plane, and it can be expressed as

$$\Delta s_{after} = \frac{DW_p}{f} + \frac{2L\left( \frac{W_p}{2f} + \tan\frac{\phi_x}{2} \right)}{1 - \frac{W_p}{2f}\tan\frac{\phi_x}{2}}. $$

Substituting Eqs. (5)–(7) into Eq. (4) yields

$$S_{total} = \frac{DMW_p}{f} + \frac{2L\left( 2f\tan\frac{\phi_x}{2} + MW_p \right)}{2f - MW_p\tan\frac{\phi_x}{2}} + w_2(N - 1). $$

Please refer to Supplement 1 for more details about Eq. (9).
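
To make the geometric chain of Eqs. (2)–(9) concrete, the following is a minimal numerical sketch, continuing the diffusion angle estimated above (~2.3°). All parameter values are illustrative assumptions except f = 20 mm and D = 250 mm, which match the prototype parameters quoted in Section 3, so the printed output is only indicative:

```python
# A numerical sketch of Eqs. (2)-(9); parameter values are illustrative
# assumptions except f = 20 mm and D = 250 mm (Section 3).
import math

def total_viewing_angle(M, N, W_p, f, D, L, c, phi_x):
    """Return (S_total, theta_deg) from the geometric chain of Eqs. (2)-(9)."""
    t = math.tan(phi_x / 2.0)
    M_l = L * M * W_p / (L + f)                                         # Eq. (2)
    ds_before = D * W_p / f                                             # Eq. (7)
    S_1, w_1 = M * ds_before, c * (f + D) / f                           # Eq. (6)
    S_2 = S_1 + 2 * L * (2 * f * t + M * W_p) / (2 * f - M * W_p * t)   # Eq. (5)
    w_2 = w_1 + L * (math.tan(math.atan((M_l + 2 * c) / (2 * f)) + phi_x / 2)
                     - math.tan(math.atan(M_l / (2 * f)) + phi_x / 2))  # Eq. (5)
    S_total = S_2 + w_2 * (N - 1)                                       # Eqs. (4), (9)
    theta = 2 * math.degrees(math.atan(S_total / (2 * L)))              # Eq. (3)
    return S_total, theta

S_total, theta = total_viewing_angle(M=9, N=4, W_p=0.9, f=20.0, D=250.0,
                                     L=2000.0, c=2.0, phi_x=math.radians(2.3))
print("S_total = %.0f mm, theta = %.1f deg" % (S_total, theta))  # ~45.6 deg here
```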

In a traditional 3D display, as shown in Fig. 3(d), a parallax barrier unit projects the covered pixels onto the observing plane to construct viewpoints in the same viewing area. That is, the viewpoint information formed by different rows of pixels in each EI unit overlaps exactly; the width of the viewing zone is $S^{\prime}$, which is expressed as

$$S^{\prime} = DMW_p/f. $$

Then, the viewing angle of the traditional 3D system can be described as

$$\theta^{\prime} = 2\arctan\left( \frac{MW_p}{2f} \right). $$

In general, comparing Eq. (9) with Eq. (10) shows that the proposed method optimizes the viewpoint distribution and expands the viewing zone, so the whole viewing angle is increased.

2.3.2 Optical optimization for the LC-LLU

Unlike the pitch in a traditional InIm display, the lens pitch in the proposed LC-LLU is not designed to be as large as possible to cover more pixels. The smaller $a$ is, the less crosstalk the 3D image has, but more brightness is sacrificed. When the pitch $a$ increases, the light-controlling ability of the LC-LLU at large viewing angles improves, but the convergence accuracy of the beam in large-angle directions is significantly reduced owing to lens aberration, which leads to a tradeoff in image quality.

To suppress aberration, optical optimization is implemented in the design of the compound lenticular lens array. In addition, the light rays from pixels covered by a single lens must be evenly arranged at predesigned positions in space. The compound lenticular lens array is designed with two aspheric surfaces and two different refractive indices, and the aspheric model uses the base radius of curvature and conic constant. The surface sag is as follows:

$$z = \frac{\upsilon R^2}{1 + \sqrt{1 - (1 + \kappa)\upsilon^2 R^2}} + \alpha_2 R^2 + \alpha_4 R^4 + \alpha_6 R^6 + \ldots, $$
where $\upsilon$ is the vertex curvature, R is the radial coordinate, $\kappa$ is the conic coefficient, and $\alpha_2$, $\alpha_4$, and $\alpha_6$ are the aspheric coefficients. The damped least-squares method is used to design the aspheric surfaces to satisfy the previously mentioned conditions. The corresponding parameters and optimized structure are shown in Fig. 4(a).
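
The sag profile of Eq. (12) is straightforward to evaluate numerically. The sketch below assumes placeholder values for the curvature, conic constant, and polynomial coefficients; the optimized design values are those reported in Fig. 4(a):

```python
# Evaluating the aspheric sag of Eq. (12). The curvature, conic constant, and
# polynomial coefficients are placeholders, not the optimized design values.
import math

def sag(R, upsilon, kappa, alphas):
    """Surface sag z(R): conic base term plus even-order polynomial terms."""
    z = upsilon * R**2 / (1.0 + math.sqrt(1.0 - (1.0 + kappa) * upsilon**2 * R**2))
    for i, a in enumerate(alphas):       # alphas = (alpha_2, alpha_4, alpha_6, ...)
        z += a * R**(2 * (i + 1))
    return z

print(sag(R=2.0, upsilon=0.05, kappa=-0.8, alphas=(1e-4, -2e-6, 3e-8)))  # ~0.1004
```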

Fig. 4. (a) Optimized structure and corresponding parameters of compound lenticular lens. (b) Comparison of spot diagrams of optimized compound lenticular lens and general single lens. (c) Comparison of image reconstruction results of an identical 3D scene from two optical structures.

The spot diagram of the compound aspheric lenticular lens of the LC-LLU and the corresponding root-mean-square (RMS) radius are compared with those of a standard lens with identical focal length and diameter, as shown in Fig. 4(b). The spot diagram of the optimized compound aspheric lenticular lens is evidently improved: the largest RMS radius at the half-field angle (40°) is only 69.803 μm, whereas that of the standard lens is 367.106 μm. The details of the 3D images (the fan of an engine is shown) produced by the optimized compound lens array are much clearer than those of the standard single lens array, as shown in Fig. 4(c). In general, the data and the reconstructed 3D images (captured with a Canon 60D camera) indicate that the image quality is significantly improved by the compound aspheric lens array.

2.3.3 Fabrication of the LC-LLU

The fabrication process of the LC-LLU can be divided into two parts: fabrication of the lens array molds and fabrication of the LC-LLU itself [25,26].

Figure 5 shows the fabrication process of the molds. Step-1: an ultra-precision computer-numerical-control (CNC) milling machine is used to make two flat metal molds; the mold parameters are shown in Fig. 5. Step-2: a PET film coated with liquid UV-curable material is attached to the flat metal molds. After plate imprinting and UV curing, hardened UV molds with a convex micro-lens array are obtained by peeling them off the flat metal molds. Step-3: for the subsequent demolding process, the surfaces of the UV molds are treated (UV exposure, 3–5 min, 2000 W). Step-4: based on the obtained UV molds, UV molds with the designed surface parameters are achieved after UV transfer and surface treatment. These UV molds are denoted mold-A and mold-B.

Fig. 5. The fabrication process of the molds.

Figure 6 shows the fabrication process of the LC-LLU. Step-1: mold-A and mold-B are attached to a flat glass substrate, and double-sided adhesive is applied around the glass substrate. Step-2: mold-A and another new flat glass substrate are aligned using a high-precision CCD automatic alignment device equipped with a nano grating scale (alignment accuracy: ±0.5 μm). The double-sided adhesive prevents mold deviation. Then, liquid UV-curable material (photopolymer refractive index N1 = 1.41) is injected into the gap between mold-A and the new flat glass substrate. After UV curing and demolding, the first-layer lens array of the LC-LLU is obtained. Step-3: following the above method, the first-layer lens array and mold-B are fixed. Liquid UV-curable material (refractive index N2 = 1.61) is injected into the gap between the first lens array and mold-B. After UV curing and demolding, the designed LC-LLU is obtained; the manufactured LC-LLU is shown in Fig. 7.

Fig. 6. The fabrication process of LC-LLU.

Fig. 7. The prototype of the proposed system and the manufactured LC-LLU.

2.3.4 Advantages of the LC-LLU for constructing a non-uniform viewpoint distribution

As shown in Section 2.3.1, the LC-LLU can optimize the viewpoint distribution. The viewpoint density of the proposed LFD decreases gradually from the middle to the edge of the viewing zone, and the number of gradient layers is identical to the row number N of the EI unit, as shown in Fig. 8(a). The viewpoint density from the middle to the edge of the viewing zone is described by Eq. (13):

$$\rho(s) = \frac{N - \lfloor s/\Delta s_{after} \rfloor}{\Delta s_{after}}, $$
where s denotes the absolute distance between the observer and the center of the viewing zone, and the floor symbol "$\lfloor \cdot \rfloor$" denotes the largest integer that does not exceed its argument.
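
The ladder profile of Eq. (13) can be tabulated directly. In the sketch below, N = 4 matches the example EI unit of Section 2.3.1, while the viewpoint width is an illustrative assumption:

```python
# Tabulating the ladder density profile of Eq. (13). N = 4 matches the example
# EI unit of Section 2.3.1; the viewpoint width is an illustrative assumption.
import math

def viewpoint_density(s_mm, N, ds_after_mm):
    """Viewpoints per mm at absolute offset s from the center of the viewing zone."""
    return (N - math.floor(s_mm / ds_after_mm)) / ds_after_mm

N, ds_after = 4, 180.0  # hypothetical: 4 ladder rows, 180 mm viewpoint width
for s in (0.0, 180.0, 360.0, 540.0):
    print("s = %5.0f mm -> rho = %.4f /mm" % (s, viewpoint_density(s, N, ds_after)))
```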

Fig. 8. (a) Distribution of viewpoints. (b) Depth range analysis.

Based on the modulation transfer function of a single slit [20], the maximal off-screen depth $z_s$ is the clear depth range of a reconstructed 3D image between the display panel and the front marginal reconstruction plane:

$$z_s = \frac{L}{W_s/W_i + 1}, $$
where $W_i$ represents the minimal spot size (i.e., the voxel width), which equals the horizontal pitch $M_l$ of the LC-LLU in the proposed system, and $W_s$ is determined by the ratio between the width of the viewing zone $S_{total}$ and the number of viewpoints (i.e., the reciprocal of the viewpoint density). Thus, the maximal off-screen depth can be rewritten as follows:
$$z_s = \frac{L}{\frac{1}{\rho(s)M_l} + 1}. $$
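
A quick numerical reading of Eqs. (14)–(15), continuing the hypothetical values used above, illustrates how the denser central viewpoints translate into a larger off-screen depth:

```python
# Off-screen depth from Eqs. (14)-(15): the denser central viewpoints give a
# larger depth. The rho values continue the hypothetical numbers printed above.
def max_offscreen_depth(L_mm, rho_per_mm, M_l_mm):
    return L_mm / (1.0 / (rho_per_mm * M_l_mm) + 1.0)

L, M_l = 2000.0, 8.0  # hypothetical viewing distance and voxel width (mm)
print(max_offscreen_depth(L, 0.0222, M_l))  # center of viewing zone: ~302 mm
print(max_offscreen_depth(L, 0.0056, M_l))  # edge of viewing zone:   ~86 mm
```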

To analyze the off-screen depth more thoroughly, we compare the viewpoint information seen by the observer at the middle position A with that seen at the edge position B of the observation plane. As shown in Fig. 8(a), at position A, the distance between the observer and the center of the viewing zone is $s_a$, and the left eye simultaneously sees views r+5, r+4, and r+3, while the right eye simultaneously captures views r+2, r+1, and r, where r denotes an arbitrary viewpoint in the viewing zone. As shown in Fig. 8(b), the maximal parallax is obtained when the left and right eyes see views r+5 and r, respectively. The equivalent average distance between viewpoints is $W_s^{\prime}$, which is related to the viewpoint density, and the maximal off-screen depth is $Z_2$.

When the left and right eyes at position B can only see views 1 and 2, respectively, which are provided by adjacent pixels in the same EI row, the distance between viewpoints is $W_s$, and the maximal off-screen depth is $Z_1$. The scene seen at position B is equivalent to the left and right eyes at position A seeing views r+3 and r, respectively. To facilitate the comparison of the maximal off-screen depths, the scenes originally observed at different positions are merged into one picture in Fig. 8(b). In both cases, $W_s^{\prime} < W_s$, and the minimal spot sizes $W_i$ are equal to $M_l$; therefore, the maximal off-screen depth seen at position A is larger than that seen at the edge position B (i.e., $Z_1 < Z_2$).

In general, the proposed method of constructing dense viewpoints makes the maximal off-screen depth in the middle of the viewing zone exceed that on both sides, which improves the stereo visual experience in the middle viewing zone.

2.4 Image mapping method for light-field reconstruction

To obtain content with the correct viewpoint arrangement, the mapping relationship between the rendered synthetic image (SI) and the parallax sequence images (PSIs) captured by a set of off-axis virtual camera arrays (CAs) is required. The cameras of the CA are classified into M groups, each containing N cameras at different positions, as shown in Fig. 9(a). For a correct perspective relationship, the CA should be distributed in a ladder form matching the viewpoint distribution. The offset distance between adjacent groups of cameras is set to $w_2$, and the distance between cameras equals the distance between viewpoints $\Delta s_{after}$. Our group adopted a 3D creation suite (Blender 2.76b) to set up the camera array in the pickup process; the PSIs were obtained with the software's default render engine, which performs backward ray tracing [27,28].
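
As a rough illustration of such a pickup setup, the sketch below places a ladder-arranged M × N camera array with Blender's Python API (bpy). The spacing values, scene coordinates, and naming scheme are hypothetical, not the paper's settings:

```python
# A hedged sketch of a ladder camera-array setup in Blender's Python API (bpy).
# Spacing values and scene coordinates are hypothetical, not the paper's settings.
import bpy

M, N = 9, 4        # camera groups and cameras per group, as in the Fig. 9 example
ds_after = 0.06    # camera spacing within a group (scene units)
w2 = 0.24          # ladder offset between adjacent groups (scene units)
cam_dist = 2.0     # distance from the camera plane to the scene origin

for m in range(M):
    for n in range(N):
        # Each group of N cameras is shifted by w2, forming the ladder arrangement.
        x = (m - (M - 1) / 2.0) * w2 + (n - (N - 1) / 2.0) * ds_after
        bpy.ops.object.camera_add(location=(x, -cam_dist, 0.0))
        bpy.context.object.name = "cam_%02d_%02d" % (m + 1, n + 1)
```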

Fig. 9. (a) Light field pickup. (b) Mapping relationship of coding for rendering SI.

To simplify the explanation of the coding method, each rendered EI unit with 9 × 4 resolution (M = 9, N = 4) is presented as an example in Fig. 9(b). To maximize the utilization of vertical pixels for constructing more viewpoint information, the height of the lenticular lens must match the height of one pixel as closely as possible. Moreover, the resolution of each PSI is identical to that of the SI (or the LED panel) during viewpoint construction. The number (T) marked on each pixel of the EI unit in Fig. 9(b) represents the sequence number of the PSI; it can also be considered the virtual camera number in the CA, because each PSI is picked up by its corresponding virtual camera.

Based on the origin $O_p$ of the index coordinate system, the position coordinate (y-th row, x-th column) of an arbitrary pixel of the EI is represented by $O(y_T, x_T)$, where T is the sequence number of the PSI (T = 1, 2, …, M×N), and T can be expressed as

$$T = \sum\limits_{y = 1}^N (\Delta n\,y - 1) + y(x - 1) + (T_0 - 1), $$
where $T_0$ is the sequence number of the PSI that provides the viewpoint information filled into the upper-left corner of the EI unit. $\Delta n$ indicates the offset of the first viewpoint number between the second row and the first row; it is related to c of the LC-LLU and can be expressed as $\Delta n = \lfloor c/W_p \rfloor$. The pixel $O(y_T, x_T)$ in an EI unit is extracted from the corresponding pixel $P_{m,n}(i_T, j_T)$ (i-th row, j-th column) of the T-th PSI (m = 1, 2, …, M; n = 1, 2, …, N). The T-th PSI is captured by the n-th camera of the m-th group in the CA. The mapping relationship between pixels of the EIs and the PSIs can be derived as follows:
$$O(y_T, x_T) = P_{m,n}(i_T, j_T), $$
where
$$\left\{ \begin{array}{l} m = M - \bmod\left( {i_T}/{M} \right)\\ n = \bmod\left( {j_T}/{N} \right) \end{array} \right., $$
where "mod" denotes the modulo operator.
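
The sketch below illustrates this coding step in code form. It transcribes Eq. (16) as printed and fills an SI pixel-by-pixel from stand-in PSIs; the 1-based index conventions and the wrap-around of T are assumptions made for illustration, not the authors' exact implementation:

```python
# A hedged sketch of assembling the synthetic image (SI) from parallax sequence
# images (PSIs). psi_index() transcribes Eq. (16) as printed; the 1-based index
# conventions and the wrap-around of T are illustrative assumptions.
import numpy as np

def psi_index(x, y, N, delta_n, T0):
    """Sequence number T of the source PSI for EI pixel (row y, column x)."""
    return sum(delta_n * yy - 1 for yy in range(1, N + 1)) + y * (x - 1) + (T0 - 1)

def render_si(psis, N, ei_w, ei_h, delta_n=1, T0=1):
    """Fill the SI pixel-by-pixel from the PSIs, following Eq. (17)."""
    si = np.zeros_like(psis[0])
    H, W = si.shape
    for i in range(H):
        for j in range(W):
            x, y = j % ei_w + 1, i % ei_h + 1       # position inside the EI unit
            T = psi_index(x, y, N, delta_n, T0) % len(psis)
            si[i, j] = psis[T][i, j]                # O(y_T, x_T) = P_{m,n}(i_T, j_T)
    return si

psis = [np.full((8, 36), t, dtype=np.uint8) for t in range(36)]  # 36 stand-in PSIs
si = render_si(psis, N=4, ei_w=9, ei_h=4)   # 9 x 4 EI units, as in Fig. 9(b)
```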

3. Experimental results and analysis

To verify the feasibility of the presented display method with optimized viewpoint distribution, relevant experiments and analyses are performed. The prototype, based on a 54-inch LED panel, is set up with 160 viewpoint perspectives within a ±32.5° viewing angle. The characteristics of the display prototype are listed in Table 1.

Table 1. Parameters of experiments

To facilitate the encapsulation of the light-controlling elements and expand the application field of the proposed system, the prototype is designed to be as thin as possible. The focal length f of the compound lenticular lens is 20 mm, and the distance D from the HFS to the LC-LLU is 250 mm.

To evaluate the viewpoint arrangement of the viewing zone, the luminance distributions at the 0° and 32.5° viewing angles are measured. The luminance distribution of the viewpoints is measured with a CCD camera with a focal length of 35 mm placed at an observation distance of 2000 mm. From the measured normalized results in Figs. 10(a) and 10(b), the maximal crosstalk values measured at the 0° and 32.5° viewing angles are 7.83% and 3.52%, respectively. These values indicate that the crosstalk of the proposed prototype is below the level of commercial glasses-type 3D displays (less than 9.6%) [24]. Compared with that in Fig. 10(b), the fluctuation of the light intensity in Fig. 10(a) is smaller, and the 3D effect seen at the center of the observation plane provides smoother motion parallax.
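
For readers reproducing such measurements, the following sketch estimates crosstalk from two angular luminance profiles. The definition used here (leakage of the adjacent view at the target view's peak) is one common convention and an assumption on our part; the profile data are synthetic:

```python
# A hedged sketch of estimating crosstalk from two angular luminance profiles.
# The definition (adjacent-view leakage at the target view's peak, divided by the
# in-view peak) is one common convention and an assumption; data are synthetic.
import numpy as np

def crosstalk(target, neighbor):
    """Relative leakage of the neighboring view at the target view's peak angle."""
    k = int(np.argmax(target))
    return float(neighbor[k] / target[k])

angles = np.linspace(-3.0, 3.0, 601)                 # degrees, synthetic scan
target = np.exp(-((angles - 0.0) / 0.8) ** 2)        # target-view luminance lobe
neighbor = np.exp(-((angles - 1.6) / 0.8) ** 2)      # adjacent-view luminance lobe
print("crosstalk ~ %.2f%%" % (100 * crosstalk(target, neighbor)))
```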

Fig. 10. (a) Luminance and crosstalk distributions in the center of observation plane. (b) Luminance and crosstalk distributions at the edge of observation plane.

As the viewpoint density increases from the edge to the center of the viewing zone, the discrepancy between adjacent viewpoint information becomes smaller, inter-perspective aliasing becomes weaker, and the 3D image quality improves. To evaluate the quality of the constructed images seen at different locations, the structural similarity (SSIM) index [29,30], which takes an uncompressed, distortion-free image as the reference, is calculated for different angles. In the simulation experiment, the simulation results from different angles, which are composed of a series of sub-images observed through every single lens, are generated with backward ray tracing [27]. As references, different viewpoint perspectives of a 3D object are captured separately with virtual cameras. Figure 11(a) shows the SSIM values at different viewing angles. The similarity of the reconstructed 3D images in the LFD with uniform viewpoint distribution over the $\theta \in [-20^\circ, 20^\circ]$ viewing range is low, as shown in Fig. 11(e). When the viewpoint distribution is optimized, the SSIM values of the proposed display are significantly improved, as shown in Fig. 11(g). On both sides of the viewing zone ($\theta \in [-32.5^\circ, -20^\circ] \cup [20^\circ, 32.5^\circ]$), the SSIM values of the proposed display are slightly lower than those of the conventional 3D display, but the difference is not significant.
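
A minimal sketch of this evaluation protocol is given below, using scikit-image's SSIM implementation; the stand-in images are assumptions, since the actual simulated and reference views are produced by the ray-tracing pipeline described above:

```python
# A minimal SSIM-evaluation sketch using scikit-image; the stand-in images below
# replace the actual ray-traced simulation and reference renderings.
import numpy as np
from skimage.metrics import structural_similarity

def ssim_at_angle(simulated, reference):
    """SSIM between a simulated reconstruction and its reference view (grayscale)."""
    return structural_similarity(simulated, reference, data_range=255)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (256, 256)).astype(np.uint8)          # stand-in reference
noise = rng.integers(-10, 10, ref.shape)
sim = np.clip(ref.astype(int) + noise, 0, 255).astype(np.uint8)  # stand-in simulation
print(ssim_at_angle(sim, ref))
```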

Fig. 11. (a) SSIM indices of simulation results at different viewing angles. (b) Viewpoint perspectives captured at different angles. (c) Depth maps of different perspectives. (d) Simulation results of 3D display with uniform viewpoint distribution. (e) Corresponding SSIM index of 3D display with uniform viewpoint distribution. (f) Simulation results of presented 3D LFD with optimized viewpoint distribution. (g) Corresponding SSIM index of presented 3D LFD with optimized viewpoint distribution.

According to the results of the simulation experiment, the proposed system meets the requirements of large-depth 3D imaging with a large viewing angle. Given this advantage, one prominent application of the proposed system is the public demonstration of industrial designs. Optical experiments are carried out to confirm the simulation results. A Canon 60D camera is used to capture 3D images at different angles, and the maximal displayed clear depth is recorded. The main parameters and 3D scene layout are shown in Fig. 12(a). As shown in Fig. 12(b), the clear focus depth of the spaceflight rocket engine captured at 0° can reach 300 mm, while the clear maximal off-screen depth at the edge of the viewing zone ($\theta = -32.5^\circ$, $\theta = 32.5^\circ$) is approximately 280 mm, slightly smaller than that in the middle area.

Fig. 12. Comparison of displayed 3D effects with depth information of spaceflight rocket engine produced with two viewpoint arrangement methods. (a) Main parameters and 3D scene layout. (b) 3D effect based on proposed system prototype. (c) 3D effect based on parallax barrier.

For comparison, a 3D display system with uniform viewpoint distribution based on a parallax barrier, with the same number of viewpoints (160) and the same viewing angle (65°), is assembled. As shown in Fig. 12(c), the experimental results show that the stripes of the parallax barrier are noticeable, and the off-screen depth in the middle of the viewing zone is shorter than that of the proposed LFD.

The 3D display results of the reconstructed image (i.e., the spaceflight rocket engine) with continuous motion parallax within the 65° viewing angle can be seen in Visualization 1. The relative position occlusions of different parts in the reconstructed 3D scene are clearly visible. Moreover, the same displayed 3D image based on the parallax barrier with uniform viewpoint distribution can be seen in Visualization 2, and the result shows that the motion parallax exhibits weak continuity and the brightness is low.

To describe the experimental results numerically, a luminance meter is used to measure the brightness of the different display systems. The luminance of the 54-inch LED panel is 473 cd/m². The luminances of the proposed system and the parallax-barrier-based autostereoscopic display are 171 and 98 cd/m², respectively, so the optical utilizations of the two 3D display systems are about 36% and 20%, respectively. The experimental results show that the brightness of the 3D images of the demonstrated LFD system is noticeably improved compared with the autostereoscopic display based on the parallax barrier.
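
The quoted optical-utilization figures follow directly from the measured luminances; a one-line check:

```python
# One-line check of the optical-utilization figures quoted above.
for name, lum in (("proposed LFD", 171.0), ("parallax barrier", 98.0)):
    print("%s: %.1f%% of the 473 cd/m^2 panel luminance" % (name, 100 * lum / 473.0))
```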

4. Conclusion

In summary, to mitigate the inherent tradeoff among the viewpoint number, viewing angle, and depth range, an LFD system with an increased viewing angle and optimized viewpoint distribution is proposed. It can provide clear 3D images with smooth parallax and correct geometric occlusion over the entire 65° viewing range. As the essential light-controlling structure, the LC-LLU optimizes the viewpoint distribution such that more effective viewpoint information is concentrated in the middle of the viewing zone. The lenticular lens of the LC-LLU is designed as a compound structure consisting of two aspheric lenses to suppress aberration and further improve the image quality. The HFS of the system is indispensable for optical modulation and improved imaging quality. Compared with conventional 3D displays, the system's laddered arrangement of viewpoint information significantly improves the clear maximal off-screen depth in the middle of the viewing zone while simultaneously increasing the viewing angle. In the experiment, a high-quality 3D scene with 160 viewpoint perspectives is displayed with a viewing angle of ±32.5°, and the clear focus depth of the displayed 3D scene captured at 0° reaches 300 mm. Regarding commercial application, the prototype can be scaled up for large-scale applications and exhibits stability under most circumstances. We believe that the presented 3D LFD has broad application prospects, particularly in aviation simulation, industrial and architectural design, and multimedia teaching.

Funding

National Natural Science Foundation of China (61905017, 61905019, 61905020, 62075016); Fundamental Research Funds for the Central Universities (2021RC09, 2021RC1, 2021RC13).

Disclosures

The authors declare no conflicts of interest. This work is original and has not been published elsewhere.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. N. Balram and I. Tosic, “Light-Field Imaging and Display Systems,” Inf. Disp. 32(4), 2–9 (2016). [CrossRef]  

2. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017). [CrossRef]  

3. X. Liu and H. Li, “The progress of light field 3-D displays,” Inf. Disp. 30(6), 6–14 (2014). [CrossRef]  

4. D. Nam, J. Lee, Y. H. Cho, Y. J. Jeong, H. Hwang, and D. S. Park, “Flat Panel Light-Field 3-D Display: Concept, Design, Rendering, and Calibration,” Proc. IEEE 105(5), 876–891 (2017). [CrossRef]  

5. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photon. 10(3), 512–566 (2018). [CrossRef]  

6. A. Stern and B. Javidi, “Three-Dimensional Image Sensing, Visualization, and Processing Using Integral Imaging,” Proc. IEEE 94(3), 591–607 (2006). [CrossRef]  

7. S. W. Cho, J. H. Park, Y. Kim, H. Choi, J. Kim, and B. Lee, “Convertible two-dimensional-three-dimensional display using an LED array based on modified integral imaging,” Opt. Lett. 31(19), 2852–2854 (2006). [CrossRef]  

8. L. Yang, X. Sang, X. Yu, B. Yan, K. Wang, and C. Yu, “Viewing angle and viewing-resolution enhanced integral imaging based on time-multiplexed lens stitching,” Opt. Express 27(11), 15679–15692 (2019). [CrossRef]  

9. B. Liu, X. Sang, X. Yu, X. Gao, L. Liu, C. Gao, P. Wang, Y. Le, and J. Du, “Time-multiplexed light field display with 120-degree wide viewing angle,” Opt. Express 27(24), 35728–35739 (2019). [CrossRef]  

10. X. Shen, M. Martinez Corral, and B. Javidi, “Head Tracking Three-Dimensional Integral Imaging Display Using Smart Pseudoscopic-to-Orthoscopic Conversion,” J. Disp. Technol. 12(6), 542–548 (2016). [CrossRef]  

11. B. Lee, S. Jung, and J.-H. Park, “Viewing angle-enhanced integral imaging by lens switching,” Opt. Lett. 27(10), 818–820 (2002). [CrossRef]  

12. H. Watanabe, N. Okaichi, H. Sasaki, and M. Kawakita, “Pixel-density and viewing angle enhanced integral 3D display with parallel projection of multiple UHD elemental images,” Opt. Express 28(17), 24731–24746 (2020). [CrossRef]  

13. L. Ni, Z. Li, H. Li, and X. Liu, “360-degree large-scale multiprojection light-field 3D display system,” Appl. Opt. 57(8), 1817–1823 (2018). [CrossRef]  

14. X. Yu, X. Sang, X. Gao, D. Chen, B. Liu, L. Liu, C. Gao, and P. Wang, “Dynamic three-dimensional light-field display with large viewing angle based on compound lenticular lens array and multi-projectors,” Opt. Express 27(11), 16024–16031 (2019). [CrossRef]  

15. X. Sang, X. Gao, X. Yu, S. Xing, Y. Li, and Y. Wu, “Interactive floating full-parallax digital three-dimensional light-field display based on wavefront recomposing,” Opt. Express 26(7), 8883–8889 (2018). [CrossRef]  

16. C. Yu, J. Yuan, F. Fan, S. Choi, X. Sang, C. Lin, and D. Xu, “The modulation function and realizing method of holographic functional screen,” Opt. Express 18(26), 27820–27826 (2010). [CrossRef]  

17. X. Sang, F. Fan, S. Choi, C. Jiang, C. Yu, B. Yan, and W. Dou, “Three-dimensional display based on the holographic functional screen,” Opt. Eng. 50(9), 091311 (2011). [CrossRef]  

18. Z. Yan, X. Yan, X. Jiang, H. Gao, and J. Wen, “Integral imaging based light field display with enhanced viewing resolution using holographic diffuser,” Opt. Commun. 402, 437–441 (2017). [CrossRef]

19. Y. Takaki, K. Tanaka, and J. Nakamura, “Super multi-view display with a lower resolution flat-panel display,” Opt. Express 19(5), 4129–4139 (2011). [CrossRef]  

20. C. N. Moller and A. R. L. Travis, “Correcting interperspective aliasing in autostereoscopic displays,” IEEE Trans. Visual. Comput. Graphics 11(2), 228–236 (2005). [CrossRef]  

21. H. Yun, A. Llavador, G. Saavedra, and M. Cho, “Three-dimensional imaging system with both improved lateral resolution and depth of field considering non-uniform system parameters,” Appl. Opt. 57(31), 9423–9431 (2018). [CrossRef]  

22. F. Shateri, S. Behzadfar, and Z. Kavehvash, “Improved depth resolution and depth-of-field in temporal integral imaging systems through non-uniform and curved time-lens array,” Opt. Express 28(5), 6261–6276 (2020). [CrossRef]  

23. L. Yang, X. Sang, X. Yu, B. Liu, L. Liu, S. Yang, B. Yan, J. Du, and C. Gao, “Demonstration of a large-size horizontal light-field display based on the LED panel and the micro-pinhole unit array,” Opt. Commun. 414, 140–145 (2018). [CrossRef]  

24. Y. C. Chang, C. Y. Ma, and Y. P. Huang, “Crosstalk suppression by image processing in 3D display,” SID Symposium Digest 41(1), 124–127 (2010). [CrossRef]  

25. S. Xing, X. Sang, X. Yu, D. Chen, B. Pang, X. Gao, S. Yang, Y. Guan, B. Yan, J. Yuan, and K. Wang, “High-efficient computer generated integral imaging based on the backward ray-tracing technique and optical reconstruction,” Opt. Express 25(1), 330–338 (2017). [CrossRef]  

26. B. Pang, X. Sang, S. Xing, X. Yu, D. Chen, B. Yan, K. Wang, C. Yu, B. Liu, C. Cui, Y. Guan, W. Xiang, and L. Ge, “High-efficient rendering of the multi-view image for the three-dimensional display based on the backward ray-tracing technique,” Opt. Commun. 405, 306–311 (2017). [CrossRef]

27. X. Gao, X. Sang, W. Zhang, X. Yu, B. Yan, C. Gao, and L. Liu, “Design, fabrication, and evaluation of Petzval retro-mirror array for floating displays,” Opt. Commun. 474, 126179 (2020). [CrossRef]  

28. C. Gao, X. Sang, X. Yu, X. Gao, J. Du, B. Liu, L. Liu, and P. Wang, “Space-division-multiplexed catadioptric integrated backlight and symmetrical triplet-compound lenticular array based on ORM criterion for 90-degree viewing angle and low-crosstalk directional backlight 3D light-field display,” Opt. Express 28(23), 35074–35098 (2020). [CrossRef]  

29. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers 2, 1398–1402 (2003).

30. L. Liu, X. Sang, X. Yu, X. Gao, B. Liu, Y. Wang, Y. Chen, P. Wang, C. Gao, and B. Yan, “Depth of field analysis for a three-dimensional light-field display based on a lens array and a holographic function screen,” Opt. Commun. 493, 127032 (2021). [CrossRef]  

Supplementary Material (3)

Supplement 1: More details about the derivation process of the important equations.
Visualization 1: The 3D display results based on the proposed system prototype, showing continuous motion parallax within the 65° viewing angle; the relative position occlusions of different parts in the reconstructed 3D scene are clearly visible.
Visualization 2: The same displayed 3D image based on the parallax barrier with uniform viewpoint distribution; the brightness is low, and the motion parallax exhibits weak continuity.

