
Sub-pixel marking and depth-based correction methods for the elimination of voxel drifting in integral imaging display

Open Access

Abstract

Integral imaging is a true three-dimensional (3D) display technology that uses a lens array to reconstruct vivid 3D images with full parallax and true color. To present a high-quality 3D image, it is vital to correct the axial position error caused by misalignment and deformation of the lens array, which makes the reconstructed lights deviate from their correct directions, resulting in severe voxel drifting and image blurring. We propose a sub-pixel marking method to measure the axial position error of the lenses with great accuracy by addressing the sub-pixels under each lens and forming homologous sub-pixel pairs. The proposed measurement method relies on the geometric center alignment of image points, specifically expressed as the overlap between a test 3D voxel and a reference 3D voxel; hence, higher measurement accuracy can be achieved. Additionally, a depth-based sub-pixel correction method is proposed to eliminate the voxel drifting. The proposed correction method takes the voxel depth into account in the correction coefficient and achieves accurate error correction for 3D images with different depths. The experimental results confirm that the proposed measurement and correction methods can greatly suppress the voxel drifting caused by the axial position error of the lenses and greatly improve the 3D image quality.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Since it was first proposed by Lippmann in 1908 [1], integral imaging has become one of the most promising three-dimensional (3D) display technologies [2–8], because it can reconstruct the light field distribution of a 3D object, present 3D images with full parallax and quasi-continuous viewpoints, and alleviate the vergence-accommodation conflict [9–12]. With these advantages, integral imaging can be widely applied in various fields, including virtual reality and augmented reality, medical surgical navigation, and so on [13–16]. Integral imaging includes recording and displaying stages. In the recording stage, the light field of the 3D object is captured through a lens array, generating an elemental image array (EIA), while in the displaying stage, the 3D image is reconstructed by the back projection of the EIA through the lens array. According to the principle of the reversible optical path, only when the recording and display devices are consistent in structural parameters can the 3D image be correctly reconstructed as naturally as the real 3D object. In other words, the optical modulation characteristics of the lens array in both stages should be the same. Even small errors, for example, misalignment and deformation of the lens array, will cause deterioration and distortion of the reconstructed 3D image.

There has been related research on the measurement and correction of lens array errors. For micro-lens array based integral imaging display systems, there exist local errors and global errors, first presented by Arai et al. [17]. The local error occurs when several small lens arrays are combined into a large one, and causes 3D image separation. The global error occurs during the one-step manufacturing of the lens array, is distributed over the whole area, and causes 3D image blurring. Kawakita et al. [18] analyzed the relationships between the elemental images and the spatial distortion in the reconstructed 3D image. They clarified that 3D images reconstructed far from the lens array were strongly affected. Ji et al. [19] proposed a tilted EIA generation method for computer generated integral imaging display to match a slanted lens array. Li et al. [20] introduced an error correction model to represent the rotation error, translation error, and tilt error of the micro-lens array in a light field camera. In addition, they proposed a method for measuring the errors caused by the pitch, radius, and decentering of the lens, which requires detecting the center and edge of the image. Xiong et al. [21] analyzed the misalignment of a camera array in the recording stage and proposed a planar parallax-based calibration to rearrange parallaxes for alignment. Fan et al. [22] calibrated the optical apparatus by quantitative rotation and translation of the lens array, and presented an accurate 3D autostereoscopic display method using optimized parameters derived from the quantitative calibration. Tavakoli et al. [23] analyzed the sensitivity of synthetic aperture integral imaging (SAII) and its 3D reconstruction to uncertainty in sensor positioning, in order to achieve a tolerable degradation in the presence of sensor positioning error. Javidi et al. [24] analyzed the impact of sensor position uncertainty during the image acquisition stage on the quality of reconstructed images, and proposed a sensor position estimation algorithm to improve the image reconstruction quality. Jang et al. [25] analyzed the impact of the lens fill factor on image quality and proposed to improve the viewing resolution and viewing angle by adopting a moving array-lenslet technique.

For a macro-lens array based integral imaging display system, the macro-lens array is usually prepared by manually embedding a large number of lenses with a pitch of about 10 mm into holes on a solid substrate, which makes it easy to fabricate in a large format [26]. But the lateral and axial position errors introduced during manual and mechanical assembly are inevitable. These errors change the propagation directions of pixel light randomly, resulting in splitting and blurring of the 3D image. Yan et al. [27] proposed a post-calibration compensation method for the lateral position misalignment, in which the target pixels were imaged onto an ideal regular reference grid so as to find the correct pixel-to-ray mapping. Sang et al. [28] developed a correction method for the lateral position deviations by regenerating a coded image through real-time backward ray tracing (BRT), and the proposed method could realize real-time generation of the EIA. In practice, the lateral position error can be controlled by using a high precision substrate, while the axial position error due to manual assembly and to deformation caused by gravity and stress is unavoidable. For the measurement of the axial position error, Yan et al. [29] proposed to separately measure the voxel size composed of multiple pixels on a holographic diffuser at two different depths, so as to derive the axial distance based on the geometric relationship. In their subsequent work [30], they proposed to adjust the holographic diffuser so that the pixel is imaged on it with the bottom of the image falling on a fixed reference line perpendicular to the display panel, and then calculated the axial position from the geometric relationship. This method requires multiple movements of the holographic diffuser, and the distances from the pixel and the center of the corresponding lens to the reference line are difficult to measure accurately. For the correction of the axial position error, Refs. [29] and [30] rearrange EIA pixels according to the ratio of the actual axial distance to the ideal axial distance [31,32]. However, the rearranging method overlooks the influence of the depth of the 3D image on the quality of the reconstructed image. Xing et al. [33] proposed to match the reconstructed 3D image with a printed pattern placed on a holographic diffuser and to achieve correction by applying a projective transformation to each elemental image. But each elemental image corresponds to a 3 × 3 transformation matrix, hence a large amount of computation is needed.

In this paper, a sub-pixel marking method was proposed to achieve high accuracy measurement of axial position errors of lens array by comparing the overlap between the test 3D voxel and the reference 3D voxel. Additionally, a depth-based sub-pixel correction method was proposed to preprocess the EIA for the elimination of voxel drifting. Both the measurement accuracy of axial position error and the correction accuracy of voxel drifting were significantly improved, and the 3D image quality was obviously improved.

2. Principles

2.1 Voxel drifting caused by axial position error

In an ideal integral imaging display system, all the light rays emitted from the homologous sub-pixels a1, a2, a3, a4, a5, and a6 intersect at a point to form a 3D voxel, denoted as A, as shown in Fig. 1. However, due to random axial position errors, these lights deviate from their original propagation directions and produce multiple intersection points, denoted as A1′, A2′, and A3′. Consider a simple case in which the human eye receives only two lights. At the viewing position P1, the human eye receives lights from the homologous sub-pixels a1 and a2, and the perceived voxel is located at A1′. When the viewing position moves to P2, the human eye receives lights from a3 and a4, and the perceived voxel drifts to A2′. Similarly, the voxel seen at P3 drifts to A3′. With the movement of the human eye, the originally fixed voxel drifts randomly. The voxels that were originally converged together become scattered, reducing the image resolution. Here, we define this phenomenon as voxel drifting. Furthermore, the human eye usually receives more than two lights; these homologous rays no longer intersect at a single point, but at multiple points. The voxel perceived by the human eye will be a mixture of multiple intersections, resulting in voxel diffusion. Hence, the reconstructed 3D image is drifting and blurry.

Fig. 1. Diagram of voxel drifting caused by axial position error.

2.2 Proposed sub-pixel marking method

The schematic diagram of the proposed sub-pixel marking method is shown in Fig. 2. Taking the horizontal and vertical directions of the liquid crystal display (LCD) as the X axis and Y axis, respectively, and the direction perpendicular to the LCD plane as the Z axis, a Cartesian coordinate system is set up. For simplicity, we take the horizontal direction of the lens array as an example, that is, the XOZ plane shown in Fig. 2(a), and the origin of the coordinate system is set at the first sub-pixel. Before measurement, a reference lens, a reference depth plane, and a reference 3D voxel are set. The selection of the reference lens has a certain impact on the measurement accuracy, which we will describe later. The coordinate of the reference lens is denoted as (x lens-ref, z lens-ref). The central depth plane (CDP) is selected as the reference depth plane. The sub-pixel on the optical axis of the reference lens is located at (x subpx-ref, 0), and its image point on the reference depth plane is taken as the reference 3D voxel, whose coordinate is (x vx-ref, z vx-ref). The coordinate of the ith ideal lens is denoted as (x lens-i, z lens-i) and that of the ith actual lens as (x lens-i, z' lens-i), in which the subscript i is the index of the lens, z lens-i is the ideal axial position of the lens, which is a known constant value g, and z' lens-i is the actual axial position of the ith lens to be measured.

Fig. 2. (a) Schematic diagram of sub-pixel marking method, and (b) the process of sub-pixel searching.

Corresponding to the reference 3D voxel, a group of ideal homologous sub-pixels, X (x subpx-1, x subpx-2, …, x subpx-i, …), can be derived in the case of ideal lens array, where x subpx-i represents the abscissa of the ideal homologous sub-pixel under the ith ideal lens, and can be calculated by Eq. (1).

Coordinate deviations exist between the actual homologous sub-pixel group and the ideal homologous sub-pixel group, and the axial distance z' lens-i can be derived by measuring these coordinate deviations.

$${x_{\textrm{subpx} - i}} = \frac{{{z_{\textrm{vx} - \textrm{ref}}}({x_{\textrm{lens} - i}} - {x_{\textrm{lens} - \textrm{ref}}})}}{{{z_{\textrm{vx} - \textrm{ref}}} - g}} + {x_{\textrm{subpx} - \textrm{ref}}}.$$
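As a minimal numerical sketch (the function name and the prototype-like values below are ours, for illustration only), Eq. (1) maps each ideal lens to the abscissa of its ideal homologous sub-pixel:

```python
def ideal_subpixel_x(x_lens_i, x_lens_ref, x_subpx_ref, z_vx_ref, g):
    """Eq. (1): abscissa of the ideal homologous sub-pixel under the i-th lens."""
    return z_vx_ref * (x_lens_i - x_lens_ref) / (z_vx_ref - g) + x_subpx_ref

# Hypothetical values: reference voxel on the CDP, reference lens on the origin's axis.
g, z_vx_ref = 11.8, 174.0            # mm (ideal lens-LCD gap, CDP depth)
x_lens_ref, x_subpx_ref = 0.0, 0.0   # mm
p = 13.0                             # lens pitch, mm

for i in range(3):
    x_i = ideal_subpixel_x(x_lens_ref + i * p, x_lens_ref, x_subpx_ref, z_vx_ref, g)
    print(f"lens {i}: ideal homologous sub-pixel at {x_i:.3f} mm")
```

For the reference lens itself the formula reduces to x subpx-ref, as expected from the geometry.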

As shown in Fig. 2(b), the ideal homologous sub-pixel is marked in red. The sub-pixels in the vicinity of the ideal homologous sub-pixels are searched with the step of one sub-pixel, and the actual homologous sub-pixels are determined by comparing the overlap between the test 3D voxel and the reference 3D voxel on the reference depth plane. The traversal search is implemented around each ideal homologous sub-pixel (x subpx-i, 0). An objective function M which is expressed by Eq. (2) denotes the accumulated deviations between the image point of the searched sub-pixel and the reference 3D voxel.

$$M = \sum\limits_i {|x_{\textrm{vx} - i}^{} - {x_{\textrm{vx} - \textrm{ref}}}|},$$
where x vx-i represents the geometric center of the image point formed by the searched sub-pixel through the ith actual lens. The sub-pixel with the minimal objective function M, marked in blue, is the actual homologous sub-pixel. In this way, a group of actual homologous sub-pixels of the reference 3D voxel, X’ (x' subpx-1, x' subpx-2, …, x' subpx-i, …), can be obtained. Here, x' subpx-i represents the coordinate of the actual homologous sub-pixel for the actual axial position z' lens-i. Then, the actual axial position z' lens-i can be derived by
$$z{^{\prime}_{\textrm{lens} - i}} = \frac{{{x_{\textrm{subpx} - \textrm{ref}}} - x{^{\prime}_{\textrm{subpx} - i}} - ({x_{\textrm{lens}- \textrm{ref}}} - {x_{\textrm{lens} - i}})}}{{{x_{\textrm{subpx} - \textrm{ref}}} - x{^{\prime}_{\textrm{subpx} - i}}}}{l_c},$$
where lc represents the distance between the reference depth plane and the LCD. For comparison, Ref. [29] measured the voxel size composed of multiple pixels on two depth planes separately. Our method only searches for sub-pixels overlapping with the reference 3D voxel on a single depth plane, namely the CDP, which simplifies the measurement process and improves the measurement accuracy.
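Equation (3) can be sanity-checked with a small round trip: place a lens at a hypothetical actual depth, compute where its homologous sub-pixel would land (by inverting Eq. (3)), and recover the depth. Function names and values below are ours, not from the paper:

```python
def actual_subpixel_x(x_lens_i, x_lens_ref, x_subpx_ref, z_actual, l_c):
    # Forward model (inverse of Eq. (3)): abscissa of the actual homologous
    # sub-pixel when the i-th lens sits at the axial position z_actual.
    return x_subpx_ref - (x_lens_ref - x_lens_i) * l_c / (l_c - z_actual)

def axial_position(x_subpx_actual, x_lens_i, x_lens_ref, x_subpx_ref, l_c):
    # Eq. (3): actual axial position z'_lens-i of the i-th lens.
    d_sub = x_subpx_ref - x_subpx_actual
    d_lens = x_lens_ref - x_lens_i
    return (d_sub - d_lens) / d_sub * l_c

# Hypothetical example: lens 4 pitches (52 mm) from the reference, 0.4 mm too deep.
x_p = actual_subpixel_x(52.0, 0.0, 0.0, 12.2, 174.0)
print(f"recovered z' = {axial_position(x_p, 52.0, 0.0, 0.0, 174.0):.3f} mm")  # 12.200
```

Algebraically, (d_sub − d_lens)/d_sub reduces to z'/lc under the forward model, so the recovery is exact.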

When searching for the actual homologous sub-pixels, the search step is one sub-pixel, so the position error of the actual homologous sub-pixel is no more than half a sub-pixel. Consequently, the measurement accuracy δ of the actual axial position of the lens can be determined as

$$\delta \le \frac{{{P_s}{l_c}}}{{2|{x_{\textrm{lens}-\textrm{ref}}} - {x_{\textrm{lens}-i}}|}},$$
where Ps is the size of a sub-pixel and lc is the distance between the reference depth plane and the LCD. Equation (4) indicates that the measurement accuracy δ is inversely proportional to the distance between the measured lens and the reference lens: the farther the distance, the higher the measurement accuracy. To obtain higher measurement accuracy, a two-step measurement was carried out. In the first step, the lens at the left end of the lens array was selected as the reference lens, and measurements were carried out for the lenses on the right side of the array. In the second step, the reference lens was the lens at the right end of the array, and measurements were carried out for the lenses on the left side.

Take the prototype developed in the experiment as an example, and the structural parameters are as follows, Ps = 0.03 mm, g = 11.8 mm, p = 13 mm, lc= 174 mm. When the measured lens is 4p away from the reference lens, the measurement accuracy is about 50 µm.
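With the prototype parameters above, the bound in Eq. (4) can be evaluated directly (a one-line check, using the stated values):

```python
Ps, l_c, p = 0.03, 174.0, 13.0       # mm: sub-pixel size, CDP distance, lens pitch
separation = 4 * p                   # measured lens is 4 pitches from the reference
delta = Ps * l_c / (2 * separation)  # Eq. (4) upper bound, mm
print(f"delta <= {delta * 1000:.1f} um")  # ~50 um, as stated in the text
```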

Additionally, our proposed method can also be used to measure the lateral position error, and the schematic diagram is shown in Fig. 3. The process of establishing the coordinate system is consistent with the above method for measuring the axial position error. The difference is that the coordinate of the ith ideal lens is denoted as (x lens-i, g) and that of the actual lens as (x' lens-i, g), in which g is the ideal axial position of the lens and x' lens-i is the actual lateral position of the ith lens to be measured.

Fig. 3. Schematic diagram of sub-pixel marking method in lateral position error measurement.

The search process is consistent with the axial position error measurement, and the objective function M is obtained by comparing the overlap between the test 3D voxel and the reference 3D voxel on the reference depth plane and searching around each ideal homologous sub-pixel (x subpx-i, 0) with the step of one sub-pixel. Therefore, a group of actual homologous sub-pixels of the reference 3D voxel, X’ (x' subpx-1, x' subpx-2, …, x' subpx-i, …), can be obtained. Then, the lateral distance of the actual lens relative to the reference lens can be expressed as

$${d_{\textrm{lateral}}} = \frac{{{l_c} - g}}{{{l_c}}}|x{^{\prime}_{\textrm{subpx}-i}} - {x_{\textrm{subpx} - ref }}|,$$
where lc is the distance from the reference depth plane to the LCD and g is the ideal axial position of the lens array. The actual lateral position of the lens can be derived by
$$x{^{\prime}_{\textrm{lens}-i}} = |{x_{\textrm{lens}-ref}} - {d_{\textrm{lateral}}}|.$$
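Equations (5) and (6), taken as written, can be sketched as a single helper (the function name and numeric example are ours, for illustration only):

```python
def lateral_position(x_subpx_actual, x_subpx_ref, x_lens_ref, l_c, g):
    # Eq. (5): lateral displacement of the actual lens relative to the reference,
    # scaled from the sub-pixel displacement by similar triangles.
    d_lateral = (l_c - g) / l_c * abs(x_subpx_actual - x_subpx_ref)
    # Eq. (6): actual lateral position of the lens.
    return abs(x_lens_ref - d_lateral)

# Hypothetical example: a 0.1 mm sub-pixel shift with the paper's g and lc.
print(f"x'_lens = {lateral_position(0.1, 0.0, 13.0, 174.0, 11.8):.4f} mm")
```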

Rotation errors can be decomposed into lateral position errors in the X direction and axial position errors in the Z direction, so the above method can also be used to measure them. Regarding lens aberrations, we determine the actual homologous sub-pixel by comparing the overlap of the voxels formed by the actual lens (with aberration) and the ideal lens (without aberration) during the search. The actual homologous sub-pixels that are found correspond to the actual lens with its aberrations. Therefore, the proposed method is also effective in the presence of lens aberrations.

3. Correction of axial position error

3.1 Proposed depth-based sub-pixel correction method

The purpose of preprocessing the EIA is to make the reconstructed 3D voxel approximate the ideal 3D voxel by changing the position of the sub-pixel. Specifically, under the modulation of the actual lens with axial position error, the reconstructed light of the corrected sub-pixel should pass through the ideal 3D voxel as closely as possible. Considering an ideal 3D voxel with the coordinate (x vx-i, z vx-i), as shown in Fig. 4, to ensure the reconstructed light still intersects at the ideal 3D voxel, the actual homologous sub-pixel should be moved from x subpx-i to x' subpx-i. Therefore, a corrected EIA can be obtained by rescaling the distance of each sub-pixel to the optical axis of the corresponding lens. Note that interpolation is required when the actual sub-pixel position is a non-integer multiple of the sub-pixel size. The correction coefficient of Δx'i to Δxi for the ith lens, denoted as ki, can be deduced as Eq. (7).

Fig. 4. Principle of proposed depth-based sub-pixel correction method.

$${k_i} = \frac{{\Delta x{^{\prime}_i}}}{{\Delta {x_i}}} = \frac{{z{^{\prime}_{\textrm{lens}-i}}({z_{\textrm{vx}-i}} - g)}}{{g({z_{\textrm{vx}-i}} - z{^{\prime}_{\textrm{lens}-i}})}}.$$

However, the correction coefficient ki is related to the depth z vx-i of the 3D voxel, which was overlooked in the former method. 3D voxels with different depths have different correction coefficients, making the sub-pixel correction extremely complicated. In fact, it is not difficult to find that the change of the correction coefficient ki is very small within a limited 3D depth range. For a typical macro-lens integral imaging system, the CDP is usually far from the lens array, typically more than 100 mm, so it is possible to use a constant ki. We again take the experimental parameters in Section 4 as an example, in which the ideal axial position is 11.8 mm. When the actual axial position is 12.2 mm, the curve of the correction coefficient ki versus the depth of the 3D voxel z vx-i is shown in Fig. 5. Obviously, ki decreases rapidly as z vx-i increases, and becomes almost constant after z vx-i exceeds 100 mm. In an integral imaging display system, 3D voxels should be limited to the depth of field ΔzGeom. The depth of field of an integral imaging display system was derived based on geometric optics [34] and wave optics [35], denoted by Eqs. (8) and (9), respectively.
$$\Delta {z_{Geom}} = {z_{\textrm{max}}} - {z_{\textrm{min}}} = \frac{{2gp{p_\textrm{d}}{f^2}}}{{{{(g - f)}^2}{p^2} - {f^2}{p_\textrm{d}}^2}},$$
$$\Delta {z_{Wave}} = \frac{{2\sqrt { - {a^4}{b^2} + {a^2}{b^4}\tan ({\alpha _e}/2) + {a^2}{b^2}{d^2}\tan {{({\alpha _e}/2)}^2}} }}{{|{a^2} - {b^2}\tan {{({\alpha _e}/2)}^2}|}}$$
where z min, z max, pd and f represent the minimal depth, the maximal depth, pixel width and focal length of lens, respectively, a and b are given by the results of the hyperbola light beam which was obtained by fitting the half-width of the diffraction intensity distribution of the pixel ray at different depths, d refers to the viewing distance between the observer and the CDP, and αe represents the minimum angular resolution of human eyes.
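Equation (8) can be checked numerically against the value ΔzGeom = 31.2 mm quoted in Section 4. A minimal sketch; the pixel width pd = 0.09 mm is our assumption (three 0.03 mm RGB sub-pixels), as it is not stated explicitly:

```python
g, p, f = 11.8, 13.0, 11.0  # mm: lens-LCD gap, lens pitch, focal length
p_d = 0.09                  # mm: pixel width (assumed: 3 sub-pixels of 0.03 mm)

# Eq. (8): geometric-optics depth of field.
dz_geom = 2 * g * p * p_d * f**2 / ((g - f)**2 * p**2 - f**2 * p_d**2)
print(f"dz_geom = {dz_geom:.1f} mm")  # ~31.2 mm, matching Section 4
```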

Fig. 5. Variation curve of ki when g = 11.8 mm and z' lens-i = 12.2 mm.

Hence, for our developed system, d = 1000 mm, αe = 1.662 × 10−2°, lc = 174 mm, ΔzGeom = 31.2 mm, ΔzWave = 27.6 mm, and the depth range of z vx-i is [148.1 mm, 179.3 mm] or [150.1 mm, 177.7 mm], calculated by Eqs. (8) and (9), respectively. Within this depth range, the change of ki is extremely small, with a maximum variation of 0.0006, which can be ignored. So, we can reasonably select z vx-i = lc to get a constant value of ki, that is

$${k_i} = \frac{{z{^{\prime}_{\textrm{lens}-i}}({l_c} - g)}}{{g({l_c} - z{^{\prime}_{\textrm{lens} - i}})}}.$$
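The depth-dependent coefficient of Eq. (7), its constant approximation in Eq. (10), and the quoted 0.0006 variation over the geometric depth of field can be verified numerically (a short sketch with the paper's parameters; the function name is ours):

```python
def k_depth(z_actual, z_vx, g):
    # Eq. (7): depth-dependent correction coefficient k_i.
    return z_actual * (z_vx - g) / (g * (z_vx - z_actual))

g, z_actual, l_c = 11.8, 12.2, 174.0  # mm
k_const = k_depth(z_actual, l_c, g)   # Eq. (10): z_vx-i fixed at l_c
print(f"k_i = {k_const:.4f}")

# Variation over the geometric depth-of-field range [148.1, 179.3] mm:
spread = k_depth(z_actual, 148.1, g) - k_depth(z_actual, 179.3, g)
print(f"max variation ~ {spread:.4f}")  # ~0.0006, negligible as stated
```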

Compared with the former correction method, the proposed correction method considers the influence of voxel depth in the correction coefficient, so that high correction accuracy can be obtained for 3D images at different depths.

3.2 Drift error analysis

Before correction, the actual light deviates considerably from the ideal one, as shown in Fig. 6. The green triangle represents the diffused 3D voxel reconstructed by several actual lights. The drift error, defined as the perpendicular distance from the actual light to the ideal 3D voxel, is used to quantify the voxel drifting. The drift error of the ith lens, shown by the black line in Fig. 6, is denoted as Δti, and the total drift error of the system is ΔT; they are given by Eqs. (11) and (12), respectively.

$$\Delta {t_i} = \Delta {x_i}|z{^{\prime}_{\textrm{lens}-i}} - g|\frac{{{z_{\textrm{vx} - i}}}}{{g\sqrt {\Delta {x_i}^2 + z{^{\prime}_{\textrm{lens} - i}}^2} }},$$
where Δxi represents the distance between the ideal homologous sub-pixel and the optical axis of the corresponding lens.
$$\Delta \textrm{T} = \sum\limits_{i = 1}^N {\Delta {t_i}} .$$
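Equations (11) and (12) translate directly into code. The sketch below uses hypothetical off-axis distances and lens depths (our own illustrative numbers, not measured values from the paper):

```python
import math

def drift_error(dx, z_actual, z_vx, g):
    # Eq. (11): perpendicular distance from the uncorrected actual ray
    # to the ideal 3D voxel at depth z_vx.
    return dx * abs(z_actual - g) * z_vx / (g * math.sqrt(dx**2 + z_actual**2))

# Hypothetical single lens: sub-pixel 5 mm off-axis, lens 0.4 mm too deep.
dt = drift_error(5.0, 12.2, 174.0, 11.8)
print(f"dt = {dt:.3f} mm")

# Eq. (12): total drift error, summed over hypothetical (dx, z') pairs per lens.
dT = sum(drift_error(dx, z, 174.0, 11.8)
         for dx, z in [(5.0, 12.2), (10.0, 11.5), (15.0, 12.0)])
print(f"dT = {dT:.3f} mm")
```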

Fig. 6. Drift error without correction.

The drift errors after correction by the former method in Ref. [29] and by our proposed depth-based sub-pixel correction method are shown in Figs. 7(a) and 7(b), respectively. In Fig. 7(a), the corrected lights are parallel to the ideal lights and cannot intersect the ideal 3D voxel, hence residual drift errors remain at all depths. In contrast, in Fig. 7(b), the actual lights corrected by our proposed depth-based sub-pixel correction method intersect the ideal lights, and the actual 3D voxel on the CDP completely coincides with the ideal 3D voxel, that is, the drift error is totally eliminated on the CDP. Within the depth of field, the residual drift error remains very small. The green triangle represents the diffused 3D voxel without correction and the blue triangle represents the 3D voxel after correction. Obviously, the 3D voxel corrected by our proposed method is finer. The drift errors after correction by the former method and by our proposed depth-based sub-pixel correction method, $\Delta t_i^{\prime}$ and $\Delta t_i^{\prime \prime}$, can be written as

$$\Delta {t^{\prime}_i} = \Delta {x_i}|z{^{\prime}_{\textrm{lens}-i}} - g|\frac{1}{{\sqrt {\Delta {x_i}^2 + {g^2}} }},$$
$$\Delta {t^{\prime\prime}_i} = \Delta {x_i}|z{^{\prime}_{\textrm{lens} - i}} - g|\frac{{|{l_c} - {z_{\textrm{vx}-i}}|}}{{\sqrt {\Delta {x_i}^2{{({l_c} - g)}^2} + {g^2}{{({l_c} - z{^{\prime}_{\textrm{lens}-i}})}^2}} }}.$$
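The ordering Δti'' < Δti' < Δti claimed below can be checked by evaluating Eqs. (11), (13), and (14) side by side (our own hypothetical numbers; on the CDP, Eq. (14) vanishes exactly):

```python
import math

def dt_uncorrected(dx, z_act, z_vx, g):
    # Eq. (11): drift error without correction.
    return dx * abs(z_act - g) * z_vx / (g * math.sqrt(dx**2 + z_act**2))

def dt_former(dx, z_act, g):
    # Eq. (13): residual error after the former (depth-independent) correction.
    return dx * abs(z_act - g) / math.sqrt(dx**2 + g**2)

def dt_proposed(dx, z_act, z_vx, g, l_c):
    # Eq. (14): residual error after the proposed depth-based correction.
    return (dx * abs(z_act - g) * abs(l_c - z_vx)
            / math.sqrt(dx**2 * (l_c - g)**2 + g**2 * (l_c - z_act)**2))

dx, z_act, g, l_c = 5.0, 12.2, 11.8, 174.0  # hypothetical off-axis distance, depths
for z_vx in (160.0, 174.0, 184.0):
    a = dt_uncorrected(dx, z_act, z_vx, g)
    b = dt_former(dx, z_act, g)
    c = dt_proposed(dx, z_act, z_vx, g, l_c)
    print(f"z_vx = {z_vx:5.1f} mm: dt = {a:.3f}, dt' = {b:.3f}, dt'' = {c:.4f} mm")
# dt'' is zero exactly on the CDP (z_vx = l_c) and stays far below dt' nearby.
```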

Fig. 7. Drift error corrected by (a) the former method in Ref. [29] and (b) proposed depth-based sub-pixel correction method.

Comparing Eqs. (11), (13), and (14), it’s not difficult to find that Δti''<Δti'<Δti, since the depth z vx-i is around the depth of lc which is much larger than z lens-i and z'lens-i in macro-lens array based integral imaging display system.

Take the aforementioned experimental parameters as an example. The drift error Δti and the residual drift errors corrected by the former method $\Delta t_i^{\prime}$ and by our method $\Delta t_i^{\prime \prime}$ are shown in Fig. 8. The uncorrected drift error Δti is very large and increases linearly with the voxel depth z vx-i. The residual drift error corrected by the former method, Δti’, no longer increases with depth and is suppressed to a certain extent. After correction by our proposed method, the residual drift error $\Delta t_i^{\prime \prime}$ is greatly suppressed and is significantly lower than both Δti’ and Δti. In particular, the residual drift error on the CDP is completely eliminated. Within the depth of field, colored in purple, the residual drift error is extremely small, less than 25 µm, which is smaller than one sub-pixel of the LCD. It is obvious that after applying our correction method, the intersection points of the light rays emitted by the actual homologous sub-pixels become more concentrated. This reduces scattering and residual drift errors, improving the image resolution. Hence, we can reasonably conclude that our proposed method effectively eliminates the drift error caused by the axial position error.

Fig. 8. Drift error curves of Δti, Δti’ and $\Delta t_i^{\prime \prime}$.

4. Experiments and results

In the experiment, an integral imaging based light field display system was developed, as shown in Fig. 9(a). A 32-inch 8 K (7680 × 4320 pixels) high resolution LCD with a sub-pixel pitch of Ps = 0.03 mm was used to load the EIA. The lens array, shown in Fig. 9(b), contains 50 × 35 lenses arranged in a hexagonal layout, with a lens focal length of 11 mm and a pitch of 13 mm. An optical diffuser with a diffusion angle of 5° was placed on the CDP, which is 174 mm away from the LCD. The mechanical structure at the bottom of the system can roughly adjust the axial position of the lens array. The detailed parameters of the system are listed in Table 1.

Fig. 9. (a) The experimental system, and (b) lens array.

Table 1. Specifications of the developed integral imaging display system

In the sub-pixel marking measurement, the image point of the ideal homologous sub-pixel x subpx-i on the reference depth plane does not coincide with the reference 3D voxel due to the axial position error. As shown in Fig. 10(a), the image point is clearly separated from the reference 3D voxel. After a traversal search around the ideal homologous sub-pixels under the optimization constraint of the objective function M, the image point of the actual homologous sub-pixel x' subpx-i on the reference depth plane overlaps well with the reference 3D voxel, as shown in Fig. 10(b). The actual axial position of the corresponding lens can be calculated according to Eq. (3), and the actual axial position and axial error of each lens were measured using the proposed sub-pixel marking method. Figures 10(c) and 10(d) show the measurement results. The measured results show that the actual axial position of each lens is different. There is a certain deviation from the ideal axial position of 11.8 mm, and the axial error ranges from -0.7 mm to 0.5 mm. From the center to the edge, the axial error decreases gradually from a positive value to 0, and at the edge of the array it decreases to a negative value. The reason for this phenomenon may be that the mounting screws at the four corners of the array exert excessive pressure, resulting in the sinking of the four corners and the warping of the center of the lens array.

Fig. 10. Measurement results of actual axial position of lens array. Results of the reference 3D voxel overlapping with the image points from the (a) ideal homologous sub-pixels and (b) actual homologous sub-pixels, (c) histogram of the actual axial position, and (d) distribution of the axial error.

A 1951 USAF resolution chart was reconstructed at depths of 174 mm and 184 mm, respectively, and the uncorrected and corrected results are shown in Fig. 11. Without correction, a point of an object may be reconstructed into multiple 3D voxels, resulting in image point dispersion, and the lines become distorted and blurred. The minimal resolvable element at depths of 174 mm and 184 mm is element 4 of group 1 in both cases, as shown in Figs. 11(a) and 11(b). After correction by our proposed depth-based sub-pixel correction method, the image points formed by the homologous sub-pixels overlap with each other, resulting in a smaller image point and an improvement in the resolution of the reconstructed image. The images of element 4 of group 1 are clearer, as shown in Figs. 11(c) and 11(d), and the minimal resolvable element at depths of 174 mm and 184 mm reaches element 2 of group -2. The experimental results show that our proposed measurement and correction methods can effectively enhance the quality and resolution of the reconstructed images.

Fig. 11. Optical reconstruction results of 1951 USAF resolution chart. Uncorrected results at depths of (a) 174 mm, and (b) 184 mm, and corrected results by our proposed method at depths of (c) 174 mm, (d) 184 mm.

A 3D scene was built, and the 3D display results and their partial enlargements are shown in Fig. 12. As shown in Fig. 12(a), the uncorrected 3D image is very blurry, especially at the edges of the screen, which is consistent with the measured distribution of axial errors, which are larger at the edges. As can be seen from Figs. 12(b) and 12(c), the 3D image corrected by our proposed method has a clearer edge contour and higher image quality compared with that corrected by the former method. These experimental results confirm that our proposed measurement and correction methods can effectively eliminate the voxel drifting caused by the axial position error of the lenses and greatly improve the 3D image quality.

Fig. 12. Optical reconstruction results of 3D scene, (a) uncorrected results, (b) corrected by the former method, and (c) corrected by our proposed method.

5. Conclusion

In this paper, the 3D voxel drifting and diffusion caused by axial position error in an integral imaging display system were analyzed in detail. A sub-pixel marking method was proposed to measure the actual axial position of each lens by addressing the sub-pixels under each lens. The proposed measurement method is convenient to operate and has high measurement accuracy; by selecting the reference lens properly, the measurement accuracy can be further increased. Besides, we analyzed the universality of the proposed measurement method: it can also be applied to measure lateral position errors, rotation errors, and lens aberrations. Additionally, a depth-based sub-pixel correction method was proposed to eliminate the voxel drifting. The proposed correction method takes the voxel depth into account in the correction coefficient and achieves accurate error correction for 3D images with different depths. The theoretical analysis and experimental results verified that the proposed measurement and correction methods greatly improve the 3D display quality. The error measurement and correction methods proposed in this paper can effectively solve the problem of voxel drifting caused by the axial position error of the lens array, and have important practical applications in large-scale integral imaging display systems.

Funding

National Key Research and Development Program of China (2022YFB3606600); National Natural Science Foundation of China (62275179); National Natural Science Foundation of China (U21B2034); Sichuan Province Science and Technology Support Program (2022YFG0326).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Lippmann, “Épreuves réversibles. Photographies intégrales,” C. R. Acad. Sci. 146, 446–451 (1908).

2. H. Deng, Q. H. Wang, F. Wu, et al., “Cross-talk-free integral imaging three-dimensional display based on a pyramid pinhole array,” Photon. Res. 3(4), 173–176 (2015). [CrossRef]  

3. Z. Qin, Y. H. Zhang, and B. R. Yang, “Interaction between sampled rays’ defocusing and number on accommodative response in integral imaging near-eye light field displays,” Opt. Express 29(5), 7342–7360 (2021). [CrossRef]  

4. H. M. Choi, Y. S. Hwang, and E. S. Kim, “Field-of-view enhanced integral imaging with dual prism arrays based on perspective-dependent pixel mapping,” Opt. Express 30(7), 11046–11065 (2022). [CrossRef]  

5. X. B. Yu, H. Y. Li, X. W. Su, et al., “Image edge smoothing method for light-field displays based on joint design of optical structure and elemental images,” Opt. Express 31(11), 18017–18025 (2023). [CrossRef]  

6. C. J. Zhao, Z. D. Guo, H. Deng, et al., “Integral imaging three-dimensional display system with anisotropic backlight for the elimination of voxel aliasing and separation,” Opt. Express 31(18), 29132–29144 (2023). [CrossRef]  

7. Y. J. N. Gu, J. Zhang, Y. Piao, et al., “Integral imaging reconstruction system based on the human eye viewing mechanism,” Opt. Express 31(6), 9981–9995 (2023). [CrossRef]  

8. M. Martínez-Corral, B. Javidi, C. G. Luo, et al., “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018). [CrossRef]  

9. H. Deng, Q. H. Wang, C. G. Luo, et al., “Accommodation and convergence in integral imaging 3D display,” J. Soc. Inf. Disp. 22(3), 158–162 (2014). [CrossRef]  

10. F. L. Kooi and A. Toet, “Visual comfort of binocular and 3D displays,” Displays 25(2-3), 99–108 (2004). [CrossRef]  

11. C. Chen, H. Deng, Q. H. Wang, et al., “Measurement and analysis on the accommodation responses to real-mode, virtual-mode, and focused-mode integral imaging display,” J. Soc. Inf. Disp. 27(7), 427–433 (2019). [CrossRef]  

12. Z. Qin, J. Y. Wu, P. Y. Chou, et al., “Revelation and addressing of accommodation shifts in microlens array-based 3D near-eye light field displays,” Opt. Lett. 45(1), 228–231 (2020). [CrossRef]  

13. C. Ma, G. W. Chen, X. R. Zhang, et al., “Moving-tolerant augmented reality surgical navigation system using autostereoscopic three-dimensional image overlay,” IEEE J. Biomed. Health Inform. 23(6), 2483–2493 (2019). [CrossRef]  

14. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]  

15. X. Wang and H. Hua, “Depth-enhanced head-mounted light field displays based on integral imaging,” Opt. Lett. 46(5), 985–988 (2021). [CrossRef]  

16. Q. Li, W. He, H. Deng, et al., “High-performance reflection-type augmented reality 3D display using a reflective polarizer,” Opt. Express 29(6), 9446–9453 (2021). [CrossRef]  

17. J. Arai, M. Okui, M. Kobayashi, et al., “Geometrical effects of positional errors in integral photography,” J. Opt. Soc. Am. A 21(6), 951–958 (2004). [CrossRef]  

18. M. Kawakita, H. Sasaki, J. F. Okano, et al., “Geometric analysis of spatial distortion in projection-type integral imaging,” Opt. Lett. 33(7), 684–686 (2008). [CrossRef]  

19. C. C. Ji, C. G. Luo, H. Deng, et al., “Tilted elemental image array generation method for moiré-reduced computer generated integral imaging display,” Opt. Express 21(17), 19816–19824 (2013). [CrossRef]  

20. S. Li, Y. Yuan, Z. Gao, et al., “High-accuracy correction of a microlens array for plenoptic imaging sensors,” Sensors 19(18), 3922 (2019). [CrossRef]  

21. Z. L. Xiong, Y. Xing, H. Deng, et al., “Planar parallax based camera array calibration method for integral imaging three-dimensional information acquirement,” SID Symp. Dig. 47(1), 219–222 (2016). [CrossRef]  

22. Z. C. Fan, G. W. Chen, Y. Xia, et al., “Accurate 3D autostereoscopic display using optimized parameters through quantitative calibration,” J. Opt. Soc. Am. A 34(5), 804–812 (2017). [CrossRef]  

23. B. Tavakoli, M. Daneshpanah, B. Javidi, et al., “Performance of 3D integral imaging with position uncertainty,” Opt. Express 15(19), 11889–11902 (2007). [CrossRef]  

24. X. Xiao, M. Daneshpanah, M. Cho, et al., “3D integral imaging using sparse sensors with unknown positions,” J. Display Technol. 6(12), 614–619 (2010). [CrossRef]  

25. J. Jang and B. Javidi, “Improvement of viewing angle in integral imaging by use of moving lenslet arrays with low fill factor,” Appl. Opt. 42(11), 1996–2002 (2003). [CrossRef]  

26. S. W. Yang, X. Z. Sang, X. B. Yu, et al., “162-inch 3D light field display based on aspheric lens array and holographic functional screen,” Opt. Express 26(25), 33013–33021 (2018). [CrossRef]  

27. X. Yan, J. Wen, Z. Yan, et al., “Post-calibration compensation method for integral imaging system with macrolens array,” Opt. Express 27(4), 4834–4844 (2019). [CrossRef]  

28. W. P. Huo, X. Z. Sang, S. J. Xing, et al., “Backward ray tracing based rectification for real-time integral imaging display system,” Opt. Commun. 458, 124752 (2020). [CrossRef]  

29. Z. Yan, X. P. Yan, X. Y. Jiang, et al., “Calibration of the lens’ axial position error for macrolens array based integral imaging display system,” Opt. Lasers Eng. 142, 106585 (2021). [CrossRef]  

30. M. Lei, Y. Mao, and X. P. Yan, “Measurement and correction of the macrolens array’s position error in integral imaging,” Appl. Opt. 61(32), 9654–9665 (2022). [CrossRef]  

31. D. Shin and B. Javidi, “3D integral imaging with improved visualization using sub-pixel optical ray sensing,” Opt. Lett. 37(11), 2130–2132 (2012). [CrossRef]  

32. M. Cho and B. Javidi, “Computational reconstruction of three dimensional integral imaging by rearrangement of elemental image pixels,” J. Display Technol. 5(2), 61–65 (2009). [CrossRef]  

33. Y. Xing, X. Y. Lin, L. B. Zhang, et al., “Integral imaging-based tabletop light field 3D display with large viewing angle,” Opto-Electron. Adv. 6(6), 220178 (2023). [CrossRef]  

34. Y. M. Kim, K. H. Choi, and S. W. Min, “Analysis on expressible depth range of integral imaging based on degree of voxel overlap,” Appl. Opt. 56(4), 1052–1061 (2017). [CrossRef]  

35. C. G. Luo, X. Xiao, M. Martínez-Corral, et al., “Analysis of the depth of field of integral imaging displays based on wave optics,” Opt. Express 21(25), 31263–31273 (2013). [CrossRef]  




Figures (12)

Fig. 1.
Fig. 1. Diagram of voxel drifting caused by axial position error.
Fig. 2.
Fig. 2. (a) Schematic diagram of sub-pixel marking method, and (b) the process of sub-pixel searching.
Fig. 3.
Fig. 3. Schematic diagram of sub-pixel marking method in lateral position error measurement.
Fig. 4.
Fig. 4. Principle of proposed depth-based sub-pixel correction method.
Fig. 5.
Fig. 5. Variation curve of $k_i$ when $g = 11.8$ mm and $z^{\prime}_{\mathrm{lens}\text{-}i} = 12.2$ mm.
Fig. 6.
Fig. 6. Drift error without correction.
Fig. 7.
Fig. 7. Drift error corrected by (a) the former method in Ref. [29] and (b) proposed depth-based sub-pixel correction method.
Fig. 8.
Fig. 8. Drift error curves of $\Delta t_i$, $\Delta t_i^{\prime}$, and $\Delta t_i^{\prime\prime}$.
Fig. 9.
Fig. 9. (a)The experimental system, and (b) lens array.
Fig. 10.
Fig. 10. Measurement results of actual axial position of lens array. Results of the reference 3D voxel overlapping with the image points from the (a) ideal homologous sub-pixels and (b) actual homologous sub-pixels, (c) histogram of the actual axial position, and (d) distribution of the axial error.
Fig. 11.
Fig. 11. Optical reconstruction results of the 1951 USAF resolution chart. Uncorrected results at depths of (a) 174 mm and (b) 184 mm, and results corrected by our proposed method at depths of (c) 174 mm and (d) 184 mm.
Fig. 12.
Fig. 12. Optical reconstruction results of 3D scene, (a) uncorrected results, (b) corrected by the former method, and (c) corrected by our proposed method.

Tables (1)


Table 1. Specifications of the developed integral imaging display system

Equations (14)


$x_{\mathrm{subpx}}^{i}=\dfrac{z_{\mathrm{vx}}^{\mathrm{ref}}\left(x_{\mathrm{lens}}^{i}-x_{\mathrm{lens}}^{\mathrm{ref}}\right)}{z_{\mathrm{vx}}^{\mathrm{ref}}-g}+x_{\mathrm{subpx}}^{\mathrm{ref}}.$
$M=\sum\nolimits_{i}\left|x_{\mathrm{vx}}^{i}-x_{\mathrm{vx}}^{\mathrm{ref}}\right|,$
$z_{\mathrm{lens}}^{i}=\dfrac{x_{\mathrm{subpx}}^{\mathrm{ref}}-x_{\mathrm{subpx}}^{i}-\left(x_{\mathrm{lens}}^{\mathrm{ref}}-x_{\mathrm{lens}}^{i}\right)}{x_{\mathrm{subpx}}^{\mathrm{ref}}-x_{\mathrm{subpx}}^{i}}\,l_c,$
$\delta\le\dfrac{P_s\,l}{2\left|x_{\mathrm{lens}}^{\mathrm{ref}}-x_{\mathrm{lens}}^{i}\right|},$
$d_{\mathrm{lateral}}=\dfrac{l_c-g}{l_c}\left|x_{\mathrm{subpx}}^{i}-x_{\mathrm{subpx}}^{\mathrm{ref}}\right|,$
$x_{\mathrm{lens}}^{i}=\left|x_{\mathrm{lens}}^{\mathrm{ref}}-d_{\mathrm{lateral}}\right|.$
$k_i=\dfrac{\Delta x_i^{\prime}}{\Delta x_i}=\dfrac{z_{\mathrm{lens}}^{i}\left(z_{\mathrm{vx}}^{i}-g\right)}{g\left(z_{\mathrm{vx}}^{i}-z_{\mathrm{lens}}^{i}\right)}.$
$\Delta z_{\mathrm{Geom}}=z_{\max}-z_{\min}=\dfrac{2g\,p\,p_d\,f^2}{(g-f)^2p^2-f^2p_d^2},$
$\Delta z_{\mathrm{Wave}}=\dfrac{2\sqrt{a^4b^2+a^2b^4\tan(\alpha_e/2)+a^2b^2d^2\tan^2(\alpha_e/2)}}{\left|a^2-b^2\tan^2(\alpha_e/2)\right|},$
$k_i=\dfrac{z_{\mathrm{lens}}^{i}\left(l_c-g\right)}{g\left(l_c-z_{\mathrm{lens}}^{i}\right)}.$
$\Delta t_i=\dfrac{\Delta x_i\left|z_{\mathrm{lens}}^{i}-g\right|z_{\mathrm{vx}}^{i}}{g\sqrt{\Delta x_i^{2}+z_{\mathrm{lens}}^{i\,2}}},$
$\Delta T=\sum_{i=1}^{N}\Delta t_i.$
$\Delta t_i^{\prime}=\dfrac{\Delta x_i\left|z_{\mathrm{lens}}^{i}-g\right|}{\sqrt{\Delta x_i^{2}+g^{2}}},$
$\Delta t_i^{\prime\prime}=\dfrac{\Delta x_i\left|z_{\mathrm{lens}}^{i}-g\right|\left|l_c-z_{\mathrm{vx}}^{i}\right|}{\sqrt{\Delta x_i^{2}(l_c-g)^{2}+g^{2}(l_c-z_{\mathrm{lens}}^{i})^{2}}}.$
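As a quick numerical illustration of why the correction coefficient must be depth-dependent, the sketch below (not the authors' code) evaluates $k_i = z_{\mathrm{lens}}^{i}(z_{\mathrm{vx}}^{i}-g)/[g(z_{\mathrm{vx}}^{i}-z_{\mathrm{lens}}^{i})]$ at several voxel depths; $g = 11.8$ mm and $z_{\mathrm{lens}}^{i} = 12.2$ mm follow the sample values of Fig. 5, while the voxel depths are arbitrary illustrative choices.

```python
def correction_coefficient(z_lens_i, z_vx, g):
    """Ratio k_i between the required sub-pixel shift on the EIA
    and the voxel drift, for a voxel reconstructed at depth z_vx."""
    return z_lens_i * (z_vx - g) / (g * (z_vx - z_lens_i))

g, z_lens_i = 11.8, 12.2          # mm, values as in Fig. 5
for z_vx in (150.0, 174.0, 184.0):  # illustrative voxel depths in mm
    print(z_vx, correction_coefficient(z_lens_i, z_vx, g))
```

Because $k_i$ varies with $z_{\mathrm{vx}}^{i}$ (approaching the constant $z_{\mathrm{lens}}^{i}/g$ only for very distant voxels), a single depth-independent coefficient cannot correct voxels at all depths, which is the motivation for the depth-based correction method.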