
Enlargement of the viewing zone and size of a reconstructed image in electro-holography using multiple reconstruction lights by eye-tracking


Abstract

To solve the problem of the narrow viewing zone in electro-holography, we propose a method using eye-tracking and a property of holography where the viewing zone varies with the angle of the reconstruction light. The method can enlarge the viewing zone without moving the optical elements for higher-order diffracted light removal and high-refresh-rate devices. The size of the reconstructed image is also enlarged using lenses. We conducted an experiment to validate the effectiveness of our method, and the results indicate that the viewing zone was enlarged by 1.44 times and the size of the reconstructed image was enlarged by 1.49 times.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

Holography is an ideal technology that satisfies all physiological elements for stereoscopic perception when displaying three-dimensional (3D) images because it reconstructs the light waves of an object [1]. In a technique known as electro-holography [2], a hologram is displayed on a spatial light modulator (SLM), and a reconstructed image is formed by the incidence of the reconstruction light. In electro-holography, 3D images can be animated. However, electro-holography has the problem of a narrow viewing zone, which is the range of positions from which a reconstructed image can be observed. The viewing angle, which is a measure of the size of the viewing zone, is expressed as twice the maximum diffraction angle of the SLM displaying the hologram. The maximum diffraction angle $\theta$ is calculated with the following equation:

$$\theta = {\sin}^{- 1} \left({\frac{\lambda}{{2p}}} \right),$$
where $\lambda$ indicates the wavelength of the reconstruction light, and $p$ indicates the pixel pitch of the SLM. Equation (1) shows that the viewing zone is determined by the pixel pitch of the SLM, but substantial improvements in pixel pitch cannot be expected soon because of the enormous cost and time involved in device development.
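As a quick numerical check, the following sketch evaluates Eq. (1). The wavelength and pixel pitch are our assumptions (a red laser diode and a 3.74 µm pitch, chosen to be consistent with the 4.89° diffraction angle reported in Section 3), not values taken from Table 1, which is not reproduced here.

```python
import numpy as np

# Eq. (1): maximum diffraction angle of the SLM.
wavelength = 638e-9    # assumed red-LD wavelength (m)
pixel_pitch = 3.74e-6  # assumed SLM pixel pitch (m)

theta = np.arcsin(wavelength / (2 * pixel_pitch))
print(f"max diffraction angle: {np.degrees(theta):.2f} deg")      # 4.89 deg
print(f"viewing angle 2*theta: {2 * np.degrees(theta):.2f} deg")  # 9.78 deg
```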

Therefore, to enlarge the viewing zone regardless of the pixel pitch, a number of methods have been proposed. One is spatial multiplexing [3]. This method uses multiple SLMs and spatially tiles the viewing zones reconstructed by the individual SLMs. However, each SLM is illuminated by a separate plane wave and tiled by a lens, requiring high-precision alignment and adjustment to obtain a continuous viewing zone. Therefore, a method has been proposed that uses cone mirrors and a single astigmatic wave to illuminate all SLMs [4]. Although these methods can significantly enlarge the viewing zone regardless of device performance, they have the disadvantages of increased cost and a complex optical system due to the use of multiple SLMs.

Cylindrical holography is a technique [5,6] for achieving a 360° viewing angle. In this technique, a cylindrical hologram is recorded, and a mirror rotating at the center of the cylinder directs the reconstruction light in all 360° of directions, making it possible to observe the reconstructed image over the entire 360° range. Although 360° is an ideal viewing zone, this technique records holograms on film and does not enable animated images. In addition, it is difficult to realize cylindrical holography with electro-holography because there are currently no cylindrical counterparts of the liquid crystal devices, such as SLMs, that are used to display animated images. Another method is to enlarge the viewing angle by sequentially combining higher-order diffracted light [7]. However, since diffracted light of different orders is combined, the degree to which the viewing angle can be enlarged is limited. The resolution-redistribution method [8] enlarges the viewing angle by increasing the horizontal resolution of the SLM. However, the reconstructed images are horizontal-parallax only.

Another method for enlarging the viewing zone regardless of the pixel pitch is time multiplexing. In this method, different viewing zones are tiled in time sequence. One time-multiplexing method enlarges the viewing zone by changing the direction of the reconstruction light with a mirror rotating at high speed [9–11]. Because each instantaneous viewing zone is small and the rotating mirror continuously sweeps its orientation, the rotation must be very fast. Therefore, the SLM requires a high refresh rate, and a mechanical system is needed to rotate the mirror at high speed. These methods require a precise design to synchronize the motor with the high-speed SLM and are not scalable due to the use of special optics. Other time-multiplexing methods using a mechanical scanner such as a galvanometer mirror have been proposed [12–15]. In these methods, the viewing zone is enlarged by scanning it horizontally with the mechanical scanner. Since scanning the viewing zone with a mechanical scanner requires a high refresh rate, these methods also demand a high-refresh-rate display device and involve a complex optical system.

Another time-multiplexing method has been proposed that uses a scalable and simple optical system that does not require a complex mechanical system [16,17]. This method focuses on the angle of incidence of the reference light onto the hologram, which is related to the viewing zone of holography, and time multiplexes multiple reconstruction lights to enlarge the viewing zone electronically. With this method, multiple holograms with different viewing zones are recorded. The recorded holograms are switched and displayed on the SLM at high speed, and the reconstruction light corresponding to each hologram is switched. Since this method uses a single SLM, the cost can be reduced and the optical system can be simplified. In these methods, the optical path of higher-order diffracted light varies depending on the incident angle of the reconstruction light. Therefore, to remove higher-order diffracted light, it is necessary to mechanically move the optical elements corresponding to the switching of reconstruction lights. Also, if the time required for reconstruction lights to switch is longer than the time resolution of the human eye, the reconstructed image will flicker. Therefore, a high-refresh-rate SLM is needed to enlarge the viewing zone by increasing the number of reconstruction lights. In addition, this method cannot record objects larger than the SLM because it requires that the same object be recorded with reference lights at various angles.

Another method for enlarging the viewing zone regardless of the pixel pitch is to use eye-tracking, and many 3D displays using eye-tracking have been proposed. Eye-tracking has been used for non-holographic displays such as integral 3D displays [18], stereoscopic 3D displays [19], and light field displays [20]. A previous study combining holography and eye-tracking is SeeReal’s holographic display [21]. In that study, the farther the reconstructed image is placed from the observer, the smaller the sub-hologram (SH) becomes, which reduces the computational complexity. This approach is therefore suitable for applications in which the reconstructed image is observed at a distance, but not for interactive applications in which the observer reaches out and touches the reconstructed image.

We propose a method that combines the multiple-reconstruction-light approach with eye-tracking, selecting a single reconstruction light so that higher-order diffracted light does not enter the eye. In addition, the reconstructed image is made larger than the SLM by lens-based enlargement. The proposed method prevents higher-order diffracted light from affecting observation, requires no mechanical movement of optical elements, and enlarges the viewing zone further without the need for a high-refresh-rate SLM. In our study, the reconstructed image can be formed within reach of the observer’s hand. The distance between the observer and the reconstructed image does not affect the time required for switching the reconstruction lights, and since an amplitude-only hologram is used in this study, the computation time is less than that of a phase-only hologram. Many fast computation methods have been studied for amplitude-only holograms, and real-time computation is also possible.

Fig. 1. Viewing zone at different angles of reconstruction lights.

2. PROPOSED METHOD

With our method, multiple holograms are recorded with reference lights at different angles of incidence in advance. For each hologram, the reconstructed image can be observed in a different viewing zone, as shown in Fig. 1, by illuminating the hologram with a reconstruction light at the same angle as the incident angle of the reference light. In this study, the angle of incidence of the reconstruction light is determined by the positional relationship between the convex lens and the light source, as shown in Fig. 2. Each light source is placed in the focal plane of the lens, i.e., at a distance from the lens along the optical axis equal to its focal length, so that the emitted light is collimated into a plane wave. The angle of incidence of the reconstruction light $\alpha$ is calculated with the following equation:

$$m\alpha = \tan^{-1}\left(\frac{P_m}{f_c}\right),\quad \begin{cases} m = 0, \pm 1, \ldots, \pm\dfrac{M-1}{2} & {\rm if}\ M\ {\rm is\ odd,}\\[6pt] m = \pm\dfrac{1}{2}, \pm\dfrac{3}{2}, \ldots, \pm\dfrac{M-1}{2} & {\rm if}\ M\ {\rm is\ even,}\end{cases}$$
where $P_m$ indicates the displacement of the $m$-th light source from the optical axis, $f_c$ indicates the focal length of the convex lens, and $M$ indicates the number of reconstruction lights. By arranging multiple light sources and turning them on and off appropriately, it is possible to switch between reconstruction lights with different angles of incidence. When a hologram is illuminated with a reconstruction light, higher-order diffracted light, which interferes with observation, is generated. The optical path of the higher-order diffracted light varies depending on the angle of incidence of the reconstruction light, as shown in Fig. 1. However, as also shown in Fig. 1, for every angle of incidence there is a viewing zone that is not affected by higher-order diffracted light. Therefore, by switching the reconstruction light so that this zone follows the eye, it is possible to prevent higher-order diffracted light from affecting observation. The viewing zones that are not affected by higher-order diffracted light are determined geometrically by the equations given below, and one reconstruction light is selected on the basis of the determined zones and eye-tracking. To provide the selected reconstruction light, a microcomputer switches the light source on the basis of the selection result obtained through serial communication with the computer.
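As an illustration of this geometry, the sketch below inverts Eq. (2) to place the light sources: a source displaced by $P_m = f_c \tan(m\alpha)$ in the focal plane yields a plane wave incident at angle $m\alpha$. The focal length is an assumed value for illustration, not one taken from the paper.

```python
import numpy as np

def source_offsets(M: int, alpha_deg: float, f_c: float) -> np.ndarray:
    """Offsets P_m of the M light sources from the optical axis.

    Inverting Eq. (2): a source in the focal plane of the collimating lens,
    displaced by P_m = f_c * tan(m * alpha), produces a plane wave incident
    at angle m * alpha. The index m runs over 0, +/-1, ... for odd M and
    +/-1/2, +/-3/2, ... for even M; both cases reduce to arange(M) - (M-1)/2.
    """
    m = np.arange(M) - (M - 1) / 2
    return f_c * np.tan(np.radians(m * alpha_deg))

# Three sources at 3.76 deg spacing; f_c = 300 mm is an assumed focal length.
print(source_offsets(3, 3.76, 300.0))   # [-19.72, 0.0, 19.72] mm
```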
Fig. 2. Angle of reconstruction light.

Fig. 3. Viewing zone in proposed method.

A. Viewing Zone

The following is an explanation of the viewing zone in the proposed method. When the number of light sources is three, the viewing zone is the range shown in Fig. 3. The $u$-axis in Fig. 3 is the axis of the observer’s position. Each viewing zone shall be referred to as Viewing zone ${-}{1}$, Viewing zone 0, and Viewing zone 1, from left to right. However, only the zone not affected by higher-order diffracted light is shown. The viewing angle $\phi$ is calculated with the following equation:

$$\phi = \alpha (M - 1) + 2\theta .$$

In order for the eye to follow the viewing zone unaffected by higher-order diffracted light, it is necessary to switch reconstruction lights and holograms depending on the position of the eye. In this study, since the reconstruction light is switched in the horizontal direction, it is necessary to identify the width of each viewing zone and the overlapping width of the adjacent viewing zone in the horizontal direction. One viewing zone width ${U_m}$ is calculated with the following equation:

$${U_m} = Z(\tan ({|m|\alpha + \theta} ) - \tan ({|m|\alpha - \theta} )) - L,$$
where $Z$ indicates the vertical distance between the hologram surface and the $u$-axis, $m$ indicates the index of the reconstruction light (and of the corresponding viewing zone), and $L$ indicates the width of the reconstructed image. The overlapping width of adjacent viewing zones ${N_{m,m \pm 1}}$ is calculated with the following equation:
$$\begin{split}{N_{m,m \pm 1}} &= Z(\tan ({|m|\alpha + \theta} ) - \tan ({(|m| + 1)\alpha - \theta} )) \\&\quad+ D(\tan |m|\alpha - \tan (|m| + 1)\alpha) - L,\end{split}$$
where $D$ indicates the distance between the reconstructed image and the hologram.
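The following sketch evaluates Eqs. (3)–(5). The numerical values are those of the experiment described later ($\alpha = 3.76°$, reduced diffraction angle $3.26°$, $Z = 500$ mm, $D = 11.25$ mm, $L = 16.5$ mm); they reproduce the 14.04° viewing angle and the roughly 6.9 mm overlap reported in Section 4.

```python
import numpy as np

def viewing_angle(alpha, theta, M):
    """Eq. (3): phi = alpha * (M - 1) + 2 * theta (radians)."""
    return alpha * (M - 1) + 2 * theta

def zone_width(m, alpha, theta, Z, L):
    """Eq. (4): width U_m of viewing zone m along the u-axis."""
    return Z * (np.tan(abs(m) * alpha + theta) - np.tan(abs(m) * alpha - theta)) - L

def zone_overlap(m, alpha, theta, Z, D, L):
    """Eq. (5): overlap N_{m,m+/-1} of adjacent viewing zones."""
    return (Z * (np.tan(abs(m) * alpha + theta) - np.tan((abs(m) + 1) * alpha - theta))
            + D * (np.tan(abs(m) * alpha) - np.tan((abs(m) + 1) * alpha)) - L)

# Values matching the experiment in Section 4 (angles in radians, lengths in mm)
alpha, theta_p = np.radians(3.76), np.radians(3.26)
Z, D, L = 500.0, 11.25, 16.5
print(np.degrees(viewing_angle(alpha, theta_p, 3)))   # 14.04 deg
print(zone_width(0, alpha, theta_p, Z, L))            # ~40.5 mm central zone
print(zone_overlap(0, alpha, theta_p, Z, D, L))       # ~6.9 mm (cf. Section 4.E)
```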

B. Size of Reconstructed Image

The following is an explanation of the size of the reconstructed image in the proposed method. The same object is recorded with reference lights at various angles. In this case, the object must be placed in the yellow shaded area in Fig. 4 because the light wave must be recorded on the hologram even at the maximum angle of incidence. The limitation on the size of the reconstructed image is calculated with the following equation:

$$L \lt H - 2D\tan ({|m|\alpha + \theta} ),$$
where $H$ indicates the width of the hologram. According to Eq. (6), the maximum size of the recorded object decreases as the incident angle $\alpha$ of the reconstruction light increases, while according to Eq. (3), the viewing angle increases with $\alpha$. Therefore, there is a trade-off between the viewing zone and the size of the reconstructed image in the proposed method. The trade-off when the distance $D$ between the reconstructed image and the hologram is 3, 13, and 23 mm is shown in Fig. 5. From Fig. 5, the smaller the distance $D$ between the reconstructed image and the hologram, the more gradual the trade-off.
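A minimal sketch of this trade-off, evaluating the bound of Eq. (6) for the distances used in Fig. 5. The hologram width (taken as the 14.36 mm SLM width from Section 4) and the sample values of $\alpha$ are assumptions for illustration.

```python
import numpy as np

def max_image_width(H, D, m, alpha, theta):
    """Eq. (6): upper bound on the image width, L < H - 2 D tan(|m| alpha + theta)."""
    return H - 2 * D * np.tan(abs(m) * alpha + theta)

H, theta, m = 14.36, np.radians(4.89), 1        # hologram width = SLM width (mm)
for D in (3.0, 13.0, 23.0):                     # distances used in Fig. 5
    for alpha_deg in (2.0, 4.0, 6.0):           # illustrative incident angles
        L_max = max_image_width(H, D, m, np.radians(alpha_deg), theta)
        print(f"D = {D:4.1f} mm, alpha = {alpha_deg} deg -> L < {L_max:5.2f} mm")
```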
Fig. 4. Limitations on size of reconstructed image.

Fig. 5. Trade-off between viewing zone and size of reconstructed image.

From Eq. (6), it can be seen that objects larger than the SLM cannot be recorded. Therefore, a method to enlarge the reconstructed image is necessary. Several methods for enlarging reconstructed images have been proposed. Sasaki et al. [22] demonstrated a holographic display system using 16 SLMs of $4\;{\rm K} \times 32\;{\rm K}$ pixels to achieve a large image size. Lum et al. [23] achieved a large holographic display with a high pixel count of 240 Mpixels by tiling 24 sub-holograms with a fast rotating mirror. Tanjung et al. [24] realized a large holographic display with a high pixel count of 377.5 Mpixels by physically aligning 24 high-speed SLMs and tiling them with additional galvanometer mirrors. The method of using multiple SLMs involves a complicated and expensive optical system, while the method of using high-speed rotating mirrors requires a mechanical system and is very difficult to synchronize. The proposed method aims for a simple optical system that does not require a mechanical system. The reconstructed image is enlarged by using two lenses as shown in Fig. 6. A virtual hologram surface is formed at the focal distance ${f_B}$ downstream of convex lens B. Equations (4) and (5) are then evaluated for the virtual hologram and the enlarged reconstructed image. The size of the enlarged reconstructed image $L^\prime$ is calculated with the following equation:

$$L^\prime = \frac{{{f_B}}}{{{f_A}}}L,$$
where ${f_A}$ and ${f_B}$ indicate the focal lengths of convex lenses ${A}$ and ${B}$, respectively. However, the two lenses reduce the maximum diffraction angle. The reduced maximum diffraction angle $\theta ^\prime $ is calculated with the following equation:
$$\theta ^\prime = {\tan}^{- 1} \left({\frac{{{f_A}}}{{{f_B}}}\tan \theta} \right).$$
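A short numerical sketch of Eqs. (7) and (8). The focal lengths here are assumptions chosen to be consistent with the 1.5× magnification stated in Section 3 (Table 1 itself is not reproduced in this text).

```python
import numpy as np

f_A, f_B = 200.0, 300.0       # assumed focal lengths giving the 1.5x magnification
L = 13.0                      # recorded object width, mm (Section 4.C)
theta = np.radians(4.89)      # SLM maximum diffraction angle

L_prime = (f_B / f_A) * L                             # Eq. (7): 19.5 mm
theta_prime = np.arctan((f_A / f_B) * np.tan(theta))  # Eq. (8): 3.26 deg
print(L_prime, np.degrees(theta_prime))
```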
Fig. 6. Enlargement of size of reconstructed image by lenses.

From Eq. (3), the viewing angle is also reduced, but this can be resolved by increasing the number of reconstruction lights.

C. Recording Process

In the proposed method, a hologram is generated by using a computer for the holography recording process. This hologram is called a computer-generated hologram (CGH) [25]. First, the virtual object to be recorded on the hologram is defined in the computer, and the light wave propagation from the virtual object to the surface where the hologram is placed is calculated according to the set optical system parameters. Then, the light wave propagation from the reference light source used for recording to the hologram surface is calculated. Finally, the interference between the object light and the reference light is calculated, and the interference fringes are output as two-dimensional image data to generate the CGH.

Although several methods have been proposed for calculating object light in CGH, the proposed method uses the point-based method [26] because it can represent complex objects and its calculation is simple. The hologram surface is an $xy$-plane, and the hologram is positioned at $z = 0$. When the position coordinates of a point light source representing the virtual object are $({x_i},{y_i},{z_i})$, the light wave distribution on the hologram surface ${u_i}(x,y,z = 0)$ is calculated with the following equations:

$${u_i}(x,y,z = 0) = \frac{{{a_i}}}{{{r_i}}}\exp \{j(k{r_i} + {\psi _i})\} ,$$
$$k = \frac{{2\pi}}{\lambda},$$
$${r_i} = \sqrt {{{(x - {x_i})}^2} + {{(y - {y_i})}^2} + z_i^2} ,$$
where ${a_i}$ indicates the brightness value of the point light source, ${\psi _i}$ indicates the initial phase of the point light source, ${r_i}$ indicates the distance from point light source position $({x_i},{y_i},{z_i})$ to pixel $(x,y)$ on the hologram surface, and $k$ indicates the wave number and is determined according to the wavelength $\lambda$ of the reconstruction light.

The complex amplitude distribution of the object light $O(x,y)$ is obtained by calculating the light wave propagation from each point source using Eq. (9) and adding them all together. If the virtual object to be recorded consists of $N$ point light sources, $O(x,y)$ is calculated with the following equation:

$$O(x,y) = \sum\limits_{i = 1}^N {u_i}(x,y,z = 0).$$
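The following is a minimal numerical sketch of Eqs. (9)–(12), superposing spherical waves from point sources on a sampled hologram plane. The grid size, wavelength, and two-point object are illustrative assumptions, not the paper’s parameters.

```python
import numpy as np

# Object light on the hologram plane z = 0 by the point-based method,
# Eqs. (9)-(12). All numerical values here are illustrative assumptions.
wavelength = 638e-9            # assumed red-LD wavelength (m)
k = 2 * np.pi / wavelength     # wave number, Eq. (10)
p = 3.74e-6                    # assumed pixel pitch (m)
nx, ny = 1024, 1024            # hologram resolution (illustrative)
x = (np.arange(nx) - nx / 2) * p
y = (np.arange(ny) - ny / 2) * p
X, Y = np.meshgrid(x, y)

# Virtual object: point sources (x_i, y_i, z_i) with brightness a_i
points = [(-1e-3, 0.0, 20e-3, 1.0), (1e-3, 0.5e-3, 25e-3, 1.0)]
rng = np.random.default_rng(0)           # random initial phases psi_i

O = np.zeros((ny, nx), dtype=complex)    # object light O(x, y), Eq. (12)
for xi, yi, zi, ai in points:
    r = np.sqrt((X - xi) ** 2 + (Y - yi) ** 2 + zi ** 2)   # Eq. (11)
    psi = rng.uniform(0.0, 2.0 * np.pi)
    O += (ai / r) * np.exp(1j * (k * r + psi))             # Eq. (9)
```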

The reference light is assumed to be collimated and incident on the hologram surface at an angle $m\alpha$ to the $x$-axis. Comparing the origin of the hologram surface with an arbitrary point $(x,y)$, the light wave arriving at the hologram surface has a phase difference of $kx\sin m\alpha$. Therefore, when the phase at the origin is zero, the unit-amplitude light wave distribution of the reference light ${R_u}(x,y)$ is calculated with the following equation:

$${R_u}(x,y) = \exp (jkx\sin m\alpha).$$
The amplitude ${A_0}$ of the reference light is set to the maximum amplitude of the object light and is calculated with the following equation:
$${A_0} = \max |O(x,y)|.$$
Therefore, the complex amplitude distribution of the reference light $R(x,y)$ is calculated with the following equation:
$$R(x,y) = {A_0}{R_u}(x,y) = \max |O(x,y)|\exp (jkx\sin m\alpha).$$

After calculating the complex amplitude distribution of the object light and the reference light on the hologram surface, the interference of these light waves is calculated and recorded as CGH. The light intensity distribution $I(x,y)$ of the interference fringe generated by the interference between the object light and the reference light is calculated with the following equation:

$$\begin{split}I(x,y) &= |O(x,y) + R(x,y)|^2\\ &= |O(x,y)|^2 + |R(x,y)|^2 + O(x,y){R^*}(x,y) + {O^*}(x,y)R(x,y),\end{split}$$
where ${O^*}(x,y)$ and ${R^*}(x,y)$ indicate the complex conjugates of the object light $O(x,y)$ and the reference light $R(x,y)$, respectively.
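Continuing the sketch above (it reuses `X`, `k`, and `O` from the previous block), the following lines form the reference wave and interference fringe of Eqs. (13)–(16) and quantize the result to an 8-bit amplitude CGH; the quantization step is our addition, not part of the equations.

```python
# Reference wave and interference fringe, Eqs. (13)-(16).
# Reuses X, k, and O from the previous sketch.
m_alpha = np.radians(3.76)                        # incidence angle m*alpha

A0 = np.max(np.abs(O))                            # Eq. (14)
R = A0 * np.exp(1j * k * X * np.sin(m_alpha))     # Eqs. (13) and (15)

I = np.abs(O + R) ** 2                            # Eq. (16)
cgh = np.uint8(255 * (I - I.min()) / (I.max() - I.min()))  # 8-bit CGH image
```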

D. Reconstruction Process

A flowchart of the reconstruction process is shown in Fig. 7. The eye tracker identifies the position of the observer’s eye and sends the three-dimensional coordinates of the eye to the computer. Upon receiving the coordinates, the computer determines which light source to turn on and off and which hologram to display on the basis of Eqs. (4) and (5). Depending on the selection results, the computer tells the microcomputer which light source to turn on and off, and the microcomputer switches the light source. The computer also sends the hologram data to the SLM, and the SLM displays the hologram. By switching the light source so that the appropriate reconstruction light illuminates the appropriate hologram through the optical system, the observer sees a 3D scene.
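A hedged sketch of the per-frame selection logic in Fig. 7 for three reconstruction lights. All names here (`select_zone`, `microcontroller`, `slm`) are hypothetical, not the authors’ code; the switching positions $u = \pm 16.8$ mm are those given for $Z = 500$ mm in Section 4.E.

```python
def select_zone(eye_u: float, switch_u: float = 16.8) -> int:
    """Map the eye's horizontal position u (mm) to a zone index m in {-1, 0, 1}.

    switch_u is the switching position, placed at the center of the
    overlap of adjacent viewing zones (u = +/-16.8 mm at Z = 500 mm).
    """
    if eye_u < -switch_u:
        return -1
    if eye_u > switch_u:
        return 1
    return 0

def on_eye_coordinates(eye_u, current_m, slm, microcontroller, holograms):
    """Called per eye-tracker sample; slm and microcontroller are hypothetical."""
    m = select_zone(eye_u)
    if m != current_m:
        microcontroller.switch_light_source(m)   # via serial communication
        slm.display(holograms[m])                # hologram recorded at angle m*alpha
    return m
```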

Fig. 7. Flowchart of reconstruction process.

3. PROPOSED SYSTEM

Figure 8 shows a block diagram of the optical system in the proposed method. The parameters of the system are listed in Table 1, and a photograph of the system is shown in Fig. 9. The light sources are laser diodes (LDs). The SLM is a reflective phase-only SLM, GAEA-2 (HOLOEYE Photonics AG). The eye tracker is a screen-based eye tracker, Tobii Pro Fusion (Tobii AB).

Fig. 8. Block diagram of optical system in proposed method.

Table 1. Parameters of Proposed System

Fig. 9. Photograph of optical system.

The light source array is aligned horizontally to enlarge the viewing zone in the horizontal direction. Light emitted from the array is collimated by convex lens 1. After the collimated light is modulated by the SLM, it passes through the $4f$ optical system [27], which consists of convex lens 2, a barrier, and convex lens 3, where the zero-order and conjugate light are removed and the reconstructed image is enlarged. The eye tracker is placed directly below convex lens 3 and does not interfere with observation. The notations ${f_1}$, ${f_2}$, and ${f_3}$ in Fig. 8 refer to the focal lengths of convex lenses 1, 2, and 3, respectively. From Eq. (7), the magnification of the reconstructed image using ${f_2}$ and ${f_3}$ is $\frac{{{f_3}}}{{{f_2}}}$. From Table 1, the size of the reconstructed image is enlarged by 1.5 times.

Fig. 10. Observation in experiment.

Fig. 11. Reconstructed image taken by moving in $u$-axis direction (see Visualization 1). (a) $u = -30$ in Viewing zone $-1$. (b) $u = 0$ in Viewing zone 0. (c) $u = 30$ in Viewing zone 1.

From Table 1 and Eq. (1), the maximum diffraction angle is determined to be 4.89°. Since the viewing angle is twice the maximum diffraction angle, the viewing angle before enlargement is 9.78°. The theoretical value of the maximum diffraction angle of the virtual hologram after magnification by lenses is 3.26° from Eq. (8). Three light sources are used, and each light source is positioned so that the reconstruction light enters the virtual hologram at ${-}{3.76^ \circ}$, 0°, and 3.76°, respectively. Therefore, the theoretical value of the viewing angle is 14.04° from Eq. (3). Compared with the viewing angle of 9.78° before enlargement, the viewing zone in the proposed system is enlarged by approximately 1.44 times.
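The chain of values in this paragraph can be reproduced directly from Eqs. (1), (3), and (8); the following check uses the wavelength and pixel-pitch assumptions from the sketch in Section 1.

```python
import numpy as np

theta = np.arcsin(638e-9 / (2 * 3.74e-6))   # Eq. (1): 4.89 deg
theta_p = np.arctan(np.tan(theta) / 1.5)    # Eq. (8): 3.26 deg after the lenses
alpha, M = np.radians(3.76), 3
phi = alpha * (M - 1) + 2 * theta_p         # Eq. (3)
print(np.degrees(phi))                      # 14.04 deg
print(phi / (2 * theta))                    # ~1.44x enlargement
```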

The eye tracker allows the observer to move their head within a range of $400 \times 250\;{\rm mm}$ at a distance of 650 mm from the eye tracker. From these values, the visual range of the eye tracker can be calculated geometrically to be 18.92° both vertically and horizontally. Since the viewing angle is theoretically $14.04^\circ \times 3.26^\circ$, the entire viewing zone is within the effective area of the eye tracker.

4. EXPERIMENTS

The proposed system was constructed to demonstrate its effectiveness. Figure 10 shows the observation setup used in the experiment. The reconstructed image was formed in front of the lens and observed in a dark room to prevent the influence of undesirable light such as sunlight.

A. Viewing Zone and Parallax

The viewing zone was determined by using a camera to measure the range over which the reconstructed image could be observed. A ruler was attached to the camera, which was moved horizontally by manually turning a knob so that the distance moved could be measured. The camera captured the reconstructed image while moving horizontally; when it reached a position beyond the edge of the viewing zone, the reconstructed image began to disappear, so the camera was stopped at the position just before that point, and the ruler value was read. Since the ruler was graduated in 1 mm increments, the width of the viewing zone was measured with 1 mm accuracy in this experiment. A 3D object of a teapot was recorded. The reconstructed image was formed 11.25 mm from the virtual hologram surface, and its size was $16.5 \times 9.56\;{\rm mm}$. Figure 11 shows the reconstructed image taken while moving in the $u$-axis direction in Fig. 3, at a distance of 450 mm vertically from the hologram surface. Figures 11(a)–11(c) show the cases of $u = - 30$, $u = 0$, and $u = 30$ with the origin at the point where the optical axis and the $u$-axis intersect. From Eqs. (4) and (5), Figs. 11(a)–11(c) correspond to Viewing zone ${-}{1}$, Viewing zone 0, and Viewing zone 1, respectively.

We measured the horizontal moving distance over which the entire reconstructed image could be observed at 400, 450, and 500 mm from the reconstructed image. Figure 12 shows the measurement results. The vertical axis of the graph shows the horizontal travel distance of the camera, and the horizontal axis shows the distance from the reconstructed image to the camera. The blue lines indicate theoretical values, and the red dots indicate measured values. From Fig. 12, the measured values were close to the theoretical values. The viewing angle was determined geometrically from each of the measurements, and their average value was 14.06°. Since the theoretical value of the viewing angle is 14.04° from Section 3, the measured viewing angle agreed closely with the theoretical one. In addition, since the reconstructed image was formed 11.25 mm from the virtual hologram and could be observed 400–500 mm from the virtual hologram, it was confirmed that the reconstructed image can be formed within reach of the observer’s hand. Figure 13 shows that the observer’s hand and the reconstructed image are both in focus. Therefore, the reconstructed image was formed in the air, where the observer could touch it.

The viewing angle before enlargement was compared with the viewing angle enlarged by the proposed method. We measured the viewing angle before enlargement when one reconstruction light was incident at 0° and no enlargement was performed by the lens. When the observer moved away from the reconstructed image by 400, 450, and 500 mm, the horizontal movement distance of the observer was 57, 66, and 75 mm, respectively, so the actual measured value of the viewing angle was calculated to be approximately 9.78°. Therefore, it was confirmed that the viewing angle was enlarged 1.44 times. Figure 14 also shows reconstructed images taken at $u = 40, - 40$ before and after the viewing zone enlargement when the observer was 400 mm away from the reconstructed images. The positions of $u = 40, - 40$ were outside the viewing zone before enlargement but in the viewing zone after enlargement. It was confirmed that a part of the reconstructed image was missing and darkened before the viewing zone was enlarged.

Fig. 12. Measurement of viewing zone width.

Fig. 13. Reconstructed image on finger.

Fig. 14. Reconstructed image before and after enlargement of viewing zone. (a) $u = -40$ before enlargement of viewing zone. (b) $u = 40$ before enlargement of viewing zone. (c) $u = -40$ after enlargement of viewing zone. (d) $u = 40$ after enlargement of viewing zone.

Fig. 15. Parallax of reconstructed image in Viewing zone 0. (a) $u = -17$ in Viewing zone 0. (b) $u = 0$ in Viewing zone 0. (c) $u = 17$ in Viewing zone 0.

Then, the parallax was checked by using two objects. A rectangle was formed 3 mm from the virtual hologram surface, and its size was $12 \times 9\;{\rm mm}$. The rectangle also had vertical grid lines every 3 mm horizontally. A circle was formed 30 mm from the virtual hologram surface, and its diameter was 10.71 mm. As shown in Figs. 15 and 16, parallax was confirmed because the overlap between the rectangle in the foreground and the circle in the background varied depending on the observation position. Figure 15 shows the case of Viewing zone 0, and Figs. 16(a) and 16(b) show Viewing zone $-1$ and Viewing zone 1. The slight distortion of the reconstructed image is probably due to distortion at the edge of the convex lens; this problem can be solved by using a Fresnel lens with a large diameter. From Figs. 15 and 16, no higher-order diffraction image was observed. Therefore, by switching reconstruction lights in accordance with the observer’s eye position, mechanical movement of optical elements for higher-order diffracted light removal is unnecessary.

B. Depth Perception

Figure 17(a) shows that when the circle in back was in focus, the rectangle was out of focus. Figure 17(b) shows that when the rectangle in front was in focus, the circle was out of focus. These figures demonstrate that our holographic display allows for selective eye accommodation to different parts of the 3D scene that are located at different depths.

Fig. 16. Parallax of reconstructed image in Viewing zone $-1$ and 1. (a) $u = -30$ in Viewing zone $-1$. (b) $u = 30$ in Viewing zone 1.

Fig. 17. Depth of reconstructed image. (a) Camera focus on circle in back. (b) Camera focus on rectangle in front.

C. Size of Reconstructed Image

The size of the reconstructed image was measured by placing a ruler at the position where the reconstructed image was formed. Measurements were taken on the rectangle. Figure 18(a) shows that the size of the reconstructed image was 19.4 mm. Since the width of the SLM was 14.36 mm from Table 1, the size of the reconstructed image was confirmed to be enlarged beyond the limit shown in Eq. (6) due to the lenses. Since the object recorded on the hologram was 13 mm and the magnification was 1.5 from Section 3, the theoretical size of the reconstructed image was 19.5 mm. Differences between theoretical and measured values are considered to be due to assembly errors.

Fig. 18. Size of reconstructed image. (a) After enlargement. (b) Before enlargement.

The reconstructed image size before enlargement was measured for comparison with the enlarged size. The results are shown in Fig. 18(b): the size before enlargement was 13 mm. Therefore, it was confirmed that the reconstructed image was enlarged 1.49 times.

D. Reconstructed Image Switching Time

The eye-tracking time and the time required for the computer to switch holograms and reconstruction lights after receiving the eye coordinates from the eye tracker were measured. The eye-tracking time was calculated from the number of times an eye was detected during 30 s. However, since Table 1 shows that the refresh rate of the eye tracker is 120 Hz while the refresh rate of the SLM is 60 Hz, in this study, the computer receives coordinates from the eye tracker once every two frames to switch holograms and reconstruction lights. The time taken to switch holograms and reconstruction lights was measured 2000 times, and the average value was obtained. The results are shown in Table 2. It was confirmed that the switching of holograms and reconstruction lights was performed at a high speed of 0.51 ms. In addition, the eye-tracking time was measured to be 8.43 ms, consistent with the specified eye tracker refresh rate of 120 Hz. However, since the computer receives coordinates from the eye tracker once every two frames, the time from the previous frame until the computer receives coordinates is 16.86 ms. Summing these values, the total time required for switching was 17.37 ms. This value is extremely short compared with the time resolution of the human eye, approximately 50–100 ms, indicating that a human cannot perceive the switching of the reconstructed image. The time required for this switching is the same even if the number of reconstruction lights and holograms increases. Therefore, the viewing zone can be enlarged freely without the reconstructed image flickering.

Fig. 19. Positioning resolution of eye tracker.

E. Positioning Resolution of Eye Tracker

Since the positioning resolution of the eye tracker is important for this method to achieve a continuous viewing zone, an experiment was conducted to analyze it. In Fig. 3, when the observer moves along the $u$-axis at $Z = 500$, the switching of holograms and reconstruction lights is performed at the center position of the overlapping area, and the switching positions are $u = - 16.8$ and $u = 16.8$. The center positions of Viewing zone ${-}{1}$, Viewing zone 0, and Viewing zone 1 are $u = - 33.7$, $u = 0$, and $u = 33.7$. In this experiment, we measured how much error occurred when the eye tracker acquired the coordinates of these five points compared with the actual values. The experiment was conducted with three people, and the average, maximum, and minimum errors are shown in Fig. 19. As shown in Fig. 19, the average error always fell between ${-}{1}$ and 1 mm, and even the outliers always fell between ${-}{2}$ and 2 mm. Since ${N_{0,1}}$ and ${N_{0, - 1}}$ are approximately 6.87 mm, this accuracy is sufficient. As shown in Eq. (5), the larger the distance between the observer and the reconstructed image, the larger the overlap area, and therefore, the larger the allowable error. Considering the average error, the error is acceptable when the overlap area is 2 mm or more. When the observer is 400 mm away from the reconstructed image, the overlapping areas ${N_{0,1}}$ and ${N_{0, - 1}}$ are approximately 2.05 mm. Therefore, when the observer is approximately 400 mm or more away from the reconstructed image, a continuous viewing zone can be considered to be achieved.
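These overlap widths follow directly from Eq. (5) with $m = 0$; a minimal check using the parameters of Section 4 ($\alpha = 3.76°$, $\theta' = 3.26°$, $D = 11.25$ mm, $L = 16.5$ mm):

```python
import numpy as np

def zone_overlap0(Z, alpha, theta, D, L):
    """Eq. (5) with m = 0: overlap N_{0,+/-1} of the central and adjacent zones."""
    return Z * (np.tan(theta) - np.tan(alpha - theta)) - D * np.tan(alpha) - L

alpha, theta_p = np.radians(3.76), np.radians(3.26)
D, L = 11.25, 16.5                                    # mm
for Z in (400.0, 500.0):
    print(Z, zone_overlap0(Z, alpha, theta_p, D, L))  # ~2.05 mm and ~6.9 mm
```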

5. CONCLUSION

We proposed a method combining eye-tracking with a method of enlarging the viewing zone of electro-holography using multiple reconstruction lights. Mechanical movement of the optical elements and a high-refresh-rate SLM are not necessary with our method. Owing to enlargement by lenses, the reconstructed image exceeds the size limitation of Eq. (6). Unlike the spatial multiplexing method, only one SLM is used, making the optical system inexpensive and simple. The reconstructed image can be formed within reach of the observer’s hand. Since this study was limited to the use of only one eye, we aim to enable the use of both eyes in the future. In addition, in this study, holograms were computed in advance and only displayed according to the eye position, but in the future, we aim to calculate holograms in real time according to the eye position.

In the future, further enlargement of the viewing zone can be expected by further increasing the incident angle of the reconstruction light. However, if the reconstruction light is incident at an extremely large angle, the phase difference may cause large distortion in the reconstructed image. In this study, the reconstruction light was incident at relatively small angles, and no fatal distortion of the reconstructed image was observed in the experimental results. Also, the size of the reconstructed image can be enlarged beyond that of this system by using a lens with a long focal length and a large diameter. However, extreme enlargement of the reconstructed image would result in an extreme reduction of the viewing zone, the effects of aberration would be significant, and optical components suitable for such extreme enlargement would be difficult to obtain. Analysis of the theoretical limits of the viewing zone and of the enlargement of the reconstructed image is a subject for future work.

Acknowledgment

These research results were obtained from commissioned research (No. 06801) funded by the National Institute of Information and Communications Technology (NICT), Japan.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. D. Gabor, “A new microscopic principle,” Nature 161, 777–778 (1948). [CrossRef]  

2. P. S. Hilaire, S. A. Benton, M. Lucente, et al., “Electronic display system for computational holography,” Proc. SPIE 1212, 174–182 (1990). [CrossRef]  

3. F. Yaraş, H. Kang, and L. Onural, “Circular holographic video display system,” Opt. Express 19, 9147–9156 (2011). [CrossRef]  

4. T. Senoh, T. Mishina, K. Yamamoto, et al., “Viewing-zone-angle-expanded color electronic holography system using ultra-high-definition liquid crystal displays with undesirable light elimination,” J. Disp. Technol. 7, 382–390 (2011). [CrossRef]  

5. O. D. D. Soares and J. C. A. Fernades, “Cylindrical hologram of 360° field of view,” Appl. Opt. 21, 3194–3196 (1982). [CrossRef]  

6. Y. Sando, M. Itoh, and T. Yatagai, “Fast calculation method for cylindrical computer-generated holograms,” Opt. Express 13, 1418–1423 (2005). [CrossRef]  

7. T. Mishina, M. Okui, and F. Okano, “Viewing-zone enlargement method for sampled hologram that uses high-order diffraction,” Appl. Opt. 41, 1489–1499 (2002). [CrossRef]  

8. Y. Takaki and Y. Tanemoto, “Modified resolution redistribution system for frameless hologram display module,” Opt. Express 18, 10294–10300 (2010). [CrossRef]  

9. Y. Lim, K. Hong, H. Kim, et al., “360-degree tabletop electronic holographic display,” Opt. Express 24, 24999–25009 (2016). [CrossRef]  

10. T. Inoue and Y. Takaki, “Table screen 360-degree holographic display using circular viewing-zone scanning,” Opt. Express 23, 6533–6542 (2015). [CrossRef]  

11. Y. Sando, D. Barada, and T. Yatagai, “Holographic 3D display observable for multiple simultaneous viewers from all horizontal directions by using a time division method,” Opt. Lett. 39, 5555–5557 (2014). [CrossRef]  

12. Y. Takaki and K. Fujii, “Viewing-zone scanning holographic display using a MEMS spatial light modulator,” Opt. Express 22, 24713–24721 (2014). [CrossRef]  

13. Y. Takekawa, Y. Takashima, and Y. Takaki, “Holographic display having a wide viewing zone using a MEMS SLM without pixel pitch reduction,” Opt. Express 28, 7392–7407 (2020). [CrossRef]  

14. Y. Matsumoto and Y. Takaki, “Time-multiplexed color image generation by viewing-zone scanning holographic display employing MEMS-SLM,” J. Soc. Inf. Disp. 25, 515–523 (2017). [CrossRef]  

15. Y. Takaki and M. Nakaoka, “Scalable screen-size enlargement by multi-channel viewing-zone scanning holography,” Opt. Express 24, 18772–18781 (2016). [CrossRef]  

16. R. H.-Y. Chen and T. D. Wilkinson, “Field of view expansion for 3-D holographic display using a single spatial light modulator with scanning reconstruction light,” presented at the 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, Potsdam, Germany, May 4–6, 2009, pp. 1–4.

17. N. Fujimori and Y. Sakamoto, “Wide-viewing zone electro-holography system by using switching of reconstruction light,” in Proceedings of the International Display Workshops (2020), Vol. 27, pp. 496–1499.

18. N. Okaichi, H. Sasaki, M. Kano, et al., “Integral three-dimensional display system with wide viewing zone and depth range using time-division display and eye-tracking technology,” Opt. Eng. 61, 013103 (2022). [CrossRef]  

19. K. H. Yoon, M. K. Kang, H. Lee, et al., “Autostereoscopic 3D display system with dynamic fusion of the viewing zone under eye tracking: principles, setup, and evaluation,” Appl. Opt. 57, A101–A117 (2018). [CrossRef]  

20. C. Jang, K. Bang, S. Moon, et al., “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36, 190 (2017). [CrossRef]  

21. A. Schwerdtner, R. Haussler, and N. Leister, “Large holographic displays for real-time applications,” Proc. SPIE 6912, 69120T (2008). [CrossRef]  

22. H. Sasaki, K. Yamamoto, K. Wakunami, et al., “Large size three-dimensional video by electronic holography using multiple spatial light modulators,” Sci. Rep. 4, 6177 (2014). [CrossRef]  

23. Z. M. A. Lum, X. Liang, Y. Pan, et al., “Increasing pixel count of holograms for three-dimensional holographic display by optical scan-tiling,” Opt. Eng. 52, 015802 (2013). [CrossRef]  

24. R. B. A. Tanjung, X. Xu, X. Liang, et al., “Digital holographic three-dimensional display of 50-Mpixel holograms using a two-axis scanning mirror device,” Opt. Eng. 49, 025801 (2010). [CrossRef]  

25. J. P. Waters, “Holographic image synthesis utilizing theoretical methods,” Appl. Phys. Lett. 9, 405–407 (1966). [CrossRef]  

26. Y. Ogihara and Y. Sakamoto, “Fast calculation method of a CGH for a patch model using a point-based method,” Appl. Opt. 54, A76–A83 (2015). [CrossRef]

27. T. Kurihara and Y. Takaki, “Improving viewing region of 4f optical system for holographic displays,” Opt. Express 19, 17621–17631 (2011). [CrossRef]  

Supplementary Material (1)

Visualization 1: Reconstructed image taken by moving in $u$-axis direction.
