Correction of the wavelength error in transmission of a full-color holographic 3D image

Abstract

When a digital holographic image represented by a sampled wavefield is transmitted and the wavelength used in the three-dimensional (3D) display devices does not agree exactly with the wavelength of the original image data, the reconstructed 3D image will differ slightly from the original. This slight change is particularly problematic for full-color 3D images reconstructed using three wavelengths. A method is proposed here to correct the holographic image data and reduce the problems caused by wavelength mismatch. The effectiveness of the method is confirmed via theoretical analysis and numerical experiments that evaluate the reconstructed images using several image indices.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Technologies related to digital holography are currently used in a variety of fields, including re-creation of heritage artifacts, microscopy, and defect detection in semiconductor structures [1–5]. In computer-generated holograms (CGHs) in particular, the scale has expanded beyond gigapixels because of advances in both computational performance and calculation algorithms [6–9]; at present, it is possible to create sub-terapixel-scale full-parallax CGHs [10,11]. Recently, studies have been conducted to enable use of holography in various displays, e.g., head-up displays and head-mounted displays [12–19]. Research into holographic displays also has a long history [20]. The holographic display offers numerous advantages over conventional three-dimensional (3D) displays, including low power consumption, a high see-through effect, large image depths, reduced dizziness effects, and lower eye fatigue.

In the near future, transmission of holographic images instead of conventional two-dimensional (2D) images will become a reality. Consequently, research into holographic telecommunication and broadcasting technologies for transmission through digital networks will become increasingly important in networks beyond 5G. In the transmission of holographic images, however, considerable problems are expected to be caused by parameter mismatches between the holographic display device and the transmitted holographic images. In holographic broadcasting, where a specific holographic image is sent to multiple receivers, it is impractical to match the parameters of the transmitted data closely to the parameters of the various receiving devices because the holographic display parameters usually vary from device to device.

Unless an appropriate correction algorithm is used, some issues are bound to occur because of parameter mismatch. However, unlike ordinary digital images, in which each pixel has a distinct meaning, holographic image data encode the physical dimensions of the object, e.g., its depth information, and thus each sample point is related to the others. This complexity makes it challenging to correct aberrations caused by parameter mismatches in reconstructed holographic images.

Various studies have been conducted into correction of aberrations and adjustment of the parameters of digital holographic images [21–24]. For example, the zero-padding method is used to resize reconstructed images, regardless of the recording distance of the particular digital hologram [25]. Similarly, a combination of zero-padding with scalable convolution has been proposed to correct chromatic aberrations [26]. Use of numerical post-processing was also proposed to correct spherical aberrations [27], along with deep learning to improve aberration correction techniques [28]. However, most of these previous studies have focused on enhancing the quality or correcting aberrations during numerical reconstruction of digital holograms; i.e., the reconstructed image in these works is a 2D image and it is expected to be displayed on conventional 2D monitors.

In this paper, we propose a method to correct the chromatic issues that occur when reconstructing a holographic image as a 3D image, as illustrated in Fig. 1. When a color holographic image is reconstructed by a variety of display devices that use red-green-blue (RGB) light sources, it is likely to be difficult to match the wavelengths to those of the holographic image data in all the devices. If there are wavelength mismatches, then the reconstructed holographic image may experience chromatic issues such as color smear. We concentrate our discussion in this work on the cases in which slightly different wavelengths for the three primary colors are used to perform optical reconstruction in 3D display devices. Therefore, we assume that the other parameters of the holographic image, e.g., the number of samples and the sampling intervals, are not changed in the reconstruction.

Fig. 1. Problem caused by wavelength mismatch in the transmission of full-color holographic 3D images.

We have two options for the format of the holographic images: the first is a fringe pattern generated by the interference of object light with reference light, and the second is a sampled wavefield that is composed of a 2D data array of the complex amplitudes of the object light. We selected the latter format here because a wavefield is the most fundamental expression of light as electromagnetic waves, and because it offers more flexibility when compared with interference fringes affected by the various parameters of the reference light used to generate the fringe pattern. In the following section, we refer to the color smear or color misalignment in reconstructed images caused by wavelength mismatch as chromatic aberration. We evaluate the chromatic aberrations of the reconstructed images and propose a method to correct the sampled wavefields and thus reduce the chromatic aberration in a particular 3D display device.

2. Analysis and compensation of chromatic aberration

In this study, we assume that only the wavelength of the transmitted wavefield is changed in optical reconstruction by receiver devices, whereas other parameters such as the phase and amplitude of the wavefield are not changed in the reconstruction. This condition can be expressed as follows:

$$g({x,y;{\lambda_1}} )= g({x,y;{\lambda_2}} ), $$
where $g({x,y;{\lambda_1}} )$ represents the original sampled wavefield at a wavelength ${\lambda _1}$. $g({x,y;{\lambda_2}} )$ also represents the same sampled wavefield, but where the wavelength alone has been changed to ${\lambda _2}$. Here, we should emphasize that $g({x,y;{\lambda_2}} )$ maintains the same phase and amplitude as the original sampled wavefield $g({x,y;{\lambda_1}} )$. This situation is analyzed on the basis of optical refraction in this section. A more precise discussion based on scalar diffraction theory is presented in Section 4.

2.1 Analysis based on optical refraction

As shown in Fig. 2(a), the wavelength change can be analyzed simply as the optical refraction that occurs at the interface between different media because the phase of the light remains unchanged but the wavelength is varied at the boundary surface, i.e., the $({x,y,0} )$ plane in Fig. 2(a). For simplicity, Eq. (1) disregards the surface reflection that occurs at the boundary.

Fig. 2. Similarity between optical refraction at the boundary between media and the wavelength change in the sampled wavefields. (a) Analysis of the shift in terms of optical refraction. The positions of the object as reconstructed using wavefields with (b) correct and (c) wrong wavelengths.

As shown in Fig. 2(a), observation of the image of an object through the wavefield $g({x,y;{\lambda_2}} )$ is analogous to viewing the object when it is submerged in a transparent Medium 2, e.g., water. It is well known that an underwater object appears to be at a shallower depth than its true depth: a point $\mathrm{P^{\prime}}$ located at a depth $z ={-} d^{\prime}$ appears to be at a shallower point $\textrm{P}$ at $z ={-} d$. According to Snell’s law, the angles depicted in Fig. 2(a) satisfy the equation

$${n_1}\textrm{sin}{\theta _1} = {n_2}\textrm{sin}{\theta _2}, $$
where ${n_1}$ and ${n_2}$ are the refractive indices of Media 1 and 2, respectively. When Medium 2 is water, it is apparent that $d < d^{\prime}$ because ${n_1} < {n_2}$. Consider first the case where ${n_1} = {n_2}$, i.e., where Medium 2 is identical to Medium 1, so that no refraction occurs. If a viewer sees the light emitted from point P in this case, then the true depth of P is $z ={-} d$. When Medium 2 is changed to water, so that ${n_1} < {n_2}$, the viewer sees the same light as that at P, but it actually comes from the point $\mathrm{P^{\prime}}$ at a depth $d^{\prime}$. Similarly, the depth of an object is d when the viewer observes the object through the original wavefield $g({x,y;{\lambda_1}} )$ at the correct wavelength, as shown in Fig. 2(b). However, the depth may change to $d^{\prime}$ when the object is seen through the same wavefield $g({x,y;{\lambda_2}} )$ at the wrong wavelength, as shown in Fig. 2(c).

In Fig. 2(a), the wavelengths in the media are correctly given as:

$${\lambda _1} = \frac{{{\lambda _0}}}{{{n_1}}},\,{\lambda _2} = \frac{{{\lambda _0}}}{{{n_2}}}, $$
where ${\lambda _0}$ is the wavelength in a vacuum. By substituting $\textrm{sin}{\theta _1}$ and $\textrm{sin}{\theta _2}$, which were obtained geometrically from Fig. 2(a), and Eq. (3) into Eq. (2), the depth of point $\textrm{P}^{\prime}$ can be written as follows:
$$d^{\prime} = \,\frac{{{\lambda _1}}}{{{\lambda _2}}}d\sqrt {1 + {{\left( {\frac{{{x_0}}}{d}} \right)}^2}\left[ {1 - {{\left( {\frac{{{\lambda_2}}}{{{\lambda_1}}}} \right)}^2}} \right]} . $$
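Here, the geometric relations obtained from Fig. 2(a) are $\textrm{sin}{\theta _1} = {x_0}/\sqrt {x_0^2 + {d^2}} $ and $\textrm{sin}{\theta _2} = {x_0}/\sqrt {x_0^2 + d{^{\prime 2}}} $. Substituting these together with Eq. (3) into Eq. (2) gives $\sqrt {x_0^2 + d{^{\prime 2}}} = ({{\lambda_1}/{\lambda_2}} )\sqrt {x_0^2 + {d^2}} $, and solving for $d^{\prime}$ yields Eq. (4).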

When the size of the object is small enough to satisfy the condition that ${x_0}^2 \ll {d^2}$, or when the magnitude of the wavelength change is small, i.e., it satisfies ${\lambda _1} \simeq {\lambda _2}$, the equation above can be approximated as follows:

$$d^{\prime} \simeq \frac{{{\lambda _1}}}{{{\lambda _2}}}d. $$

Because the commonly used laser wavelength ranges for R, G, and B are 780–622 nm, 577–492 nm, and 492–455 nm, respectively [29], the wavelength error will be small enough for this approximation to hold in most situations. Although we have only analyzed this problem in the $({x,0,z} )$ plane, the same results can be obtained in the y direction. Equation (5) is also derived from the more precise analysis based on the theory of angular spectrum propagation and its Fresnel approximation that is presented in Section 4. Note that the same shift given by Eq. (5) has been reported, without a detailed explanation, to occur in reconstructed images when holograms are reconstructed at a wavelength different from that used in their recording [30].

The chromatic aberration may not affect the reconstructed images in monochromatic displays significantly because the slight shift is not likely to be detected by human perception. However, in the case of full color holographic displays that use RGB light sources, because the amount of the shift is dependent on the wavelength, the chromatic aberration will lead to serious color smear. In this case, a technique for wavelength error compensation will be essential for operation of holographic displays in the near future.

2.2 Compensation for wavelength error

Suppose that the shift in the distance of the reconstructed image from the original image is defined as $\Delta d = d^{\prime} - d$; this shift is then given by:

$$\Delta d = d\left( {\frac{{{\lambda_1}}}{{{\lambda_2}}} - 1} \right). $$

Here, when $\Delta d > 0$, the reconstructed image appears farther from the hologram than the original image formed in the case where ${\lambda _1} = {\lambda _2}$.
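For example, using the blue-channel values from the numerical experiments in Section 3 (${\lambda _1} = 457\; \textrm{nm}$, ${\lambda _2} = 480\; \textrm{nm}$) and $d = 3\; \textrm{cm}$, Eq. (6) gives $\Delta d = 3\; \textrm{cm} \times ({457/480 - 1} )\approx{-} 1.4\; \textrm{mm}$; i.e., the blue image appears approximately 1.4 mm closer to the hologram than the original.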

Because the chromatic aberration caused by the wavelength error is a shift in the $z$-position, the aberration can be canceled by numerical propagation over the distance $\Delta d$ given in Eq. (6), as follows:

$$g^{\prime}({x,y;{\lambda_2}} )= {\mathrm{{\cal P}}_{ - \Delta d}}\{{g({x,y;{\lambda_2}} )} \}, $$
where ${\mathrm{{\cal P}}_\delta }\{\cdot \}$ represents an operator for the numerical propagation of a distance $\delta $ in the $z$-direction.
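The correction is straightforward to implement. The following is a minimal Python sketch of Eqs. (6) and (7) under our own naming conventions; it assumes a routine propagate(g, distance, wavelength, pitch) implementing numerical propagation, a sketch of which is given in Section 3.1 below.

    import numpy as np

    def correct_wavefield(g, lam1, lam2, d, pitch):
        """Wavelength-error compensation of Eqs. (6) and (7).

        g:     transmitted sampled wavefield (complex 2D array) for one color channel
        lam1:  wavelength of the original image data (lambda_1) [m]
        lam2:  wavelength of the display light source (lambda_2) [m]
        d:     object distance (or correction distance d_cor for a 3D object) [m]
        pitch: sampling interval of the wavefield [m]
        """
        delta_d = d * (lam1 / lam2 - 1.0)  # Eq. (6): axial shift of the image
        # Eq. (7): propagate by -delta_d at the display wavelength to cancel the shift
        return propagate(g, -delta_d, lam2, pitch)

    # Applied per channel, e.g., for the blue channel used in Section 3:
    # g_B_corrected = correct_wavefield(g_B, 457e-9, 480e-9, 0.03, pitch)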

3. Numerical experiments

3.1 Generation of object wavefield of a 2D object

To evaluate the correction performed using Eq. (7), we simulated the full-color wavefield of a 2D object, as shown in Fig. 3(a). Here, the 2D object is a Jack playing card in the object plane placed at $z ={-} {d_{\textrm{obj}}}$. The color image and the RGB images of this card are shown in Fig. 3(b) and (c)–(e), respectively. The source wavefield at the object plane is generated by randomizing the phase as follows:

$${g_C}({x,y; - {d_{\textrm{obj}}},{\lambda_C}} )= \sqrt {{I_C}({x,y} )} \textrm{exp}[{i{\phi_{\textrm{dif}}}({x,y} )} ], $$
where $C = R,\; G,\; \textrm{or}\; B$, and ${I_C}({x,y} )$ is the intensity of the image in color C. ${\phi _{\textrm{dif}}}({x,y} )$ represents the random phase, which acts as a diffuser. The sampled wavefield at $z = 0$ was then obtained by numerical propagation:
$${g_C}({x,y;0,{\lambda_C}} )= {\mathrm{{\cal P}}_{{d_{\textrm{obj}}}}}\{{{g_C}({x,y; - {d_{\textrm{obj}}},{\lambda_C}} )} \}, $$
where ${\mathrm{{\cal P}}_\delta }\{\cdot \}$ is again a propagation operator for a distance $\delta $. We used the band-limited angular spectrum method (BLASM) to perform the numerical propagation [31]. The simulations in the following section were conducted for the three wavefields that were obtained using Eq. (9).
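To make the numerical pipeline concrete, the following is a minimal numpy sketch of Eqs. (8) and (9); the free-space propagator follows the angular spectrum method with the band limit proposed in [31], and the variable names and seed are our own illustrative choices rather than the exact implementation used in the experiments.

    import numpy as np

    def propagate(g, d, lam, pitch):
        """Band-limited angular spectrum propagation of a sampled wavefield [31].

        g: complex 2D array (sampled wavefield); d: propagation distance along z
        (negative for back-propagation) [m]; lam: wavelength [m]; pitch: sampling
        interval [m].
        """
        ny, nx = g.shape
        u = np.fft.fftfreq(nx, d=pitch)  # spatial frequencies in x
        v = np.fft.fftfreq(ny, d=pitch)  # spatial frequencies in y
        uu, vv = np.meshgrid(u, v)
        w2 = 1.0 / lam**2 - uu**2 - vv**2  # squared spatial frequency in z
        w = np.sqrt(np.maximum(w2, 0.0))   # evanescent components suppressed
        H = np.exp(1j * 2.0 * np.pi * w * d)  # transfer function, cf. Eq. (13)
        # Band limit of [31], which suppresses aliasing of the sampled H
        du, dv = 1.0 / (nx * pitch), 1.0 / (ny * pitch)
        u_lim = 1.0 / (np.sqrt((2.0 * du * abs(d))**2 + 1.0) * lam)
        v_lim = 1.0 / (np.sqrt((2.0 * dv * abs(d))**2 + 1.0) * lam)
        H *= (np.abs(uu) <= u_lim) & (np.abs(vv) <= v_lim) & (w2 > 0.0)
        return np.fft.ifft2(np.fft.fft2(g) * H)

    def source_field(intensity, seed=0):
        """Eq. (8): object field with a uniformly random diffuser phase."""
        rng = np.random.default_rng(seed)
        phase = rng.uniform(0.0, 2.0 * np.pi, size=intensity.shape)
        return np.sqrt(intensity) * np.exp(1j * phase)

    # Eq. (9): propagate the object field to the hologram plane at z = 0
    # g0_C = propagate(source_field(I_C), d_obj, lam_C, pitch)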

Fig. 3. (a) Generation of the test wavefield by forward propagation. (b) Original color image of the 2D object and (c)–(e) monochromatic images corresponding to the three color channels.

3.2 Evaluation of chromatic aberration of a 2D object

3.2.1 Evaluation based on back-propagation

To evaluate the chromatic aberration, we generated the wavefields ${g_C}({x,y;0,{\lambda_C}} )$ given by Eq. (9) using the parameters that are summarized in Table 1, where the original wavelengths are denoted by RGB1. These wavefields are then propagated back with BLASM after the wavelength ${\lambda _C}$ is changed to $\lambda {^{\prime}_C}$:

$${g_C}({x,y; - {d_{\textrm{back}}},\lambda {\mathrm{^{\prime}}_C}} )= {\mathrm{{\cal P}}_{ - {d_{\textrm{back}}}}}\{{{g_C}({x,y;0,\lambda {\mathrm{^{\prime}}_C}} )} \}, $$
where ${d_{\textrm{back}}}$ is the back-propagation distance. The reconstructed monochromatic RGB images are then given by ${|{{g_C}({x,y; - {d_{\textrm{back}}},\lambda {^{\prime}_C}} )} |^2}$.

Table 1. Parameters used in the numerical experiments.

Figure 4 shows the monochromatic and combined full-color images at ${d_{\textrm{back}}} = {d_{\textrm{obj}}}$, i.e., after round-trip propagation. The first row shows the reconstructed images in the case where the wavefields are propagated back using the same wavelength set RGB1 that was used in the forward propagation. Because this is simply round-trip propagation at the same wavelengths, the reconstructed images agree exactly with the original images. In contrast, in the second row, the wavefields are back-propagated using the different wavelength set RGB2. As a result of the chromatic aberration caused by wavelength mismatch, the reconstructed images are out of focus, i.e., they become blurred when they propagate back to the original position. In the third row, the wavefields are also propagated back using RGB2, but in this case they are corrected using Eq. (7) before back-propagation. The images in the third row are almost the same as those in the first row, except for the color tones of the red and blue images. Note that all images in Fig. 4 are rendered using the CIE 1931 RGB color-matching functions. Because the color reflects the wavelength, a strong color change is caused especially in the blue images of the second and third rows, where the wavelength is changed from 457 nm to 480 nm.

Fig. 4. Comparison between images reconstructed using back-propagation.

Figure 5 shows the relative values of the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) of the reconstructed images when the back-propagation distance ${d_{\textrm{back}}}$ is varied. Note that the images from the first row of Fig. 4 are used as the ground truth. The PSNR and SSIM shown in (a) and (b), respectively, correspond to the second row of Fig. 4, i.e., they are the characteristics of the wavefields propagated back with the RGB2 wavelengths without correction. Because the G color in RGB2 has the same wavelength as that in RGB1, the peak position does not change for G. However, the peak positions of both R and B are shifted because of the wavelength mismatch. The magnitudes of these peak shifts agree exactly with the values predicted by Eq. (6).

Fig. 5. Evaluation of the reconstructed images in terms of their peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) for the (a), (b) uncorrected wavefields and (c), (d) corrected wavefields.

Figures 5(c) and 5(d) show the PSNR and SSIM characteristics, respectively, of images propagated back using the RGB2 wavelengths, but in this case, the wavefields have been corrected using the proposed method. The peak positions are obtained at ${d_{\textrm{back}}} = 3\; \textrm{cm}$ for every color because the chromatic aberration is corrected using Eq. (7). As a result, the chromatic aberration vanishes and the image in the third row of Fig. 4 is clear. Note that we do not correct the color changes caused by the wavelength changes here. Therefore, the B color peak is not sharp, particularly in the SSIM characteristics of Fig. 5(d), because the wavelength change in the B color causes a more severe color change than that in the R color, as mentioned above.
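For reference, the following is a minimal sketch of how such PSNR/SSIM curves can be computed with scikit-image, reusing the propagate routine sketched in Section 3.1; the intensity normalization is our own assumption, and both reference and reconstruction are assumed to be scaled to [0, 1].

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def sweep_back_propagation(g0, lam, pitch, distances, reference):
        """Score reconstructed intensities against a ground-truth image while
        the back-propagation distance d_back is varied, as in Fig. 5 (cf. Eq. (10))."""
        psnr, ssim = [], []
        for d_back in distances:
            recon = np.abs(propagate(g0, -d_back, lam, pitch))**2
            recon = recon / recon.max()  # normalize intensities for comparison
            psnr.append(peak_signal_noise_ratio(reference, recon, data_range=1.0))
            ssim.append(structural_similarity(reference, recon, data_range=1.0))
        return np.array(psnr), np.array(ssim)

    # e.g., distances = np.linspace(0.025, 0.035, 41) around d_obj = 3 cm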

3.2.2 Simulated reconstruction by virtual image formation

To verify the correction of the chromatic aberration, the wavelength-changed wavefields ${g_C}({x,y;0,\lambda {^{\prime}_C}} )$ are visualized from different viewpoints by applying the numerical image formation technique using a virtual lens [32]. The arrangement of the viewpoint is shown in Fig. 3(a). The viewpoint moves along a circular arc in the $({x,\; 0,\; z} )$ plane, which has a radius of 6 cm and is centered at the center of the object. The focal distance of the lens is adjusted in the virtual imaging to ensure that the lens is focused on the object’s center; the observation distance is thus 6 cm.
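The exact implementation of [32] is not reproduced here. As a rough on-axis illustration only, virtual image formation with a single thin lens can be sketched as follows, where the lens transmittance and the imaging condition $1/a + 1/b = 1/f$ follow standard Fourier optics [36]; the function and parameter names are ours, and the field rotation required for off-axis viewpoints is omitted.

    import numpy as np

    def virtual_image_on_axis(g0, lam, pitch, z_lens, d_obj, focal, radius):
        """Rough virtual imaging: propagate the wavefield to a thin lens placed
        at z = z_lens, apply the lens phase and a circular pupil, and propagate
        to the image plane given by the lens equation."""
        a = z_lens + d_obj                 # distance from the object to the lens
        b = 1.0 / (1.0 / focal - 1.0 / a)  # image distance from the lens equation
        g_lens = propagate(g0, z_lens, lam, pitch)  # field arriving at the lens
        ny, nx = g_lens.shape
        x = (np.arange(nx) - nx / 2) * pitch
        y = (np.arange(ny) - ny / 2) * pitch
        xx, yy = np.meshgrid(x, y)
        pupil = (xx**2 + yy**2) <= radius**2  # finite circular lens aperture
        lens = np.exp(-1j * np.pi * (xx**2 + yy**2) / (lam * focal))  # thin lens
        return np.abs(propagate(g_lens * pupil * lens, b, lam, pitch))**2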

Figure 6 shows simulated reconstructions of the wavefields whose wavelengths have been changed to RGB2 at various observation angles ${\theta _x}$, as indicated in Fig. 3(a). The complete reconstructed images at ${\theta _x} = 0^\circ $ without and with correction are shown in Fig. 6(a) and (b), respectively, and the enlarged images at the different observation angles are shown in Fig. 6(c)–(f). These images confirm that the color misalignment of the reconstructed image increases with the observation angle in the uncorrected wavefield. In contrast, the color misalignment is no longer detectable in the corrected wavefield.

Fig. 6. Simulated reconstruction using virtual image formation for (a) uncorrected and (b) corrected wavefields at ${\theta _x} = 0^\circ $. (c)–(f) Enlarged images at the different observation angles.

3.3 Simulated reconstruction of a 3D object

Through the simulations presented above, we confirmed that chromatic aberration can be corrected for 2D objects. However, for 3D objects that extend in the $z$-direction, the chromatic aberration is not corrected fully by the proposed method, and the influence of the chromatic aberration increases as the object depth increases. We conducted further simulations to examine this problem using two 3D objects, the 3D cubic array and the Venus, as shown in Fig. 7(a) and (b), respectively. The wavefields of these 3D objects were calculated using the polygon-based method with the RGB1 wavelengths [33].

Fig. 7. 3D scenes and objects used to generate the test wavefields. (a) 3D cubic array and (b) the Venus.

3.3.1 Evaluation based on back-propagation

Figure 8(a) shows the simulated reconstruction of the 3D cubic array using back-propagation with the RGB2 wavelengths and ${d_{\textrm{back}}} = 3\; \textrm{cm}$. The simulation parameters other than the object distance are the same as those presented in Table 1. The object distances are indicated for each surface in Fig. 7(a). Because the total number of sample points of the wavefield is approximately $67 \times {10^6}$, the parameters roughly correspond to those of holographic near-eye displays.

Fig. 8. (a) Reconstructed image of the 3D object when using back-propagation with the RGB2 wavelengths and ${d_{\textrm{back}}} = 3\,\textrm{cm}$. Enlarged images from the red region are shown in (b) and (c), and those from the yellow region are shown in (d) for different wavelengths and ${d_{\textrm{back}}}$ values, and with correction. The wavefield in the third row of (b)–(d) is corrected with ${d_{\textrm{cor}}} = 3\; \textrm{cm}$.

Enlarged images from the red and yellow regions are shown in (b)–(d), where ${d_{\textrm{back}}}$ is adjusted to focus on the different surfaces of the 3D cubic array. The first row shows the results obtained by back-propagation with the RGB1 wavelengths, i.e., a simple round-trip propagation, whereas the second and third rows show the results obtained by back-propagation with the RGB2 wavelengths.

The wavefield in the third row is corrected using Eq. (7). Note that the distance d in Eq. (6), which gives the propagation distance $\Delta d$ in Eq. (7), is an adjustable parameter for a 3D object because a 3D object has finite depth. Therefore, we refer to d in Eq. (6) as the correction distance ${d_{\textrm{cor}}}$ when correcting the wavefield for a 3D object. In the proposed algorithm, the chromatic aberration is corrected perfectly in the plane at $z ={-} {d_{\textrm{cor}}}$. However, the effect of the correction weakens as the distance from this plane increases.

The wavefield in the third row of Fig. 8(b)–(d) is corrected with ${d_{\textrm{cor}}} = 3\; \textrm{cm}$, and thus the chromatic aberration is corrected exactly on the surface of the center cube, as shown in (b), where ${d_{\textrm{back}}} = {d_{\textrm{obj}}} = 3\; \textrm{cm}$. In contrast, the images are focused on other surfaces, where ${d_{\textrm{back}}} = {d_{\textrm{obj}}} = 3.2\; \textrm{cm}$ in (c) and $3.4\; \textrm{cm}$ in (d). In these results, the chromatic aberration is not corrected completely; the aberration becomes somewhat more detectable because ${d_{\textrm{obj}}} \ne {d_{\textrm{cor}}}$. However, comparison of the second and third rows in Fig. 8(b)–(d) confirms that significant reductions in chromatic aberration are obtained.

3.3.2 Simulated reconstruction by virtual image formation

The Venus, shown in Fig. 7(b), is used for this experiment. The center of the Venus statue, which has a depth of 1.4 cm, is positioned 5 cm behind the hologram. The number of samples of the wavefields is 65,536 × 65,536, and the other parameters are the same as those given in Table 1. Because the total number of sample points is approximately 4.3 billion and the size of the sampling window is approximately 5.2 cm × 5.2 cm, the wavefield cannot be displayed by current holographic displays but can be reconstructed by large-scale static CGHs [6,9].

After the wavefields are calculated at the RGB1 wavelengths using the polygon-based method, the wavelengths are changed to RGB2 in the generated wavefields. The simulated reconstruction performed by virtual imaging is shown in Fig. 9, where the observation distance is $15\,\textrm{cm}$. Figure 9(a) and the first row of Fig. 9(c)–(f) show the results for the uncorrected wavefields, while Fig. 9(b) and the second row of Fig. 9(c)–(f) show the results for the wavefields corrected with a correction distance of ${d_{\textrm{cor}}} = 5\,\textrm{cm}$. The color smear due to chromatic aberration worsens at the edges of the object as the observation angle increases in the uncorrected wavefields. In contrast, the chromatic aberration is reduced significantly and is barely noticeable in the corrected wavefields.

Fig. 9. Reconstructed image of the 3D object using virtual image formation at ${\theta _x} = 9^\circ $ (a) without and (b) with correction. (c)–(f) Comparison of enlarged images obtained for the corrected and uncorrected wavefields at various observation angles.

3.3.3 Correction errors for 3D objects

Chromatic aberration in 3D objects is not corrected completely by the proposed method. This is because the entire 3D space appears to be shrunken or expanded in accordance with Eq. (5) when a viewer sees the 3D space through a wavefield with wavelength errors. This situation is illustrated schematically in Fig. 10. In Fig. 10(a), ${d_{\textrm{obj}}}$ is again the object distance, and $\mathrm{\Delta }{d_t}$ is the distance between the object center and the surface of interest, which is roughly half of the depth, or thickness, of the object. As shown in Fig. 10(b), when wavelength errors occur in the wavefield, $\mathrm{\Delta }{d_t}$ changes to $\mathrm{\Delta }d{^{\prime}_t} = ({\lambda _1}/{\lambda _2})\mathrm{\Delta }{d_t}$ and $d({ = {d_{\textrm{obj}}}} )$ changes to $d^{\prime}({ = d{^{\prime}_{\textrm{obj}}}} )$ in Eq. (5). Here, the values of $\mathrm{\Delta }d{^{\prime}_t}$ and $d^{\prime}$ vary depending on the wavelength. When we correct the wavefield with an appropriate correction distance, i.e., ${d_{\textrm{cor}}} = {d_{\textrm{obj}}}$, as shown in Fig. 10(c), the center position is retrieved, but $\mathrm{\Delta }d{^{\prime}_t}$ does not change because the field propagation in Eq. (7) only moves the object position and does not cause the depth to shrink or expand.

Fig. 10. Schematics of the correction errors for a 3D object in the proposed method.

The correction error is defined as the depth change caused by a wavelength change. This error is proportional to the original depth and is written as follows:

$$\Delta {d_e} = \mathrm{\Delta }d{^{\prime}_t} - \mathrm{\Delta }{d_t} = \left( {\frac{{{\lambda_1}}}{{{\lambda_2}}} - 1} \right)\Delta {d_t}. $$
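For example, for the B channel ($457\; \textrm{nm} \to 480\; \textrm{nm}$) and $\Delta {d_t} = 4\; \textrm{mm}$, Eq. (11) gives $\Delta {d_e} = ({457/480 - 1} )\times 4\; \textrm{mm} \approx{-} 0.19\; \textrm{mm}$, which corresponds to the B-channel value for the 3D cubic array in Table 2 below.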

Table 2 summarizes the actual correction errors caused by changing the wavelengths from RGB1 to RGB2 in the 3D scenes shown in Fig. 7. The positional changes are not small, but they can be corrected; the depth changes cannot be corrected, but they are much smaller. This is why the color misalignment is almost imperceptible in Figs. 8 and 9.

Table 2. Actual correction errors caused in the 3D objects used in the numerical experiments.

The maximum color shift caused on the object surface is given by $\Delta {d_{\textrm{max}}}$, the maximum difference in $\Delta {d_e}$ among the colors, as shown in Fig. 10(c). For example, because the value of $\Delta {d_e}$ is 0.21 mm for R and −0.19 mm for B in the 3D cubic array of Table 2, the color shift is $\Delta {d_{\textrm{max}}} = 0.4\; \textrm{mm}$. The actual values of $\Delta {d_{\textrm{max}}}$ are summarized for all of the 3D objects in Table 2.

So far, we have used the relatively thin, white objects shown in Fig. 7 for the numerical experiments. To examine the proposed method more comprehensively, we conducted another experiment using a colorful, deep 3D scene named Color cube, which is shown in Fig. 11. The depth of the bounding box of the Color cube, i.e., the minimum cuboid that encloses all of the objects, is approximately 80 mm. The objects are arranged in the 3D scene such that the center of the bounding box is placed at $z ={-} 90\; \textrm{mm}$. When the RGB1 wavelengths in Table 1 are changed to RGB2, a large color shift of $\Delta {d_{\textrm{max}}} = 4.01\; \textrm{mm}$ is caused at the objects nearest to and furthest from the hologram.

Fig. 11. 3D scene of Color cube. The red dashed lines show the bounding box of the included objects.

The sampled wavefield of the 3D scene was calculated using the polygon-based method at the RGB1 wavelengths. In addition, because the Color cube scene, unlike the Venus, includes occlusions, these were processed using the switchback technique [34] with surface masks [35]. The parameters of the sampled wavefield are the same as those of the Venus.

Figure 12 shows the simulated reconstructions of the wavefield whose wavelengths are changed to RGB2. The wavefield in the second row is corrected with ${d_{\textrm{cor}}} = 90\; \textrm{mm}$. Because the images obtained by virtual imaging are focused on the same plane as the correction, i.e., the plane at $z ={-} 90\; \textrm{mm}$, objects far from the focal plane, such as the background checker image, are considerably blurred in the reconstructed images. As shown in Fig. 12(a), the color misalignment due to chromatic aberration is scarcely noticeable from the viewpoint at ${\theta _x} = 0^\circ $ in both the corrected and uncorrected wavefields. However, the color smear of the uncorrected wavefield increases rapidly as ${\theta _x}$ increases, as shown in (b) and (c). The color misalignment is visible even in the corrected wavefield, especially in the background image, but the degradation of the reconstructed image is much smaller than that without correction.

Fig. 12. Images of the colorful deep 3D objects, reconstructed using virtual image formation with an observation distance of 20 cm.

4. Discussion based on scalar diffraction theory

We discussed the chromatic aberration on the basis of an analogy to optical refraction in Section 2. In other words, the earlier analysis was based on ray optics. In this section, we present a more rigorous discussion based on wave optics.

Suppose that $g({x,y;{z_0}} )$ is a sampled wavefield in the $x$-$y$ plane at $z = {z_0}$. Here, note that $g({x,y;{z_0}} )$ is simply a two-dimensional data array of complex values. According to scalar diffraction theory, the diffraction and propagation of the wavefield are obtained by solving the Rayleigh-Sommerfeld integral or through solution of the Helmholtz equation. One of the most rigorous techniques to calculate the translational propagation of the wavefield is the angular spectrum method [31,36]. Using this technique, when the wavelength is ${\lambda _1}$, the wavefield that propagates back over a distance ${d_1}$ is expressed as follows:

$$g({x,y;{z_0} - {d_1}} )= {\mathrm{{\cal F}}^{ - 1}}\{{G({u,v;{z_0}} )H({u,v; - {d_1},{\lambda_1}} )} \}, $$
where $G({u,v;{z_0}} )$ is given by the Fourier transform of the original wavefield, i.e., $G({u,v;{z_0}} )= \mathrm{{\cal F}}\{{g({x,y;{z_0}} )} \}$. u and v are the spatial frequencies in the x and y directions, respectively. $\mathrm{{\cal F}}\{\cdot \}$ and ${\mathrm{{\cal F}}^{ - 1}}\{\cdot \}$ denote the Fourier and inverse Fourier transforms, respectively. The function $H({u,v;d,\lambda } )$ is known as the transfer function, defined by
$$H({u,v;d,\lambda } )= \textrm{exp}[{i2\pi w({u,v;\lambda } )d} ], $$
where $w({u,v;\lambda } )= \sqrt {{\lambda ^{ - 2}} - {u^2} - {v^2}} $ is a spatial frequency in the $z$-direction.

Here, we consider the case where the sampled wavefield $g({x,y;{z_0}} )$ is again propagated back, but the distance and the wavelength in this case are ${d_2}$ and ${\lambda _2}$, respectively. The propagated wavefield is then given by:

$$g({x,y;{z_0} - {d_2}} )= {\mathrm{{\cal F}}^{ - 1}}\{{G({u,v;{z_0}} )H({u,v; - {d_2},{\lambda_2}} )} \}. $$

When we view images reconstructed using the wavefields given in Eqs. (12) and (14), we see identical images at different $z$-positions if the two wavefields satisfy the following:

$${|{g({x,y;{z_0} - {d_1},{\lambda_1}} )} |^2} = {|{g({x,y;{z_0} - {d_2},{\lambda_2}} )} |^2}. $$

Here, we introduce the Fresnel approximation, where the condition of this approximation is given by ${u^2} + {v^2} \ll {\lambda ^{ - 2}}$ in the angular spectrum method. In this case, $w({u,v;\lambda } )$ is approximated as follows:

$$w({u,v;\lambda } )\approx \frac{1}{\lambda } - \frac{\lambda }{2}({{u^2} + {v^2}} ). $$

Substitution of Eq. (16) into the transfer function of Eq. (13) allows Eq. (15) to be rewritten as

$${|{{\mathrm{{\cal F}}^{ - 1}}\{{G({u,v;{z_0}} )\textrm{exp}[{i\pi {\lambda_1}({{u^2} + {v^2}} ){d_1}} ]} \}} |^2}$$
$$= {|{{\mathrm{{\cal F}}^{ - 1}}\{{G({u,v;{z_0}} )\textrm{exp}[{i\pi {\lambda_2}({{u^2} + {v^2}} ){d_2}} ]} \}} |^2}. $$

The equation above is clearly satisfied when ${\lambda _1}{d_1} = {\lambda _2}{d_2}$. Therefore, the distance ${d_2}$ is given by:

$${d_2} = \frac{{{\lambda _1}}}{{{\lambda _2}}}{d_1}. $$

This means that the image reconstructed at $z ={-} {d_1}$ from the wavefield $G({u,v;{z_0}} )$ with wavelength ${\lambda _1}$ is the same as the image that was reconstructed with wavelength ${\lambda _2}$ at $z ={-} {d_2}$. Because Eq. (18) is essentially identical to Eq. (5) derived in Section 2, we can conclude that the same result that was obtained using ray optics is obtained from the transfer function in wave optics. Finally, we note that the condition $x_0^2 \ll {d^2}$ used to derive Eq. (5) is also the condition of the Fresnel approximation.
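As an informal numerical check of Eq. (18), which is not part of the experiments above, the two round-trip reconstructions can be compared directly using the propagate sketch from Section 3.1; the grid, object, and distances below are arbitrary illustrative choices.

    import numpy as np

    # Back-propagating by d1 at lam1 should give (nearly) the same intensity as
    # back-propagating by d2 = (lam1/lam2)*d1 at lam2, within the Fresnel regime.
    rng = np.random.default_rng(1)
    pitch, lam1, lam2, d1 = 1e-6, 457e-9, 480e-9, 5e-3  # illustrative parameters
    amp = np.zeros((512, 512))
    amp[224:288, 224:288] = 1.0                         # simple square object
    g_obj = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, amp.shape))
    g0 = propagate(g_obj, d1, lam1, pitch)              # forward to the hologram plane

    i1 = np.abs(propagate(g0, -d1, lam1, pitch))**2                  # correct wavelength
    i2 = np.abs(propagate(g0, -(lam1 / lam2) * d1, lam2, pitch))**2  # scaled distance, Eq. (18)
    print(np.corrcoef(i1.ravel(), i2.ravel())[0, 1])    # expected to be close to 1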

5. Conclusion

We have proposed a simple and effective correction algorithm for the chromatic aberration that occurs when sampled wavefields are reconstructed using wavelengths that differ from those used to calculate or capture the original wavefields. According to our analysis based on an analogy to optical refraction, the chromatic aberration appears as a shift in the position of the reconstructed image along the optical axis, i.e., along the $z$-axis in this study. This was also confirmed here by a more rigorous analysis based on scalar diffraction theory in wave optics. Because the magnitude of the shift is dependent on the wavelength, severe color misalignment can be caused in full-color reconstruction.

The color misalignment caused by chromatic aberration can be removed almost entirely by using the proposed method. In this technique, the sampled wavefields are simply corrected via numerical propagation in the $z$-direction to cancel the position shift. Simulations confirmed that the chromatic aberration appears to vanish from the wavefields of 2D objects when the proposed method is used. This is supported by evaluations using the PSNR and SSIM characteristics of images obtained by back-propagation. With respect to the wavefields of 3D objects, a slight correction error remains, particularly in the case of objects with large depths. However, the results verified that the chromatic aberration is improved sufficiently when compared with the results from the uncorrected wavefields.

The proposed method allows us to correct chromatic aberrations caused by wavelength errors. As a result, the technique makes it possible to transmit holographic 3D images in various applications and devices in which the wavelengths are not matched completely with those of the original wavefields. Because the proposed method does not require complex computations, it is expected that the correction operation can be performed in real time. Therefore, the technique proposed here is expected to contribute to the development of holographic telecommunication technology.

Funding

National Institute of Information and Communications Technology (NICT, Japan) Commissioned Research (JPJ012368C06801).

Acknowledgment

We thank NICT for the financial support.

Disclosures

The authors declare no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. A. Osanlou, S. Wang, and P. S. Excell, “3D re-creation of heritage artefacts using a hybrid of CGI and holography,” in Proceedings of IEEE 2017 International Conference on Cyberworlds (2017), pp. 194–197.

2. H. Byeon, T. Go, and S. J. Lee, “Deep learning-based digital in-line holographic microscopy for high resolution with extended field of view,” Opt. Laser Technol. 113, 77–86 (2019). [CrossRef]  

3. M. A. Schulze, M. A. Hunt, E. Voelkl, et al., “Semiconductor wafer defect detection using digital holography,” Proc. SPIE 5041, 183–193 (2003). [CrossRef]  

4. N. Kumar Nishchal, J. Joseph, and K. Singh, “Securing information using fractional Fourier transform in digital holography,” Opt. Commun. 235(4-6), 253–259 (2004). [CrossRef]  

5. T. Kreis, “Application of digital holography for nondestructive testing and metrology: A review,” IEEE Trans. Ind. Inf. 12(1), 240–247 (2016). [CrossRef]  

6. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48(34), H54–H63 (2009). [CrossRef]  

7. S. Igarashi, T. Nakamura, K. Matsushima, et al., “Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion,” Opt. Express 26(8), 10773–10786 (2018). [CrossRef]  

8. D. Blinder and T. Shimobaba, “Efficient algorithms for the accurate propagation of extreme-resolution holograms,” Opt. Express 27(21), 29905–29915 (2019). [CrossRef]  

9. K. Matsushima, Introduction to Computer Holography: Creating Computer-Generated Holograms as the Ultimate 3D Image (Springer, 2020), Chap. 1.

10. K. Matsushima and H. Nishi, “Challenges to tera-pixel-scale full-parallax computer holography for 3D imaging,” in Frontiers in Optics + Laser Science (FIO, LS) (2022), paper FM5E.2.

11. K. Matsushima, H. Nishi, R. Katsura, et al., “Computation techniques in tera-pixel-scale full-parallax computer holography for 3D display,” in 12th Laser Display and Lighting Conference 2023 (LDC) (2023), paper LDC10-02.

12. R. Häussler, Y. Gritsai, E. Zschau, et al., “Large real-time holographic 3D displays: Enabling components and results,” Appl. Opt. 56(13), F45–F52 (2017). [CrossRef]  

13. N. Padmanaban, Y. Peng, and G. Wetzstein, “Holographic near-eye displays based on overlap-add stereograms,” ACM Trans. Graph. 38(6), 1–13 (2019). [CrossRef]  

14. H. Yeom, H. Kim, S. Kim, et al., “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015). [CrossRef]  

15. J. Christmas and N. Collings, “75-2: Invited Paper: Realizing automotive holographic head up displays,” SID Symp. Dig. Tech. Pap. 47(1), 1017–1020 (2016). [CrossRef]  

16. B. Mullins, P. Greenhalgh, and J. Christmas, “59-5: Invited Paper: The holographic future of head up displays,” SID Symp. Dig. Tech. Pap. 48(1), 886–889 (2017). [CrossRef]  

17. W. Wang, X. Zhu, K. Chan, et al., “Digital holographic system for automotive augmented reality head-up-display,” in IEEE 27th International Symposium on Industrial Electronics (ISIE) (2018), pp. 1327–1330.

18. C. Mu, W. Lin, and C. Chen, “Zoomable head-up display with the integration of holographic and geometrical imaging,” Opt. Express 28(24), 35716–35723 (2020). [CrossRef]  

19. J. Skirnewskaja and T. Wilkinson, “Automotive holographic head-up displays,” Adv. Mater. 34(19), 2110463 (2022). [CrossRef]  

20. J. An, K. Won, and H. Lee, “Past, current, and future of holographic video display,” Appl. Opt. 61(5), B237–B245 (2022). [CrossRef]  

21. Y. L. Piao, M. U. Erdenebat, K. C. Kwon, et al., “Chromatic-dispersion-corrected full-color holographic display using directional-view image scaling method,” Appl. Opt. 58(5), A120–A127 (2019). [CrossRef]  

22. D. Wang, C. Liu, and Q. Wang, “Method of chromatic aberration elimination in holographic display based on zoomable liquid lens,” Opt. Express 27(7), 10058–10066 (2019). [CrossRef]  

23. L. Shi, B. Li, and W. Matusik, “End-to-end learning of 3d phase-only holograms for holographic display,” Light: Sci. Appl. 11(1), 247 (2022). [CrossRef]  

24. D. G. Sirico, L. Miccio, Z. Wang, et al., “Compensation of aberrations in holographic microscopes: Main strategies and applications,” Appl. Phys. B 128(4), 78 (2022). [CrossRef]  

25. P. Ferraro, S. De Nicola, G. Coppola, et al., “Controlling image size as a function of distance and wavelength in Fresnel-transform reconstruction of digital holograms,” Opt. Lett. 29(8), 854 (2004). [CrossRef]  

26. M. Leclercq and P. Picart, “Method for chromatic error compensation in digital color holographic imaging,” Opt. Express 21(22), 26456–26467 (2013). [CrossRef]  

27. T. Colomb, F. Montfort, J. Kühn, et al., “Numerical parametric lens for shifting, magnification, and complete aberration compensation in digital holographic microscopy,” J. Opt. Soc. Am. A 23(12), 3177–3190 (2006). [CrossRef]  

28. D. Yoo, S. Nam, Y. Jo, et al., “Learning-based compensation of spatially varying aberrations for holographic display [Invited],” J. Opt. Soc. Am. A 39(2), A86–A92 (2022). [CrossRef]  

29. E. Hecht, Optics, 5th ed. (Pearson, 2009), Chap. 3.6.

30. B. J. Thompson, J. H. Ward, and W. R. Zinky, “Application of hologram techniques for particle size analysis,” Appl. Opt. 6(3), 519–526 (1967). [CrossRef]  

31. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17(22), 19662–19673 (2009). [CrossRef]  

32. K. Matsushima, Introduction to Computer Holography: Creating Computer-Generated Holograms as the Ultimate 3D Image (Springer, 2020), Chap. 13.3.

33. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44(22), 4607–4614 (2005). [CrossRef]  

34. K. Matsushima, M. Nakamura, and S. Nakahara, “Silhouette method for hidden surface removal in computer holography and its acceleration using the switch-back technique,” Opt. Express 22(20), 24450–24465 (2014). [CrossRef]  

35. K. Nakamoto and K. Matsushima, “Exact mask-based occlusion processing in large-scale computer holography for 3D display,” SPIE Proc. 11062, 1106204 (2019).

36. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Co., 2005), Chap. 3.
