
Pre-compensation of an image blur in holographic projection display using light emitting diode light source


Abstract

Holographic projection displays suffer from image blur when reconstructed from an incoherent light source like a light emitting diode. In this paper, we propose a method that enhances the reconstruction sharpness by pre-compensating the target image. The image blur caused by the incoherent nature of the light emitting diode is analyzed and the corresponding spatially varying point spread function is obtained. The pre-compensation is then performed using an iterative optimization algorithm. Finally, the hologram of the pre-compensated target image is loaded onto a spatial light modulator to obtain an optically reconstructed image with reduced blur. The numerically simulated results and optically reconstructed results are in good agreement, showing the feasibility of the proposed method.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A digital holographic projection display forms two-dimensional (2D) images on a screen by controlling the phase of light using a spatial light modulator (SLM). Traditional projectors display images by amplitude modulation, which either blocks light or allows it to pass through [1,2]. In contrast, holographic projectors work by diffraction through phase patterns, losing hardly any light in the ideal case [3–5]. Moreover, holographic projection displays have a simpler optical configuration, requiring no high-precision projection optics. They are also focus-free: the focal length of the projection optics does not need to be adjusted according to the screen distance.

Digital holographic projection displays usually use a laser diode (LD) as the light source. Lasers are used in such projection displays because of their narrow bandwidth and very small emitting area, i.e. high temporal and spatial coherence [6,7]. The high coherence enables precise phase modulation, reconstructing sharp target images. At the same time, however, the precise and stable phase modulation also results in sustained speckle noise in the final reconstructed images, which degrades the overall image quality.

In order to reduce the speckle noise, several techniques have been proposed, including spectral averaging and temporal averaging [8,9]. The spectral averaging technique uses a polychromatic light source like a light emitting diode (LED). Each monochromatic component in the spectrum of the LED reconstructs the image with a different speckle pattern, and these patterns accumulate to suppress the speckle noise. The compact form factor of the LED in comparison with the LD is another advantage of this technique [10]. However, the broad spectral bandwidth creates wavelength-dependent magnifications of the reconstruction. Also, the extended light emitting area of the LED creates spatial shifts of the reconstructions. These two effects result in blurred images in the final reconstruction plane. On the other hand, the temporal averaging technique involves sequential superposition of images reconstructed using a LD. Each reconstructed image has the same target image but a different random phase distribution, which averages down the final speckle noise in the observed superposition [11,12]. The use of the LD with narrow linewidth enables sharp reconstructions. This technique, however, increases system complexity by requiring mechanical motion like a rotating diffuser, or an SLM fast enough to achieve sufficient speckle suppression within the eye integration time [13].

In this paper, we propose a method to reduce the image blur in the spectral averaging technique using a LED source. From an analysis of the system under consideration, we obtain the point spread function (PSF), which varies spatially across the reconstruction plane. We then pre-compensate a target image based on the obtained PSF using an iterative simultaneous algebraic reconstruction technique (SART) algorithm [14]. The pre-compensated image is finally transformed into a hologram and loaded onto the SLM to observe an optically reconstructed image with reduced blur. Numerical simulations and optical experiments are presented to verify the proposed method. The structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) are also calculated for the numerically simulated results to quantitatively compare the blur reduction in the final reconstructed image.

2. Analysis on spatially varying PSF of the system under consideration

In this section, we analyze the PSF of simple holographic projection systems using a LED. Two configurations are considered: Fourier hologram projection under collimated illumination and under converging illumination.

2.1 Fourier hologram configuration under collimated beam

Figure 1 shows a simple geometry of the Fourier hologram configuration with collimated illumination. In this configuration, the light from the source is collimated by a lens L1 of focal length fo and illuminates the SLM. The SLM is located in the front focal plane of the Fourier lens L2 of focal length f. The reconstruction is observed in its rear focal plane.

Fig. 1. Optical system configuration for collimated illumination.

2.1.1 Monochromatic point light source

In an ideal case, a monochromatic point source is used as the light source. Suppose that a point light source of wavelength λ0 is positioned at the origin (ξ,η)=(0,0). At the reconstruction plane, the optical field is given by the Fourier transform of the optical field at the SLM plane [15,16];

$${U_{{\lambda _0},0,0}}(u,v) = \int {\int {H(x,y)\exp \left[ { - j\frac{{2\pi }}{{{\lambda_0}f}}({xu + yv} )} \right]} dxdy} = F{[{H(x,y)} ]_{{f_x} = \frac{u}{{{\lambda _0}f}},{f_y} = \frac{v}{{{\lambda _0}f}}}},$$
where F[·] represents the Fourier transform and H(x,y) is the hologram in the SLM plane. The observed reconstruction is the intensity, which is given by,
$$I_{\lambda_0,0,0}(u,v) = {\left| U_{\lambda_0,0,0}(u,v) \right|^2} = {\left| F{[H(x,y)]_{f_x = \frac{u}{\lambda_0 f},\,f_y = \frac{v}{\lambda_0 f}}} \right|^2}.$$
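To make Eqs. (1) and (2) concrete, the following Python sketch evaluates them with a discrete FFT. It is a minimal illustration rather than the authors' code; the 721 × 721 window size is borrowed from Section 4 and the random-phase hologram is a placeholder.

```python
import numpy as np

def reconstruct_intensity(H):
    """Eqs. (1)-(2): the reconstruction is the squared modulus of the
    2D Fourier transform of the hologram H(x, y) in the SLM plane."""
    # fftshift/ifftshift keep the optical axis (u, v) = (0, 0) at the array center.
    U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(H)))
    return np.abs(U) ** 2

# Placeholder: a random-phase hologram on the paper's 721 x 721 window.
H = np.exp(1j * 2 * np.pi * np.random.rand(721, 721))
I_ref = reconstruct_intensity(H)
```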

2.1.2 LED with extended spectral bandwidth and spatial area

A LED is an incoherent light source with an extended spectral bandwidth and spatial area. The extended spatial area of the LED can be considered as distributed multiple point sources. The extended bandwidth can likewise be considered as a collection of different wavelengths. In this paper, it is assumed that the point light components of different positions and wavelengths are uncorrelated with each other, so that their contributions add in intensity in the reconstruction plane. It is also assumed that the spectrum has no dependency on the spatial position within the light emitting area.

Consider a single point source of a wavelength λ at a position (ξ,η)=(ξ1,η1) in the source plane. The light from the point source is collimated by the lens L1 to be a tilted plane wave with an angle (-ξ1/fo,-η1/fo). The tilted plane wave illuminates the SLM and is finally Fourier transformed in the reconstruction plane. From Fig. 1, the optical field in the reconstruction plane is given by

$$U_{\lambda,\xi_1,\eta_1}(u,v) = \iint H(x,y)\exp\left[-j\frac{2\pi}{\lambda f_0}(x\xi_1 + y\eta_1)\right]\exp\left[-j\frac{2\pi}{\lambda f}(xu + yv)\right]dxdy,$$
where the first exponential term represents the tilted plane wave. Equation (3) can be slightly modified to
$${U_{\lambda ,{\xi _1},{\eta _1}}}(u,v) = \int {\int {H(x,y)\exp \left[ { - j\frac{{2\pi }}{{{\lambda_0}f}}\left\{ {\frac{{{\lambda_0}}}{\lambda }\left( {u + \frac{f}{{{f_0}}}{\xi_1}} \right)x + \frac{{{\lambda_0}}}{\lambda }\left( {v + \frac{f}{{{f_0}}}{\eta_1}} \right)y} \right\}} \right]} dxdy} ,$$
which reveals that Uλ,ξ1,η1(u,v) is related to Uλ0,0,0(u,v) of Eq. (1) by
$$U_{\lambda,\xi_1,\eta_1}(u,v) = U_{\lambda_0,0,0}\left(\frac{\lambda_0}{\lambda}\left(u + \frac{f}{f_0}\xi_1\right), \frac{\lambda_0}{\lambda}\left(v + \frac{f}{f_0}\eta_1\right)\right),$$
or
$$I_{\lambda,\xi_1,\eta_1}(u,v) = {\left| U_{\lambda,\xi_1,\eta_1}(u,v) \right|^2} = I_{\lambda_0,0,0}\left(\frac{\lambda_0}{\lambda}\left(u + \frac{f}{f_0}\xi_1\right), \frac{\lambda_0}{\lambda}\left(v + \frac{f}{f_0}\eta_1\right)\right).$$
Equation (6) indicates that the image reconstructed by a point light source of wavelength λ at position (ξ1,η1) has the same shape as Iλ0,0,0(u,v) but with a shift of (−(f/f0)ξ1, −(f/f0)η1) and a coordinate scaling of λ0/λ.
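Numerically, Eq. (6) amounts to resampling the reference reconstruction at scaled and shifted coordinates. A minimal sketch using bilinear interpolation follows; the function name and arguments are ours, chosen for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def point_source_reconstruction(I0, lam, lam0, xi1, eta1, f, f0, du):
    """Eq. (6): sample the reference intensity I0 (wavelength lam0, source at
    the origin) at the scaled, shifted query points for a source at (xi1, eta1)
    with wavelength lam. I0 has pixel pitch du and its center is the axis."""
    N = I0.shape[0]
    c = (N - 1) / 2.0                        # index of (u, v) = (0, 0)
    coords = (np.arange(N) - c) * du         # physical coordinates of the pixels
    V, U = np.meshgrid(coords, coords, indexing='ij')
    s = f / f0                               # geometric shift factor
    Uq = (lam0 / lam) * (U + s * xi1)        # query coordinates of Eq. (6)
    Vq = (lam0 / lam) * (V + s * eta1)
    return map_coordinates(I0, [Vq / du + c, Uq / du + c],
                           order=1, mode='constant', cval=0.0)
```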

Now considering the LED with a spatial intensity distribution A(ξ,η) and the spectrum L(λ), the final reconstructed image is given by

$$\begin{array}{l} I({u,v} )= \int\!\!\!\int\!\!\!\int {{I_{\lambda ,\xi ,\eta }}({u,v} )A({\xi ,\eta } )L(\lambda )d\lambda d\xi d\eta } \\ \quad \quad \;\; = \int\!\!\!\int\!\!\!\int {{I_{{\lambda _0},0,0}}\left( {\frac{{{\lambda_0}}}{\lambda }\left( {u + \frac{f}{{{f_0}}}\xi } \right),\frac{{{\lambda_0}}}{\lambda }\left( {v + \frac{f}{{{f_0}}}\eta } \right)} \right)A({\xi ,\eta } )L(\lambda )d\lambda d\xi d\eta } . \end{array}$$
Therefore, the LED reconstruction can be considered as a weighted addition of shifted and scaled target images. Note that because the spectrum L(λ) is assumed to be independent of the spatial position (ξ, η) in the LED light emitting area, Eq. (7) can be decomposed into two sequential steps corresponding only to L(λ) and A(ξ, η), respectively, relieving the memory requirement in the implementation, as will be explained in a later section. Also note that the light emitting area, i.e. the spatial extent of A(ξ, η), is sufficiently small with respect to the collimating lens that the vignetting effect in the SLM plane is negligible.
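Discretizing the triple integral of Eq. (7) gives an incoherent sum over sampled wavelengths and emitter positions. The sketch below reuses point_source_reconstruction() from the previous snippet; the sampling grids and quadrature weights are assumptions made for illustration.

```python
import numpy as np

def led_reconstruction(I0, lams, L, xis, etas, A, f, f0, du, lam0=473e-9):
    """Discretized Eq. (7): weighted addition of shifted and scaled copies
    of I0. lams/L sample the spectrum; xis/etas/A sample the emitter area."""
    I = np.zeros_like(I0, dtype=float)
    for lam, w_spec in zip(lams, L):
        for xi, eta, w_area in zip(xis, etas, A):
            I += w_spec * w_area * point_source_reconstruction(
                I0, lam, lam0, xi, eta, f, f0, du)
    return I / (np.sum(L) * np.sum(A))       # normalize the quadrature weights
```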

2.2 System configuration under converging illumination

The system configuration under converging illumination is shown in Fig. 2. The source plane and the reconstruction plane are conjugated by a lens L1, giving 1/a + 1/b = 1/fo, where a and b are the distances of the source plane and the reconstruction plane from L1 of focal length fo.

Fig. 2. Optical system configuration for convergent illumination.

For a point light source of wavelength λ at spatial position (ξ,η)=(ξ1,η1), the optical field just before the SLM is written as

$${U_{illum}}(x,y) = \exp \left[ { - j\frac{\pi }{{\lambda d}}\left\{ {{{\left( {x + \frac{b}{a}{\xi_1}} \right)}^2} + {{\left( {y + \frac{b}{a}{\eta_1}} \right)}^2}} \right\}} \right],$$
where d is the distance of the SLM from the reconstruction plane. The complex field just after the SLM is given by the product of the hologram H(x,y) and the illumination Uillum(x,y). The complex field in the reconstruction plane is then calculated by applying Fresnel propagation over the distance d:
$$\begin{array}{l} {U_{\lambda ,{\xi _1},{\eta _1}}}(u,v) = \int {\int {H(x,y){U_{illum}}(x,y)\exp \left[ {j\frac{\pi }{{\lambda d}}\{{{{({x - u} )}^2} + {{({y - v} )}^2}} \}} \right]} dxdy} \\ \quad \quad \quad \quad = \exp \left[ { - j\frac{\pi }{{\lambda d}}\left\{ {{{\left( {\frac{b}{a}{\xi_1}} \right)}^2} + {{\left( {\frac{b}{a}{\eta_1}} \right)}^2}} \right\}} \right]\exp \left[ {j\frac{\pi }{{\lambda d}}({{u^2} + {v^2}} )} \right]\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \times \int {\int {H(x,y)} \exp \left[ { - j\frac{{2\pi }}{{\lambda d}}\left\{ {x\left( {u + \frac{b}{a}{\xi_1}} \right) + y\left( {v + \frac{b}{a}{\eta_1}} \right)} \right\}} \right]dxdy} . \end{array}$$
The intensity in the reconstruction plane is given by,
$${I_{\lambda ,{\xi _1},{\eta _1}}}({u,v} )= {|{{U_{\lambda ,{\xi_1},{\eta_1}}}({u,v} )} |^2} = {I_{{\lambda _0},0,0}}\left( {\frac{{{\lambda_0}}}{\lambda }\left( {u + \frac{b}{a}{\xi_1}} \right),\frac{{{\lambda_0}}}{\lambda }\left( {v + \frac{b}{a}{\eta_1}} \right)} \right),$$
where Iλ0,0,0(u,v) is defined as the intensity reconstruction corresponding to a point light source of wavelength λ0 at position (ξ,η)=(0,0). Finally, the reconstruction of the LED with a spatial intensity distribution A(ξ,η) and spectrum L(λ) is given by
$$I({u,v} )= \int {\int {\int {{I_{{\lambda _0},0,0}}\left( {\frac{{{\lambda_0}}}{\lambda }\left( {u + \frac{b}{a}\xi } \right),\frac{{{\lambda_0}}}{\lambda }\left( {v + \frac{b}{a}\eta } \right)} \right)} A({\xi ,\eta } )L(\lambda )d\lambda d\xi d\eta } } .$$
Equation (11) indicates that the LED reconstruction in the converging illumination case is, as in the collimated illumination case, a weighted addition of shifted and scaled target images.
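The first line of Eq. (9) is a standard Fresnel integral and can be checked numerically with a single FFT. The sketch below is a generic single-FFT Fresnel propagator under the sign convention of Eq. (9); the input field is assumed to be H(x,y)·Uillum(x,y) sampled on the SLM grid, and the output pixel pitch λd/(NΔx) matches the values quoted in Section 4.

```python
import numpy as np

def fresnel_single_fft(field, lam, d, dx):
    """Single-FFT Fresnel propagation over distance d (first line of Eq. (9)).
    field: complex array sampled at pitch dx just after the SLM."""
    N = field.shape[0]
    x = (np.arange(N) - N // 2) * dx
    X, Y = np.meshgrid(x, x)
    q_in = np.exp(1j * np.pi * (X**2 + Y**2) / (lam * d))   # SLM-plane chirp
    U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * q_in)))
    du = lam * d / (N * dx)                                 # output pixel pitch
    u = (np.arange(N) - N // 2) * du
    Uu, Vv = np.meshgrid(u, u)
    # Reconstruction-plane chirp; it drops out of the intensity |U|^2 in Eq. (10).
    return np.exp(1j * np.pi * (Uu**2 + Vv**2) / (lam * d)) * U
```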

3. Proposed pre-compensation using SART

The analysis in the previous section reveals that both configurations have spatially variant PSFs across the reconstruction plane. The spatial variance of the PSF prevents the use of simple frequency-domain deconvolution techniques for the pre-compensation of the target images. As an alternative, the proposed technique uses an iterative algorithm known as SART. The SART has often been used to reconstruct an image from its sinogram using an angular projection system model [14,17–19]. We use the SART to obtain the pre-compensated image from the target reconstruction using the image blur model, i.e. the system PSF, given by Eqs. (7) and (11).

The spatially variant image blur model given in Eqs. (7) and (11) can be represented by a matrix equation I(u,v)col = P·Iλ0,0,0(u,v)col, as shown in Fig. 3. Iλ0,0,0(u,v)col is the reconstruction when a monochromatic point light source of wavelength λ0 at (ξ,η)=(0,0) is used; it is also the input image for the hologram generation. I(u,v)col is the reconstruction when a LED with spectrum L(λ) and spatial emission intensity distribution A(ξ, η) is used. P is the projection matrix relating Iλ0,0,0(u,v)col and I(u,v)col as given by Eqs. (7) and (11). The superscript col denotes the column vector representation of a 2D image.

Fig. 3. Optical system in matrix representation.
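As a toy illustration of how P could be assembled, the sketch below accumulates one nearest-neighbour resampling per sampled (λ, ξ, η); the names and the sampling scheme are ours. For the paper's 721 × 721 window this direct construction is exactly the memory problem discussed in Section 4, so treat it as conceptual only.

```python
import numpy as np
from scipy.sparse import lil_matrix

def build_projection_matrix(N, lams, L, shifts, weights, du, lam0=473e-9):
    """Toy construction of P in I^col = P @ I0^col following Eqs. (6)-(7):
    each row (one blurred pixel) collects weighted ideal pixels.
    shifts: list of (f/f0 * xi, f/f0 * eta) emitter shifts with weights."""
    P = lil_matrix((N * N, N * N))
    c = N // 2
    for lam, w_spec in zip(lams, L):
        for (sx, sy), w_area in zip(shifts, weights):
            for r in range(N):              # blurred-image row (v index)
                for q in range(N):          # blurred-image column (u index)
                    uq = (lam0 / lam) * ((q - c) * du + sx)
                    vq = (lam0 / lam) * ((r - c) * du + sy)
                    qs = int(round(uq / du)) + c   # nearest ideal pixel
                    rs = int(round(vq / du)) + c
                    if 0 <= qs < N and 0 <= rs < N:
                        P[r * N + q, rs * N + qs] += w_spec * w_area
    return P.tocsr()
```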

Without the proposed pre-compensation, the input image for the hologram generation Iλ0,0,0(u,v)col is set to the target image T(u,v)col, i.e. Iλ0,0,0(u,v)col = T(u,v)col. The actual reconstruction I(u,v)col = P·T(u,v)col will then be a blurred version of the target image T(u,v)col, as given by Eqs. (7) and (11). In the proposed method, we obtain the pre-compensated input image Iλ0,0,0(u,v)col using the SART such that the difference between the target image T(u,v)col and the actual reconstruction I(u,v)col = P·Iλ0,0,0(u,v)col decreases.

Figure 4 illustrates the iterative update of the pre-compensated input image Iλ0,0,0(u,v)col using the SART algorithm in the proposed method. In Fig. 4, the term in the red box represents the error, or the difference between the target image T(u,v)col and the actual reconstruction I(k)(u,v)col at the current iteration k. This error term reduces to a saturated value after sufficient iterations. The weighting term in Fig. 4 is defined in the SART algorithm [14] and contributes to the suppression of additional noise caused by the iterative process.

Fig. 4. Iterative update in SART algorithm.
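In code, one SART pass might look as follows. This is a minimal sketch of the standard Andersen–Kak update applied to the blur model; the relaxation factor and the non-negativity clipping are our implementation assumptions, not details specified in the paper.

```python
import numpy as np

def sart_precompensate(P, target, n_iter=50, relax=1.0):
    """Find x such that P @ x approximates the target (both column vectors),
    i.e. the pre-compensated input image of the proposed method."""
    row_sum = np.asarray(P.sum(axis=1)).ravel()    # SART row weights
    col_sum = np.asarray(P.sum(axis=0)).ravel()    # SART column weights
    row_sum[row_sum == 0] = 1.0
    col_sum[col_sum == 0] = 1.0
    x = target.astype(float).copy()                # initialize with the target
    for _ in range(n_iter):
        residual = (target - P @ x) / row_sum      # error term (red box in Fig. 4)
        x += relax * (P.T @ residual) / col_sum    # weighted back-projection
        np.clip(x, 0.0, None, out=x)               # keep intensities non-negative
    return x
```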

After the iterations, the hologram is generated for the final pre-compensated image Iλ0,0,0(u,v). This hologram is then uploaded to the SLM in our optical experiment using the LED, giving reconstructed images with reduced blur.

4. Numerical simulation

In the numerical simulation, the projection matrix P is implemented for both the collimated and converging illumination configurations. The pre-compensated image is obtained using the SART algorithm. The blur reduction achieved by the pre-compensation is examined by evaluating the simulated reconstructions against the original target image using SSIM and PSNR.

The pixel resolution of the computation window used in all simulations and experiments is N × N = 721 × 721 pixels. The projection matrix P then has a size of 721² × 721² with a significant number of non-zero elements, which requires a huge amount of memory to process. In order to reduce the memory requirement, we divide the PSF given in Eqs. (7) and (11) into spatially variant and invariant parts. The image blur given in Eqs. (7) and (11) can be decomposed into the following two sequential steps;

$$I^{\prime}({u,v} )= \int {{I_{{\lambda _0},0,0}}\left( {\frac{{{\lambda_0}}}{\lambda }u,\frac{{{\lambda_0}}}{\lambda }v} \right)} L(\lambda )d\lambda ,$$
and
$$I({u,v} )= \int {\int {I^{\prime}({u + m\xi ,v + m\eta } )} A({\xi ,\eta } )d\xi d\eta } ,$$
where m represents f/fo for Eq. (7) and b/a for Eq. (11). Equation (12) is the blur caused by the wavelength-dependent magnification, which is spatially variant. Equation (13) is the blur caused by the spatial shift, which can be written in the spatially invariant form
$$I({u,v} )= \int {\int {I^{\prime}({u - \xi ,v - \eta } )} A^{\prime}({\xi ,\eta } )d\xi d\eta } ,$$
with the spatially invariant PSF part A′(u,v);
$$A^{\prime}({u,v} )= A\left( { - \frac{u}{m}, - \frac{v}{m}} \right).$$
In our implementation, the spatially variant blur caused by the extended linewidth given in Eq. (12) is implemented using a matrix form, while the spatially invariant blur caused by the extended source size given in Eq. (14) is implemented using a function. Since Eq. (12) describes a one-dimensional (radial direction) magnification, the number of non-zero elements in the matrix representation is manageable. Since Eq. (14) is spatially invariant, it can be implemented using a simple convolution function. Therefore, the combined use of the matrix and function implementations relieves the memory requirement. Note that the proposed technique deals with the spatially variant and invariant parts of the PSF in a single SART framework rather than sequentially applying two different deconvolution algorithms to the spatially variant and invariant parts, respectively. Also note that if the spectrum L(λ) depends significantly on the position (ξ, η) in the emission area, the separation into spatially variant and invariant parts explained in this section is not valid. Nevertheless, the matrix implementation of the overall model given in Eqs. (7) and (11) is still possible at the expense of a larger memory requirement.
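A sketch of the two-step blur model follows. Equation (12) is realized here with interpolation rather than an explicit matrix, purely for brevity, and Eq. (14) as an ordinary convolution with the scaled emitter kernel A′ of Eq. (15); the function names are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.signal import fftconvolve

def spectral_blur(I0, lams, L, lam0=473e-9):
    """Eq. (12): spatially variant radial blur. Each sampled wavelength
    contributes a copy of I0 magnified about the array center (optical axis)."""
    N = I0.shape[0]
    c = (N - 1) / 2.0
    R, C = np.meshgrid(np.arange(N) - c, np.arange(N) - c, indexing='ij')
    out = np.zeros_like(I0, dtype=float)
    for lam, w in zip(lams, L):
        s = lam0 / lam                     # query-coordinate scale in Eq. (12)
        out += w * map_coordinates(I0, [s * R + c, s * C + c], order=1)
    return out / np.sum(L)

def spatial_blur(I1, kernel):
    """Eq. (14): spatially invariant blur, a plain convolution with the
    scaled emitter image A'(u, v) of Eq. (15)."""
    return fftconvolve(I1, kernel / kernel.sum(), mode='same')
```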

Figure 5 shows the spectrum L(λ) of the LED (EP470S04 pigtailed LED, Thorlabs) used in the simulation and optical experiments. The spectrum in Fig. 5 shows a peak wavelength of 473 nm and a full-width-at-half-maximum (FWHM) of 25 nm. The 473 nm peak wavelength of the LED spectrum is taken as the reference wavelength λ0 for the pre-compensation. The LED is pigtailed with a multimode fiber of 0.4 mm core diameter. In the simulation, the spatial intensity distribution A(ξ,η) of the LED is assumed to be a unit intensity square, i.e. A(ξ,η) = 1, of various sizes ranging from 0.1 mm × 0.1 mm to 1.0 mm × 1.0 mm in order to test the algorithm performance.

Fig. 5. Spectrum of the LED used in numerical simulation and optical experiment.

In the collimated illumination case shown in Fig. 1, the focal lengths of the collimating lens fo and the 4-f optics lens f are set to fo = 200 mm and f = 250 mm, following the actual optical experiment setup. The sampling pitch in the reconstruction plane is Δu = Δv = 35.7 µm, which corresponds to the pixel pitch Δx = Δy = 4.6 µm of the SLM used in the optical experiment by Δu = λ0f/(NΔx) and Δv = λ0f/(NΔy). With the LED emission area 0.4 mm × 0.4 mm, this sampling pitch in the reconstruction plane makes the convolution kernel A′(u,v) of Eq. (15), with m = f/fo = 1.25, span about 14 × 14 pixels in the reconstruction plane in the collimated illumination case.

In the converging illumination case shown in Fig. 2, the distances a, b, and d are set to a = 165 mm, b = 135 mm, and d = 100 mm, following the optical experimental setup. The sampling pitch in the reconstruction plane is Δu = Δv = 14.3 µm, which is given by Δu = λ0d/(NΔx) and Δv = λ0d/(NΔy). With m = b/a = 0.82, the LED emission area 0.4 mm × 0.4 mm corresponds to a convolution kernel A′(u,v) of about 23 × 23 pixels in the reconstruction plane in the converging illumination case.
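These pitches and kernel sizes follow directly from the stated parameters; the short check below reproduces the quoted numbers (up to the paper's rounding).

```python
lam0, N, dx = 473e-9, 721, 4.6e-6     # reference wavelength, window size, SLM pitch

f, f0 = 0.250, 0.200                  # collimated configuration
du_col = lam0 * f / (N * dx)          # -> about 35.7 um
k_col = 0.4e-3 * (f / f0) / du_col    # 0.4 mm emitter -> about 14 pixels

d, b, a = 0.100, 0.135, 0.165         # converging configuration
du_conv = lam0 * d / (N * dx)         # -> about 14.3 um
k_conv = 0.4e-3 * (b / a) / du_conv   # 0.4 mm emitter -> about 23 pixels
```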

In the simulation and the experiment, an off-axis image configuration is used. In the 721 × 721 pixel computation window, the input image, with or without the proposed pre-compensation, is located in the left half plane as shown in Fig. 6(a) to avoid overlap with the DC and conjugate terms in the final hologram reconstruction plane as shown in Fig. 6(b). Note that due to this off-axis configuration, the radial blur of Eq. (12), which is caused by the extended linewidth of the LED, affects the reconstruction in a de-centered way as shown in Fig. 6(b), making the left end of the reconstruction more blurred than the right end, as can be seen in all simulation and experimental results. In the following simulation results, only the valid input and reconstruction area, i.e. the green shaded area in Fig. 6, is shown for clarity.

Fig. 6. Off-axis image configuration. (a) Input image plane, (b) reconstruction plane.

Figure 7 shows one example of the simulation results obtained for the collimated illumination case with the LED emission area 0.15 mm × 0.15 mm. Figure 7(a) is the target leaf image T(u,v). This target image is multiplied with the projection matrix, resulting in the blurred reconstruction shown in Fig. 7(b). The pre-compensated image obtained after 50 iterations is shown in Fig. 7(c). Finally, the simulated reconstruction of the pre-compensated image is obtained by multiplying it with the projection matrix, as shown in Fig. 7(d). From the comparison of Figs. 7(b) and 7(d), we can observe the reduction in blur with the proposed method.

Fig. 7. Numerical simulation in our proposed method using SART algorithm. (a) Original target image, (b) original blur in final reconstruction, (c) pre-compensated input image using SART algorithm, and (d) reduced blur using proposed method.

Figure 8 shows examples of the pre-compensated images and their simulated reconstructions at different numbers of iterations. The images in Fig. 8 were also obtained in the collimated illumination case with a 0.15 mm × 0.15 mm LED emitter size. Figure 8 reveals that the error quickly reduces to a saturated level and the improvement is negligible after 50 iterations. In all simulations and optical experiments described below, the final pre-compensated images were obtained with 50 iterations.

Fig. 8. Pre-compensated images and simulated reconstructions at different iterations.

Figure 9 shows the simulated reconstructions with and without the proposed pre-compensation for the collimated and converging illumination cases at a 0.15 mm × 0.15 mm emitter size. The blur reduction by the proposed method can be verified for both configurations, as expected.

Fig. 9. Simulated reconstructions in the collimated and converging illumination configurations. Left and right columns in each configuration show the results without and with the proposed pre-compensation, respectively.

For the quantitative evaluation of the blur reduction, we used two metrics: SSIM and PSNR. Table 1 shows the result. T1 and T2 represent the two target images in the top and bottom rows of Fig. 9, respectively. Table 1 confirms that the proposed pre-compensation technique provides reconstructions more similar to the original target images in all cases.

Table 1. Image quality measurement using SSIM and PSNR

In order to further explore the performance of the proposed technique, we performed 10 × 10 numerical simulations for different combinations of the spectral bandwidth and the emitter size. For the spectral bandwidth, the original spectrum with the 25 nm FWHM shown in Fig. 5 was contracted or expanded around the peak wavelength 473 nm, yielding 10 different FWHMs ranging from 5 nm to 50 nm. The side length of the assumed square light emitting area was also varied from 0.1 mm to 1.0 mm, giving 10 different values. For all 10 × 10 combinations of the spectral bandwidth and the emitter size, the quality of the reconstructed images with and without the proposed pre-compensation was measured using the SSIM and PSNR. The simulation was performed for three target images, including the leaf image in Fig. 7 and the cubic and half flower images in Fig. 9, and their SSIM and PSNR values were averaged. The illumination configuration considered in this simulation is the collimated one with fo = 200 mm and f = 250 mm, as before. Figure 10 shows the result. As the spatial emitting area and the bandwidth increase, the quality of the reconstructed images decreases regardless of the application of the proposed pre-compensation technique. However, it is clearly shown that the proposed pre-compensation consistently enhances the quality of the reconstruction in all cases, demonstrating the feasibility of the proposed technique.

Fig. 10. Performance of the proposed pre-compensation technique for various LED spectral bandwidths and emitter sizes. The quality of the simulated reconstructions with and without the proposed pre-compensation is shown using the (a) SSIM and (b) PSNR metrics.
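The SSIM and PSNR values of Table 1 and Fig. 10 can be computed with any standard implementation, for example scikit-image as sketched below; the normalization to a common data range is our assumption about how the comparison was set up.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate(target, recon):
    """SSIM and PSNR between the target image and a simulated
    reconstruction, both normalized to [0, 1] before comparison."""
    t = target / target.max()
    r = recon / recon.max()
    return (structural_similarity(t, r, data_range=1.0),
            peak_signal_noise_ratio(t, r, data_range=1.0))
```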

Although Fig. 10 shows the reconstruction quality enhancement over a wide bandwidth range from 5 nm to 50 nm FWHM, the current implementation and simulations of the proposed technique do not consider the chromatic aberration of the lens optics in the optical system. The lens chromatic aberration could be minimized by using achromatic optics in the implementation. Also note that for color reconstruction using three LEDs, i.e. red, green, and blue, the proposed technique generates three holograms, each generated for the corresponding color channel of the target image with pre-compensation considering the emitting area size and spectral bandwidth of the corresponding LED.

5. Optical experiment

Figure 11 shows the optical experimental setup for the collimated and converging illumination configurations. The holograms of the pre-compensated images were calculated and loaded onto the SLM to observe their optical reconstructions. The EP470S04 pigtailed LED (Thorlabs) was used in the optical experiment, as in the numerical simulation. The original spectrum shown in Fig. 5 was used in the pre-compensation as-is, without any contraction or expansion. For the light emitter area, the convolution kernel A′(u,v) of Eq. (15) was approximated to a unit intensity square of 9 × 9, 11 × 11, or 13 × 13 pixels in the reconstruction plane of the collimated illumination case, which corresponds to a 0.26 mm × 0.26 mm, 0.31 mm × 0.31 mm, or 0.37 mm × 0.37 mm square light emitting area of the LED, respectively. In the converging illumination case, the convolution kernel A′(u,v) was set to 15 × 15, 17 × 17, or 21 × 21 pixel squares, which corresponds to a 0.26 mm × 0.26 mm, 0.30 mm × 0.30 mm, or 0.37 mm × 0.37 mm square light emitting area of the LED, respectively. Note that the actual LED used in the experiment, i.e. the Thorlabs EP470S04, is pigtailed with a multimode fiber of 0.4 mm diameter circular core. As the SLM, a reflection-type SLM with 4.6 µm pixel pitch and 1920 × 1080 resolution was used. The computation window for the pre-compensation was 721 × 721 as in the simulations, which gives 721 × 721 resolution holograms. Therefore, only the central 721 × 721 pixels out of the full 1920 × 1080 pixels of the SLM were used in the experiment. All holograms were synthesized using a random phase carrier wave, as sketched below.
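A common recipe for synthesizing such a random-phase-carrier Fourier hologram follows; it is a generic procedure consistent with the description above, not necessarily the authors' exact implementation.

```python
import numpy as np

def phase_hologram(precompensated):
    """Phase-only Fourier hologram of the pre-compensated intensity image
    with a random phase carrier; the amplitude of the inverse transform is
    discarded, as only the phase is loaded onto the SLM."""
    amp = np.sqrt(np.clip(precompensated, 0.0, None))   # target field amplitude
    carrier = np.exp(1j * 2 * np.pi * np.random.rand(*amp.shape))
    field = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(amp * carrier)))
    return np.angle(field)                              # phase pattern for the SLM
```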

Fig. 11. Optical experimental setup. (a) Collimated and (b) converging illumination configurations. In the collimated illumination configuration, fo = 200 mm and f = 250 mm. In the converging illumination configuration, a = 165 mm, b = 135 mm, d = 100 mm, and the focal length of the lens L is 75 mm.

In Fig. 11(a) for the collimated configuration, the light from the LED is collimated by the lens L1 of focal length fo. The light is then reflected from the SLM, passes through the lens L2, and converges at its rear focal plane, i.e. the Fourier plane. Here the final image with reduced blur is formed along with the DC and conjugate terms, which can be filtered out using an aperture in actual applications. In Fig. 11(b) for the converging illumination configuration, the light is reflected from the SLM and converges at a distance b (Fourier plane) from the lens L. Here, again, the reconstructed image with reduced blur is formed along with the DC and conjugate terms.

The optical experimental results for the collimated and converging illumination configurations are shown in Figs. 12 and 13, respectively. In Figs. 12 and 13, the desired reconstruction area excluding the DC and conjugate terms is indicated by green dotted rectangles. In the collimated illumination configuration, the reconstructions without the proposed pre-compensation are blurred, as shown in Fig. 12(a). It is also observed from the blurred pattern in Fig. 12(a) that the left part of the green dotted rectangle (far side from the optical axis or DC) is more blurred than the right part (near side to the optical axis), as observed in the simulations of Figs. 7–9, due to the radial blur caused by the extended bandwidth of the LED. Figures 12(b)–12(d) show the reconstruction results with the proposed pre-compensation. The 0.4 mm diameter circular core of the multimode fiber coupled to the LED worked as the light emitting area A(ξ, η) in our experiment, and it was approximated to a unit intensity square of 0.26 mm × 0.26 mm, 0.31 mm × 0.31 mm, and 0.37 mm × 0.37 mm in Figs. 12(b), 12(c), and 12(d), respectively. Comparison of Figs. 12(b)–12(d) against Fig. 12(a) reveals that the blur in the reconstruction is substantially reduced by the proposed pre-compensation in all cases. It is also observed from Figs. 12(b)–12(d) that as the considered light emitting area increases, the artefacts in the reconstruction also increase. In our experiment, Figs. 12(b) and 12(c) with the 0.26 mm × 0.26 mm and 0.31 mm × 0.31 mm light emitting area sizes show better subjective quality than Fig. 12(d) with the 0.37 mm × 0.37 mm size, due to the increased artefacts in Fig. 12(d).

Fig. 12. Optical experimental results in the collimated illumination configuration. (a) Reconstructions without the pre-compensation. (b)–(d) Reconstructions with the proposed pre-compensation for different pixel sizes of the convolution kernel A′(u,v): (b) 9 × 9 pixels, or 0.26 mm × 0.26 mm light emitting area, (c) 11 × 11 pixels, or 0.31 mm × 0.31 mm light emitting area, (d) 13 × 13 pixels, or 0.37 mm × 0.37 mm light emitting area.

Fig. 13. Optical experimental results in the converging illumination configuration. (a) Reconstructions without the pre-compensation. (b)–(d) Reconstructions with the proposed pre-compensation for different pixel sizes of the convolution kernel A′(u,v): (b) 15 × 15 pixels, or 0.26 mm × 0.26 mm light emitting area, (c) 17 × 17 pixels, or 0.30 mm × 0.30 mm light emitting area, (d) 21 × 21 pixels, or 0.37 mm × 0.37 mm light emitting area.

Figure 13 shows the optical experiment results for the converging illumination case. As explained in the simulation section, under our experimental setup parameters, the same physical light emitting area corresponds to a larger convolution kernel in the reconstruction plane of the converging illumination configuration than in the collimated configuration. This makes the reconstruction more blurred in the converging illumination setup than in the collimated illumination setup in our experiment. Figure 13(a) shows the reconstructions without the proposed pre-compensation. As expected, the reconstructions in Fig. 13(a) are more blurred than those in the collimated case shown in Fig. 12(a). Figures 13(b)–13(d) show the reconstructions with the proposed pre-compensation, obtained by approximating the light emitting area to 0.26 mm × 0.26 mm, 0.30 mm × 0.30 mm, and 0.37 mm × 0.37 mm, respectively. Similar to the collimated configuration, the artefacts in the reconstruction become more apparent as a larger light emitting area is considered in the pre-compensation. However, the comparison between Figs. 13(b)–13(d) and Fig. 13(a) again clearly shows that the proposed pre-compensation successfully reduces the reconstruction blur in the converging illumination configuration as well, confirming the validity of the proposed technique.

6. Conclusion

In this paper, a technique to reduce the blur in holographic projection displays using a LED as the source is proposed. The image blur caused by the broad linewidth and extended emission area of the LED is analyzed, revealing a spatially variant, wavelength-dependent radial magnification and a spatially invariant, emission-position-dependent shift. The pre-compensation of the target image against the analyzed image blur is performed using the iterative SART algorithm. In the SART, the spatially invariant and variant parts are implemented separately using function and matrix forms to reduce the memory requirement. Numerical simulations and optical experiments verify that the proposed pre-compensation technique can reconstruct the target images with reduced blur compared to the original blur in reconstructions without any pre-compensation.

Funding

National Research Foundation of Korea (NRF-2017R1A2B2011084); Institute for Information and Communications Technology Promotion (2017-0-00417, GK19D0100, IITP-2018-2015-0-00448).

Disclosures

The authors declare no conflicts of interest.

References

1. L. Seime and J. Y. Hardeberg, “Colorimetric characterization of LCD and DLP projection displays,” J. Soc. Inf. Disp. 11(2), 349–358 (2003). [CrossRef]  

2. N. F. Borrelli, “Efficiency of microlens arrays for projection LCD,” in Proceedings of IEEE on Electronic Components and Technology Conference (IEEE, 1994), pp. 338–345.

3. M. Makowski, I. Ducin, K. Kakarenko, J. Suszek, M. Sypek, and A. Kolodziejczyk, “Simple holographic projection in color,” Opt. Express 20(22), 25130–25136 (2012). [CrossRef]  

4. E. Buckley, “Holographic laser projection,” J. Disp. Technol. 7(3), 135–140 (2011). [CrossRef]  

5. K. Wakunami, P. Y. Hsieh, R. Oi, T. Senoh, H. Sasaki, Y. Ichihashi, M. Okui, Y. P. Huang, and K. Yamamoto, “Projection-type see-through holographic three-dimensional display,” Nat. Commun. 7(1), 12954 (2016). [CrossRef]  

6. Y. Deng and D. Chu, “Coherence properties of different light sources and their effect on the image sharpness and speckle of holographic displays,” Sci. Rep. 7(1), 5893 (2017). [CrossRef]  

7. D. Lee, G. Li, and B. Lee, “Comparison between LED and LD as a light source for near-eye holographic display,” Proc. SPIE 10834, 1083419 (2018). [CrossRef]  

8. V. Bianco, P. Memmolo, M. Leo, S. Montresor, C. Distante, M. Paturzo, P. Picart, B. Javidi, and P. Ferraro, “Strategies for reducing speckle noise in digital holography,” Light: Sci. Appl. 7(1), 48 (2018). [CrossRef]  

9. L. Golan and S. Shoham, “Speckle elimination using shift-averaging in high-rate holographic projection,” Opt. Express 17(3), 1330–1339 (2009). [CrossRef]  

10. F. Yaraş, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48(34), H48–H53 (2009). [CrossRef]  

11. J. Amako, H. Miura, and T. Sonehara, “Speckle-noise reduction on kinoform reconstruction using a phase-only spatial light modulator,” Appl. Opt. 34(17), 3165–3171 (1995). [CrossRef]  

12. W. F. Hsu and C. F. Yeh, “Speckle suppression in holographic projection displays using temporal integration of speckle images from diffractive optical elements,” Appl. Opt. 50(34), H50–H55 (2011). [CrossRef]  

13. T. Shimobaba, M. Makowski, T. Kakue, M. Oikawa, N. Okada, Y. Endo, R. Hirayama, and T. Ito, “Lensless zoomable holographic projection using scaled Fresnel diffraction,” Opt. Express 21(21), 25285–25290 (2013). [CrossRef]  

14. A. H. Andersen and A. C. Kak, “Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm,” Ultrason. Imaging 6(1), 81–94 (1984). [CrossRef]  

15. D. Wang, C. Liu, and Q.-H. Wang, “Holographic zoom micro-projection system based on three spatial light modulators,” Opt. Express 27(6), 8048–8058 (2019). [CrossRef]  

16. H. Zhang, J. Xie, J. Liu, and Y. Wang, “Elimination of a zero-order beam induced by a pixelated spatial light modulator for holographic projection,” Appl. Opt. 48(30), 5834–5841 (2009). [CrossRef]  

17. G. T. Herman, A. Lent, and S. W. Rowland, “ART: Mathematics and applications: A report on the mathematical foundations and on the applicability to real data of the algebraic reconstruction techniques,” J. Theor. Biol. 42(1), 1–32 (1973). [CrossRef]  

18. R. Gordon, “A tutorial on art (algebraic reconstruction techniques),” IEEE Trans. Nucl. Sci. 21(3), 78–93 (1974). [CrossRef]  

19. M. Askari and J.-H. Park, “Pre-compensation for holographic image blur caused by light source of extended spatial area and spectral linewidth,” in Digital Holography and Three-Dimensional Imaging 2019, OSA Technical Digest (Optical Society of America, 2019), paper Tu4A.6.
