Optica Publishing Group

Compact lensless full-color holographic projection system with digital phase

Open Access

Abstract

A lensless full-color holographic projection system is proposed that satisfies the requirements of compactness and flexibility. The system projects images by illuminating a single-chip spatial light modulator (SLM) simultaneously with red (R), green (G), and blue (B) lasers, while the SLM displays a color-multiplexed phase-only hologram. To improve compactness, the filtering and achromatic subsystems are realized with digital phases: a digital lens phase focuses the light field onto the filter plane, and digital blazed gratings shift the RGB images into fine alignment. In addition, the flexibility of the diffraction calculation is enhanced by the cascaded D-FFT and S-FFT algorithm (CDS algorithm, where D-FFT stands for double fast Fourier transform and S-FFT for single fast Fourier transform). Both simulations and optical experiments were carried out, including 2D image and animation projection as well as multi-image-plane projection. The results confirm the feasibility of our method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

After the German scientist Lohmann made the first CGH (computer-generated hologram) in 1965, the technique was introduced into the field of optics and greatly promoted the development of holography [1]. To realize holographic projection, one can use an SLM to modulate the light field by loading a CGH onto it. The liquid crystal on silicon (LCoS) device is a reflective SLM; using an LCoS to display the CGH brings merits such as high fill factor, high light utilization, high resolution, and programmable control [2].

There are four typical implementations of holographic full-color projection: time division [3], space division [4,5], SLM region division [6,7], and spatial superposition [8,9].

In the time division method [3], a single SLM is periodically illuminated by R, G, and B lasers while displaying the corresponding R, G, and B CGHs. This method requires a precise synchronization subsystem that generates a time sequence to control the whole system. Moreover, the high retardation of the liquid crystal limits the frame rate, and time division makes this problem even more serious. Thus, the method is not suitable for animation projection.

In the space division method [4,5], three SLMs are illuminated respectively with R, G, and B lasers. There is little crosstalk between the color channels, and the image quality is high. However, the cost is much higher due to the two extra SLMs.

In the SLM region division method [6,7], one SLM chip is divided into three regions. Each region is illuminated by one of the R, G, and B lasers and displays the corresponding CGH. However, a complicated beam-clipping system is required to ensure that the wavefront of each laser precisely matches its counterpart region on the SLM. Besides, only one third of the pixels can be utilized due to the region division.

In the spatial superposition method [8,9], the information of the R, G, and B channels is encoded into a single color-multiplexed CGH. The pixels can be fully utilized and no redundant subsystems are required, so it is one of the most simplified systems for color projection. However, this method suffers from a serious problem: irrelevant images [10,11]. Since the R, G, and B channels are encoded into one CGH, three images are reconstructed under monochromatic illumination. The correct image appears only when the calculating wavelength is consistent with the reconstructing wavelength, and the other irrelevant images should be eliminated without blocking useful signals. Frequency-domain filtering has been widely used as a solution: Tomasz et al. [12] and Eun Soo et al. [13] used 4-f systems and color masks for filtering, and Han et al. [14] used an iterative method to weaken the impact of irrelevant images. All the methods mentioned above can successfully eliminate irrelevant images, but they introduce new problems while leaving some existing ones unsolved. In this paper, we address two such problems in the context of previous research.

Firstly, compactness and the elimination of irrelevant images are hard to reconcile. For filtering, the 4-f system [12,13] needs two additional lenses, and their alignment requires considerable space. For achromatic operation, Xue et al. [10] and Chang et al. [11] proposed a mechanical method that tilts the lasers to shift the images; the tilting angle is determined by the image plane size and the propagation distance, and it must be retuned whenever these parameters change. To simplify the system and achieve compactness, we introduce multiple digital phases into our CGH algorithm: a digital lens phase [15] for filtering and digital blazed gratings for achromatic operation.

Secondly, hardly any research has addressed the numerical calculation algorithm for multicolor diffraction. In fact, flexibility is crucial for scaling the image and ensuring the same zoom ratio for the R, G, and B channels, so the calculation algorithm must be able to zoom the sampling rate. The aliasing-reduced scale and shift (ARSS) algorithm [16] solves this problem by applying scaling and defocusing factors, but its anti-aliasing condition on the chirp functions is hard to satisfy. The double Fourier transform (DBFT) algorithm [17] is another solution, in which an intermediate plane connects two S-FFT procedures; the sampling rate can be tuned by adjusting the propagation distances of the two procedures. However, the diffraction of a plane wave cannot be simulated correctly if the intermediate plane is smaller than the source plane, because the intermediate plane then cannot cover the whole light field, which causes aliasing. To zoom the sampling rate flexibly, the CDS algorithm is proposed and used in our diffraction calculation.

This work is organized as follows: Section 2 introduces our method, including the application of the digital lens phase, the CDS algorithm, and the achromatic operation by digital blazed gratings; a simulation demonstrates the feasibility of the method. Section 3 evaluates the imaging quality: our experimental results prove that the method is effective in projecting 2D pictures, animations, and 3D pictures. Finally, conclusions are drawn in Section 4.

2. Method and simulation

2.1 Digital lens phase and the CDS method

Before the CGH calculation, a digital lens phase is applied to the object plane. The light then propagates as in Fig. 1(a) and converges at point F, the focus of the lens phase. The distance between F and the CGH plane is $f_H$, and that between F and the object plane is $f_O$, with $f_H+f_O=z$.

Fig. 1. Schematic diagrams of light propagation and its simulation. (a) the digital lens phase; (b) undersampling caused by a too large sampling plane; (c) aliasing caused by a too small sampling plane.

This phase has two functions. First, filtering: the spatial-frequency spectrum appears on the focal plane of the lens phase [18], where we can place a filter, replacing the complicated 4-f system. After filtering, higher diffraction orders and irrelevant images can be effectively eliminated. Second, signal collection: the parallel light is transformed into converging light. Suppose the length and width of the object plane are $L_{OM}$ and $L_{ON}$, and those of the SLM are $L_{HM}$ and $L_{HN}$ $(L_{HM}\;>\;L_{HN})$. By the properties of similar triangles, if we ensure $\dfrac {f_O}{f_H} \geq \max \left (\dfrac {L_{OM}}{L_{HM}}, \dfrac {L_{ON}}{L_{HN}}\right )$, the information of the light field is completely received by the SLM, without losing low-frequency information [19].

Now the object (a 2D image) $A_{iO} (x_O,y_O)$ is multiplied by a digital lens phase $\phi _{iS}(x_O, y_O) = -{\pi }({x_O}^2 + {y_O}^2)/\lambda _if_O$, where $\lambda _i$ $(i = R, G, B)$ is the wavelength of the R, G, or B laser. Before introducing our CDS algorithm, we first analyze its two components, the D-FFT and S-FFT algorithms, separately. Their schematics are shown in Figs. 1(b) and 1(c).
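As an illustrative sketch (our own NumPy code, not from the paper; the grid convention and the uniform placeholder amplitude are our assumptions, with the G-channel parameters of Section 2.3):

```python
import numpy as np

def lens_phase(M, N, pitch, wavelength, f_O):
    """Digital lens phase phi_iS(x, y) = -pi (x^2 + y^2) / (lambda_i f_O)."""
    x = (np.arange(N) - N // 2) * pitch   # object-plane x coordinates (m)
    y = (np.arange(M) - M // 2) * pitch   # object-plane y coordinates (m)
    X, Y = np.meshgrid(x, y)
    return -np.pi * (X**2 + Y**2) / (wavelength * f_O)

# G-channel parameters from Section 2.3: object pitch 16 um, f_O = 600 mm
A_O = np.ones((1080, 1920))                          # placeholder amplitude (a real image in practice)
phi_S = lens_phase(1080, 1920, 16e-6, 532e-9, 0.6)
U_O = A_O * np.exp(1j * phi_S)                       # complex amplitude on the object plane
```

The phase-only factor leaves the amplitude untouched; only the wavefront curvature changes.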

The sampling plane in Figs. 1(b) and 1(c) refers to the termination plane of the diffraction calculation. Suppose its length is $L_{SM}$ and its width is $L_{SN}$; they are determined by the numerical diffraction algorithm. Suppose $M$ and $N$ are the numbers of samples along the length and width of the sampling plane. When the D-FFT algorithm [20] is used, the size of the sampling plane is given by:

$$L_{SM}=L_{OM}, \ L_{SN}=L_{ON}$$
The sampling plane has the same size as the object plane, independent of the distance. In our setup, $L_{HM}$ and $L_{HN}$ are smaller than $L_{OM}$ and $L_{ON}$. If we used the D-FFT algorithm alone, the sampling plane would be so large that the useful signal would occupy only a few pixels, so the sampling rate would be too low (see Fig. 1(b)). When the S-FFT algorithm [20] is used, the size of the sampling plane is given by:
$$L_{SM} = \dfrac{M\lambda \; z}{L_{OM}}, \ L_{SN} = \dfrac{N\lambda \; z}{L_{ON}}$$
The size is proportional to the distance. If we used the S-FFT algorithm alone, $z$ might be so short that $L_{SM}$ and $L_{SN}$ would be smaller than $L_{HM}$ and $L_{HN}$, causing aliasing (Fig. 1(c)).

We combined the features of these two algorithms into the proposed CDS algorithm. The whole diffraction distance $z$ is decomposed into $z_{i1}$ and $z_{i2}$, where $i = R,G,B$. The D-FFT algorithm is applied over $z_{i1}$, ensuring that the total diffraction distance equals $z$; the S-FFT algorithm is applied over $z_{i2}$, converting the sampling plane size from $L_{OM} \times L_{ON}$ to $L_{HM} \times L_{HN}$. An intermediate plane is set as a relay (note that the intermediate plane is not the focal plane of the lens phase). Under this scheme, the CGH planes of the R, G, and B channels have the same size and the same zoom ratio, and the pixels of the SLM can be fully used.

In our experiment, the object plane is square, so $L_{OM} = L_{ON} = L_O$, and the SLM pixel is also square, with a width of $p$.

The S-FFT distance $z_{i2}$ is derived from Eq. (2) as:

$$z_{i2}=\dfrac{L_OL_{HM}}{M\lambda_i} = \dfrac{L_OL_{HN}}{N\lambda_i} = \dfrac{L_O p}{\lambda_i}$$
The D-FFT distance $z_{i1}$ (which can be negative) is:
$$z_{i1} = z - z_{i2}$$
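As a numeric illustration of Eqs. (3) and (4) (our own arithmetic; the square object plane of side $L_O$ = 30.72 mm, i.e., 1920 samples at the 16 µm object pitch of Section 2.3, is an assumption):

```python
# Assumed values: square object plane L_O = 30.72 mm, SLM pitch p = 8 um,
# total distance z = f_O + f_H = 600 mm + 300 mm (Section 2.3 parameters)
L_O, p, z = 30.72e-3, 8e-6, 0.9
wavelengths = {"R": 635e-9, "G": 532e-9, "B": 450e-9}

z2 = {c: L_O * p / lam for c, lam in wavelengths.items()}   # Eq. (3)
z1 = {c: z - z2[c] for c in wavelengths}                     # Eq. (4)
for c in wavelengths:
    print(f"{c}: z1 = {z1[c] * 1e3:.1f} mm, z2 = {z2[c] * 1e3:.1f} mm")
```

Each channel gets a different split of the same total distance $z$, which is exactly what keeps the three CGH planes the same size.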
The whole process of the CDS algorithm is shown in Fig. 2. Light propagates from the object plane to the CGH plane, switching from the D-FFT to the S-FFT algorithm at the intermediate plane. The initial complex amplitude on the object plane is $U_{iO}(x_O,y_O) = A_{iO} (x_O,y_O)\ \exp {\left [ \textrm {j} \phi _{iS}(x_O,y_O)\right ]}$, where $\textrm {j}^2=-1$ and $i=R,G,B$.

Fig. 2. Schematic diagram of the CDS algorithm.

The first step is the D-FFT algorithm over distance $z_{i1}$, based on the angular-spectrum transfer function. The light field propagates from the object plane to the intermediate plane. The coordinates on the object plane are $x_O$ and $y_O$, and the corresponding spatial frequencies are $f_x$ and $f_y$; the coordinates on the intermediate plane are $x_V$ and $y_V$. The complex amplitude $U_{iV}(x_V,y_V)$ on this plane can be calculated by [20]:

$$U_{iV}(x_V,y_V)= \mathcal{F}^{{-}1}\left\{\mathcal{F} \left\{ U_{iO}(x_O,y_O) \right\} \ \exp \left[ \textrm{j}k_i z_{i1} \sqrt{1-(\lambda_i f_x)^2 - (\lambda_i f_y)^2 } \right] \right\}$$
where $\lambda _i$ and $k_i$ $(i=R,G,B)$ are the wavelength and wave number of the R, G, and B light sources.
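Eq. (5) can be sketched with NumPy's FFT routines as follows (our own illustrative implementation; the function name and grid conventions are assumptions, not from the paper):

```python
import numpy as np

def dfft_propagate(U, pitch, wavelength, z):
    """Angular-spectrum (D-FFT) propagation over distance z, per Eq. (5).
    z may be negative; the sampling plane keeps the input size (Eq. 1)."""
    M, N = U.shape
    fx = np.fft.fftfreq(N, d=pitch)           # spatial frequencies f_x
    fy = np.fft.fftfreq(M, d=pitch)           # spatial frequencies f_y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (wavelength * FX)**2 - (wavelength * FY)**2
    k = 2 * np.pi / wavelength
    H = np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0)))
    H[arg < 0] = 0                            # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(U) * H)

# Sanity check: a plane wave stays a plane wave (only its phase advances)
U_V = dfft_propagate(np.ones((64, 64), dtype=complex), 16e-6, 532e-9, 0.1)
```

A plane-wave input exercises only the DC term of the transfer function, so the output amplitude stays uniform, a quick way to verify the implementation.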

The second step is the S-FFT algorithm over distance $z_{i2}$. The light field propagates from the intermediate plane to the CGH plane. The coordinates on the CGH plane are $x_H$ and $y_H$. The complex amplitude $U_{iH} (x_H,y_H)$ can be calculated by [20]:

$$U_{iH} (x_H,y_H) = \dfrac{\exp(\textrm{j}k_i z_{i2})}{\textrm{j} \lambda_i z_{i2}} \exp \left[ \dfrac{\textrm{j} k_i}{2z_{i2}} (x_H^2 + y_H^2) \right] \mathcal{F} \left\{ U_{iV} (x_V,y_V) \exp \left[ \dfrac{\textrm{j} k_i}{2z_{i2}} (x_V^2 + y_V^2) \right] \right\}$$
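A minimal sketch of the S-FFT step of Eq. (6) (our own code; the centered-FFT convention and output-pitch bookkeeping are our assumptions):

```python
import numpy as np

def sfft_propagate(U, pitch_in, wavelength, z):
    """Single-FFT Fresnel (S-FFT) step of Eq. (6).
    Output pitch along x is wavelength * z / (N * pitch_in), so the
    sampling-plane size follows Eq. (2)."""
    M, N = U.shape
    k = 2 * np.pi / wavelength
    x_in = (np.arange(N) - N // 2) * pitch_in
    y_in = (np.arange(M) - M // 2) * pitch_in
    XV, YV = np.meshgrid(x_in, y_in)
    inner = U * np.exp(1j * k / (2 * z) * (XV**2 + YV**2))   # chirp inside F{...}
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(inner)))
    x_out = (np.arange(N) - N // 2) * wavelength * z / (N * pitch_in)
    y_out = (np.arange(M) - M // 2) * wavelength * z / (M * pitch_in)
    XH, YH = np.meshgrid(x_out, y_out)
    pref = np.exp(1j * k * z) / (1j * wavelength * z) \
        * np.exp(1j * k / (2 * z) * (XH**2 + YH**2))         # prefactor of Eq. (6)
    return pref * F

U_H = sfft_propagate(np.ones((32, 32), dtype=complex), 16e-6, 532e-9, 0.3)
```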
After this step, the size of the CGH plane is tuned to match the SLM size, and the CDS algorithm terminates on the CGH plane. Before converting the complex amplitude into a phase-only CGH, the results of Eq. (6) must be processed. For achromatic operation, the result of the R channel $U_{RH} (x_H,y_H)$ is adjusted by a digital blazed grating $G_R (x_H,y_H)$, and the result of the B channel $U_{BH} (x_H,y_H)$ by a digital blazed grating $G_B (x_H,y_H)$; the detailed expressions are given in Section 2.2. Finally, the complex amplitude to be reconstructed on the SLM plane, $U(x_H,y_H)$, is given by:
$$\begin{aligned}U(x_H,y_H) = &U_{RH} (x_H,y_H) \exp \left[ \textrm{j} G_R (x_H,y_H) \right] + U_{GH} (x_H,y_H) \ + \\ & U_{BH} (x_H,y_H) \exp \left[ \textrm{j} G_B (x_H,y_H) \right] \end{aligned}$$
We used the double-phase method [21,22] to convert the complex amplitude into a phase-only hologram. Suppose $A(x_H,y_H)$ and $\phi (x_H,y_H)$ are the amplitude and phase of $U(x_H,y_H)$, respectively. The complex amplitude can be decomposed into two phase-only components $\theta _1 (x_H,y_H)$ and $\theta _2 (x_H,y_H)$, calculated by:
$$\theta_1 (x_H,y_H) = \phi(x_H,y_H) + P(x_H,y_H)$$
$$\theta_2 (x_H,y_H) = \phi(x_H,y_H) - P(x_H,y_H)$$
where $P(x_H,y_H) = \arccos \left [ A(x_H,y_H)/A_{\max } \right ]$, and $A_{\max }$ is the maximum of $A(x_H,y_H)$. Eventually, the expression of the CGH is:
$$H(x_H,y_H)= \theta_1 (x_H,y_H) M_1 (x_H,y_H)+ \theta_2 (x_H,y_H) M_2 (x_H,y_H)$$
where $M_1$ and $M_2$ are two complementary 2D binary masks (checkerboard patterns) whose expressions can be found in [22]. The CGH calculation is now complete.
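The double-phase encoding of Eqs. (8)-(10) can be sketched compactly (our own code; the specific checkerboard masks below are one common choice and an assumption on our part, since the exact expressions are given in [22]):

```python
import numpy as np

def double_phase_hologram(U):
    """Encode a complex field U into a phase-only CGH via Eqs. (8)-(10)."""
    A = np.abs(U)
    phi = np.angle(U)
    P = np.arccos(A / A.max())           # auxiliary phase, P = arccos(A / A_max)
    theta1 = phi + P                     # Eq. (8)
    theta2 = phi - P                     # Eq. (9)
    # Complementary checkerboard masks M1, M2 (assumed form; see [22])
    yy, xx = np.indices(U.shape)
    M1 = (xx + yy) % 2
    M2 = 1 - M1
    return theta1 * M1 + theta2 * M2     # Eq. (10)

# For a uniform-amplitude field, P = 0 and the CGH reduces to the plain phase
H = double_phase_hologram(np.exp(1j * 0.5) * np.ones((4, 4)))
```

Interleaving $\theta_1$ and $\theta_2$ on complementary pixels is what lets a phase-only device carry amplitude information.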

2.2 Chromatic aberration and correction by the digital blazed gratings

During reconstruction, the $\pm 1$-order images are located at the 0.5 diffraction order [23], but the locations of the 0.5 order differ among the color components. For wavelength $\lambda$, the location of the $n$th diffraction order is given by $x = y = n\lambda z/p$, so chromatic aberrations occur. The lateral chromatic aberration between components G and R is $\Delta x_{G,R}$ on the x axis, and that between G and B is $\Delta x_{G,B}$; on the y axis they are $\Delta y_{G,R}$ and $\Delta y_{G,B}$. Their values at the diffraction order $n=0.5$ are given by:

$$\Delta x_{G,R} = \Delta y_{G,R} = 0.5 \times \dfrac{z(\lambda_G - \lambda_R)}{p}$$
$$\Delta x_{G,B} = \Delta y_{G,B} = 0.5 \times \dfrac{z(\lambda_G - \lambda_B)}{p}$$
where $\lambda _R$, $\lambda _G$, and $\lambda _B$ are the wavelengths of the R, G, and B lasers. These aberrations hinder filtering. As the simulation result in Fig. 3(a) shows, each color component initially reconstructs three images, one focused and two defocused, centered at the same location as its 0.5 order. In this case we cannot effectively eliminate the defocused (irrelevant) images by filtering, since they overlap with one another. We propose that digital blazed gratings can be designed to solve this problem.
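As a numeric illustration of Eqs. (11) and (12) (our own arithmetic, using the simulation wavelengths of Section 2.3; the distance $z$ = 300 mm is chosen arbitrarily for this example):

```python
# Illustrative evaluation of Eqs. (11)-(12)
p, z = 8e-6, 0.3
lam_R, lam_G, lam_B = 635e-9, 532e-9, 450e-9

d_GR = 0.5 * z * (lam_G - lam_R) / p   # negative: the R image lies farther out than G
d_GB = 0.5 * z * (lam_G - lam_B) / p   # positive: the B image lies closer in than G
print(f"G-R shift: {d_GR * 1e3:.3f} mm, G-B shift: {d_GB * 1e3:.3f} mm")
```

Millimeter-scale shifts on the focal plane are large compared with a small filter aperture, which is why the aberration must be compensated before filtering.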

Fig. 3. Images on the focal plane (the filter plane). (a) before tuning by the blazed gratings; (b) after tuning by the blazed gratings; an enlarged view of the central focused point is shown at the upper left.

A blazed grating is a stepped structure; applying a digital blazed grating is equivalent to illuminating the SLM at a tilted angle [18]. Before encoding the complex amplitudes into the phase-only CGH, we tune them with blazed gratings. Specifically, we pull the red image closer to the zero order and push the blue image away from it. The values of the gratings are determined by the chromatic aberrations in Eqs. (11) and (12). In our experiment, the digital blazed gratings are written as:

$$G_R (x_H,y_H) = \dfrac{2\pi}{\lambda_R f_H} \left( x_H \Delta x_{G,R} + y_H \Delta y_{G,R} \right)$$
$$G_B (x_H,y_H) = \dfrac{2\pi}{\lambda_B f_H} \left( x_H \Delta x_{G,B} + y_H \Delta y_{G,B} \right)$$
where the first term eliminates the lateral chromatic aberration of Eqs. (11) and (12) on the x axis, and the second term eliminates that on the y axis. After the aberration is compensated, the defocused images are separated and the focused images center perfectly at the same location; as shown in Fig. 3(b), the central focal point is no longer disturbed. This digital achromatic approach has clear advantages. First, the system is compact, since the mechanism controlling the tilting angles of the light sources is replaced by digital phases. Second, digital phases are flexible: when system parameters change, no part of the setup needs to be moved or rotated; we simply modify the digital gratings to keep the system working.
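The gratings of Eqs. (13) and (14) are just linear phase ramps. A minimal sketch (our own code; the shift value below is a placeholder, in practice taken from Eqs. (11) and (12)):

```python
import numpy as np

def blazed_grating(shape, pitch, wavelength, f_H, dx, dy):
    """Linear phase ramp of Eqs. (13)-(14): shifts the focused image
    by (dx, dy) on the focal plane of the lens phase."""
    M, N = shape
    x = (np.arange(N) - N // 2) * pitch
    y = (np.arange(M) - M // 2) * pitch
    X, Y = np.meshgrid(x, y)
    return 2 * np.pi / (wavelength * f_H) * (X * dx + Y * dy)

# Placeholder shift of -1.93 mm on both axes for the R channel (f_H = 300 mm)
G_R = blazed_grating((1080, 1920), 8e-6, 635e-9, 0.3, -1.93e-3, -1.93e-3)
# Applied as in Eq. (7): U_RH_tuned = U_RH * np.exp(1j * G_R)
```

Because the grating is purely a phase ramp, it relocates the image on the filter plane without changing its content.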

Figure 4 illustrates what happens when the R, G, and B lasers simultaneously illuminate the SLM: all irrelevant images are blocked by the filter, while the correct images pass through.

Fig. 4. Schematic diagram of optical reconstruction.

Note that the focal length of the lens phase should not be too small; otherwise it causes aliasing at the periphery due to undersampling. $f_H$ should satisfy the anti-aliasing condition for the shortest wavelength ($\lambda _B$) [16]:

$$f_H\;>\;\dfrac{2p^2}{\lambda_B} \left( M^2+N^2 \right)^{\frac{1}{2}}$$
where $p$, $M$, and $N$ are the SLM parameters introduced in Section 2.1. In our experiment, we should ensure that $f_H\;>\;313\ \textrm{mm}$. When zero padding is applied at the edges, $f_H$ can be smaller.

2.3 Simulation

To evaluate the performance of our method, a computer simulation was conducted with parameters identical to the specifications of our devices for optical reconstruction. The wavelengths of the R, G, and B lasers are 635 nm, 532 nm, and 450 nm, respectively. $f_H$ is chosen as 300 mm and $f_O$ as 600 mm, giving the image a zoom ratio of 2. The sampling pitch on the object plane is 16 µm, and that on the CGH plane is 8 µm (consistent with the pixel size of the SLM). The resolution of the diffraction calculation is $1920\times 1080$. Figure 5(a) is the original image, and Fig. 5(b) is the simulated reconstruction. According to the simulation, the method performs excellently in theory; the only flaws are some corrugated patterns caused by the ideal low-pass filtering [24]. Figure 5(c) is the CGH generated by the proposed method. Note the three ring-like patterns: they appear because the blazed gratings applied to the R and B components shift them in opposite directions.

Fig. 5. Results of computer simulation. (a) the original image; (b) the result of numerical reconstruction; (c) the CGH from the proposed method.

3. Experimental result and discussions

3.1 Projection system setup

The setup for the optical experiment of lensless color holographic projection using the proposed method is presented in Fig. 6. The optical path is divided into two parts: the illuminating path and the imaging path.

Fig. 6. Optical setup for lensless color holographic projection.

The illuminating path mainly consists of three semiconductor lasers (with embedded collimators) and two beam splitters. The laser wavelengths are 635 nm (R), 520 nm (G), and 450 nm (B). The power ratio of the R, G, and B lasers is set to 0.46:0.34:0.20, with which they achieve the same tristimulus values as CIE Standard Illuminant D65 and reproduce color precisely according to CIE 1931. The transmitted B laser is mixed with the reflected R laser at the first beam splitter, and then with the reflected G laser at the second. The mixed beam propagates through the polarizer to the SLM. The SLM is a HOLOEYE PLUTO-VIS-016 LCoS with a response time of 100 ms, a resolution of 1920 $\times$ 1080, and a pixel size of 8 µm $\times$ 8 µm (i.e., an active area of 15.36 mm $\times$ 8.64 mm). The optical axis of the polarizer should be aligned with the long side of the SLM to maximize the diffraction efficiency.

The imaging path consists of the SLM, the low-pass filter, and the screen. The CGH is displayed on the SLM, and the laser light field is modulated upon incidence on the SLM. According to the reversibility of a linear space-invariant system (which the proposed system is, under the paraxial approximation [18]), the light converges again on the focal plane of the lens phase. A small-aperture filter placed there blocks the irrelevant images and the other diffraction orders. Finally, the color image is correctly reconstructed at a distance of $f_O$ from the filter.

3.2 Experimental results of color images and animations

The system is set up as in Fig. 6 to examine the quality of the optical reconstruction. The first experiment was conducted on the 2D image in Fig. 5(a), "Pattern of opera make-up". Because the simulation parameters are identical to the specifications of our experimental system, we directly loaded the SLM with the CGH in Fig. 5(c). After fine-tuning the orientation of the beam splitters and the position of the filter, the results in Fig. 7 appear on the screen. The images in Fig. 7 were shot by a CMOS camera (Canon EOS-50D) focused around the image plane; the reconstructed images can also be observed clearly by eye. Figures 7(a)-7(c) are the optical reconstructions of the R, G, and B components, and Fig. 7(d) is the color image formed by superimposing the three components. The irrelevant images are eliminated, resulting in good image quality. However, the quality is not as good as the numerical simulation in Fig. 5(b), for several possible reasons. First, the laser sources are highly coherent, introducing inevitable speckle noise. Second, the crosstalk among the R, G, and B channels, which may occur at the SLM, is noticeably severe, and the simulation does not take this factor into account. Third, the phase modulation of the SLM is not perfectly linear, and the resulting modulation errors cause noise on the image plane.

Fig. 7. Optical reconstruction results of image "Pattern of opera make-up". (a) R component; (b) G component; (c) B component; (d) the color reconstructed image; (e)-(g) images with different sampling pitches (12 µm, 16 µm, and 24 µm).

Then we changed the ratio of $f_H$ to $f_O$ to zoom the image. Three images were shot with a fixed $f_H$ of 300 mm. For Fig. 7(e), $f_O$ is 450 mm, the magnification $m$ is 1.5, and the image width is 12.96 mm. For Fig. 7(f), $f_O$ is 600 mm, $m$ is 2.0, and the image width is 17.28 mm. For Fig. 7(g), $f_O$ is 900 mm, $m$ is 3.0, and the image width is 25.92 mm. This experiment proves that the proposed method can effectively scale the image without changing the projection setup while maintaining good image quality.
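The quoted widths follow from $m = f_O/f_H$ and the 8.64 mm short side of the SLM; a quick check (our own arithmetic, assuming the image width scales as $m \cdot L_{HN}$):

```python
# f_H and the SLM short side L_HN from the experimental setup
f_H, L_HN = 0.3, 8.64e-3
for f_O in (0.45, 0.60, 0.90):
    m = f_O / f_H
    print(f"f_O = {f_O * 1e3:.0f} mm -> m = {m:.1f}, image width = {m * L_HN * 1e3:.2f} mm")
```

This reproduces the three widths reported above (12.96 mm, 17.28 mm, and 25.92 mm).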

In addition, we showed two animations (Visualization 1, Visualization 2) by displaying a CGH sequence on the SLM at a frame rate of 10 Hz (the response time of our SLM is 100 ms). Figure 8 shows one frame from each reconstructed animation. This experiment proves that the proposed method makes full use of the time-response characteristics of the SLM.

Fig. 8. One frame from each reconstructed animation. (a) Car; (b) Whale.

3.3 Multi-plane projection

To investigate the feasibility of projecting a 3D object, we conducted another experiment reconstructing the 3D image "RGBCMY". The schematic of this experiment, modified from Fig. 4, is shown in Fig. 9. Two image planes are set, at distances $f_O$ and ${f_O}'$ from the filter for Image plane 1 and Image plane 2, respectively. The text "RGB" is on Image plane 1, with $f_O = 450\ \textrm{mm}$; the text "CMY" is on Image plane 2, with ${f_O}'= 600\ \textrm{mm}$. We performed both numerical simulation and optical reconstruction, and the results on each image plane are shown in Fig. 10. When changing the focal plane of the camera, one image blurs while the other sharpens; obviously our eyes can do the same and thus perceive depth. Note, however, that this method has restrictions. First, the small aperture causes a large depth of focus [25], limiting the perception of depth. Second, the lens phase produces a cone-shaped beam, so Image plane 2 is larger than Image plane 1, which makes the scene harder to interpret.

Fig. 9. Schematic of the multi-plane projection.

Fig. 10. Results of multi-plane projection of "RGBCMY". (a) numerical simulation on Image plane 1; (b) optical reconstruction on Image plane 1; (c) numerical simulation on Image plane 2; (d) optical reconstruction on Image plane 2.

4. Conclusion

A lensless full-color holographic projection system based on the spatial superposition method is proposed. To simplify the system, a digital lens phase and digital blazed gratings replace bulky mechanical components. Moreover, the CDS algorithm is used to flexibly zoom the sampling pitch and match the SLM size. The proposed method has the following advantages: (i) the filter eliminates the irrelevant images caused by the color-multiplexed CGH in the spatial superposition method; (ii) the color-projection system is as simple and compact as a monochromatic projection system, the optical path consisting only of a single-chip SLM and a small-aperture filter; (iii) the system is highly flexible because digital phases and the CDS algorithm are used; (iv) the SLM pixels can be fully used since there is no chip division, and the time-response characteristics can be fully used because no time division is needed; (v) the algorithm is non-iterative and therefore fast. Meanwhile, some shortcomings remain. The speckle and the color-channel crosstalk are still severe enough to degrade the image quality considerably. Besides, many irrelevant images are generated because of the color-multiplexed CGH; they, along with the higher diffraction orders, result in low utilization of the light energy. In general, this method has clear advantages and application prospects in color lensless holographic dynamic projection, and has much potential for further improvement.

Funding

National Natural Science Foundation of China (6077702).

Acknowledgments

The LCoS and other optical devices were provided by the Department of Physics, Zhejiang University. This equipment is supported by the Zhejiang University Physics Department 2019 teaching reform research project (201904).

Disclosures

The authors declare no conflicts of interest.

References

1. G. Tricoles, “Computer generated holograms: an historical review,” Appl. Opt. 26(20), 4351–4360 (1987). [CrossRef]  

2. X. K.-S. Ge Ai-Ming and S. Zhan, “Characteristics of phase-only modulation using a reflective liquid crystal on silicon device,” Acta Phys. Sin. 52(10), 2481 (2003).

3. W. Tao, Y. Ying-jie, and Z. Huadong, “Removal of magnification chromatism in optoelectronic full color holography,” Opt. Precis. Eng. 19(6), 1414–1420 (2011). [CrossRef]  

4. W. Yue, S. Chuan, Z. Cheng, L. Kaifeng, and W. Sui, “Research on color holographic display with space division multiplexing based on liquid crystal on silicon,” Chin. J. Lasers 39(12), 1209001 (2012). [CrossRef]  

5. D. Xiao, D. Wang, S. Liu, and Q. Wang, “Color holographic system without undesirable light based on area sampling of digital lens,” J. Soc. Inf. Disp. 25(7), 458–463 (2017). [CrossRef]  

6. M. Makowski, I. Ducin, M. Sypek, A. Siemion, A. Siemion, J. Suszek, and A. Kolodziejczyk, “Color image projection based on fourier holograms,” Opt. Lett. 35(8), 1227–1229 (2010). [CrossRef]  

7. M. Makowski, I. Ducin, K. Kakarenko, J. Suszek, M. Sypek, and A. Kolodziejczyk, “Simple holographic projection in color,” Opt. Express 20(22), 25130–25136 (2012). [CrossRef]  

8. T. Ito and K. Okano, “Color electroholography by three colored reference lights simultaneously incident upon one hologram panel,” Opt. Express 12(18), 4320–4325 (2004). [CrossRef]  

9. M. Makowski, M. Sypek, I. Ducin, A. Fajst, A. Siemion, J. Suszek, and A. Kolodziejczyk, “Experimental evaluation of a full-color compact lensless holographic display,” Opt. Express 17(23), 20840–20846 (2009). [CrossRef]  

10. G. Xue, J. Liu, X. Li, J. Jia, Z. Zhang, B. Hu, and Y. Wang, “Multiplexing encoding method for full-color dynamic 3d holographic display,” Opt. Express 22(15), 18473–18482 (2014). [CrossRef]  

11. C. Chang, Y. Qi, J. Wu, C. Yuan, S. Nie, and J. Xia, “Numerical study for the calculation of computer-generated hologram in color holographic 3d projection enabled by modified wavefront recording plane method,” Opt. Commun. 387, 267–274 (2017). [CrossRef]  

12. T. Kozacki and M. Chlipala, “Color holographic display with white light led source and single phase only slm,” Opt. Express 24(3), 2189–2199 (2016). [CrossRef]  

13. S.-F. Lin and E.-S. Kim, “Single slm full-color holographic 3-d display based on sampling and selective frequency-filtering methods,” Opt. Express 25(10), 11389–11404 (2017). [CrossRef]  

14. Z. Han, B. Yan, Y. Qi, Y. Wang, and Y. Wang, “Color holographic display using single chip lcos,” Appl. Opt. 58(1), 69–75 (2019). [CrossRef]  

15. C. Chang, Y. Qi, J. Wu, J. Xia, and S. Nie, “Speckle reduced lensless holographic projection from phase-only computer-generated hologram,” Opt. Express 25(6), 6568–6580 (2017). [CrossRef]  

16. T. Shimobaba, T. Kakue, N. Okada, M. Oikawa, Y. Yamaguchi, and T. Ito, “Aliasing-reduced fresnel diffraction with scale and shift operations,” J. Opt. 15(7), 075405 (2013). [CrossRef]  

17. F. Zhang, I. Yamaguchi, and L. P. Yaroslavsky, “Algorithm for reconstruction of digital holograms with adjustable magnification,” Opt. Lett. 29(14), 1668–1670 (2004). [CrossRef]  

18. D. Yu and H. Tan, Engineering Optics (China Machine Press, 2011).

19. T. Shimobaba and T. Ito, “Random phase-free computer-generated hologram,” Opt. Express 23(7), 9549–9554 (2015). [CrossRef]  

20. L. Junchang and W. Yanmei, Diffraction calculation and digital holography I (Science press, 2014).

21. C. K. Hsueh and A. A. Sawchuk, “Computer-generated double-phase holograms,” Appl. Opt. 17(24), 3874–3883 (1978). [CrossRef]  

22. O. Mendoza-Yero, G. Mínguez-Vega, and J. Lancis, “Encoding complex fields by using a phase-only optical element,” Opt. Lett. 39(7), 1740–1743 (2014). [CrossRef]  

23. Z. H.-D. Yu Ying-Jie and W. Tao, “Optimization of optoelectronic reconstruction of phase hologram by use of digital blazed grating,” Acta Phys. Sin. 58, 3154–3160 (2009). [CrossRef]  

24. S. Mitra, Digital Signal Processing: A Computer-Based Approach (McGraw Hill higher education, 2005).

25. M. Makowski, T. Shimobaba, and T. Ito, “Increased depth of focus in random-phase-free holographic projection,” Chin. Opt. Lett. 14(12), 120901 (2016). [CrossRef]  

Supplementary Material (2)

Visualization 1: "Car" by color holographic dynamic projection.
Visualization 2: "Whale" by color holographic dynamic projection.
