Optica Publishing Group

Designing optical 3D images encryption and reconstruction using monospectral synthetic aperture integral imaging

Open Access

Abstract

This paper realizes optical 3D image encryption and reconstruction by applying a geometric calibration algorithm to a monospectral synthetic aperture integral imaging system. The method has two simultaneous advantages: it improves the quality of the 3D images by eliminating the crosstalk caused by unaligned cameras, and it increases the security of multispectral 3D image encryption by importing randomly generated maximum-length cellular automata into the Fresnel transform encoding algorithm. Furthermore, compared with previous 3D image encryption methods that encrypt 3D multispectral information, the proposed method encrypts only monospectral data, which greatly reduces the complexity. We present experimental results of 3D image encryption and volume-pixel computational reconstruction to test and verify the performance of the proposed method. The experimental results validate the feasibility and robustness of our approach, even under severe degradation.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical techniques for image encryption have attracted much interest owing to their unique advantages, such as parallel computing and multi-dimensional capabilities [1–7]. Various optical techniques, such as the Fourier transform, light imaging, diffraction, and interference [8–14], have been exploited to achieve optical encryption. Integral imaging is a three-dimensional (3D) imaging and display technique [15–20] used to capture image information and to display it optically or computationally. It can capture many two-dimensional (2D) images with different perspectives of a 3D scene using a microlens array or a digital camera (i.e., CCD camera) array; these captured small 2D images are referred to as elemental images. Integral imaging has attracted increasing attention due to the distributed-memory properties of these small elemental images. The technique is composed of two processes: a pickup process and a reconstruction process. In the pickup process, the multiple small elemental images are captured through a lenslet array and a CCD camera. The reconstruction process is the inverse of the pickup process: image reconstruction is implemented optically or computationally by simulating the back-projection of pixels [21–23]. However, lenslet array-based integral imaging methods are diffraction-limited in pixel resolution due to the small numerical aperture of the lenslets [24–27].

In [28], a 2D photon-counting algorithm was presented with double random phase encoding (DRPE) in the Fourier transform domain [29–31]. To increase the resolution of the decrypted images, the researchers used synthetic aperture integral imaging to capture the image information. A 3D scene encryption scheme based on multispectral computational integral imaging and DRPE has also been designed [32], in which the authors utilized a Bayer color filter array (CFA) for 3D image acquisition and achieved 3D image reconstruction with the computational integral imaging method. The Bayer CFA is an effective way to obtain color image data from a single-sensor camera. In recent years, for both video and still cameras, the trend has been to increase the spatial resolution of the sensor without modifying the sensor size. This trend lets cameras register more detail and raises the Nyquist frequency, decreasing aliasing problems, but it also introduces a main drawback: the color crosstalk of the Bayer CFA degrades image quality due to the increased interaction among closer adjacent pixels.

To decrease the color crosstalk of the Bayer CFA, the authors in [33] proposed an effective monospectral integral imaging technique. In their method, 8 × 8 monospectral imaging sensors were used to capture different parallaxes of a 3D scene. Each camera in that work is sensitive to a single spectral band and is optically isolated from its neighboring cameras, thus eliminating the color crosstalk that is endemic to all color image sensors with a CFA. However, the super-resolution reconstruction method used in their work can only realize 2D image reconstruction; borrowing their method for 3D image reconstruction results in severe smearing, as we discuss in the following sections. In other words, the previous monospectral image encryption method can only realize 2D image encryption. Meanwhile, the high cost of the imaging sensor array makes the experiment difficult to implement.

In order to address the aforementioned problems, in this study we propose a 3D monospectral synthetic aperture integral imaging (SAII) algorithm and encrypt the captured monospectral elemental images with a cellular automata (CA)-based Fresnel transform encoding algorithm. In contrast with the previous approaches, we utilize a single monospectral camera and move it through an array of positions to capture the different parallaxes of the 3D scene. Benefiting from the proposed monospectral SAII method, we not only avoid the expensive experimental cost of [33], but also eliminate the crosstalk introduced by the unaligned camera capturing process. Furthermore, by employing a geometric calibration correction algorithm in the monospectral SAII system, the proposed method ensures that the 3D scene can be clearly reconstructed at different depths. It is therefore noteworthy that, in addition to enhancing image security through the large key space of CA, our approach also provides high image quality for the reconstructed 3D scene at different scene depths.

2. Proposed 3D image encryption method

2.1. 3D object pickup by geometry-calibrated monospectral SAII

In order to eliminate the color crosstalk that is endemic to all color imaging sensors with a CFA, the authors in [33] utilized 8 × 8 expensive monospectral imaging sensors to capture a color scene, which seriously increases the cost of the experiment. To address this problem, in this study a monospectral SAII image capturing system is used. The system consists of a digital monospectral camera and a motorized translation stage. The monospectral camera used in this experiment can be built by combining a monospectral filter (a red, green, or blue color-band filter) with a digital camera. Figure 1 shows the 3D monospectral SAII capturing process with the monospectral camera. A 3D object located at some arbitrary distance from the pickup plane is imaged, and we move the camera through an array of positions to capture the different parallaxes of the 3D scene. The monospectral elemental image array is captured by moving each monospectral camera through its specific positions. For example, as shown in Fig. 1, the red spectral camera only captures the elemental images located at positions (0, 0), (0, 2), (2, 0), and (2, 2).

Fig. 1 Capturing process of the proposed monospectral SAII system.

In the SAII system, the desired camera array should project the 3D scene onto the reference plane, and all of the optical axes should share the same convergence point. In practice, however, it is very difficult to control the optical axes of the camera array, and unwanted parallaxes from other directions increase the crosstalk from the center of the projection plane. Accurate camera calibration and orientation procedures are therefore a necessary prerequisite for extracting precise and reliable 3D metric information from the captured parallax images.

To resolve this problem, a geometric correction algorithm [34–36] is introduced to calibrate the cameras in our work. First, we utilize a “chessboard” pattern as the calibration board. The monospectral cameras capture the calibration board (“chessboard”) as the initial central depth plane (CDP) before capturing the 3D scene. Suppose the monospectral camera array has M × N cameras; correspondingly, M × N “chessboard” patterns are captured as the calibration boards (see the left part of Fig. 2). The calibration board with P × Q effective corners has the same size as the displayed 3D scene. If we denote the correction matrix of the (m, n)th monospectral camera by Hm,n, it can be written as follows:

$$H_{m,n}=\begin{bmatrix} h_{11} & h_{12} & h_{13}\\ h_{21} & h_{22} & h_{23}\\ h_{31} & h_{32} & h_{33} \end{bmatrix}.$$

Suppose the size of the elemental image array is S × V; then the correction matrix Hm,n and the (m, n)th “chessboard” should satisfy the following equation [34]:

$$\begin{bmatrix} x'_i \\ y'_i \\ 1 \end{bmatrix}_{m,n} = s_{m,n}\, H_{m,n} \begin{bmatrix} x_k \\ y_k \\ 1 \end{bmatrix}_{m,n},$$
where sm,n denotes the scaling factor, (xk, yk)m,n denotes the coordinates of the kth corner of the (m, n)th “chessboard”, and (x′i, y′i)m,n denotes the corresponding target coordinates. To determine the relationship between the correction matrix Hm,n and the coordinates (xk, yk)m,n, the following constraints should be satisfied: k = 0, P − 1, P × Q − 1, P × (Q − 1); i = 0, 1, 2, 3; (x′0, y′0)m,n = (0, 0); (x′1, y′1)m,n = (S − 1, 0); (x′2, y′2)m,n = (S − 1, V − 1); (x′3, y′3)m,n = (0, V − 1). From these constraints, the correction matrix Hm,n and the scaling factor sm,n of the (m, n)th camera can be obtained. The calibrated elemental image array is shown in the right part of Fig. 2.
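Assuming the four outer chessboard corners of each camera have been detected, the correction matrix of the corner-mapping equation above can be estimated with a direct linear transform. The following numpy sketch uses hypothetical corner coordinates and helper names of our own; it is an illustration, not the authors' implementation.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform for a 3x3 planar homography H with
    h33 fixed to 1, from four point correspondences src -> dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pts):
    """Map 2D points through H using homogeneous coordinates."""
    p = H @ np.column_stack([pts, np.ones(len(pts))]).T
    return (p[:2] / p[2]).T

# Hypothetical detected corners for one (m, n) camera, mapped onto the
# S x V target grid given by the calibration constraints above.
S, V = 512, 512
corners = np.array([[12.3, 9.7], [498.1, 14.2], [503.6, 490.8], [8.9, 495.5]])
targets = np.array([[0, 0], [S - 1, 0], [S - 1, V - 1], [0, V - 1]])
H = estimate_homography(corners, targets)
```

Warping each elemental image with its own H yields the calibrated elemental image array.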

Fig. 2 The captured “chessboard” patterns and calibrated elemental image array.

2.2. Monospectral elemental images encrypted by CA-based Fresnel transform encoding algorithm

After 3D monospectral data acquisition, the captured monospectral elemental image array is used for image encryption. Our method implements an image encoding algorithm in the Fresnel domain. The two random phase masks used in the conventional encryption method are replaced by two randomly generated CA masks (RGCMs). CA offers significant advantages for image encryption [37, 38], including the ease with which the large space of CA keys can be enlarged by changing the gateway values, such as the neighborhood size, rules, maximum state, and initial values. CA encoding also allows the key selection to be biased toward generation functions that exhibit the avalanche effect: a one-bit change of the CA key results in a significant change in the ciphertext for the same plaintext, and a one-bit change of the plaintext yields a significant change in the ciphertext for the same key. In our method, the random masks are generated from CA maximum-length sequences (m-sequences).

In a one-dimensional (1D), two-state, three-site-neighborhood CA, the next state of a particular cell depends on itself and its two neighbors, and each cell can take the value 0 or 1. Using a given rule, the values of all cells are updated synchronously in discrete time steps. Generally, the maximum neighborhood size in a 1D CA is three, so the next state of a cell can be considered a Boolean function of three inputs:

$$s_i(k+1)=F\big(s_{i-1}(k),\, s_i(k),\, s_{i+1}(k)\big),$$
where si(k + 1) denotes the next value of cell i, si(k) denotes the current value of the cell i, si−1(k) denotes the current value of the left cell in the neighborhood, si+1(k) is the current state of the right cell in the neighborhood, and F() represents the Boolean function defining the rule.

For a two-state, three-site CA, each neighborhood has 2^3 = 8 possible states, so there are 2^8 = 256 possible assignments of next states, each of which is called a Wolfram rule. Wolfram developed this set of simple rules for describing two-state 1D cellular automata; the 2^8 rules for the two-state/three-site CA are numbered 0 to 255, based on the next states generated for the respective neighborhood states. Eight Wolfram rules, 0, 60, 90, 102, 150, 170, 204, and 240, are linear. Rules 0, 60, 102, 170, 204, and 240 produce poor results with respect to pseudorandom sequence generation. However, rules 90 and 150 can be used in a CA to generate maximum-length sequences (m-sequences): rule 90 takes only the adjacent cells as inputs (2 inputs), while rule 150 takes the adjacent cells and the cell itself as inputs (3 inputs) to generate the next state of the cell. The logical equations of these linear rules are given by

$$\text{Rule 90: } s_i(k+1)=s_{i-1}(k)\oplus s_{i+1}(k),$$
$$\text{Rule 150: } s_i(k+1)=s_{i-1}(k)\oplus s_i(k)\oplus s_{i+1}(k).$$
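A minimal sketch of one synchronous update under rules 90 and 150 with null boundaries (cells outside the lattice are 0); the helper name and array layout are our own choices.

```python
import numpy as np

def ca_step(state, rules):
    """One synchronous update of a 1D null-boundary CA whose cells each
    follow Wolfram rule 90 or rule 150."""
    state = np.asarray(state, dtype=np.uint8)
    left = np.concatenate(([0], state[:-1]))   # s_{i-1}, null boundary
    right = np.concatenate((state[1:], [0]))   # s_{i+1}, null boundary
    nxt = left ^ right                         # rule 90: XOR of neighbors
    mask = np.asarray(rules) == 150
    nxt[mask] ^= state[mask]                   # rule 150 also XORs the cell itself
    return nxt
```

For example, a single seed cell under rule 90 spreads to its two neighbors in one step, while under rule 150 it also survives in place.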

Now consider a null-boundary CA (NBCA). If an n-bit NBCA can generate a high-quality pseudo-noise sequence whose period is 2^n − 1, the n-bit CA is called a maximum-length CA [37]. Consider the characteristic matrix T of the CA; the next state s(k + 1) of the CA is given as follows:

$$s(k+1)=T(r_n)\,s(k) \pmod{2}.$$

The transition matrix T(rn) of such a CA is given by

$$T(r_n)=\begin{bmatrix}
r_1 & 1 & 0 & \cdots & 0 & 0\\
1 & r_2 & 1 & \cdots & 0 & 0\\
0 & 1 & r_3 & \cdots & 0 & 0\\
\vdots & & & \ddots & & \vdots\\
0 & 0 & \cdots & 1 & r_{n-1} & 1\\
0 & 0 & \cdots & 0 & 1 & r_n
\end{bmatrix}.$$

Each element of the diagonal vector signifies a linear rule according to

$$r_i=\begin{cases}0, & \text{rule } 90,\\ 1, & \text{rule } 150.\end{cases}$$
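The transition-matrix form above can be sketched and used to check whether a given rule vector is maximum-length. The 2-cell rule vector (150, 90) below is a toy example of ours: every nonzero state recurs with period 2^2 − 1 = 3, so that CA is maximum-length; an 8-cell rule vector can be checked the same way.

```python
import numpy as np

def transition_matrix(rules):
    """Tridiagonal GF(2) transition matrix of a null-boundary CA:
    sub- and super-diagonals are 1, diagonal r_i is 0 (rule 90) or 1 (rule 150)."""
    n = len(rules)
    T = np.diag([1 if r == 150 else 0 for r in rules]).astype(np.uint8)
    T += np.eye(n, k=1, dtype=np.uint8) + np.eye(n, k=-1, dtype=np.uint8)
    return T

def state_period(rules, seed):
    """Iterate s(k+1) = T(r_n) s(k) mod 2 until the seed state recurs."""
    T = transition_matrix(rules)
    seed = np.array(seed, dtype=np.uint8)
    state = seed.copy()
    for k in range(1, 2 ** len(rules) + 1):
        state = (T @ state) % 2
        if np.array_equal(state, seed):
            return k
    return None  # the seed never recurs (e.g. the all-zero state)
```

Iterating the state vector and reading out a fixed cell yields the pseudo-noise m-sequence used to build the random masks.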

The 1D and 2D CA m-sequences generated with two different groups of CA rules, rn and r′n, are shown in Fig. 3.

Fig. 3 The 1D case of the CA m-sequence with two groups CA rules.

In our study, we employ the lensless Fresnel transform [39–43] encoding method to encrypt images, and the operation is implemented on a computer. The two randomly generated CA masks (RGCMs), produced by two different groups of CA rules, are placed at different positions in the Fresnel domain. Suppose the two CA masks are represented by M1 and M2, respectively, and the distances between adjacent planes are z1 and z2. The input elemental image array E(x, y) is placed on the same plane as the first mask RGCM1, which is located on the input plane; the second mask RGCM2 is located on the transform plane. Under the Fresnel approximation, the complex amplitude μ(x′, y′) obtained in the transform plane is expressed as follows:

$$\mu(x',y')=\mathrm{FT}\{E(x,y)\oplus M_1(r_n)\}\times h(\hat{x},\hat{y};z_1;\lambda),$$
and the impulse response function can be written as:
$$h(\hat{x},\hat{y};z_1;\lambda)=\exp\!\left[j\pi\lambda z_1\left(\hat{x}^2+\hat{y}^2\right)\right],$$
where ⊕ denotes the exclusive OR operation, FT represents the Fourier transform, λ represents the operating wavelength, and x̂ and ŷ represent the spatial-frequency coordinates. For convenience, Eq. (9) can be rewritten as
$$\mu(x',y')=\mathrm{FT}_{\lambda z_1}\{E(x,y)\oplus M_1(r_n)\}.$$

The proposed encryption scheme involves four steps (see Fig. 4): 1) the input 3D images are captured in the form of an elemental image array by the proposed monospectral SAII system, 2) the captured elemental image array is modulated by the first mask RGCM1, 3) after traveling the distance z1, the field is modulated by RGCM2, and 4) the field propagates a further distance z2 to yield the encrypted image. The input image is perpendicularly illuminated by a plane wave with wavelength λ, and the encrypted image E′(x, y) is obtained at the output plane as follows:

$$E'(x,y)=\mathrm{FT}_{\lambda z_2}\{\mu(x',y')\oplus M_2(r'_n)\}.$$

The monospectral elemental image array decryption is the inverse propagation process, and the decrypted elemental image array is obtained through the following relation:

$$E(x,y)=\mathrm{FT}^{-1}_{\lambda z_1}\!\left\{\mathrm{FT}^{-1}_{\lambda z_2}\{E'(x,y)\}\oplus M_2(r'_n)\right\}\oplus M_1(r_n).$$
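The encryption and decryption relations above can be sketched numerically. Two simplifying assumptions are ours: the binary CA masks are applied as phase factors exp(jπM) rather than as a literal XOR on complex-valued data, and each Fresnel step uses the standard transfer-function (angular-spectrum) form, so propagating by −z undoes a propagation by +z exactly. Function names and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def fresnel(field, z, lam, dx):
    """Fresnel propagation over distance z (transfer-function form);
    a negative z exactly reverses a previous propagation."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * lam * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def encrypt(E, M1, M2, z1, z2, lam, dx):
    """RGCM1 mask -> propagate z1 -> RGCM2 mask -> propagate z2."""
    u = fresnel(E * np.exp(1j * np.pi * M1), z1, lam, dx)
    return fresnel(u * np.exp(1j * np.pi * M2), z2, lam, dx)

def decrypt(C, M1, M2, z1, z2, lam, dx):
    """Undo each encryption step in reverse order."""
    u = fresnel(C, -z2, lam, dx) * np.exp(-1j * np.pi * M2)
    return np.abs(fresnel(u, -z1, lam, dx) * np.exp(-1j * np.pi * M1))

# Round trip on a random test image with two random binary masks.
rng = np.random.default_rng(1)
E = rng.random((32, 32))
M1 = rng.integers(0, 2, (32, 32))
M2 = rng.integers(0, 2, (32, 32))
C = encrypt(E, M1, M2, z1=0.05, z2=0.10, lam=633e-9, dx=10e-6)
E_rec = decrypt(C, M1, M2, z1=0.05, z2=0.10, lam=633e-9, dx=10e-6)
```

With the correct masks and distances the round trip recovers the input to machine precision; a wrong RGCM2 or a wrong distance leaves the field scrambled after the final back-propagation.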

Finally, the 3D image at depth z can be reconstructed by an optical or computational integral imaging reconstruction algorithm. We assume that the pixel count of the reconstructed 3D image at depth z is the same as that of each elemental image, so the 3D image can be reconstructed by averaging the superimposed pixels from all of the elemental images. The reconstructed 3D image at distance z is calculated by the following equation:

$$R(x,y,z)=\frac{1}{N_o(x,y)}\sum_{k=0}^{K-1}\sum_{l=0}^{L-1}E_{kl}\!\left(x-k\,\frac{N_x\, p\, g}{c_x\, z},\; y-l\,\frac{N_y\, p\, g}{c_y\, z}\right),$$
where R(x, y, z) represents the pixel intensity of the digitally reconstructed monospectral 3D image at depth z, Ekl denotes the intensity of the monospectral elemental image in the kth column and lth row, Nx and Ny are the pixel counts of each monospectral elemental image, g is the focal length, p is the pitch of the micro-lens, cx and cy are the sizes of the imaging sensor, and No(x, y) denotes the overlapping-number matrix. The multispectral 3D image can be reconstructed by synthesizing the three spectral 3D reconstructions at depth z.
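The back-projection in the reconstruction equation above can be sketched as follows; the (K, L, Ny, Nx) array layout, the integer rounding of the disparity, and the boundary handling are our own assumptions.

```python
import numpy as np

def reconstruct(elemental, z, g, p, cx, cy):
    """Volume-pixel computational reconstruction at depth z: shift each
    elemental image by its parallax-dependent disparity, accumulate,
    and divide by the overlap count N_o(x, y).
    elemental: array of shape (K, L, Ny, Nx)."""
    K, L, Ny, Nx = elemental.shape
    acc = np.zeros((Ny, Nx))
    cnt = np.zeros((Ny, Nx))
    for k in range(K):
        for l in range(L):
            # disparity k*Nx*p*g/(cx*z), rounded to whole pixels
            sx = int(round(k * Nx * p * g / (cx * z)))
            sy = int(round(l * Ny * p * g / (cy * z)))
            if sx >= Nx or sy >= Ny:
                continue  # shifted image falls entirely outside the frame
            acc[sy:, sx:] += elemental[k, l, :Ny - sy, :Nx - sx]
            cnt[sy:, sx:] += 1
    return acc / np.maximum(cnt, 1)
```

Sweeping z refocuses the reconstruction: at the true object depth the shifted copies align and the object appears sharp, while objects at other depths are averaged into a smear.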

Fig. 4 The encryption process of the proposed method.

3. Simulation results and optical display

Simulation results are presented in this section to verify the improvement offered by our proposed method for 3D image encryption. The experimental setup is shown in Fig. 5. In this experiment, two datasets are used to validate the performance of the proposed method. One dataset comprises two objects, “Dices” and “Magic cube,” located at different distances from the imaging sensor plane and from the center of the imaging grid. The other dataset comprises the objects “Car” and “Magic cube.” 16 × 16 monospectral 2D elemental images are captured by moving the imaging sensor in equal steps of 8.5 mm in the horizontal and vertical directions. To demonstrate the security and efficiency of our algorithm, we use two groups of CA rules: (150, 150, 90, 150, 90, 150, 90, 150) and (150, 90, 150, 90, 90, 90, 150, 90). In this section, the proposed image cryptosystem is analyzed using different security measures, consisting of security analysis and robustness analysis; each is described in detail in the following subsections.

Fig. 5 Experimental setup of the 3D images pickup system.

Figure 6(a) shows the encryption result of the proposed method, and Fig. 6(b) shows the histogram of the encrypted image. The pixel distribution of the encrypted image indicates that the proposed method provides a high-quality encryption.

Fig. 6 Encryption result of the monospectral elemental image array and its histogram.

Figure 7 shows our proposed 3D reconstruction results: multispectral 3D slice images located at four different depths. The objects “Dices” and “Magic cube” can be clearly recognized at distances between 500 mm and 650 mm. Figure 8 shows another simulation result obtained with our proposed method. It is evident from Figs. 7 and 8 that at each distance only one of the objects is clearly in focus while the other objects appear smeared. Thereby, the correct depth must be used to clearly visualize a 3D object, so the depth information can act as a key value for the decryption process.

Fig. 7 Multispectral visualization of the reconstructed 3D images “Dices” and “Magic cube” from the calibrated monospectral elemental images at different depths (z).

Fig. 8 Multispectral visualization of the reconstructed 3D images “Car” and “Magic cube” from the calibrated monospectral elemental images at different depths (z).

To validate the proposed encryption algorithm, we perform a simulation experiment to compare the performance of the proposed method with the computational integral imaging (CII)-based method [15,19] and the SAII-based method [34,35]. In this simulation, for convenience, we only use the experimental dataset of “Dices” and “Magic cube” as the comparison object. Figures 9(a) and 9(b) show our proposed reconstructed 3D images at distances of 600 mm and 500 mm, respectively. Figures 9(c) and 9(d) show the reconstructed 3D images with the CII-based method at the same distances, and Figs. 9(e) and 9(f) those with the SAII-based method. From the results shown in Fig. 9, we infer that even though the SAII-based method provides better image quality than the CII-based method, obvious ringing artifacts remain in the reconstructed image due to the color crosstalk of the Bayer CFA. Our proposed method effectively solves this problem.

Fig. 9 Reconstructed 3D images with three methods: (a) and (b) our proposed reconstructed 3D images at the distance of 600 mm and 500 mm, respectively, (c) and (d) reconstructed 3D images with the CII-based method [19] at the distance of 600 mm and 500 mm, respectively, (e) and (f) reconstructed 3D images with the SAII-based method [35] at the distance of 600 mm and 500 mm, respectively.

Now we compare the performance of our proposed method with the previous work [33]. The monospectral image encryption method in [33] can only achieve 2D image encryption and reconstruction, because its reconstruction algorithm cannot realize 3D reconstruction. The iterative super-resolution reconstruction algorithm used in [33] is a 2D high-resolution reconstruction method: a high-resolution 2D image is reconstructed from a series of low-resolution images via the updated error function of the iterative algorithm. Figure 10 shows the 2D image reconstructed using the algorithm of [33]; Figs. 10(a) and 10(b) show the reconstructed images after the 6th and 15th iterations, respectively. From the results shown in Fig. 10, we can see that the previous work [33] cannot provide 3D depth information, although the 2D color image is reconstructed more clearly as the number of iterations increases. Meanwhile, the camera calibration algorithm is very important for 3D reconstruction. Figures 11(a) and 11(b) show the 3D images reconstructed from uncalibrated elemental images obtained with the capturing method of [33]. Because the captured elemental images are not correctly calibrated, unwanted parallaxes from other directions increase the crosstalk from the center of the projection plane.

Fig. 10 Reconstructed images of the previous method [33] with the different iterations: (a) 6th iteration, (b) 15th iteration.

Fig. 11 Reconstructed 3D image with the uncalibrated elemental images.

3.1. Key sensitive analysis

In this subsection, the security of our proposed method is discussed. A good cryptosystem should be sensitive to a small change in the secret keys, i.e., a small change in the secret keys results in a completely different decrypted image. The proposed encryption algorithm is sensitive to even a tiny change in the secret keys, as illustrated by the following example. Figure 12 shows the decrypted multispectral object slice images with an incorrect RGCM at different depths. The results reveal that decryption can be achieved only when all secret keys are correct. Meanwhile, the key space of the proposed encryption method is large enough to resist exhaustive attack: besides the initial conditions and the Fresnel transform parameters, the reconstruction depths and the gateway values of the CA also contribute to the key space. For an N-cell, dual-state, m-site-neighborhood 1D CA, a code breaker must contend with searching through 2^(2^m) rules, 2^N initial configurations, and 2^(2N) boundary configurations. Compared with the Fresnel transform encoding algorithm [41–43], our encryption method therefore adds at least 2^(2^m) × 2^N × 2^(2N) candidate keys, and the depth information can be considered an additional security key.
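For the 8-bit CA used in this work (N = 8, m = 3), the extra CA key space works out as follows; the arithmetic is a direct restatement of the counts above.

```python
# Key space of the CA component alone for an N-cell, dual-state,
# m-site-neighborhood 1D CA (N = 8, m = 3 as used in this paper).
N, m = 8, 3
rules = 2 ** (2 ** m)        # 2^(2^3) = 256 possible Wolfram rules
initials = 2 ** N            # 2^8 = 256 initial configurations
boundaries = 2 ** (2 * N)    # 2^16 = 65536 boundary configurations
key_space = rules * initials * boundaries
print(key_space)             # 4294967296 = 2^32 extra keys from the CA alone
```

These 2^32 CA keys multiply the key space contributed by the Fresnel parameters (λ, z1, z2) and the reconstruction depths.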

Fig. 12 Reconstructed 3D image with partially incorrect keys (incorrect RGCM).

3.2. Robustness analysis

An effective image encryption system needs to tolerate a certain amount of noise attack. We check the tolerance of the encrypted image against data loss and noise attacks, and perform a simulation experiment to compare the performance of the proposed encryption method with the CII-based encryption method [19]. Figure 13 shows the simulation results for the 3D images “Dices” and “Magic cube” with our proposed method. Figures 13(a) and 13(b) show the decrypted 3D images when the encrypted images are attacked by Gaussian noise with a variance of 0.1. Figures 13(c) and 13(d) show the decrypted 3D images under occlusion attack, where 50% of the pixels of the encrypted image are occluded. Figure 14 shows another simulation result, for the 3D images “Car” and “Magic cube,” with our method: Figs. 14(a) and 14(b) show our reconstructed 3D images under the Gaussian noise attack, and Figs. 14(c) and 14(d) under the occlusion attack. Figure 15 shows the simulation results for “Dices” and “Magic cube” with the method in [19]: Figs. 15(a) and 15(b) show the reconstructed 3D images of the method [19] under the Gaussian noise attack, and Figs. 15(c) and 15(d) under the occlusion attack. Figure 16 shows the corresponding results for “Car” and “Magic cube” with the method in [19]. The results clearly demonstrate that the proposed method outperforms the method in [19]: even though the encrypted images are badly damaged, the 3D images of the proposed method can still be successfully reconstructed at the different depths. In other words, our method provides high robustness when the encrypted images suffer severe data loss. Meanwhile, the peak signal-to-noise ratio (PSNR) is used to quantitatively measure the quality of the reconstructed images against noise; it is calculated between the reconstructed slice image without noise and the reconstructed one with noise.
The calculated PSNR values for the two encryption methods are recorded in Tables 1 and 2; the higher the PSNR value, the better the robustness.
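The PSNR used in Tables 1 and 2 can be computed as follows; the peak value of 255 assumes 8-bit images, and the helper name is ours.

```python
import numpy as np

def psnr(ref, deg, peak=255.0):
    """PSNR in dB between the noise-free reconstructed slice (ref) and
    the degraded one (deg); 'peak' is the maximum possible pixel value."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(deg, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Identical images give an infinite PSNR, and a maximally wrong 8-bit image gives 0 dB, so higher values indicate reconstructions closer to the noise-free reference.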

Fig. 13 Our reconstructed 3D images “Dices” and “Magic cube” against attacks: (a) Gaussian noise (0.1) and the reconstruction depth 650 mm, (b) Gaussian noise (0.1) and the reconstruction depth 500 mm, (c) cropping attack (50%) and the reconstruction depth 650 mm, (d) cropping attack (50%) and the reconstruction depth 500 mm.

Fig. 14 Our reconstructed 3D images “Car” and “Magic cube” against attacks: (a) Gaussian noise (0.1) and the reconstruction depth 650 mm, (b) Gaussian noise (0.1) and the reconstruction depth 500 mm, (c) cropping attack (50%) and the reconstruction depth 650 mm, (d) cropping attack (50%) and the reconstruction depth 500 mm.

Fig. 15 Reconstructed 3D images “Dices” and “Magic cube” of the method [19] against attacks: (a) Gaussian noise (0.1) and the reconstruction depth 650 mm, (b) Gaussian noise (0.1) and the reconstruction depth 500 mm, (c) cropping attack (50%) and the reconstruction depth 650 mm, (d) cropping attack (50%) and the reconstruction depth 500 mm.

Fig. 16 Reconstructed 3D images “Car” and “Magic cube” of the method [19] against attacks: (a) Gaussian noise (0.1) and the reconstruction depth 650 mm, (b) Gaussian noise (0.1) and the reconstruction depth 500 mm, (c) cropping attack (50%) and the reconstruction depth 650 mm, (d) cropping attack (50%) and the reconstruction depth 500 mm.

Table 1. PSNR values of 3D images “Dices” and “Magic cube” with two different methods.

Table 2. PSNR values of 3D images “Car” and “Magic cube” with two different methods.

3.3. Optical display analysis

In this subsection, with the calibrated elemental image array, the 3D images are optically reconstructed on an integral imaging display device. The reconstructed 3D images can be observed from different directions: as shown in Fig. 17, they are observed from the left, front, and right views.

Fig. 17 Different views of the optical reconstructed 3D images. In the video (see Visualization 1), we show the optical reconstruction of 3D images with different views.

4. Conclusion

In conclusion, a 3D image encryption method has been proposed by combining monospectral synthetic aperture integral imaging with a CA-based Fresnel transform encoding algorithm. The proposed method has been shown to be effective in realizing 3D image encryption and multispectral image reconstruction, and it improves security owing to the large CA key space in the decryption process. We have also tested the performance of the proposed method using different security measures. The simulation results confirm the security and robustness of our proposed method under practical conditions.

Funding

National Key R and D Program of China under Grant No. 2017YFB1002900; National Natural Science Foundation of China (NSFC) (61705146, 61535007); Equipment Research Program in Advance of China (JZX2016-0606/Y267); Fundamental Research Funds for the central Universities (YJ201637).

References and links

1. S. Liu, J. Xu, Y. Zhang, L. Chen, and C. Li, “General optical implementations of fractional Fourier transforms,” Opt. Lett. 20(9), 2088–2090 (2007).

2. W. Chen, B. Javidi, and X. Chen, “Advances in optical security systems,” Adv. Opt. Photonics 6(2), 120–155 (2014).

3. A. Alfalou and C. Brosseau, “Dual encryption scheme of images using polarized light,” Opt. Lett. 35(13), 2185–2187 (2010).

4. Z. Liu and S. Liu, “Random fractional Fourier transform,” Opt. Lett. 32(15), 1053–1055 (1995).

5. Y. Shi, T. Li, Y. Wang, Q. Gao, S. Zhang, and H. Li, “Optical image encryption via ptychography,” Opt. Lett. 38(9), 1425–1427 (2013).

6. Z. Liu, Q. Guo, L. Xu, M. Ahmad, and S. Liu, “Double image encryption by using iterative random binary encoding in gyrator domains,” Opt. Express 18(11), 12033–12043 (2010).

7. W. Qin and X. Peng, “Asymmetric cryptosystem based on phase-truncated Fourier transforms,” Opt. Lett. 35(2), 581–583 (2008).

8. Y. Qin, Q. Gong, Z. Wang, and H. Wang, “Optical multiple-image encryption in diffractive-imaging-based scheme using spectral fusion and nonlinear operation,” Opt. Express 24(23), 26877–26886 (2016).

9. D. Kong, L. Cao, G. Jin, and B. Javidi, “Three-dimensional scene encryption and display based on computer-generated holograms,” Appl. Opt. 55(29), 8296–8300 (2016).

10. A. Alfalou and C. Brosseau, “Optical image compression and encryption methods,” Adv. Opt. Photonics 1(3), 589–636 (2009).

11. X. Li, D. Xiao, and Q. Wang, “Error-free holographic frames encryption with CA pixel-permutation encoding algorithm,” Opt. Laser Eng. 100, 200–207 (2018).

12. X. Wang and D. Zhao, “Amplitude-phase retrieval attack free cryptosystem based on direct attack to phase-truncated Fourier-transform-based encryption using a random amplitude mask,” Opt. Lett. 38(18), 3684–3686 (2013).

13. W. Xu, H. Xu, Y. Luo, T. Li, and Y. Shi, “Optical watermarking based on single-shot-ptychography encoding,” Opt. Express 24(24), 27922–27936 (2016).

14. N. Zhou, S. Pan, S. Cheng, and Z. Zhou, “Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing,” Opt. Laser Technol. 82, 121–133 (2016).

15. S. Hong, J. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12(3), 483–491 (2004).

16. D. Shin and H. Yoo, “Image quality enhancement in 3D computational integral imaging by use of interpolation methods,” Opt. Express 15(19), 12039–12049 (2007).

17. M. Cho and B. Javidi, “Three-dimensional photon counting double-random-phase encryption,” Opt. Lett. 38(17), 3198–3201 (2013).

18. H. Yoo, “Axially moving a lenslet array for high-resolution 3D images in computational integral imaging,” Opt. Express 21(7), 8873–8878 (2013).

19. X. Li and I. Lee, “Modified computational integral imaging-based double image encryption using fractional Fourier transform,” Opt. Laser Eng. 66, 112–121 (2015).

20. J. Kim, J. Jung, Y. Jeong, K. Hong, and B. Lee, “Real-time integral imaging system for light field microscopy,” Opt. Express 22(9), 10210–10220 (2014).

21. X. Li and I. Lee, “Robust copyright protection using multiple ownership watermarks,” Opt. Express 23(3), 3035–3046 (2015).

22. A. Markman, J. Wang, and B. Javidi, “Three-dimensional integral imaging displays using a quick-response encoded elemental image array,” Optica 5(1), 332–335 (2014).

23. Y. Chen, X. Wang, J. Zhang, S. Yu, Q. Zhang, and B. Guo, “Resolution improvement of integral imaging based on time multiplexing sub-pixel coding method on common display panel,” Opt. Express 22(15), 17897–17907 (2014).

24. Y. Wang, Y. Shen, Y. Lin, and B. Javidi, “Extended depth-of-field 3D endoscopy with synthetic aperture integral imaging using an electrically tunable focal-length liquid-crystal lens,” Opt. Lett. 40(15), 3564–3567 (2015).

25. A. Stern and B. Javidi, “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42(35), 7036–7042 (2003).

26. J. Wang, X. Xiao, and B. Javidi, “Three-dimensional integral imaging with flexible sensing,” Opt. Lett. 39(24), 6855–6858 (2014).

27. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52(4), 546–560 (2013).

28. M. Cho and B. Javidi, “Three-dimensional photon counting double-random-phase encryption,” Opt. Lett. 38(17), 3198–3201 (2013).

29. X. Wang, W. Chen, and X. Chen, “Fractional Fourier domain optical image hiding using phase retrieval algorithm based on iterative nonlinear double random phase encoding,” Opt. Express 22(19), 22981–22995 (2014). [CrossRef]   [PubMed]  

30. X. Peng, P. Zhang, H. Wei, and B. Yu, “Known-plaintext attack on optical encryption based on double random phase keys,” Opt. Lett. 31(8), 1044–1046 (2006). [CrossRef]   [PubMed]  

31. R. Tao, J. Lang, and Y. Wang, “Optical image encryption based on the multiple-parameter fractional Fourier transform,” Opt. Lett. 33(6), 581–583 (2008). [CrossRef]   [PubMed]  

32. I. Muniraj, B. Kim, and B. Lee, “Encryption and volumetric 3D object reconstruction using multispectral computational integral imaging,” Appl. Opt. 53(27), G25–G32 (2014). [CrossRef]   [PubMed]  

33. X. Li, M. Zhao, Y. Xing, L. Li, S. Kim, X. Zhou, and Q. Wang, “Optical encryption via monospectral integral imaging,” Opt. Express 25(25), 31516–31527 (2017). [CrossRef]   [PubMed]  

34. Z. Xiong, Q. Wang, Y. Xing, H. Deng, and D. Li, “An active integral imaging system based on multiple structured light method,” Opt. Express 23(21), 27095–27104 (2015). [CrossRef]  

35. Y. Xing, Q. Wang, Z. Xiong, and H. Deng, “Encrypting three-dimensional information system based on integral imaging and multiple chaotic maps,” Opt. Eng. 55(2), 023107 (2016). [CrossRef]  

36. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

37. S. Cho, U. Choi, H. Kim, Y. Hwang, J. Kim, and S. Heo, “New synthesis of one-dimensional 90/150 linear hybrid group cellular automata,” IEEE Trans. Comput. AID D. 26(9), 1720–1724 (2007). [CrossRef]  

38. X. Li, S. Kim, and Q. Wang, “Copyright protection for elemental image array by hypercomplex Fourier transform and an adaptive texturized holographic algorithm,” Opt. Express 25(15), 17076–17098 (2017). [CrossRef]   [PubMed]  

39. W. Chen, X. Chen, and C. JR Sheppard, “Optical color-image encryption and synthesis using coherent diffractive imaging in the Fresnel domain,” Opt. Express 20(4), 3853–3865 (2012). [CrossRef]   [PubMed]  

40. G. Situ and J. Zhang, “Double random-phase encoding in the Fresnel domain,” Opt. Lett. 29(14), 1584–1586 (2004). [CrossRef]   [PubMed]  

41. L. Chen and D. Zhao, “Optical color image encryption by wavelength multiplexing and lensless Fresnel transform holograms,” Opt. Express 14(19), 8552–8560 (2006). [CrossRef]   [PubMed]  

42. G. Situ and J. Zhang, “Multiple-image encryption by wavelength multiplexing,” Opt. Lett. 30(11), 1306–1308 (2005). [CrossRef]   [PubMed]  

43. X. Peng, H. Wei, and P. Zhang, “Chosen-plaintext attack on lensless double-random phase encoding in the Fresnel domain,” Opt. Lett. 15(31), 3261–3263 (2006). [CrossRef]  

Supplementary Material (1)

Visualization 1: Optical 3D scene reconstruction by the improved integral imaging method.



Figures (17)

Fig. 1 Capturing process of the proposed monospectral SAII system.
Fig. 2 The captured “chessboard” patterns and the calibrated elemental image array.
Fig. 3 The 1D case of the CA m-sequence with two groups of CA rules.
Fig. 4 The encryption process of the proposed method.
Fig. 5 Experimental setup of the 3D image pickup system.
Fig. 6 Encryption result of the monospectral elemental image array and its histogram.
Fig. 7 Multispectral visualization of the reconstructed 3D images “Dices” and “Magic cube” from the calibrated monospectral elemental images at different depths (z).
Fig. 8 Multispectral visualization of the reconstructed 3D images “Car” and “Magic cube” from the calibrated monospectral elemental images at different depths (z).
Fig. 9 Reconstructed 3D images with three methods: (a) and (b) our proposed method at distances of 600 mm and 500 mm, respectively; (c) and (d) the CII-based method [19] at 600 mm and 500 mm, respectively; (e) and (f) the SAII-based method [35] at 600 mm and 500 mm, respectively.
Fig. 10 Reconstructed images of the previous method [33] at different iterations: (a) 6th iteration, (b) 15th iteration.
Fig. 11 Reconstructed 3D image with the uncalibrated elemental images.
Fig. 12 Reconstructed 3D image with partially incorrect keys (incorrect RGCM).
Fig. 13 Our reconstructed 3D images “Dices” and “Magic cube” under attacks: (a) Gaussian noise (0.1) at reconstruction depth 650 mm, (b) Gaussian noise (0.1) at 500 mm, (c) cropping attack (50%) at 650 mm, (d) cropping attack (50%) at 500 mm.
Fig. 14 Our reconstructed 3D images “Car” and “Magic cube” under attacks: (a) Gaussian noise (0.1) at reconstruction depth 650 mm, (b) Gaussian noise (0.1) at 500 mm, (c) cropping attack (50%) at 650 mm, (d) cropping attack (50%) at 500 mm.
Fig. 15 Reconstructed 3D images “Dices” and “Magic cube” of the method in [19] under attacks: (a) Gaussian noise (0.1) at reconstruction depth 650 mm, (b) Gaussian noise (0.1) at 500 mm, (c) cropping attack (50%) at 650 mm, (d) cropping attack (50%) at 500 mm.
Fig. 16 Reconstructed 3D images “Car” and “Magic cube” of the method in [19] under attacks: (a) Gaussian noise (0.1) at reconstruction depth 650 mm, (b) Gaussian noise (0.1) at 500 mm, (c) cropping attack (50%) at 650 mm, (d) cropping attack (50%) at 500 mm.
Fig. 17 Different views of the optically reconstructed 3D images; Visualization 1 shows the optical reconstruction with different views.

Tables (2)


Table 1 PSNR values of 3D images “Dices” and “Magic cube” with two different methods.


Table 2 PSNR values of 3D images “Car” and “Magic cube” with two different methods.

Equations (14)


$$H_{m,n} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}.$$
$$\begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}_{m,n} = s_{m,n}\, H_{m,n} \begin{bmatrix} x_k \\ y_k \\ 1 \end{bmatrix}_{m,n},$$
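Eqs. (1) and (2) re-project the pixels of each elemental image through a per-camera homography. A minimal NumPy sketch of the point mapping, using a hypothetical matrix `H` in place of the homography the paper estimates from the captured “chessboard” patterns via Zhang's method [36]:

```python
import numpy as np

# Hypothetical 3x3 homography H_{m,n} for one camera of the array; in the
# paper this matrix is estimated from the "chessboard" calibration patterns.
H = np.array([[1.0,  0.02,  5.0],
              [0.01, 1.0,  -3.0],
              [0.0,  0.0,   1.0]])

def warp_point(H, xk, yk):
    """Eq. (2): homogeneous mapping [xi, yi, 1]^T ~ s * H [xk, yk, 1]^T."""
    p = H @ np.array([xk, yk, 1.0])
    return p[0] / p[2], p[1] / p[2]   # divide out the scale factor s_{m,n}

xi, yi = warp_point(H, 10.0, 20.0)
print(xi, yi)
```

In practice the whole elemental image would be warped (e.g. by applying this mapping to every pixel coordinate) before the encryption stage.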
$$s_i(k+1) = F\left(s_{i-1}(k),\, s_i(k),\, s_{i+1}(k)\right),$$
$$\text{Rule 90: } s_i(k+1) = s_{i-1}(k) \oplus s_{i+1}(k),$$
$$\text{Rule 150: } s_i(k+1) = s_{i-1}(k) \oplus s_i(k) \oplus s_{i+1}(k).$$
$$s(k+1) = T(r_n)\, s(k).$$
$$T(r_n) = \begin{bmatrix} r_1 & 1 & 0 & \cdots & 0 & 0 \\ 1 & r_2 & 1 & \cdots & 0 & 0 \\ 0 & 1 & r_3 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & \cdots & r_{n-2} & 1 & 0 \\ 0 & 0 & \cdots & 1 & r_{n-1} & 1 \\ 0 & 0 & \cdots & 0 & 1 & r_n \end{bmatrix}$$
$$r_n = \begin{cases} 0, & \text{rule 90} \\ 1, & \text{rule 150} \end{cases}$$
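The 90/150 hybrid CA of Eqs. (3)–(8) can be sketched directly from the rule definitions. The 4-cell rule vector ⟨150, 150, 90, 150⟩ below is an illustrative assumption, not taken from the paper: its transition matrix has the primitive characteristic polynomial x⁴ + x³ + 1, so this small register is maximum-length, while the paper's randomly generated maximum-length CA is of course larger.

```python
import numpy as np

def step_90_150(state, rules):
    """One CA update (Eqs. 4-5) with null boundary cells: rule 90 XORs the
    two neighbors; rule 150 additionally XORs the cell itself."""
    left = np.roll(state, 1)
    left[0] = 0
    right = np.roll(state, -1)
    right[-1] = 0
    return left ^ right ^ (rules & state)   # rules[i]: 0 -> rule 90, 1 -> rule 150

# Illustrative 4-cell maximum-length hybrid: rule vector <150, 150, 90, 150>.
rules = np.array([1, 1, 0, 1])
state = np.array([0, 0, 0, 1])
seen = set()
for _ in range(2**4 - 1):
    seen.add(tuple(state))
    state = step_90_150(state, rules)
print(len(seen))   # all 15 nonzero states are visited -> an m-sequence
```

Any single cell of this register then traces out the m-sequence used as the key stream.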
$$\mu(x,y) = \mathrm{FT}\{E(x,y)\, M_1(r_n)\} \times h(\hat{x}, \hat{y}; z_1; \lambda),$$
$$h(\hat{x}, \hat{y}; z_1; \lambda) = \exp\!\left[\frac{j\pi}{\lambda z_1}\left(\hat{x}^2 + \hat{y}^2\right)\right],$$
$$\mu(x,y) = \mathrm{FT}_{\lambda z_1}\{E(x,y)\, M_1(r_n)\}.$$
$$E'(x,y) = \mathrm{FT}_{\lambda z_2}\{\mu(x,y)\, M_2(r_n)\}.$$
$$E(x,y) = \mathrm{FT}_{\lambda z_1}\bigl\{\mathrm{FT}_{\lambda z_2}\{E'(x,y)\, M_2(r_n)\}\, M_1(r_n)\bigr\}.$$
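Eqs. (9)–(13) cascade two Fresnel propagations with phase masks. The sketch below is a discrete stand-in, not the paper's exact implementation: angular-spectrum propagation plays the role of the FT_{λz} operator, generic seeded pseudorandom phases stand in for the CA-derived masks M1(r_n) and M2(r_n), and decryption reverses the propagation distances and conjugates the masks.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
E = rng.random((N, N))                       # stand-in monospectral image data

# Pseudorandom unit-modulus phase masks standing in for the CA-derived masks.
M1 = np.exp(2j * np.pi * rng.random((N, N)))
M2 = np.exp(2j * np.pi * rng.random((N, N)))

def fresnel(u, wl, z, dx=10e-6):
    """Discrete Fresnel propagation over distance z (angular-spectrum form),
    a stand-in for the FT_{lambda z} operator of Eqs. (11)-(12)."""
    fx = np.fft.fftfreq(u.shape[0], dx)
    FX, FY = np.meshgrid(fx, fx)
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2)))

wl, z1, z2 = 532e-9, 0.10, 0.05
mu = fresnel(E * M1, wl, z1)                 # Eq. (11): first masked propagation
Ec = fresnel(mu * M2, wl, z2)                # Eq. (12): ciphertext E'(x, y)

# Decryption: back-propagate and strip the conjugate masks (cf. Eq. 13).
mu_rec = fresnel(Ec, wl, -z2) * np.conj(M2)
E_rec = fresnel(mu_rec, wl, -z1) * np.conj(M1)
print(np.allclose(E_rec, E))                 # exact keys recover the image
```

Because the discrete propagation is exactly invertible, only the correct mask/distance/wavelength keys recover E; wrong keys leave speckle-like noise.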
$$R(x,y,z) = \frac{1}{N_o(x,y)} \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} E_{kl}\!\left(x - k\,\frac{N_x \times p}{c_x \times z/g},\; y - l\,\frac{N_y \times p}{c_y \times z/g}\right),$$
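The volume-pixel computational reconstruction of Eq. (14) amounts to a shift-and-average back-projection. A minimal sketch, with hypothetical pickup parameters (g, p, c) standing in for the experimental geometry; the toy check only verifies the bookkeeping, since identical elemental images must reconstruct to a constant plane at any depth.

```python
import numpy as np

def reconstruct(eis, z, g=50.0, p=2.0, c=36.0):
    """Eq. (14): back-project each elemental image E_kl with the
    depth-dependent shift k*Nx*p/(c*z/g) (and its y counterpart),
    then divide by the overlap map N_o(x, y)."""
    K, L, Ny, Nx = eis.shape
    sk = Nx * p / (c * z / g)                # x-shift per camera index k (pixels)
    sl = Ny * p / (c * z / g)                # y-shift per camera index l (pixels)
    Hc = Ny + int(round((L - 1) * sl))
    Wc = Nx + int(round((K - 1) * sk))
    acc = np.zeros((Hc, Wc))
    cnt = np.zeros((Hc, Wc))                 # the overlap map N_o(x, y)
    for k in range(K):
        for l in range(L):
            x0, y0 = int(round(k * sk)), int(round(l * sl))
            acc[y0:y0 + Ny, x0:x0 + Nx] += eis[k, l]
            cnt[y0:y0 + Ny, x0:x0 + Nx] += 1
    return acc / np.maximum(cnt, 1)

# Toy check: identical elemental images reconstruct to a constant plane.
eis = np.ones((3, 3, 8, 8))
R = reconstruct(eis, z=600.0)
print(R.min(), R.max())
```

Sweeping z refocuses the scene: objects at depth z add coherently across the shifted images, while objects at other depths blur out.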