
Optical 3D object security and reconstruction using pixel-evaluated integral imaging algorithm

Open Access

Abstract

Under the framework of computational integral imaging, an optical 3D object security and high-quality reconstruction method based on a pixel-evaluated mapping (PEM) algorithm is proposed. In this method, the pixel crosstalk caused by the overlap of non-effective pixels is effectively reduced by a pixel-evaluated mask, which improves the image quality of the reconstructed 3D objects. Meanwhile, compared with other computational integral imaging reconstruction methods, the proposed PEM algorithm obtains more accurate pixel mapping weight parameters, so the reconstructed 3D objects have higher quality. In addition, a non-linear feedback shift register cellular automata algorithm is proposed to increase the security of the method. We have experimentally verified the proposed 3D object encryption and reconstruction algorithm. The experimental results show that the proposed method is superior to other computational reconstruction methods.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical encryption methods have shown great potential in the security field due to their high speed, parallelism, and multidimensional capabilities [1–5]. Many optical techniques, such as ghost imaging, holography [6,7], the Fresnel transform, and the Fourier transform, are widely used for optical encryption [8–14]. Among them, integral imaging (II) is a popular three-dimensional (3D) imaging technology that is considered the next generation of 3D display techniques [15–20]. Many studies on II have been carried out, addressing topics such as parallax extraction and stereo matching [21–24]. Conventional II is a 3D imaging technology that captures two-dimensional (2D) information and displays it optically: various 2D images with different viewing angles are captured from a 3D object by a micro-lens array or a digital camera array. The acquired 2D images are called elemental image arrays (EIAs). Owing to the distributed memory characteristics of these image arrays, the method has attracted much attention in the field of information security. A complete conventional integral imaging system comprises two processes, recording and reconstruction. During recording, a series of small elemental images is captured through the lenslet array and a CCD camera. During reconstruction, the light rays of the elemental images are back-propagated through the lenslet array, which reconstructs the original 3D objects [25–28]. However, this method still has some problems; the main one is that the quality of the reconstructed objects is degraded by light diffraction and device limitations [29–31].

To overcome these difficulties, the authors of [32] proposed a computer-generated integral imaging (CGII) method. In this method, based on ray optics, the original 3D objects can be recorded and reconstructed computationally with a virtual pinhole array [33–37]. In the reconstruction process, each recorded elemental image is inversely projected onto the display plane and inversely magnified according to the magnification ratio. Since neither diffraction limits nor physical optical devices are involved, this method can reconstruct the original image with high quality, and its performance in image encryption exceeds that of conventional integral imaging. However, the CGII technique still has some problems. One is that the image quality of the 3D object is limited by the reconstruction distance: because the CGII reconstruction algorithm is a pixel inverse-magnification method, the superimposed pixel area grows as the magnification ratio increases, degrading the quality of the reconstructed 3D objects.

In previous work, to address the low quality of reconstructed 3D objects, a direct smart pixel mapping algorithm for 3D imaging was proposed [37]. The smart mapping in this method is a depth-conversion process, which converts elemental images recorded at a long distance into elemental images recorded near the pinhole array. Because the algorithm attenuates the interference caused by reducing the magnification factor, the visual quality of the restored 3D objects is increased. To further eliminate the noise caused by the overlap of inversely mapped elemental images, the authors of [38] introduced a depth-converted EIA to improve the visual quality of 3D objects; this method provides higher visual quality for restored 3D objects than conventional computational integral imaging reconstruction (CIIR) algorithms. However, in CGII, the mutual superposition of non-effective pixels of the elemental images during the pixel mapping process still causes pixel crosstalk, reducing the visual quality of the reconstructed 3D objects.

To improve the quality of the reconstructed 3D objects, the main motivation of this work is to design a new pixel mapping model that controls the mapping strength of important and less important pixels of the elemental images. In the elemental images, the pixels located on the 3D objects are defined as important pixels, and the pixels located in other regions (e.g., the background) are defined as less important pixels. In this work, we generate pixel-evaluated masks with the proposed consistent weight detection (CWD) algorithm, which effectively improves the image quality of the 3D objects reconstructed by the proposed cryptographic system. The main innovation of our work is that the proposed PEM algorithm integrates the spatial features of each elemental image with the motion features between adjacent elemental images, which allows us to obtain the pixel mapping weight parameters more accurately. Therefore, our approach provides considerable visual quality for the reconstructed 3D objects. To the best of our knowledge, this is the first work to implement pixel-evaluated masks for 3D object reconstruction by using the CWD algorithm. We point out that the proposed 3D object reconstruction and encryption algorithm differs from our earlier work [39], in which a saliency map was used to determine the embedding parameters of a watermark and the aim was to protect the copyright of holographic videos.

2. Proposed method

2.1. 3D object reconstruction with the pixel-evaluated mapping algorithm

In integral imaging, a 3D object can be reconstructed by a computational reconstruction algorithm. In this method, each elemental image, corresponding to a different magnification factor, is projected and enlarged through a virtual pinhole array. Changing the reconstruction distance causes the enlarged and projected elemental images to be superimposed; that is, each pixel of the final reconstructed 3D object is the superimposed value of pixels from the elemental images. Meanwhile, in the pixel mapping process, the non-effective (less important) pixels of the elemental images are superimposed on each other, resulting in pixel crosstalk that decreases the visual quality of the restored 3D object. Therefore, finding pixel mapping weights that control the mapping strength of the important and unimportant pixels so as to improve the visual quality of the reconstructed 3D object is a challenging task, and it prompted us to design a new model to determine the strength of each pixel mapping.

Our proposed PEM algorithm integrates the static features from each elemental image itself and the motion features from adjacent elemental images, which enables the method to extract more features of the 3D objects. Figure 1 shows the process of generating the pixel-evaluated mask with the CWD algorithm. First, the input elemental images are over-segmented into many superpixels, i.e., sub-regions of the image that represent local structural features, and the spatial edges within each elemental image and the parallax motion boundary edges between adjacent elemental images are computed. Then, from the obtained spatial and motion edges, the probability map is calculated. Finally, the pixel-evaluated mask is calculated from the geodesic distance to evaluate the foreground and background regions. The details of the proposed PEM algorithm are as follows. First, we compute the superpixels of each elemental image. This process effectively simplifies and ignores unnecessary details while retaining the initial structural elements of the elemental image; the boundaries between superpixels consist of strong edges or outlines in the elemental image. Each elemental image $p^k$ is over-segmented into a set of superpixels $R^k = \{r_1^k, r_2^k, \dots, r_n^k\}$ using the SLIC algorithm [40]. Then the edge probability map $M_e^k(x_i^k)$ of the $k$-th elemental image $p^k$ at pixel $x_i^k$ is calculated with the method of [41]. The optical flow $O^k$ between adjacent elemental images is obtained by the method of [42], and the corresponding gradient magnitude $M_o^k$ is calculated from the optical flow as $M_o^k = \|\nabla O^k\|$. Given the pixel edge map $M_e^k$, the edge probability of each superpixel is generated by averaging the pixels with the largest edge probabilities, yielding the superpixel edge map $\hat{M}_e^k$. In the same way, we obtain the superpixel optical-flow map $\hat{M}_o^k$, and the edge probability $M^k$ is produced by:

Fig. 1 Analysis of the proposed pixel mapping strength calculated by the CWD algorithm.

$$M^k = \hat{M}_e^k \times \hat{M}_o^k.$$
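To make this step concrete, the following Python sketch assembles $M^k$ for one pair of adjacent elemental images. It is a minimal sketch under stated assumptions, not the authors' implementation: Sobel edges stand in for the boundary detector of [41], OpenCV's Farnebäck flow stands in for the large-displacement flow of [42], and `n_segments` and `top_frac` are illustrative parameters.

```python
import cv2
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import sobel
from skimage.segmentation import slic

def superpixel_edge_probability(ei, ei_next, n_segments=200, top_frac=0.1):
    """Sketch of M^k = M_e^k-hat x M_o^k-hat for one elemental image pair."""
    gray = rgb2gray(ei)
    labels = slic(ei, n_segments=n_segments, start_label=0)  # SLIC superpixels [40]
    m_e = sobel(gray)                                        # spatial edge probability
    flow = cv2.calcOpticalFlowFarneback(
        (gray * 255).astype(np.uint8),
        (rgb2gray(ei_next) * 255).astype(np.uint8),
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)                       # optical-flow magnitude O^k
    m_o = sobel(mag / (mag.max() + 1e-8))                    # motion boundary edges

    m_hat_e = np.zeros_like(m_e)
    m_hat_o = np.zeros_like(m_o)
    for lab in np.unique(labels):
        mask = labels == lab
        k = max(1, int(top_frac * mask.sum()))               # strongest edges per superpixel
        m_hat_e[mask] = np.sort(m_e[mask])[-k:].mean()
        m_hat_o[mask] = np.sort(m_o[mask])[-k:].mean()
    return m_hat_e * m_hat_o, labels                         # M^k and the SLIC labels
```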

According to the pixel-mapping-strength calculation process shown in Fig. 1, we obtain the edge probability by combining the spatial edges and the parallax motion boundary edges. However, the generated edge map only identifies the foreground location of the object. Therefore, we employ the geodesic distance to calculate a rough object probability map, which highlights foreground positions with high edge values. The geodesic distance $D_g(n_1, n_2, G)$ between nodes $n_1$ and $n_2$ in a graph $G$ is the smallest integral of a weight function $W$ along a path connecting them, and it can be calculated by:

$$D_g(n_1, n_2, G) = \min_{V_{n_1,n_2}} \int_{n_1}^{n_2} \left| W(z) \cdot \dot{V}_{n_1,n_2}(z) \right| dz,$$
where $V_{n_1,n_2}$ denotes a path between the nodes $n_1$ and $n_2$.

For each elemental image, we construct an undirected weighted graph $G^k = \{N^k, E^k\}$, where the superpixels $R^k$ act as the nodes $N^k$ and the links between neighboring nodes form the edges $E^k$. Based on this graph structure, the weight $W^k$ between adjacent superpixels $r_m^k$ and $r_n^k$ is given by:

$$W_{mn}^k = M^k(r_m^k) \cdot M^k(r_n^k),$$
where $M^k(r_m^k)$ and $M^k(r_n^k)$ are the boundary probabilities of $r_m^k$ and $r_n^k$, respectively. The probability $M^k(r_n^k)$ is calculated as the minimum geodesic distance to the image boundary:
$$M^k(r_n^k) = \min_{q \in Q^k} D_g(r_n^k, q, G^k),$$
where $Q^k$ denotes the superpixels along the four boundaries of the elemental image $p^k$.
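Since the superpixel graph is discrete, the geodesic distance reduces to a shortest-path computation. The sketch below, reusing the $M^k$ map and SLIC labels from the previous snippet, evaluates $M^k(r_n^k)$ with SciPy's Dijkstra solver; the 4-neighbour adjacency test and the per-superpixel averaging are illustrative choices, not the paper's exact construction.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def boundary_probability(m_k, labels):
    """Sketch of M^k(r_n^k): min geodesic distance to the boundary superpixels Q^k."""
    n = labels.max() + 1
    probs = np.array([m_k[labels == i].mean() for i in range(n)])  # M^k per superpixel
    edges = {}
    def add(a, b):
        if a != b:
            edges[(min(a, b), max(a, b))] = probs[a] * probs[b]    # weight W_mn^k
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        add(a, b)                                                  # horizontal neighbors
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        add(a, b)                                                  # vertical neighbors
    rows, cols = zip(*edges)
    graph = csr_matrix((list(edges.values()), (rows, cols)), shape=(n, n))
    # Q^k: superpixels along the four image boundaries
    q = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    dist = dijkstra(graph, directed=False, indices=q)              # D_g to boundary nodes
    return dist.min(axis=0)                                        # M^k(r_n^k) per superpixel
```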

Using the foreground probability map $M^k$, we can detect the foreground object; however, the object saliency calculated in this way alone is not accurate. Therefore, the foreground and background of the object saliency should be considered simultaneously. Similarly, we construct an undirected graph $G^k$ for each pair of adjacent elemental images $p^k$ and $p^{k+1}$. The nodes $N^k$ are composed of the superpixels $r^k$ and $r^{k+1}$ of the elemental images $p^k$ and $p^{k+1}$. Using the object probability map $M^k$, the elemental image $p^k$ is decomposed into a background region $B^k$ and an object-like region $U^k$ by an adaptive threshold. The threshold $\alpha^k$ of the elemental image $p^k$ is obtained by averaging the probability-map values $M^k$ over all pixels of $p^k$. Thus, we produce the background region $B^k$ of the $k$-th elemental image by the following formula:

$$B^k = R^k - U^k,$$
where $U^k = \{r_n^k \mid M_n^k \geq \alpha^k\} \cup \{r_n^k \mid r_n^k \text{ is temporally connected to } U^{k-1}\}$.
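A minimal sketch of the decomposition into $U^k$ and $B^k$ follows; here the temporal-connectivity test is simplified to carrying over the previous image's object set, which is our assumption rather than the paper's exact criterion.

```python
import numpy as np

def background_region(m_k, labels, prev_u=None):
    """Sketch of B^k = R^k - U^k with the adaptive threshold alpha^k."""
    n = labels.max() + 1
    sp_prob = np.array([m_k[labels == i].mean() for i in range(n)])
    alpha = m_k.mean()              # alpha^k: mean probability over all pixels of p^k
    u = sp_prob >= alpha            # object-like superpixels U^k
    if prev_u is not None:
        u |= prev_u                 # assumed stand-in for temporal connectivity to U^{k-1}
    return ~u, u                    # B^k (background) and U^k (object-like) as boolean masks
```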

Based on the graph $G^k$, we obtain the pixel mapping strength values $S_n^k(r_n^k)$ of the elemental image $p^k$:

$$S_n^k(r_n^k) = \min_{B \in B^k \cup B^{k+1}} D_g(r_n^k, B, G^k).$$

The 3D image $O(x, y, z)$ is finally reconstructed at distance $z$ using the pixel mapping strength values $S$ of the $(i,j)$-th elemental image $p$:

$$O(x, y, z) = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} S_{(i,j)}(x, y) \times p_{(i,j)}\!\left(x - i\,\frac{M \times \rho}{c_x \times \gamma},\; y - j\,\frac{N \times \rho}{c_y \times \gamma}\right),$$
where $O(x, y, z)$ is the pixel value of the reconstructed 3D image in the plane at depth $z$, $M \times N$ is the total number of elemental images, $p_{(i,j)}$ denotes the $(i,j)$-th elemental image, $c_x$ and $c_y$ denote the size of the imaging sensor, $\rho$ is the pitch of each lens, and $\gamma$ is the magnification factor.
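As an illustration of the reconstruction equation, the sketch below superimposes weighted, shifted elemental images on the plane at depth $z$. It is a simplified sketch: grayscale images, integer-pixel shifts, and normalization by the accumulated weights (to keep intensities in range) are our assumptions, not part of the equation above.

```python
import numpy as np

def reconstruct_plane(eis, masks, rho, cx, cy, g):
    """Sketch of O(x, y, z): weighted superposition of shifted elemental images.

    eis, masks: M x N grids of elemental images p_(i,j) and strength maps S_(i,j);
    rho: lens pitch; cx, cy: sensor size; g: magnification factor gamma at depth z.
    """
    M, N = len(eis), len(eis[0])
    H, W = eis[0][0].shape
    out = np.zeros((H, W))
    norm = np.full((H, W), 1e-8)
    for i in range(M):
        for j in range(N):
            dx = int(round(i * M * rho / (cx * g)))   # x-shift from the equation above
            dy = int(round(j * N * rho / (cy * g)))   # y-shift
            src = masks[i][j] * eis[i][j]             # weighted elemental image
            out[dy:, dx:] += src[:H - dy, :W - dx]    # superimpose on the plane
            norm[dy:, dx:] += masks[i][j][:H - dy, :W - dx]
    return out / norm                                 # normalized reconstructed plane
```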

2.2. Elemental images encoded by non-linear FSR CA

The cellular automata (CA) structure is treated as a discrete lattice of sites, where each cell is set to either zero or one. The next state of each cell is assumed to be determined by the cell itself and its two neighbors; following a deterministic rule that depends only on this local neighborhood, the cells evolve in discrete time steps. In our work, we use a null-boundary CA, in which the cells beyond the two ends of the array are taken to be zero, specified by rules 90 and 150. The rule vector is an $n$-tuple, which is a natural form of specification, where

$$d_i = \begin{cases} 0, & \text{if cell } i \text{ uses rule 90} \\ 1, & \text{if cell } i \text{ uses rule 150} \end{cases}$$

If $C$ is a CA rule vector $\langle d_1, d_2, \dots, d_n \rangle$, then the state-transition matrix $T$ corresponding to $C$ is:

$$T = \begin{pmatrix} d_1 & 1 & 0 & \cdots & 0 & 0 \\ 1 & d_2 & 1 & \cdots & 0 & 0 \\ 0 & 1 & d_3 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & d_{n-1} & 1 \\ 0 & 0 & 0 & \cdots & 1 & d_n \end{pmatrix}$$
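Because a null-boundary rule-90/150 CA is linear over GF(2), one evolution step is simply $x(t+1) = T\,x(t) \bmod 2$. A minimal sketch, using the 8-cell hybrid rule (150 90 150 90 90 90 150 90) from Section 3 as an example:

```python
import numpy as np

def transition_matrix(rule_vector):
    """Tridiagonal state-transition matrix T: d_i on the diagonal, 1s beside it."""
    n = len(rule_vector)
    T = np.zeros((n, n), dtype=np.uint8)
    for i, d in enumerate(rule_vector):
        T[i, i] = d                 # 0 for rule 90, 1 for rule 150
        if i > 0:
            T[i, i - 1] = 1         # left neighbor
        if i < n - 1:
            T[i, i + 1] = 1         # right neighbor (null boundary beyond the ends)
    return T

def step(T, state):
    """One null-boundary CA evolution: x(t+1) = T x(t) mod 2."""
    return (T @ state) % 2

# rule (150 90 150 90 90 90 150 90)  ->  d = (1, 0, 1, 0, 0, 0, 1, 0)
T = transition_matrix([1, 0, 1, 0, 0, 0, 1, 0])
x = np.array([1, 0, 0, 1, 1, 0, 1, 0], dtype=np.uint8)
print(step(T, x))
```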

In a 1D CA, all the cells are arranged in a linear array. Of particular importance is the two-state, 3-neighbourhood CA, in which the next state of a cell depends on itself and its two neighbors. The state of cell $i$ at time $t+1$ is given by

$$x_i(t+1) = f\big(x_{i-1}(t), x_i(t), x_{i+1}(t)\big),$$
where $x_i(t+1)$ denotes the state of the $i$-th cell at time $t+1$, and $f$ is a Boolean function. For the 3-neighbourhood CA, the rules known as rule 90 and rule 150 are characterized by:

$$\text{Rule 90: } x_i(t+1) = x_{i-1}(t) \oplus x_{i+1}(t)$$
$$\text{Rule 150: } x_i(t+1) = x_{i-1}(t) \oplus x_i(t) \oplus x_{i+1}(t)$$

In the elemental image encoding process, we propose a non-linear feedback shift register (FSR) CA to encode the images. As shown in Fig. 2, the proposed non-linear FSR CA structure is composed of a non-linear XOR feedback circuit and an 8-bit CA. The feedback function $f$ of the structure in Fig. 2 can be written as:
$$f(x_1, x_2, \dots, x_8) = x_7 \oplus x_5 \oplus x_4 \oplus x_3 \oplus \big(x_0 \oplus \overline{F(x)}\big),$$
where $\overline{F}$ denotes the complemented vector. The Wolfram rule is selected to generate the rule matrix, and the generated rule matrix is used to create the transition matrix $T$. According to the transition matrix $T$ and the non-linear FSR CA structure, we generate the pseudo-noise sequence that encrypts the input image. The image encoding process can be described as:
$$E(i, j) = x_7 \oplus x_5 \oplus x_4 \oplus x_3 \oplus \big(x_0 \oplus \overline{F(x)}\big) \oplus I(i, j),$$
where $I(i, j)$ represents the input image pixel value at $(i, j)$ and $E(i, j)$ is the encoded image.
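A minimal sketch of the encoding loop follows: the CA state is evolved with the transition matrix $T$ from the earlier snippet, a non-linear feedback bit from the tapped cells is injected back into the register, and each image pixel is XORed with the resulting keystream byte. The tap positions follow the feedback function above, but the complemented-vector term, the feedback injection point, and the state-to-byte packing are illustrative assumptions rather than the exact circuit of Fig. 2.

```python
import numpy as np

def encode_image(img, T, seed):
    """Sketch of the non-linear FSR CA encoding E(i, j) = keystream XOR I(i, j).

    img: 2D uint8 image; T: CA transition matrix; seed: secret 8-bit CA state.
    """
    state = np.array(seed, dtype=np.uint8)
    out = np.empty_like(img)
    for idx in np.ndindex(img.shape):
        comp = 1 ^ np.bitwise_xor.reduce(state)      # assumed stand-in for F(x)-bar
        fb = state[7] ^ state[5] ^ state[4] ^ state[3] ^ (state[0] ^ comp)
        state = (T @ state) % 2                      # linear CA evolution
        state[0] ^= fb                               # inject the non-linear feedback
        out[idx] = img[idx] ^ np.packbits(state)[0]  # XOR with the keystream byte
    return out

# XOR is an involution, so decryption reuses the same key:
# recovered = encode_image(encode_image(img, T, seed), T, seed)
```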

Fig. 2 The structure of the proposed non-linear FSR-based CA for the image encoding algorithm.

3. Simulation results and discussion

To confirm the usefulness of the proposed optical 3D object encryption method, we carry out computational experiments on the proposed encryption system. The objects “Trees” and “House” are used to create the 3D scene and are located at depths of 450 mm and 500 mm from the imaging plane, respectively. The elemental images are captured by moving the CCD camera, as shown in Fig. 3, in equal steps of 8 mm in the horizontal and vertical directions. Some captured elemental images with different parallaxes are selected and shown in Fig. 4, which compares the quality of three different detection methods. Our proposed PEM algorithm integrates the static features (third row of Fig. 4) from each elemental image itself and the motion features (second row of Fig. 4) from adjacent elemental images, which enables the method to extract more features of the 3D objects. This allows us to calculate the pixel mapping weights accurately and to reconstruct the 3D objects with satisfactory visual quality.

Fig. 3 Optical pickup device of elemental images.

Fig. 4 Illustration of the PEM of 3D objects calculated by the proposed detection algorithm. From top to bottom: input adjacent elemental images, motion feature detection, static feature detection, proposed detection method.

To illustrate the improvement of the extracted pixel-evaluated mask of the 3D objects, we compare it with the method in [13], in which the PEM is calculated considering only the features of the elemental image itself. Figure 5(a) shows one captured elemental image, and Figs. 5(b) and 5(c) show the PEM generated by method [13] and by our method, respectively. Brighter pixels indicate higher saliency probabilities of the 3D objects, meaning that the pixel mapping weights of the 3D objects are calculated accurately. Comparing Figs. 5(b) and 5(c), Fig. 5(c) clearly provides a better salient region of the 3D objects than Fig. 5(b); therefore, our method outperforms method [13].

Fig. 5 (a) One captured elemental image, (b) the pixel-evaluated mask calculated by method [13], (c) the pixel-evaluated mask calculated by our method.

Next, to confirm the visual quality of our work, we present the 3D objects reconstructed with our method and with method [11] in Fig. 6. Figures 6(a) and 6(c) show the reconstructed “Trees” and “House” located at 450 mm and 500 mm, respectively, with our proposed method using the pixel-evaluated mask. Figures 6(b) and 6(d) show the reconstructed “Trees” and “House” at the same depths with method [11], which does not use a pixel-evaluated mask. Our proposed method not only considers the static features of each elemental image itself but also calculates the parallax motion features of the adjacent elemental images to determine the pixel-evaluated mask, so it provides better visual quality than method [11], which does not consider the features of adjacent elemental images.

Fig. 6 Visual quality comparison between our proposed method using the pixel-evaluated mask to control the pixel mapping weights and method [11] without using the pixel-evaluated mask: (a) and (c) reconstructed 3D objects “Trees” and “House” located at the depths of 450 mm and 500 mm, respectively, (b) and (d) reconstructed 3D objects “Trees” and “House” with method [11].

To further quantify the image quality improvement, the peak signal-to-noise ratio (PSNR) is used; PSNR measures the quality of a distorted image relative to an original image. Here, the center elemental image is regarded as the original image, and the reconstructed images in Fig. 6 are the distorted images. The calculated average PSNRs of our method are 30.13 dB and 31.65 dB, respectively, while those of method [11] are 25.02 dB and 24.10 dB. Compared with method [11], the PSNR of our proposed method increases by 20.49% on average.
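For reference, a minimal sketch of the PSNR computation assumed here (8-bit images, peak value 255):

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """PSNR in dB between an original image and a distorted one."""
    mse = np.mean((original.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)  # standard PSNR definition
```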

To show the security of the proposed non-linear FSR CA encoding algorithm, we compare our method with two similar CA-based encoding methods. In the first, an 8-bit hybrid CA (HCA) with rule (150 90 150 90 90 90 150 90) is used to create the 2D CA encoding matrix; in the second, HCA rule (150 150 90 150 90 150 90 150) is used to generate the CA encoding matrix. Figure 7 shows the encryption results of the three methods: Figs. 7(a)-7(c) show the pseudorandom noise encoding matrices generated by the three CA encoding methods, and Figs. 7(d)-7(f) show the corresponding encryption results. From the results, we can see that image encoding based on the HCA methods leaves obvious patterns of the objects, which can provide hints for attacking the encryption method. In contrast, the proposed non-linear FSR CA encoding algorithm distributes the energy of these patterns uniformly, which effectively solves this problem.

Fig. 7 Results of three CA encoding methods: (a) pseudorandom noise encoding matrix generated by HCA rule (150 90 150 90 90 90 150 90), (b) pseudorandom noise encoding matrix generated by HCA rule (150 150 90 150 90 150 90 150), (c) pseudorandom noise encoding matrix generated with our method, (d)-(f) corresponding encoded images with the three methods.

The image histogram is an important tool for analyzing the security of an encryption method. Figures 8(a)-8(c) show the histograms of the encrypted images generated by the two HCA methods and by the proposed method, respectively. From the results shown in Fig. 8, we can see that the histogram of the image encrypted with our method is very flat, which provides considerable resistance against statistical attacks.

Fig. 8 Histogram analysis: (a) histogram of the encrypted image with HCA rule (150 90 150 90 90 90 150 90), (b) histogram of the encrypted image with HCA rule (150 150 90 150 90 150 90 150), (c) histogram of the encrypted image with our method.

CA provides a large key space for image encryption: with an $N$-cell, two-state, $M$-site one-dimensional CA, our encryption method has $2^{2M} \times 2^N \times 2^{2N}$ security keys. Figures 9(a) and 9(b) show the decrypted 3D object slice images with incorrect reconstruction depths of 350 mm and 700 mm, respectively. Figures 9(c) and 9(d) show the decrypted 3D object slice images with incorrect CA rules at the correct depths. Figures 10(a) and 10(b) show the decrypted 3D object slice images with all correct secret keys. The results shown in Figs. 9 and 10 reveal that the 3D objects can be decrypted only when all the secret keys are known.

Fig. 9 Security analysis with partially incorrect keys: (a)-(b) decrypted 3D objects with the incorrect reconstruction depths, (c) and (d) decrypted 3D objects with the incorrect CA rules (60 90 150 90 90 90 150 90) and (150 90 120 90 90 90 150 90).

Fig. 10 Decrypted 3D objects with all correct keys: (a) 450 mm and (b) 500 mm.

To demonstrate the robustness of the encrypted images of the proposed method, we carry out an experiment comparing the robustness of our method with methods [11] and [12]. Figure 11 shows the reconstructed 3D objects under Gaussian noise with a variance of 0.30. From the results shown in Fig. 11, we can see that our proposed method is superior to methods [11] and [12]: although the encrypted images are seriously damaged by the noise attack (the variance reaches 0.30), the 3D objects can still be clearly reconstructed with our method.
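For reproducibility, a minimal sketch of the two noise attacks applied to the encrypted images in this and the following test (images assumed scaled to [0, 1]; the variance 0.30 and the 30% density match the text):

```python
import numpy as np

def gaussian_attack(img, var=0.30, seed=0):
    """Add zero-mean Gaussian noise of the given variance to an image in [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(noisy, 0.0, 1.0)

def salt_pepper_attack(img, density=0.30, seed=0):
    """Set a fraction `density` of pixels to 0 (pepper) or 1 (salt)."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    mask = rng.random(img.shape) < density
    out[mask] = rng.integers(0, 2, mask.sum())  # salt or pepper with equal odds
    return out
```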

Fig. 11 Robustness test against Gaussian noise attacks with zero mean and a variance of 0.30: (a) and (d) the proposed method, (b) and (e) method [11], (c) and (f) method [12].

Figure 12 shows the simulation results for the 3D objects “Trees” and “House” under a 30% salt-and-pepper noise attack. Figures 12(a) and 12(d) show the 3D objects reconstructed with our method, Figs. 12(b) and 12(e) those with method [11], and Figs. 12(c) and 12(f) those with method [12]. Compared with methods [11] and [12] (see Fig. 12), our proposed encryption method is more robust against the salt-and-pepper noise attack.

Fig. 12 Robustness test with 30% salt and pepper attack: (a) and (d) the proposed method, (b) and (e) method [11], (c) and (f) method [12].

Meanwhile, the PSNR, the structural similarity index (SSIM), and the color quality enhancement (CQE) measure [43] are used to quantify the visual quality of the reconstructed 3D objects under noise attacks. The CQE assessment measures the colorfulness, sharpness, and contrast of the quality-distorted image. The PSNR and SSIM values are calculated between the reconstructed plane images of the 3D objects without noise and those with noise. The PSNRs and SSIMs obtained with the three encryption methods are recorded in Tables 1-6. In the experiments, compared with the PSNRs, SSIMs, and CQEs of methods [11] and [12], our proposed encryption method achieves considerable improvements.

Table 1. PSNRs of 3D objects against Gaussian noise with three different methods.

Table 2. PSNRs of 3D objects against salt and pepper noise with three methods.

Table 3. SSIMs of 3D objects against noise attacks with three methods.

Table 4. CQE colorfulness of 3D objects against noise attacks with three methods.

Table 5. CQE sharpness of 3D objects against noise attacks with three methods.

Table 6. CQE contrast of 3D objects against noise attacks with three methods.

Finally, the 3D objects can also be displayed optically on an integral imaging display device. In this experiment, the captured elemental images first need to be calibrated; with the calibrated elemental image array, the reconstructed 3D objects can be observed from the left and right views on the display device, as shown in Fig. 13.

Fig. 13 Different views of the optically reconstructed 3D objects on the display device.

4. Conclusion

In conclusion, we apply the proposed PEM algorithm to the integral imaging reconstruction of 3D objects, which addresses the low image quality of reconstructed 3D objects caused by the interference of non-effective pixels. Meanwhile, the proposed non-linear FSR CA-based image encoding method further improves the security of the encryption scheme. We also evaluate the robustness of the approach against various attacks. The simulation results confirm the feasibility of the proposed 3D object encryption method.

Funding

National Key R&D Program of China (2017YFB1002900); National Natural Science Foundation of China (NSFC) (61705146, 61535007); Natural Science Foundation of Guangdong Province (2018A0303070009).

References

1. A. Alfalou and C. Brosseau, “Dual encryption scheme of images using polarized light,” Opt. Lett. 35(13), 2185–2187 (2010).

2. Y. Shi, T. Li, Y. Wang, Q. Gao, S. Zhang, and H. Li, “Optical image encryption via ptychography,” Opt. Lett. 38(9), 1425–1427 (2013).

3. X. Li, D. Xiao, and Q. H. Wang, “Error-free holographic frames encryption with CA pixel-permutation encoding algorithm,” Opt. Lasers Eng. 100, 200–207 (2018).

4. C. Li, D. Lin, J. Lu, and F. Hao, “Cryptanalyzing an image encryption algorithm based on autoblocking and electrocardiography,” IEEE Multimed. 25(4), 46–56 (2018).

5. Z. Hua, Y. Zhou, and H. Huang, “Cosine-transform-based chaotic system for image encryption,” Inf. Sci. 480, 403–419 (2019).

6. D. Wang, C. Liu, L. Li, X. Zhou, and Q. H. Wang, “Adjustable liquid aperture to eliminate undesirable light in holographic projection,” Opt. Express 24(3), 2098–2105 (2016).

7. S. F. Lin, H. K. Cao, and E. S. Kim, “Single SLM full-color holographic three-dimensional video display based on image and frequency-shift multiplexing,” Opt. Express 27(11), 15926–15942 (2019).

8. A. Alfalou and C. Brosseau, “Optical image compression and encryption methods,” Adv. Opt. Photonics 1(3), 589–636 (2009).

9. S. Liansheng, W. Jiahao, T. Ailing, and A. Asundi, “Optical image hiding under framework of computational ghost imaging based on an expansion strategy,” Opt. Express 27(5), 7213–7225 (2019).

10. N. Zhou, S. Pan, S. Cheng, and Z. Zhou, “Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing,” Opt. Laser Technol. 82, 121–133 (2016).

11. X. Li, M. Zhao, Y. Xing, H. L. Zhang, L. Li, S. T. Kim, X. Zhou, and Q. H. Wang, “Designing optical 3D images encryption and reconstruction using monospectral synthetic aperture integral imaging,” Opt. Express 26(9), 11084–11099 (2018).

12. D. H. Shin and H. Yoo, “Image quality enhancement in 3D computational integral imaging by use of interpolation methods,” Opt. Express 15(19), 12039–12049 (2007).

13. H. Khalilian and I. V. Bajic, “Video watermarking with empirical PCA-based decoding,” IEEE Trans. Image Process. 22(12), 4825–4840 (2013).

14. S. H. Hong, J. S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12(3), 483–491 (2004).

15. X. Shen, H. S. Kim, K. Satoru, A. Markman, and B. Javidi, “Spatial-temporal human gesture recognition under degraded conditions using three-dimensional integral imaging,” Opt. Express 26(11), 13938–13951 (2018).

16. A. Markman, X. Shen, and B. Javidi, “Three-dimensional object visualization and detection in low light illumination using integral imaging,” Opt. Lett. 42(16), 3068–3071 (2017).

17. F. Yi, Y. Jeoung, and I. Moon, “Three-dimensional image authentication scheme using sparse phase information in double random phase encoded integral imaging,” Appl. Opt. 56(15), 4381–4387 (2017).

18. H. Yoo, “Axially moving a lenslet array for high-resolution 3D images in computational integral imaging,” Opt. Express 21(7), 8873–8878 (2013).

19. X. Li and I. Lee, “Modified computational integral imaging-based double image encryption using fractional Fourier transform,” Opt. Lasers Eng. 66, 112–121 (2015).

20. J. Kim, J. H. Jung, Y. Jeong, K. Hong, and B. Lee, “Real-time integral imaging system for light field microscopy,” Opt. Express 22(9), 10210–10220 (2014).

21. D. C. Hwang, D. H. Shin, S. C. Kim, and E. S. Kim, “Depth extraction of three-dimensional objects in space by the computational integral imaging reconstruction technique,” Appl. Opt. 47(19), D128–D135 (2008).

22. J. H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, “Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification,” Appl. Opt. 43(25), 4882–4895 (2004).

23. K. Atanassov, S. Goma, V. Ramachandra, and T. Georgiev, “Content-based depth estimation in focused plenoptic camera,” Proc. SPIE 7864, 7864 (2011).

24. V. Saveljev and I. Palchikova, “Analysis of autostereoscopic three-dimensional images using multiview wavelets,” Appl. Opt. 55(23), 6275–6284 (2016).

25. X. W. Li and I. K. Lee, “Robust copyright protection using multiple ownership watermarks,” Opt. Express 23(3), 3035–3046 (2015).

26. A. Markman, J. Wang, and B. Javidi, “Three-dimensional integral imaging displays using a quick-response encoded elemental image array,” Optica 1(5), 332–335 (2014).

27. Y. Chen, X. Wang, J. Zhang, S. Yu, Q. Zhang, and B. Guo, “Resolution improvement of integral imaging based on time multiplexing sub-pixel coding method on common display panel,” Opt. Express 22(15), 17897–17907 (2014).

28. Y. J. Wang, X. Shen, Y. H. Lin, and B. Javidi, “Extended depth-of-field 3D endoscopy with synthetic aperture integral imaging using an electrically tunable focal-length liquid-crystal lens,” Opt. Lett. 40(15), 3564–3567 (2015).

29. A. Stern and B. Javidi, “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42(35), 7036–7042 (2003).

30. J. Wang, X. Xiao, and B. Javidi, “Three-dimensional integral imaging with flexible sensing,” Opt. Lett. 39(24), 6855–6858 (2014).

31. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications [Invited],” Appl. Opt. 52(4), 546–560 (2013).

32. Y. Frauel and B. Javidi, “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” Appl. Opt. 41(26), 5488–5496 (2002).

33. X. Li, Y. Wang, Q. H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved SR reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019).

34. K. C. Kwon, C. Park, M. U. Erdenebat, J. S. Jeong, J. H. Choi, N. Kim, J. H. Park, Y. T. Lim, and K. H. Yoo, “High speed image space parallel processing for computer-generated integral imaging system,” Opt. Express 20(2), 732–740 (2012).

35. H. Yoo, “Axially moving a lenslet array for high-resolution 3D images in computational integral imaging,” Opt. Express 21(7), 8873–8878 (2013).

36. Y. Oh, D. Shin, B. G. Lee, S. I. Jeong, and H. J. Choi, “Resolution-enhanced integral imaging in focal mode with a time-multiplexed electrical mask array,” Opt. Express 22(15), 17620–17629 (2014).

37. M. Martinez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Formation of real, orthoscopic integral images by smart pixel mapping,” Opt. Express 13(23), 9175–9180 (2005).

38. X. Li, C. Li, and I. Lee, “Chaotic image encryption using pseudo-random masks and pixel mapping,” Signal Processing 125, 48–63 (2016).

39. X. Li, Y. Wang, Q. H. Wang, S. Kim, and X. Zhou, “Copyright protection for holographic video using spatiotemporal consistent embedding strategy,” IEEE Trans. Ind. Inform. (2019).

40. R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, “SLIC superpixels compared to state-of-the-art superpixel methods,” IEEE Trans. Pattern Anal. Mach. Intell. 34(11), 2274–2282 (2012).

41. M. Leordeanu, R. Sukthankar, and C. Sminchisescu, “Efficient closed form solution to generalized boundary detection,” in European Conference on Computer Vision (Springer, 2012), pp. 516–529.

42. T. Brox and J. Malik, “Large displacement optical flow: descriptor matching in variational motion estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 33(3), 500–513 (2011).

43. K. Panetta, C. Gao, and S. Agaian, “No reference color image contrast and quality measures,” IEEE Trans. Consum. Electron. 59(3), 643–651 (2013).
