
Active integral imaging system based on multiple structured light method

Open Access

Abstract

In this paper, we simplify the equipment for integral imaging (II) pickup and implement an active II system based on a multiple structured light (MSL) method. In the active II system, the complete three-dimensional (3D) shape of the 3D scene can be reconstructed, and tunable parallaxes can be generated without occlusions. Therefore, high-quality 3D images can be displayed efficiently by the II. We also implement the processing algorithms on the graphics processing unit (GPU) using the Compute Unified Device Architecture (CUDA). The experimental results demonstrate the effectiveness of the MSL method for II pickup and the acceleration of the elemental image array generation. In particular, the proposed method is suitable for real scenes requiring high precision.

© 2015 Optical Society of America

1. Introduction

Integral imaging (II) is an attractive three-dimensional (3D) technology that reconstructs autostereoscopic 3D images without glasses and provides both horizontal and vertical parallaxes with continuous views [1–3]. The conventional II technology consists of pickup and display processes. The pickup process, however, still suffers from problems such as a limited parallax range, restricted scene size, and device complexity, which delay the practical application of II [3]. In the past decades, many researchers have focused on solving these problems in II pickup, and many active and passive approaches have been proposed, classified according to whether structured-light illumination is used [4–9].

Conventional II pickup uses passive approaches without structured-light illumination, such as a micro-lens array (MLA), a camera array, or a dual-camera setup [1, 4–7, 9, 10]. However, II pickup by the MLA is limited by the scene size, unnecessary beams, aberrations, and crosstalk between adjacent elemental images [6, 11, 12]. Some researchers replace the MLA with a camera array or a dual-camera rig in the pickup process to collect full-color, high-resolution 3D information of a large-sized 3D scene [7]. However, a camera array contains a large number of cameras and requires complex optical and mechanical structures, and calibrating so many cameras is difficult [13]. The dual-camera II pickup approach simplifies the acquisition equipment, but it is limited by the precision of the two-view stereoscopic camera and of the stereo matching algorithms [10].

Some researchers have introduced active approaches into the II pickup process. Among them, a notable contribution was made at Chungbuk National University in 2012, where the elemental image array (EIA) of a real 3D scene was collected with a depth camera [11, 12]. The depth camera projects a pattern of infrared points to generate a dense 3D shape; it is, in effect, an integrated structured-light illumination system that obtains the 3D shape at a limited resolution. This system simplifies the pickup process, but it is constrained by the accuracy and resolution of the depth camera. In addition, occlusions and holes in the depth map seriously degrade the quality of the generated EIA [11, 12, 14]. Therefore, a high-precision active pickup approach is needed to meet the requirements of the II display.

In this paper, we propose an active II system with high-quality 3D reconstruction based on the multiple structured light (MSL) method. In the proposed system, the II pickup equipment is simplified. The complete 3D shape is fused by the MSL method, and tunable parallaxes are generated without occlusions. High-quality 3D images are then reconstructed efficiently in the II display. To reduce the time cost of the sub-image and EIA generations, the algorithms are implemented in parallel with the Compute Unified Device Architecture (CUDA) on a graphics processing unit (GPU) [11, 15, 16]. The experiments verify the effectiveness of the proposed system and the acceleration of the EIA generation.

2. Principle of the active II system

2.1 Configuration of the proposed active II system

Figure 1 shows a schematic diagram of the proposed active II system. The system consists of three parts: the acquisition part, where the color texture is obtained by a charge-coupled device (CCD) camera and the complete 3D shape is reconstructed by the MSL method under structured-light illumination; the processing part, which generates the orthographic sub-images and the EIA in parallel with CUDA on the GPU; and the 3D display part, which reconstructs the 3D images through the MLA for viewers.

Fig. 1 Configuration of the proposed active II system.

2.2 Reconstruction of fused 3D shape by using the MSL method

In this paper, the complete fused 3D shape of the real 3D scene is reconstructed by the MSL method. Multiple digital light processing (DLP) projectors illuminate the 3D scene with structured light from different angles during the time slots t1 to tn, and the CCD camera synchronously captures the corresponding deformed patterns of the 3D scene. During the time slot tn+1, the CCD camera captures the color texture of the 3D scene without structured-light illumination. From the deformed patterns, the 3D shape can be reconstructed [17–23]. However, the illumination from any single DLP leaves occlusions in its blind areas, as shown in Fig. 2. These occlusions impair the reconstruction precision and can even cause the reconstruction to fail.

Fig. 2 Principle of the 3D shape reconstruction by the proposed MSL method.

In the MSL method, we first reconstruct an imperfect 3D shape from each DLP separately. To avoid interference from the occlusions, these imperfect 3D shapes are then fused to generate a more complete 3D shape. As shown in Fig. 2, the i-th DLP (DLPi, i = 1, 2, ..., n) sequentially projects N grating patterns onto the surface of the 3D scene during the time slot ti. All grating patterns follow a sinusoidal profile, with an equal phase shift of 2π/N between adjacent patterns. The CCD captures the N deformed patterns of DLPi within ti. After all deformed patterns have been captured, the color texture is captured without structured-light illumination during tn+1. The intensity Ii(x, y, j) of the j-th deformed pattern captured within ti is expressed as:

$$I_i(x,y,j)=R_i(x,y)\left\{A_i(x,y)+B_i(x,y)\cos\left[\varphi_i(x,y)+\sigma_j\right]\right\},\tag{1}$$
where j = 1, 2, ..., N; (x, y) are the pixel coordinates in the captured deformed patterns; Ri(x, y) is the surface reflectance of the 3D scene; Ai(x, y) is the background light intensity; Bi(x, y) is the fringe contrast; φi(x, y) is the deformed phase modulated by the 3D scene; and σj is the phase shift of the j-th deformed pattern.

In the MSL method, phase measuring profilometry (PMP) is adopted to reconstruct the 3D shape under each single DLP, owing to its high accuracy and robustness [17]. The truncated (wrapped) phase φ'i(x, y) of the deformed phase φi(x, y) is deduced as:

$$\varphi'_i(x,y)=\arctan\frac{\sum_{n=1}^{N} I_i(x,y,n)\sin(\sigma_n)}{\sum_{n=1}^{N} I_i(x,y,n)\cos(\sigma_n)}.\tag{2}$$
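As a concrete illustration of Eqs. (1) and (2), the following minimal NumPy sketch recovers the wrapped phase from N equally shifted patterns; the array shapes, modulation values, and the synthetic check are illustrative assumptions, not parameters from the paper.

```python
# A minimal NumPy sketch of Eq. (2): the wrapped phase from N phase-shifted
# deformed patterns. Array shapes and values are illustrative assumptions.
import numpy as np

def wrapped_phase(I):
    """I: stack of N deformed patterns, shape (N, H, W)."""
    N = I.shape[0]
    sigma = 2 * np.pi * np.arange(N) / N           # equal 2*pi/N phase steps
    num = np.tensordot(np.sin(sigma), I, axes=1)   # sum_n I_n sin(sigma_n)
    den = np.tensordot(np.cos(sigma), I, axes=1)   # sum_n I_n cos(sigma_n)
    return np.arctan2(num, den)                    # wrapped to [-pi, pi)

# Synthetic check built from the model of Eq. (1) with R = 1, A = 100, B = 50:
H, W, N = 64, 64, 4
phi = 1.2 * np.ones((H, W))
I = np.stack([100 + 50 * np.cos(phi + 2 * np.pi * n / N) for n in range(N)])
# Eq. (2) as written recovers the phase up to a sign convention:
assert np.allclose(wrapped_phase(I), -phi)
```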

Because of the range of the inverse tangent, φ'i(x, y) takes values in [-π, π). To obtain a continuous phase distribution, the truncated phase φ'i(x, y) must be unwrapped by a phase unwrapping algorithm [17, 18, 20]; the unwrapped phase is denoted Ψi(x, y). The phase change ΔΨi(x, y) between the 3D scene and the reference plane can then be calculated. According to the phase-to-height mapping algorithm, the height Δhi(x, y) of the 3D scene under illumination from DLPi is calculated as:

$$\frac{1}{\Delta h_i(x,y)}=a_i(x,y)+b_i(x,y)\,\Delta\Psi_i(x,y)+c_i(x,y)\,\Delta\Psi_i^2(x,y),\tag{3}$$
where ai(x, y), bi(x, y), and ci(x, y) are mapping parameters acquired by plane calibration [20]. After processing the deformed-pattern information, we obtain the height and contour of the 3D scene. The individual heights Δhi(x, y) obtained under each DLP may be incomplete because of the blind areas of the illuminations. However, Δhi(x, y) is determined solely by the 3D scene, not by the measurement system; in other words, it is independent of the parameters used in the PMP. The different heights Δhi(x, y) can therefore be fused and stitched together into a more complete 3D shape. The fused height ΔH(x, y) is obtained as:
$$\Delta H(x,y)=\sum_{i=1}^{M}\Delta h_i(x_i,y_i),\quad (x_i,y_i)\in\Omega_i,\tag{4}$$
$$\bigcup_{i=1}^{M}\Omega_i=\Omega,\tag{5}$$
where Ωi denotes the pixel region in which the reconstructed height Δhi(x, y) carries no accumulated errors from the phase unwrapping algorithm, and Ω denotes the whole pixel region.
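A compact sketch of the phase-to-height mapping and the fusion step, Eqs. (3)-(5), assuming the calibration maps a, b, c and the validity masks Ωi are available as arrays; the names and the overlap rule below are illustrative, not prescribed by the paper:

```python
# Sketch of Eqs. (3)-(5): per-DLP heights from the unwrapped phase change,
# then stitching over validity regions Omega_i. The calibration maps a, b, c
# and the boolean masks are assumed inputs (names are illustrative).
import numpy as np

def height_from_phase(dpsi, a, b, c):
    """Eq. (3): 1 / h = a + b * dpsi + c * dpsi**2, solved pixel-wise."""
    return 1.0 / (a + b * dpsi + c * dpsi ** 2)

def fuse_heights(heights, masks):
    """Eqs. (4)-(5): fuse per-DLP heights h_i over their regions Omega_i."""
    fused = np.zeros_like(heights[0])
    covered = np.zeros(fused.shape, dtype=bool)
    for h, omega in zip(heights, masks):
        take = omega & ~covered      # keep the first valid value in overlaps
        fused[take] = h[take]
        covered |= take
    assert covered.all(), "the regions Omega_i must cover the whole image"
    return fused
```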

2.3 Parallel generation of sub-images and EIA for II display

Because of the huge data volume to be processed for the II display, parallel computation is applied to generate the sub-images and EIA more efficiently. In the proposed system, we use CUDA to implement the generation algorithms on the GPU. The sub-images are generated first. A sub-image, which collects the pixels at the same position within each elemental image, has an orthographic projection geometry. In the II display, the sub-images represent a series of directional images, and the EIA for display is generated by interweaving them. Each sub-image corresponds to an orthographic projection from a specific direction and contributes one pixel to a fixed position within each elemental image. Thus, the EIA can be understood as a superposition of sub-images whose spatial resolution is reduced by a factor equal to the number of sub-images. As shown in Fig. 3, the sub-images and EIAs are generated computationally: the reconstructed 3D scene is imaged on the EIA plane by the virtual MLA, and the parallel rays sharing the same projecting angle θ are extracted to form an orthographic sub-image [24–26]. Figures 3(a) and 3(b) show the generation geometries of the sub-images and EIAs for different central depth planes (CDPs). The pixel coordinates of the sub-images are determined by the CDP and by the depth data ΔD(x, y), which is transformed from the fused height ΔH(x, y):

$$\Delta D(x,y)=\Delta H(x,y)\,\frac{W}{R_w}=\Delta H(x,y)\,\frac{H}{R_h},\tag{6}$$
where W and H are the real width and height of the 3D scene, and Rw × Rh is the resolution of the captured deformed pattern. For the sub-image with projecting angle θ, as shown in Fig. 3, the pixel intensity at point K is mapped to the pixel coordinate G, and the pixel shift between K and G is denoted Δq. The sub-image Iθ(x, y) for projecting angle θ is deduced as:
$$I_\theta(x,y)=T(x+\Delta q_x,\ y+\Delta q_y),\tag{7}$$
where T(x, y) is the pixel intensity of the color texture at coordinate (x, y), and Δqx and Δqy are the components of the pixel shift Δq along the x and y axes, respectively. The pixel shift Δq, which depends on the depth data ΔD(x, y) and the CDP, is calculated as:
$$\Delta q=\left[\Delta D(x,y)-d_c\right]\tan\theta,\tag{8}$$
where dc is the distance between the zero plane (z = 0) and the CDP.
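A minimal sketch of Eqs. (6)-(8), gathering shifted texture pixels into one orthographic sub-image; the rounding of Δq to whole pixels and the border clamping are illustrative choices not specified in the paper:

```python
# Sketch of sub-image generation, Eqs. (6)-(8). depth holds Delta_D (same
# units as d_c, per Eq. (6)); theta_x, theta_y are the two components of the
# projecting angle. Rounding and edge clamping are illustrative assumptions.
import numpy as np

def sub_image(texture, depth, theta_x, theta_y, d_c):
    """texture: (H, W, 3) color texture; depth: (H, W) map Delta_D."""
    h, w = depth.shape
    y, x = np.indices((h, w))
    dqx = np.rint((depth - d_c) * np.tan(theta_x)).astype(int)  # Eq. (8)
    dqy = np.rint((depth - d_c) * np.tan(theta_y)).astype(int)
    xs = np.clip(x + dqx, 0, w - 1)  # Eq. (7): I_theta(x,y) = T(x+dqx, y+dqy)
    ys = np.clip(y + dqy, 0, h - 1)
    return texture[ys, xs]
```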

 figure: Fig. 3

Fig. 3 Geometry of generations for the sub-images and EIAs in the proposed system: (a) and (b) with the different CDPs.

Download Full Size | PDF

The projecting angle θ is deduced from the parameters of the II display. As shown in Fig. 3, the size of each pixel of the elemental images is Δr, the pitch of the micro-lens is p, and the gap between the MLA and the EIA is g. The projecting angles in the horizontal and vertical directions are given by:

$$\theta=\left(\arctan\frac{\Delta r\,i}{g},\ \arctan\frac{\Delta r\,j}{g}\right),\tag{9}$$
where i, j = floor(-p/2Δr) - 1, floor(-p/2Δr), ..., floor(p/2Δr) + 1, and floor(·) denotes rounding down. The different projecting angles are independent of each other, which makes parallel calculation on the GPU possible.
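As a small sketch of Eq. (9), the discrete view angles can be enumerated directly from the display parameters; the numeric values below are illustrative, not the paper's configuration:

```python
# Enumerating the projecting angles of Eq. (9) from display parameters:
# dr (pixel size), p (lens pitch), g (MLA-EIA gap). Values are illustrative.
import math

def projecting_angles(dr, p, g):
    lo = math.floor(-p / (2 * dr)) - 1
    hi = math.floor(p / (2 * dr)) + 1
    idx = range(lo, hi + 1)
    return [(math.atan(dr * i / g), math.atan(dr * j / g))
            for i in idx for j in idx]

angles = projecting_angles(0.1, 1.3, 3.0)  # e.g. 16 x 16 = 256 view directions
```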

Because the EIA can be understood as a superposition of sub-images, the EIA E(x, y) is calculated by interweaving the sub-images Iθ(x, y), as shown in Fig. 4:

$$E(x,y)=\sum_{i,j}^{(p/\Delta r)^2}\sum_{m,n} I_\theta\!\left(\frac{pm}{\Delta r}+i,\ \frac{pn}{\Delta r}+j\right)\delta\!\left(x-\frac{pm}{\Delta r}-i,\ y-\frac{pn}{\Delta r}-j\right),\tag{10}$$
where m and n are the indices of the micro-lenses, and i and j index the pixel position within each elemental image, as in Eq. (9). The pseudoscopic-orthoscopic image conversion is also taken into account in the EIA generation process.
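The pixel mapping of Eq. (10) and Fig. 4 amounts to placing the (i, j)-th sub-image at intra-lens offset (i, j) under every micro-lens; a compact sketch follows, with array shapes as illustrative assumptions (the pseudoscopic-orthoscopic conversion is omitted):

```python
# Interweaving sub-images into the EIA, per Eq. (10) and Fig. 4. V is the
# number of pixels per lens pitch (p / dr); shapes are illustrative.
import numpy as np

def interweave(sub_images):
    """sub_images: (V, V, M, N) -- the (i, j)-th sub-image holds one pixel
    per micro-lens (m, n). Returns the EIA of shape (M*V, N*V)."""
    V, _, M, N = sub_images.shape
    eia = np.zeros((M * V, N * V), dtype=sub_images.dtype)
    for i in range(V):
        for j in range(V):
            eia[i::V, j::V] = sub_images[i, j]  # pixel (i, j) under each lens
    return eia
```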

 figure: Fig. 4

Fig. 4 Pixel mapping algorithm of the EIA generation.

Download Full Size | PDF

Figure 5 shows a diagram of the parallel sub-image and EIA generations with CUDA on the GPU. In the sub-image generation process, the depth data and color texture are transferred from the central processing unit (CPU) to the GPU, and each thread in the CUDA kernel performs the corresponding calculation to generate one sub-image pixel Iθ(x, y) in parallel. In this kernel, the number of effective threads equals the number of pixels in the color texture. In the EIA generation process, each thread maps a corresponding pixel from the sub-images to the EIA according to Eq. (10) and Fig. 4.
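The paper's kernels are written in CUDA; as a language-neutral sketch, the arithmetic that a single EIA-generation thread would perform for one output pixel (x, y) might look like the following (index convention taken from the interweaving sketch above):

```python
# Per-thread work of the EIA kernel: one gather per output pixel, with no
# inter-thread dependencies. V is pixels per lens pitch; names illustrative.
def eia_thread(x, y, sub_images, V):
    i, m = x % V, x // V            # intra-lens offset and lens index (x axis)
    j, n = y % V, y // V            # intra-lens offset and lens index (y axis)
    return sub_images[i, j][m, n]   # same mapping as Eq. (10)
```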

Fig. 5 GPU-based implementation of the sub-images and EIA generations based on the proposed method.

When the generation process of the EIA is completed, the pixel information of the EIA is transferred from GPU to CPU for the II display.

3. Experiments and discussions

To further verify the effectiveness of the proposed active II system based on the MSL method, we conducted an experiment using two projectors (CB-X24) as DLP1 and DLP2, each projecting N = 4 grating patterns. A camera (GM501-H) captured the deformed patterns at 2048 × 1536 pixels. The generated EIA was displayed on the II Pad [27]. The experimental setup is shown in Fig. 6.

Fig. 6 Experimental setup of the proposed active II system.

The II system in the experiment is configured with the specifications listed in Table 1.


Table 1. Configuration parameters and experiment environment of the II system

In the experiment, a "man face" mask serves as the experimental 3D scene for pickup. The center of the mask is located at z = 130 mm. Because of ambient reflections, the mask appears unusual in the captured images. We reconstructed the 3D shapes under each DLP from the corresponding deformed patterns shown in Figs. 7(a) and 7(b). The occlusions in the blind areas caused accumulated errors and degraded the quality of the reconstructed 3D shapes, as shown in Figs. 7(c) and 7(d). In the MSL method, the two imperfect 3D shapes were fused into a complete one; Fig. 7(e) shows the fused result. Owing to the accuracy of the PMP, the fused reconstruction is complete with fairly high precision. As a comparison with another active II pickup approach, we also captured the depth data of the 3D scene with a Microsoft Kinect system, shown with pseudo-colors in Fig. 7(f). The many holes in the obtained depth map would seriously degrade the reconstruction and the quality of the generated EIA.

Fig. 7 Captured deformed patterns and reconstructed 3D shapes in the experiment: (a) and (b) the deformed patterns projected by DLP1 and DLP2, (c) and (d) the 3D shapes reconstructed with (a) or (b), (e) the fused 3D shape by the proposed MSL method, and (f) the depth data with different pseudo-colors in the comparison experiment by the Microsoft Kinect system.

In our experiment, the sub-images and EIAs are generated in parallel using CUDA, with the CDP set to dc = 130 mm and 0 mm, respectively. The parallax range and continuity are tunable: different sub-images and EIAs can be generated with different II display parameters. In the experiments, the maximum parallax range is 20.95°. Sub-images with projecting angles θ = 10.47°, 0°, and -10.47° are shown in Figs. 8(a)-8(c), and the EIAs generated with the different CDPs (dc = 130 mm and 0 mm) are shown in Figs. 8(d) and 8(e).

Fig. 8 Generated sub-images with different projecting angles and EIAs with different CDPs: (a), (b), and (c) the sub-images, (d) and (e) the EIAs and magnified parts.

The EIA generated with the CDP of dc = 130 mm is displayed on the II Pad. As the viewer moves in front of the II display, the 3D images reconstructed from different positions are captured, as shown in Fig. 9.

Fig. 9 Different views of the reconstructed 3D images: (a) top view, (b) left view, (c) front view, (d) right view, and (e) bottom view.

In our experiment, we also implemented the generation algorithms on the CPU for comparison, using a computer with an Intel Core i7-3770 CPU. We measure the time cost with the system timer T (in ms): the timer reads T = T1 at the start of the procedure and T = T2 when it finishes, so the time cost of generating the sub-images and EIA is (T2 - T1) ms. For EIAs with the same resolution of 2048 × 1536 pixels, the time costs are shown in Fig. 10(a), where the abscissa is the number of sub-images. The curves show that the GPU's acceleration grows with the number of sub-images: with 13 × 13 sub-images, the CPU takes 5053.06 ms to generate the EIA, 10.23 times as long as the GPU (493.9 ms). Figure 10(b) shows the influence of the EIA's resolution on the time cost. When the number of EIA pixels is quadrupled, the time cost increases by only 0.1 to 0.5 times, which indicates the power of the GPU parallelization.
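The (T2 - T1) measurement described above can be sketched with any high-resolution timer; here is an illustrative version wrapped around the earlier interweaving sketch (the paper's own CPU/GPU implementation and timer are not shown):

```python
# Illustrative timing of an EIA generation, mirroring the (T2 - T1) scheme.
# Sizes are assumed; interweave() is the sketch given after Eq. (10).
import time
import numpy as np

sub_images = np.random.rand(13, 13, 120, 90)  # 13 x 13 sub-images (assumed sizes)
t1 = time.perf_counter()                      # T = T1 at procedure start
eia = interweave(sub_images)                  # generation under test
t2 = time.perf_counter()                      # T = T2 at procedure end
print(f"EIA generation: {(t2 - t1) * 1e3:.1f} ms")
```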

Fig. 10 Time costs of the generations of sub-images and EIA: (a) the comparison of the proposed method based on GPU and CPU with the same EIA resolution of 2048 × 1536 pixels, (b) the influences of the different EIA resolutions based on GPU.

4. Conclusion

We have proposed an active II system based on the MSL method, which simplifies the equipment for acquiring the 3D information. Multiple DLPs are used to obtain the complete 3D shape by fusing the imperfect 3D shapes reconstructed under each single DLP. The proposed system avoids the occlusions in the blind areas of the illuminations and improves the reconstruction quality of the 3D scene compared with using a depth camera. GPU parallel processing is applied to the generation of the sub-images and EIAs, accelerating the processing by up to 10.23 times when generating 13 × 13 sub-images and EIAs with 2048 × 1536 pixels. Even when the resolution of the EIA is quadrupled, the time cost increases by only 0.1 to 0.5 times. The proposed system thus has potential for real-time, high-quality, high-resolution II pickup and display, and the pickup method is particularly suitable for real 3D scenes. Despite these advantages, the acquisition speed of the 3D information remains a limitation, so further work on accelerating the acquisition part is needed.

Acknowledgments

This work was supported by the NSFC under Grant Nos. 61320106015 and 61225022, and by the “863” Program under Grant No. 2015AA015902.

References and links

1. G. Lippmann, “La photographie integrale,” Comptes-Rendus Acad. Sci. 146, 446–451 (1908).
2. J. Y. Son, B. Javidi, S. Yano, and K. H. Choi, “Recent developments in 3-D imaging technologies,” J. Disp. Technol. 6(10), 394–403 (2010).
3. J. Hong, Y. Kim, H. J. Choi, J. Hahn, J. H. Park, H. Kim, S. W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues,” Appl. Opt. 50(34), H87–H115 (2011).
4. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).
5. X. Xiao, B. Javidi, M. Martínez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52(4), 546–560 (2013).
6. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on Integral Photography,” Appl. Opt. 36(7), 1598–1603 (1997).
7. H. Navarro, A. Dorado, G. Saavedra, A. Llavador, M. Martínez-Corral, and B. Javidi, “Is it worth using an array of cameras to capture the spatio-angular information of a 3D scene or is it enough with just two?” Proc. SPIE 8384, 838406 (2012).
8. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12(6), 1067–1076 (2004).
9. J. Yim, Y. M. Kim, and S. W. Min, “Real object pickup method for real and virtual modes of integral imaging,” Opt. Eng. 53(7), 073109 (2014).
10. X. Jiao, X. Zhao, Y. Yang, Z. Fang, and X. Yuan, “Dual-camera enabled real-time three-dimensional integral imaging pick-up and display,” Opt. Express 20(25), 27304–27311 (2012).
11. J. S. Jeong, K. C. Kwon, M. U. Erdenebat, Y. Piao, N. Kim, and K. H. Yoo, “Development of a real-time integral imaging display system based on graphics processing unit parallel processing using a depth camera,” Opt. Eng. 53(1), 015103 (2014).
12. G. Li, K. C. Kwon, G. H. Shin, J. S. Jeong, K. H. Yoo, and N. Kim, “Simplified integral imaging pickup method for real objects using depth camera,” J. Opt. Soc. Korea 16(4), 381–385 (2012).
13. The Stanford Multi-Camera Array, http://graphics.stanford.edu/projects/array/.
14. Kinect 3D sensor, http://www.microsoft.com/en-us/kinectforwindows/.
15. CUDA C Programming Guide, Ver. 5.0 (NVIDIA, 2012).
16. Y. H. Jang, C. Park, J. S. Jung, J. H. Park, N. Kim, J. S. Ha, and K. H. Yoo, “Integral imaging pickup method of bio-medical data using GPU and Octree,” J. Korea Contents Assoc. 10(6), 1–9 (2010).
17. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984).
18. E. H. Kim, J. Hahn, H. Kim, and B. Lee, “Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection,” Opt. Express 17(10), 7818–7830 (2009).
19. L. Su, X. Su, W. Li, and L. Xiang, “Application of modulation measurement profilometry to objects with surface holes,” Appl. Opt. 38(7), 1153–1158 (1999).
20. Y. Xu, S. Jia, Q. Bao, H. Chen, and J. Yang, “Recovery of absolute height from wrapped phase maps for fringe projection profilometry,” Opt. Express 22(14), 16819–16828 (2014).
21. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Dual-frequency pattern scheme for high-speed 3-D shape measurement,” Opt. Express 18(5), 5229–5244 (2010).
22. P. Ou, B. Li, Y. Wang, and S. Zhang, “Flexible real-time natural 2D color and 3D shape measurement,” Opt. Express 21(14), 16736–16741 (2013).
23. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011).
24. K. S. Park, S. W. Min, and Y. Cho, “Viewpoint vector rendering for efficient elemental image generation,” IEICE Trans. Inf. Syst. E90-D, 233–241 (2007).
25. Z. L. Xiong, Q. H. Wang, S. L. Li, H. Deng, and C. C. Ji, “Partially-overlapped viewing zone based integral imaging system with super wide viewing angle,” Opt. Express 22(19), 22268–22277 (2014).
26. K. C. Kwon, C. Park, M. U. Erdenebat, J. S. Jeong, J. H. Choi, N. Kim, J. H. Park, Y. T. Lim, and K. H. Yoo, “High speed image space parallel processing for computer-generated integral imaging system,” Opt. Express 20(2), 732–740 (2012).
27. C. C. Ji, C. G. Luo, H. Deng, D. H. Li, and Q. H. Wang, “Tilted elemental image array generation method for moiré-reduced displays in computer generated integral imaging,” Opt. Express 21(17), 19816–19824 (2013).
