
Real-time capturing and 3D visualization method based on integral imaging


Abstract

We propose a real-time capturing and 3D visualization method based on integral imaging. We apply a real-time conversion algorithm to a conventional integral imaging pickup system. A gap control method with depth plane adjustment is also applied to improve image quality. The implemented system provides real-time 3D images with ultra high definition resolution at 20 frames per second, and the observer can change depth planes freely. Simulations and experimental results show the validity of the proposed system.

©2013 Optical Society of America

1. Introduction

Real-time handling of three-dimensional (3D) information is one of the biggest issues in the 3D display field [1–10]. Beyond the two-dimensional (2D) era, 3D technologies are replacing 2D technologies in various fields [11–15]. However, real-time 3D information technology has suffered from intensive calculation complexity and the cost of special optical devices [1, 7–9]. In particular, extracting 3D information from 3D objects and displaying it in real time is essential for 3D broadcasting and 3D interaction technology. This technology, which can be called real-time capturing and 3D visualization, is composed of three parts: pickup of an actual 3D scene, image processing, and 3D display. Pickup extracts 3D information from 3D objects and has been studied intensively by computer scientists [16–18]. On the other hand, 3D display visualizes 3D information with optical devices and has been researched mainly by electrical engineers and physicists [11–13]. Image processing manipulates the 3D information obtained at the pickup stage to match the 3D display specification [19, 20]. There are several pickup methods and 3D display methods, and the calculation complexity of image processing can be reduced dramatically by a judicious choice of 3D pickup and 3D display technologies. From that point of view, real-time 3D visualization is the convergence of two different fields: 3D pickup and 3D display.

The 3D pickup is mainly categorized into two approaches: obtaining view images from various viewpoints that contain 3D information, or extracting depth information directly. The former needs multiple cameras [5] or a lens array [21, 22] to obtain directional view images. Figure 1 shows a diagram of a pickup system using a charge coupled device (CCD) and a lens array. The latter obtains depth information directly with time-of-flight cameras [16], structured light [17], or a coded aperture [18]. Since all pickup methods provide 3D information of objects, it is possible to convert 3D information from one form to another. However, the conversion usually takes time and is inappropriate for real-time processing [20, 23, 24].

Fig. 1 Diagram of pickup system using a CCD and a lens array.

3D display has various methods to visualize 3D information, but most of them use a 2D flat panel display with optical devices [11–13]. Holography is a notable exception, but it requires complex calculation and a bulky setup [11, 13]. Most 3D display systems use 3D information in the form of directional view images rather than using depth information directly. The depth-fused display uses depth information directly, but it has limitations such as a very narrow viewing angle or a limited depth range [25]. Therefore, considering the available 3D display methods, the former pickup approach, which provides different view images using multiple cameras or a single camera with a lens array, is suitable for real-time visualization.

The multiple camera pickup method provides several perspective views with different viewing directions. Figure 2(a) shows the scheme of a five-camera pickup method. The cameras are located at the camera plane with different viewing directions, and their camera axes converge to a convergence point as shown in Fig. 2(a). Since this pickup method provides as many view images as there are cameras, it accords well with the multi-view display shown in Fig. 2(b) [2, 5]. In Fig. 2(a), five cameras provide five view images. However, the liquid crystal (LC) panel in a multi-view display shows only one elemental image as shown in Fig. 2(b). Therefore, an image interweaving process must be applied to the five view images to generate one elemental image, and this process is usually performed in real time [5]. However, the multiple camera pickup method is bulky and needs delicate calibration; otherwise it causes severe crosstalk [26].

Fig. 2 Diagram of various pickup and 3D display methods: (a) multiple camera pickup method, (b) multi-view display, (c) lens array pickup method, and (d) integral imaging.

In contrast, the lens array pickup method provides many perspective views with the same direction and the same viewing angle as shown in Fig. 2(c). Since the lens array pickup method provides as many view images as there are elemental lenses in the lens array, it is natural to pair this method with integral imaging, which is shown in Fig. 2(d) [21]. Integral imaging uses the lens array pickup image directly as the elemental image, as shown in Figs. 2(c) and 2(d), because Lippmann first introduced integral imaging in that way in 1908 [27]. With this method, the LC panel in integral imaging directly uses the image from lens array pickup in real time, but it suffers from the pseudoscopic problem [1–3, 6, 8, 9]. The pseudoscopic problem is the depth-reversal problem that exists only in integral imaging, not in multi-view displays [2]. For instance, the observer in Fig. 2(d) perceives the mushroom object as closer than the hamburger object, even though the hamburger object is closer to the CCD. Several methods have been proposed to resolve the pseudoscopic problem, but they were expensive [1, 9], limited to virtual mode [12], or unsuitable for real-time processing because of calculation complexity [7]. Therefore, although the lens array pickup method has various advantages (it is simple, uses only one camera, is cheaper than the multiple camera pickup method, and requires less complex calibration), the pseudoscopic problem remains the critical weak point preventing its wide use in integral imaging with real objects.

The multiple camera pickup method can be applied to real-time 3D visualization with a multi-view display without the pseudoscopic problem [5]. However, it is bulky and needs careful calibration between multiple cameras. The lens array pickup method is simple and free from calibration between cameras, but it suffers from the pseudoscopic problem. To realize real-time capturing and 3D visualization using the benefits of the lens array pickup method and integral imaging, we focused on this asymmetry between the two pickup methods. Why is there no pseudoscopic problem in the multiple camera pickup method with a multi-view display? We found a simple difference: there was no interweaving process between the lens array pickup method and integral imaging. Therefore, our group recently proposed a general interweaving process between the lens array pickup method and integral imaging [2], which is a simple pixel mapping algorithm that works in real time.

In this paper, we propose a real-time 3D visualization method based on the lens array pickup method and integral imaging with the recently proposed conversion algorithm. We present a lens array pickup system with a high frame rate CCD and a lens array, real-time image processing using the conversion algorithm, and an integral imaging system with an LC display (LCD) and a lens array. A preliminary feasibility test was recently reported by our group at a conference [3]. However, the image quality of those experimental results was not sufficient to show the validity of the proposed system. In this paper, we improve the 3D image quality by a gap control method with depth plane adjustment. Based on the designed system and calibration method, we apply the gap control method with depth plane adjustment to enhance 3D image quality. This method provides a guideline for selecting the proper parameter of the pixel mapping algorithm to enhance 3D image quality. We also provide pictures and videos to show the validity of the proposed system and the improved 3D image quality.

In Section 2, a detailed analysis of the conversion algorithm and the adjustable depth plane method is introduced, and the image quality improvement by the gap control method with depth plane adjustment is analyzed. The system design and implementation are presented in Section 3. The experimental setup and experimental results with images and videos are shown in Section 4. Finally, Section 5 concludes the paper.

2. Real-time pixel mapping algorithm without pseudoscopic problem

2.1 Pixel mapping algorithm

As mentioned above, the real-time pixel mapping algorithm is inspired by the interweaving process of multiple camera pickup and multi-view display. In the multi-view interweaving process, all the pixels of the multiple CCDs are back-propagated to the corresponding display pixels through the lenticular lens. Therefore, the multi-view display system provides several view images at specific viewpoints, which are the exact positions of the pickup CCDs. The real-time pixel mapping algorithm works on the same principle. First, we set the pickup CCD plane and the display panel plane, and then the pixels of the CCD are back-propagated to the display panel plane.

In a multi-view display, the viewing distance is fixed by the distance between the pickup CCDs and the convergence plane, as shown in Fig. 2(a). In integral imaging, however, there is no fixed viewing distance, and each lens of the lens array contains light field information with the same viewing angle and the same direction. Therefore, the distance between the pickup lens array plane and the display lens array plane is adjustable. In fact, the distance can be any of several lengths determined by the lens pitch and the display panel pixel pitch, because the viewpoint planes repeat according to their geometric relation.

Figure 3 shows several possible display lens array planes in the real-time pixel mapping algorithm. Compared to Figs. 2(c) and 2(d), the 3D images in Fig. 3 have correct depth information. The scheme resembles a multi-view display with many pickup cameras whose axes are parallel, as shown in Figs. 2(a) and 2(b). The distance $D_k$ between the pickup lens array plane and the k-th display lens array plane is given by

Fig. 3 Possible display lens array planes in the real-time pixel mapping algorithm.

$$D_k = \frac{p_l}{p_p}\,kf = knf,$$

where $p_l$ is the lens pitch, $f$ is the focal length of the lens, $p_p$ is the display panel pixel pitch, and $n$ is the number of pixels behind a lens. For each value of k, the correspondence between CCD pixels and display panel pixels changes. The mapping can be derived simply by a ray tracing algorithm, and the relation matrix is as follows:

$$\begin{pmatrix} i' \\ j' \\ s' \\ t' \end{pmatrix} = \begin{pmatrix} 1 & 0 & k & 0 \\ 0 & 1 & 0 & k \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} i \\ j \\ s \\ t \end{pmatrix} + \begin{pmatrix} -kn \\ -kn \\ n+1 \\ n+1 \end{pmatrix},$$

where the s-th and t-th CCD pixel behind the i-th and j-th lens of the pickup lens array is mapped to the s'-th and t'-th display panel pixel behind the i'-th and j'-th lens of the display lens array, respectively [2].

Figure 4 shows examples of the pixel mapping algorithm for several k values, where the color indicates the lens number and the number inside each color indicates the pixel number. Figure 4(a) shows the captured light field of 3D objects, and the corrected elemental image is obtained by pixel mapping of the captured light field. Since the number of pixels behind each lens is identical for all lenses in the lens array, pixels with the same number contain the same directional information of the 3D objects [2]. Therefore, although the pixels are shuffled in a complex way by the real-time pixel mapping algorithm, the pixel numbers behind each lens are not changed, as shown in Fig. 4. By changing the k value, the relative position between the display lens array and the 3D objects can be varied. The algorithm works in real time regardless of the k value, and the depth of the 3D object can easily be adjusted from real to virtual images in real time.
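For illustration, the following Python sketch implements the pixel mapping relation above as we read it; the actual system performs the same mapping in real time with OpenCV in C++, and the function name, the 0-based lens indexing, and the bounds check are our own choices rather than part of the published algorithm. A nested loop like this is far too slow for real-time use and would need to be vectorized or written in C++ in practice.

```python
import numpy as np

def pixel_mapping(captured, n, k):
    """Map each captured pixel (i, j, s, t) to its display position (i', j', s', t').

    captured : (H, W) or (H, W, 3) array holding the lens array pickup image,
               assumed to contain an integer number of n x n pixel cells.
    n        : number of pixels behind each lens along one axis.
    k        : index of the chosen display lens array plane (depth plane).
    """
    corrected = np.zeros_like(captured)
    num_i = captured.shape[0] // n          # number of lens rows
    num_j = captured.shape[1] // n          # number of lens columns
    for i in range(num_i):                  # lens row index (0-based here)
        for j in range(num_j):              # lens column index
            for s in range(1, n + 1):       # local pixel row, 1-based as in the text
                for t in range(1, n + 1):   # local pixel column
                    ip = i + k * s - k * n  # shifted lens row
                    jp = j + k * t - k * n  # shifted lens column
                    sp = n + 1 - s          # flipped local pixel row
                    tp = n + 1 - t          # flipped local pixel column
                    if 0 <= ip < num_i and 0 <= jp < num_j:
                        corrected[ip * n + sp - 1, jp * n + tp - 1] = \
                            captured[i * n + s - 1, j * n + t - 1]
    return corrected
```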

Fig. 4 Principles of the pixel mapping algorithm with various display lens array planes: (a) captured image with a lens array and a CCD, (b) corrected elemental image obtained by the pixel mapping algorithm with k = 0, (c) with k = 1, and (d) with k = 2.

2.2 Image quality improvement by gap control method with depth plane adjustment

Adjusting the depth plane not only relocates 3D images between real and virtual space but also improves the quality of the reconstructed 3D images. In integral imaging using an LCD, the color separation problem caused by the red/green/blue (RGB) subpixels degrades image quality. Figure 5(a) shows the color separation problem in integral imaging caused by RGB subpixels. Among several methods to mitigate the image degradation caused by LCD subpixels [28–30], the simplest and most useful is the gap control method [28]. By locating the lens array not exactly at the focal length from the display panel but slightly closer or farther, the color separation problem can be reduced as shown in Fig. 5(b). In this paper, we also apply this method in the 3D display stage by setting the gap slightly larger than the focal length.

Fig. 5 Principles of the gap control method with depth plane adjustment: (a) color separation problem in integral imaging, (b) gap control method, (c) gap control method with depth plane adjustment, (d) simulation result of the color separation problem (g = f), (e) simulation result of the gap control method (k = 0, g = 1.5f), and (f) simulation result of the gap control method with depth plane adjustment (k = 1, g = 1.5f).

However, this method causes another image degradation problem. Since the elemental image is generated for focal mode, in which the gap is assumed to be exactly the focal length, the display lens array should be located at that distance from the display panel. Strictly speaking, however, the gap control method places the system in real or virtual mode integral imaging, so the central depth plane is fixed and the depth region is limited around the central depth plane [31]. Hence, although the theoretical expressible depth range of focal mode integral imaging is infinite, in practice the depth range is limited by the quantized subpixel pitch. If the pixel pitch were infinitesimal, the color separation problem would be solved and the expressible depth range would become infinite.

However, this image degradation problem can be reduced by adjusting the relative position of the display lens array and the 3D objects. By locating the 3D objects around the central depth plane, we can fully utilize the expressible depth range. For example, although the 3D objects are quite far from the lens array at the pickup stage, as shown in Fig. 4(a), the reconstructed 3D images can be located around the lens array at the display stage by applying the pixel mapping algorithm, as shown in Fig. 4(d). By using this method, we can also locate the 3D images around the central depth plane, and the image degradation problem can be resolved. Figure 5(c) shows an example of the gap control method with depth plane adjustment. This method may not be effective for 3D objects with large depth differences; however, it is valid for most cases in which the scale of the objects is comparable to their depth differences.

Figures 5(d)-5(f) are simulation results verifying the gap control method with depth plane adjustment. For the simulation, we use our own integral imaging simulator based on MATLAB. This simulator models the subpixel structure of the LCD panel, which allows the color separation problem of the proposed system to be examined. The simulation conditions are identical to those of the implemented system, whose specification is shown in Table 1. We assume that the gap is about 1.5f (4.85 mm), which is one of the reasonable values for the gap control method because the thickness of the lens array is already 1.0f (3.3 mm). Therefore, the central depth plane is located 26.4 mm in front of the lens array [31], and the 3D images can be located around the central depth plane when k is 1. Note that the implemented situation is quite different from the diagrams shown in Figs. 3 and 4. Without the gap control method, the color separation problem is severe and makes the 3D images hard to recognize, as shown in Fig. 5(d). By applying the gap control method, the image quality is clearly improved, as shown in Fig. 5(e). However, the resulting image is still blurred and sliced at the borders because of the limited depth range of real mode integral imaging, as expected. By applying the depth plane adjustment to the gap control method, the image is clearer and much less blurred, as shown in Fig. 5(f). Therefore, the simulation results show the validity of the proposed gap control method with depth plane adjustment.
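As a quick check of the chosen depth plane, the short calculation below evaluates the relation D_k = knf for the implemented parameters. The lens pitch and focal length are stated in the text; the value n = 8 pixels per lens is our assumption, inferred from the quoted 26.4 mm plane for k = 1 (26.4 mm / 3.3 mm = 8), and should correspond to the pixel pitch in Table 1.

```python
# Worked numbers for the implemented system: lens pitch and focal length are
# stated in the text; n = 8 pixels per lens is an assumption (n = p_l / p_p).
f = 3.3      # focal length of the lens array [mm]
p_l = 1.0    # lens pitch [mm]
n = 8        # assumed number of display pixels behind one lens

for k in range(3):
    D_k = k * n * f   # distance from pickup lens array plane to the k-th display plane
    print(f"k = {k}: D_k = {D_k:.1f} mm")

# Output: 0.0 mm, 26.4 mm, 52.8 mm. Choosing k = 1 therefore places the
# reconstructed 3D images near the stated central depth plane for the
# enlarged gap g ~ 1.5f.
```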

Table 1. Specification of implemented system

This gap control method with depth plane adjustment provides a guideline for selecting the proper k value. Beyond the real-time operation of the pseudoscopic-orthoscopic conversion algorithm, it allows the observer to enjoy real-time 3D images without the image degradation otherwise caused by the gap control method.

3. Real-time capturing and 3D visualization system

The experimental setup of the real-time capturing and 3D visualization system is implemented with a lens array pickup system and an integral imaging display system. A high frame rate CCD and a lens array are used for the lens array pickup system, and an LC panel and an identical lens array are used for the integral imaging system. Figure 6 shows the implemented real-time capturing and 3D visualization system. For real-time capturing and 3D visualization, we use a high frame rate CCD from Allied Vision Technologies (Prosilica GX2300C) and a recent PC (Intel i7 processor with an NVIDIA GTX 470 graphics card), and for high quality 3D visualization we use a high resolution LCD panel (IBM 22-inch, 3840 × 2400). The real-time pixel mapping algorithm is implemented with OpenCV programming only, without any GPU programming. We use identical 1 mm pitch lens arrays with 3.3 mm focal length for pickup and display. The detailed specification is listed in Table 1.
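For reference, a minimal Python/OpenCV sketch of the capture, conversion, and display loop is given below, reusing the pixel_mapping sketch from Section 2.1. The camera index, window handling, and frame format are illustrative assumptions; the actual system drives the GigE Prosilica camera and the UHD panel from C++ with OpenCV.

```python
import cv2  # OpenCV Python bindings; the original system uses OpenCV from C++

N = 8   # assumed number of pixels behind one lens (see the calculation in Section 2.2)
K = 1   # selected display lens array plane (depth plane index)

cap = cv2.VideoCapture(0)   # stand-in for the GigE Prosilica GX2300C interface
while True:
    ok, frame = cap.read()                     # captured elemental image (lens array pickup)
    if not ok:
        break
    elemental = pixel_mapping(frame, N, K)     # real-time conversion of Section 2.1
    cv2.imshow("elemental image", elemental)   # shown full screen behind the display lens array
    if cv2.waitKey(1) == 27:                   # press ESC to stop
        break
cap.release()
cv2.destroyAllWindows()
```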

Fig. 6 Implementation of the proposed real-time capturing and 3D visualization system.

Since the proposed system is composed of several sensitive optical devices, calibration is a practical issue for the implementation. First, the pickup lens array should be calibrated with the CCD, and the start pixel should be found, because the real-time pixel mapping algorithm takes the start pixel as an initial input; otherwise it cannot be performed correctly. The display lens array should also be calibrated with the displayed elemental image. Furthermore, one pixel of the CCD should be in one-to-one correspondence with one pixel of the LC panel, because an image resizing procedure takes time and makes it hard to satisfy the real-time condition on a PC. Therefore, we use two rulers to match the pickup world and the display world at the same scale. The detailed calibration method was introduced in our previous work [3].
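The role of the start pixel can be illustrated with the small hypothetical helper below: it merely crops the captured frame so that the calibrated start pixel becomes the corner of the first lens cell, preserving the one-to-one pixel correspondence without any resizing. The function and its arguments are our own illustration, not part of the calibration procedure of [3].

```python
def crop_to_lens_grid(frame, start_row, start_col, n, lens_rows, lens_cols):
    """Crop the captured frame so the calibrated start pixel becomes the top-left
    corner of the first n x n lens cell; no resizing is performed, so each CCD
    pixel stays in one-to-one correspondence with a display panel pixel."""
    return frame[start_row:start_row + n * lens_rows,
                 start_col:start_col + n * lens_cols]
```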

4. Experimental results

With the implemented system, we present capturing and 3D visualization experiments in this section. Figure 7 shows the experimental results. The parallax within the viewing angle is clearly provided, as shown in Fig. 7(a). The real-time pixel mapping algorithm is applied so that the reconstructed 3D images contain orthoscopic depth information: the hamburger object is at the front not only in the pickup stage but also in the display stage. To verify real-time capturing and 3D visualization, we recorded the 3D objects and the 3D images together as shown in Fig. 7(b) and swung the mushroom object (see Media 2). The result shows that the 3D objects and the 3D images are synchronized and that the system operates in real time. The resolution of one frame is over ultra high definition (UHD, 3840 × 2400). The calculation time for one frame is about 50 milliseconds, and the 3D images are provided at up to 20 frames per second. Figure 7(c) shows that the implemented system reproduces the details of the 3D objects, including a human hand, very well (see Media 3).

Fig. 7 Experimental results of the proposed real-time capturing and 3D visualization system: (a) left view, center view, and right view of the reconstructed 3D images (Media 1), (b) real-time 3D visualization of objects (Media 2), and (c) reconstructed 3D image of 3D objects and a human hand (Media 3).

Furthermore, to verify the gap control method with depth plane adjustment, experiments are performed with different k values. Figure 8 shows the resulting images. When k is 1, the central depth plane is located around the hamburger object, so Fig. 8(a) shows a high quality hamburger image and a blurred mushroom image. When k is 2, the central depth plane is located about 1 cm behind the mushroom object, so Fig. 8(b) shows a mushroom image of similar quality but a poor quality hamburger image. In this case, the optimal k value is 1, and this result shows that the gap control method with depth plane adjustment is valid and improves image quality. Furthermore, since the depth planes are instantly adjustable at any time, the observer can freely find the optimal depth plane to watch.

Fig. 8 Experimental results of the proposed real-time capturing and 3D visualization system with different k values: (a) central depth plane located around the hamburger object (k = 1) and (b) central depth plane located about 1 cm behind the mushroom object (k = 2).

5. Conclusion

In this paper, we proposed and demonstrated a real-time capturing and 3D visualization method based on integral imaging. We presented a lens array pickup system with a high frame rate CCD, and an integral imaging system with an LCD and a lens array. Furthermore, we proposed a gap control method with depth plane adjustment to realize real-time 3D images without the associated image degradation. This method also provides a guideline for selecting the proper depth plane in the real-time pixel mapping algorithm to improve 3D image quality. The implemented system provided real-time 3D images with UHD resolution at 20 frames per second. Simulations and experiments showed the validity of the proposed method.

Acknowledgment

This work was supported by the National Research Foundation of Korea grant funded by the Korean government (MSIP) through the National Creative Research Initiatives Program (#2007-0054847).

References and links

1. F. Okano, J. Arai, H. Hoshino, and I. Yuyama, “Three-dimensional video system based on integral photography,” Opt. Eng. 38(6), 1072–1077 (1999).

2. J.-H. Jung, J. Kim, and B. Lee, “Solution of pseudoscopic problem in integral imaging for real-time processing,” Opt. Lett. 38(1), 76–78 (2013).

3. J. Kim, J.-H. Jung, and B. Lee, “Real-time pickup and display integral imaging system without pseudoscopic problem,” Proc. SPIE 8643, 864303 (2013).

4. B. Javidi, S. Yeom, I. Moon, and M. Daneshpanah, “Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events,” Opt. Express 14(9), 3806–3829 (2006).

5. W. Matusik and H. Pfister, “3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes,” ACM Trans. Graph. 23(3), 814–824 (2004).

6. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997).

7. G. Li, K.-C. Kwon, K.-H. Yoo, S.-G. Gil, and N. Kim, “Real-time display for real-existing three-dimensional objects with computer-generated integral imaging,” in Proceedings of the International Meeting on Information Display (IMID) 2012, Daegu, Korea, Aug. 2012 (Society for Information Display and Korean Society for Information Display), pp. 471–472.

8. M. Martinez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Formation of real, orthoscopic integral images by smart pixel mapping,” Opt. Express 13(23), 9175–9180 (2005).

9. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37(11), 2034–2045 (1998).

10. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Rep. CTSR 2005-02 (Stanford University, 2005).

11. B. Lee, “Three-dimensional displays, past and present,” Phys. Today 66(4), 36–41 (2013).

12. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009).

13. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011).

14. Y. Kim, K. Hong, J. Yeom, J. Hong, J.-H. Jung, Y. W. Lee, J.-H. Park, and B. Lee, “A frontal projection-type three-dimensional display,” Opt. Express 20(18), 20130–20138 (2012).

15. Y. Takaki, Y. Urano, S. Kashiwada, H. Ando, and K. Nakamura, “Super multi-view windshield display for long-distance image information presentation,” Opt. Express 19(2), 704–716 (2011).

16. M. Kawakita, K. Iizuka, H. Nakamura, I. Mizuno, T. Kurita, T. Aida, Y. Yamanouchi, H. Mitsumine, T. Fukaya, H. Kikuchi, and F. Sato, “High-definition real-time depth-mapping TV camera: HDTV Axi-Vision Camera,” Opt. Express 12(12), 2781–2794 (2004).

17. E.-H. Kim, J. Hahn, H. Kim, and B. Lee, “Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection,” Opt. Express 17(10), 7818–7830 (2009).

18. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26(3), 70 (2007).

19. J.-H. Park, Y. Kim, J. Kim, S.-W. Min, and B. Lee, “Three-dimensional display scheme based on integral imaging with three-dimensional information processing,” Opt. Express 12(24), 6020–6032 (2004).

20. J.-H. Park, M.-S. Kim, G. Baasantseren, and N. Kim, “Fresnel and Fourier hologram generation using orthographic projection images,” Opt. Express 17(8), 6320–6334 (2009).

21. J.-H. Park, G. Baasantseren, N. Kim, G. Park, J.-M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Opt. Express 16(12), 8800–8813 (2008).

22. J.-H. Jung, K. Hong, G. Park, I. Chung, J.-H. Park, and B. Lee, “Reconstruction of three-dimensional occluded object using optical flow and triangular mesh reconstruction in integral imaging,” Opt. Express 18(25), 26373–26387 (2010).

23. J.-H. Jung, J. Yeom, J. Hong, K. Hong, S.-W. Min, and B. Lee, “Effect of fundamental depth resolution and cardboard effect to perceived depth resolution on multi-view display,” Opt. Express 19(21), 20468–20482 (2011).

24. H. Kim, J. Hahn, and B. Lee, “Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography,” Appl. Opt. 47(19), D117–D127 (2008).

25. S. G. Park, J.-H. Kim, and S.-W. Min, “Polarization distributed depth map for depth-fused three-dimensional display,” Opt. Express 19(5), 4316–4323 (2011).

26. R. Kaptein and I. Heynderickx, “Effect of crosstalk in multi-view autostereoscopic 3D displays on perceived image quality,” SID Symposium Digest of Technical Papers 38(1), 1220–1223 (2007).

27. G. Lippmann, “La photographie intégrale,” C. R. Acad. Sci. 146, 446–451 (1908).

28. Y. Kim, J. Yeom, J.-H. Jung, J. Hong, and B. Lee, “View image error analysis based on focal mode and virtual mode in three-dimensional display using lenses,” Proc. SPIE 7956, 79560S (2011).

29. Y. Kim, G. Park, J.-H. Jung, J. Kim, and B. Lee, “Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array,” Appl. Opt. 48(11), 2178–2187 (2009).

30. J. Kim, J.-H. Jung, J. Hong, J. Yeom, and B. Lee, “Elemental image generation method with the correction of mismatch error by sub-pixel sampling between lens and pixel in integral imaging,” J. Opt. Soc. Korea 16(1), 29–35 (2012).

31. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, “Analysis of viewing parameters for two display methods based on integral photography,” Appl. Opt. 40(29), 5217–5232 (2001).

Supplementary Material (3)

Media 1: MOV (64 KB)     
Media 2: MOV (669 KB)     
Media 3: MOV (1176 KB)     
