Optica Publishing Group

High-quality integral videography using a multiprojector

Open Access

Abstract

Integral videography (IV) is an animated extension of integral photography. Despite IV’s many advantages, the quality of its spatial images has thus far been poor; the pixel pitch of the display and the lens pitch are considered the main factors limiting IV image quality. Our solution for increasing pixel density is to use multiple projectors to create a high-resolution image and to project the resultant image onto a small screen through long-zoom-lens projection optics. We manufactured a lens array for the display device, and here we present experimental results obtained with two SXGA projectors. The pixel pitch and lens pitch of the new display are 85 µm and 1.016 mm, respectively. The multiprojector IV display device has a spatial resolution of approximately 1, 2, and 3 mm for image depths of 10, 35, and 60 mm, respectively, in front of and behind the lens array.

©2004 Optical Society of America

1. Introduction

Integral photography (IP) [1] is being used in exciting new tools for autostereoscopic displays. IP provides three-dimensional (3-D) images without using any supplementary glasses or tracking devices. This autostereoscopic technique is often called “fly’s eye lens photography” because of its use of an array of tiny lenses for taking and displaying the image. As a result, it possesses both horizontally and vertically varying directional information, thus producing a full parallax image as do most types of holograms. IP has attracted much attention for still photography and video in a variety of 3-D image fields. Okano et al. developed a real-time pickup and 3-D display system based on IP, which uses a charge-coupled device camera and a liquid-crystal display (LCD) panel for pickup and reconstruction, respectively [2]. Igarishi et al. proposed a method that produces the elemental images by using a computer instead of a real 3-D pickup; it is called CGIP (computer-generated integral photography) [3]. CGIP can be implemented in either real IP mode or virtual IP mode [4].

Our research extends IP into integral videography (IV) [5]. IV uses a fast image-rendering algorithm to project a computer-generated graphical object through a microconvex lens array by means of multiple rays. Figure 1 shows the principle of IV. Each point in 3-D space is reconstructed by the convergence of rays from pixels on the computer display through lenses in the array. The observer can see any point on the display from various directions as if it were fixed in 3-D space. IV can display animated 3-D objects while retaining the merits of IP.
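The convergence geometry of Fig. 1 can be sketched numerically. Under an assumed pinhole-lenslet model, the elemental-image pixel that reproduces a given 3-D point through a given lenslet follows from similar triangles; the constants below take the values reported in Sec. 3.1, and `elemental_pixel` is a hypothetical helper for illustration, not the authors' rendering algorithm:

```python
# Sketch of the point-to-pixel mapping behind Fig. 1, assuming ideal
# pinhole lenslets; numeric values are from Sec. 3.1 of the paper.
LENS_PITCH = 1.016   # mm, lenslet pitch
PIXEL_PITCH = 0.085  # mm, projected pixel pitch on the screen
GAP = 1.4            # mm, lens focal length = lens-to-screen distance

def elemental_pixel(point, lens_center):
    """Screen position (mm) whose ray through the lenslet centred at
    `lens_center` passes through the 3-D `point` = (x, y, z), with
    z > 0 in front of the array; the screen lies GAP behind it."""
    px, py, z = point
    cx, cy = lens_center
    # Similar triangles: the lateral offset on the screen is the
    # lateral offset of the point, scaled by GAP / z.
    return cx + (cx - px) * GAP / z, cy + (cy - py) * GAP / z

# A point 10 mm in front of the array, seen through the neighbouring
# lenslet: the screen spot is offset from that lenslet's centre by
# LENS_PITCH * GAP / 10 ≈ 0.14 mm, i.e. about 1.7 pixels.
sx, sy = elemental_pixel((0.0, 0.0, 10.0), (LENS_PITCH, 0.0))
print((sx - LENS_PITCH) / PIXEL_PITCH)
```

Repeating this for every lenslet yields the full elemental image for one reconstructed point; a renderer applies it to every visible surface point of the object.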

Fig. 1. Principle of IV.

Most previous studies have focused on widening the viewing angle of the IP image and on enhancing the depth of the IP image [6–12]. Jung et al.’s wide-viewing-angle integral imaging uses orthogonal polarization switching [9] and an aspheric Fresnel-lens array [10]. Min et al. used double display devices to enhance 3-D integral imaging [11]. Park et al. proposed a method for enhancing the depth of IP by using the birefringence of a uniaxial crystal plate [12]. Although the advantages of IP/IV have been proven in feasibility studies and applications, an unresolved issue is the viewing resolution (image quality) of the reconstructed 3-D IP/IV image.

Two factors chiefly affect the viewing resolution. One is the lens pitch, which determines the spatial resolution of the reconstructed image [13]. The other is the resolution and pixel density of the 2-D elemental images, which are the components of the 3-D image to be reconstructed. To our knowledge, presently available full-color display devices have a maximum pixel density of about 200 dpi. Such devices cannot realize high-resolution 3-D imaging because they provide a very limited number of pixels. In our previous work, IV image quality was limited because the IV image resolution is theoretically bound to the pixel density of the display (e.g., an LCD). Erdmann et al. used a 3-D camera system employing a scanning microlens array to pick up the image and a relay lens to match the elemental-image pitch on the CRT to the microlens-array pitch [14]. Although such methods enable high-resolution integral photography, further work is needed to achieve real-time 3-D imaging.

In this paper, we report on a multiprojector display employing a long-focal-length projection technique to achieve high-quality IV images, together with parallel calculation to accelerate IV rendering. We also developed a lens array especially for this display device. We evaluated the feasibility of the display by generating 3-D CT autostereoscopic images and using them for planning image-guided surgery.

2. System configuration and proposed methods

The projectors of the display are arranged in an array and produce a high-resolution image on a rear-projection screen. We use a long-focal-length projection technique to form a high-pixel-density image on the screen. High computational power is needed for parallel image rendering and synchronized display across the multiple projectors (Fig. 2).

Fig. 2. System configuration of multiprojector IV display.

2.1 High-resolution and high pixel density image using multiple projectors

The limited resolution of IV causes several problems. Since the depth information is encoded into the 2-D image, its projection into 3-D space has a much lower resolution. Furthermore, the displayed pixels are not points but have a finite area; with increasing distance from the lens array, the reconstructing rays therefore spread and blur the image. Increasing the pixel count and pixel density is necessary to realize high-quality IV imaging.

Despite recent progress in display technologies such as organic light-emitting diodes (OLEDs), the most economical approach to making a large-format, high-resolution display is to use an array of projectors [15]. Bresnahan et al. developed a large-scale high-resolution rear-projection technique for a passive stereo display system, in which multiple workstations drive projectors to produce a high-resolution stereo image [16].

Our two-projector array (Fig. 2) uses long-focal-length projection on a rear-projection screen. Each projector projects part of the entire elemental image onto the screen. Because each projector body is larger than its required projected image, the two small images cannot be projected directly side by side; we therefore project each image over a long distance and fold it onto the screen with a set of mirrors. Each projector's focus is adjusted to the distance to the screen. The key to making this display accurate was to edge-match each projected image to its neighbor without overlap or separation lines; the projected image must also be free from geometric deformation.

Although a conventional large-scale multiprojection technique can produce high-resolution images, the pixel density of the projected image is too low to create an autostereoscopic image; the pixels of the projected image used in an IV autostereoscopic display must be of high density. Long-focal-length projection keeps the projected image small and thus delivers a high-pixel-density (small-pixel-pitch) image to the screen. We altered the combination and arrangement of lenses to achieve new long-focal-length projection optics. The pixel density of the projected image can exceed 300 dpi.

2.2 Lens array and spatial image formation

The microlens array is made of two lenticular (cylindrical-lens) sheets crossed at a right angle (Fig. 3(a)). The lens sides of the two sheets are placed in contact; since the focal length is determined only by the lens surface, this arrangement gives the two crossed arrays the same focal length. The diffusion side of the screen is placed at the focal plane of the lens array. Figure 3(b) shows a photograph of the fabricated lens array, which is made of plastic sheet. Although it is difficult to control the lens pitch and thickness because the plastic sheet is heat molded, the process of making a lenticular sheet is well established for lenticular stereographs. This method enables a large-size microlens array for IP/IV imaging.

Fig. 3. Design and fabrication of the microlens array. (a) The lens array is made of two lenticular sheets crossed at a right angle. (b) Fabricated lens array.

For IV to work, the high-resolution, high-pixel-density projected image on the screen must be free from distortion and reflection, so a flat screen with an antireflective, antistatic coating [17] is used to display the image. The screen is placed at the rear of the microlens array. When the rendered elemental IV image is projected onto the screen, the autostereoscopic image is formed as a spatial image.

To adjust the positions of the projected images, four mirrors (two per projector) reflect the images onto the screen. The middle two mirrors (M1-2, M2-2 in Fig. 2) are fixed; the other two (M1-1, M2-1 in Fig. 2) allow fine adjustments, i.e., rotation about two axes and movement along the projector's optical axis. The lens sets of the projectors enable adjustment of zoom and focus. Furthermore, we adjusted the lateral displacement along two axes and the inclination of the screen by using three micrometer heads that fix the screen at the focal plane of the convex lens array.

3. Experimental results

3.1 Components and device

The array of two projectors (AP-2000, SXGA, APTi, Fujisawa, Japan) displays 1280×2048 pixels on a 108.4×173.4-mm (300-dpi) rear-projection screen. The pitch of each pixel on the screen is 0.085 mm. Each lenslet is square with a base of 1.016×1.016 mm, covering 12×12 pixels of the projected image. The focal length of the lenslet is 1.4 mm. The main specifications of the IV display are listed in Table 1. A photograph of the multiprojector IV display device is shown in Fig. 4.
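These figures are mutually consistent; a quick check using only the numbers quoted in this subsection:

```python
# Consistency check of the display specifications in Table 1.
PIXELS = (1280, 2048)       # projected resolution (two SXGA projectors)
SCREEN_MM = (108.4, 173.4)  # rear-projection screen size, mm
LENS_PITCH = 1.016          # lenslet pitch, mm

pixel_pitch = SCREEN_MM[0] / PIXELS[0]         # mm per pixel (~0.085)
dpi = 25.4 / pixel_pitch                       # pixels per inch (~300)
pixels_per_lenslet = LENS_PITCH / pixel_pitch  # pixels under one lenslet (~12)

print(round(pixel_pitch, 4), round(dpi), round(pixels_per_lenslet, 1))
```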

Fig. 4. High-resolution multiprojection IV display device. The display includes two projectors and mirrors for optical reflection.

Table 1. Main specifications of multiprojection IV display

3.2 Resolution measurement of IV image

The quality of the IV image depends primarily on the pixel density of the display (the number of pixels per unit display area) and the lens pitch of the lens array. We measured the spatial resolution of the IV display using a set of black and white stripes with widths from 1.0 to 4.0 mm in steps of 0.1 mm. The stripes were projected at different depths in front of and behind the IV display (Fig. 5), and the IV images of the projected stripes were captured with a digital camera (Nikon D1X, 3008×1536 pixels). The focal length and f-number of the camera lens were 50 mm and 16, respectively, giving a pupil diameter of about 3 mm, similar to that of the human eye in an ordinary environment. The resolution was determined as the minimum width at which the stripes could be clearly observed.

Figure 6 shows an example of the IV image. The black and white stripes are displayed from 25 mm in front of the display to 25 mm behind it. The display has a spatial resolution of about 1.0, 2.0, and 3.0 mm for image depths of 10, 35, and 60 mm, respectively, in front of and behind the lens array (Fig. 7). The real IV image is superior to the virtual IV image; the spatial deformation of the IV image is caused by a mismatch between the lens focal distance and the lens-to-screen gap. Error between the lens pitch and the width of the elemental image might also cause deformation, thus significantly affecting depth perception at large depths.
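The measured resolution grows roughly linearly with depth. A least-squares fit over the three reported data points (a sketch for illustration, not part of the authors' analysis) gives the blur growth rate:

```python
import numpy as np

# Measured spatial resolution of the IV display (Sec. 3.2).
depth_mm = np.array([10.0, 35.0, 60.0])    # image depth from the lens array
resolution_mm = np.array([1.0, 2.0, 3.0])  # minimum resolvable stripe width

# The three points are collinear, so the fitted slope is the blur
# growth rate with depth (~0.04 mm of blur per mm of depth) and the
# intercept (~0.6 mm) approximates the resolution at the array plane.
slope, intercept = np.polyfit(depth_mm, resolution_mm, 1)
print(slope, intercept)
```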

Fig. 5. Schematic diagram of measuring the IV image spatial resolution by projecting black and white stripes at different depths.

Fig. 6. Measured IV image spatial resolution. The numbers indicate the image depths in front of (real IV image) and behind (virtual IV image) the lens array. The green part shows the real IV image of the stripes arranged as in Fig. 5.

Fig. 7. Measured spatial resolutions of real and virtual IV images.

3.3 Motion parallax and viewing zone

When an observer is in motion, the visual scene surrounding the observer is represented as a drifting image on the retina. The drift speed depends on the relative distance of a given object: an object close to the observer drifts faster than one far away. This relative motion of the visual image on the retina is known as motion parallax, and the visual system exploits it to generate the sensation of depth [18]; it is considered a very efficient cue to relative depth.

We scanned a human heart by CT and rendered the IV image from the volumetric data (512×512 pixels×180 slices). The total number of pixels in all the elemental images is 1280×2048 (Fig. 8). For correct motion parallax, we observed the 3-D image at a distance of about 50 cm from the display and confirmed that the IV image could be observed continuously across the viewing zone. Figure 9 shows a movie of the IV images of the human heart taken from various directions by a digital video camera.

We also calculated the width of the viewing area for a given viewing distance and resolution requirement. The viewing area is expressed as the angle measured at the center of the display surface (the exit-pupil plane of the lens array) and indicates how far an observer can move laterally without seeing a flipped image. The viewing angle of this IV display is about ±20°, both horizontally and vertically.
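The reported ±20° zone is consistent with a simple geometric estimate: the viewing half-angle is bounded by the ray from the edge of one elemental image through its lenslet centre (a standard integral-imaging approximation, assumed here rather than taken from the paper):

```python
import math

LENS_PITCH = 1.016  # mm, lenslet pitch (= elemental-image width)
GAP = 1.4           # mm, lens-to-screen distance (lens focal length)

# Ray from the edge of the elemental image through the lenslet centre:
# beyond this angle the eye sees the neighbouring elemental image and
# the 3-D image flips.
half_angle = math.degrees(math.atan((LENS_PITCH / 2) / GAP))
print(round(half_angle, 1))  # just under 20 degrees
```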

Fig. 8. Elemental image of IV rendering results. (a) Calculated entire elemental image of a human heart; (b) enlarged image of the yellow part in (a).

Fig. 9. (1.2 MB) Movie of the multiprojector IV image (human heart) observed from different viewing directions.

3.4 Dynamic IV imaging

We evaluated the usefulness of the system in a clinical feasibility study. We took CT scans of an in-vivo human heart covering five phases of one cardiac cycle. The volumetric CT images (512×512 pixels×180 slices per phase, slice thickness 1.0 mm) were rendered five times per heartbeat. The rendered elemental IV images were projected continually on the IV autostereoscopic display with the same heartbeat period as the patient's (Fig. 10). (Since the projected IV autostereoscopic image is purely three-dimensional, it is difficult to record with conventional two-dimensional (2-D) photographs; the quality of the actual image is much better than shown in this figure and video.)
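The playback scheme amounts to cycling the five pre-rendered elemental images at the patient's cardiac period. A minimal sketch, where `play` and `display_fn` are hypothetical names (the paper does not describe the actual display-synchronization code):

```python
import itertools
import time

CARDIAC_CYCLE_S = 0.95  # 63 beats per minute (Fig. 10)
N_PHASES = 5            # rendered CT phases per heartbeat

frame_period = CARDIAC_CYCLE_S / N_PHASES  # 0.19 s per elemental image

def play(display_fn, phases, n_beats=1):
    """Cycle pre-rendered elemental images at the heart rate.
    `display_fn` is a callback that sends one elemental image
    to the projectors."""
    frames = itertools.islice(itertools.cycle(phases), n_beats * len(phases))
    for phase in frames:
        display_fn(phase)
        time.sleep(frame_period)
```

A real implementation would pre-load all five elemental images into the projectors' frame buffers and swap them on a hardware timer rather than `time.sleep`, to avoid drift against the cardiac period.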

Fig. 10. (1.3 MB) Movie of the IV CT autostereoscopic animated image of a human heart. The patient has a heart rate of 63 beats per minute, i.e., a cardiac cycle of 0.95 s.

4. Discussion and conclusions

Most studies on improving viewing resolution have not provided quantitative results on 3-D image quality, so it is difficult to compare the image quality here quantitatively with that of prior research. Previous studies used large lenslets of more than 3 mm, which made it hard to observe a high-quality image at a short viewing distance [6,7,9]. Recently, lens arrays with smaller lenslet pitches of approximately 1–2 mm were developed for IP research. The study by Jang and Javidi [19] used a 1.09-mm square-shaped lens array of 53×53 lenslets; the quality of the reproduced 3-D image was improved by use of nonstationary micro-optics. The lens array that we use contains 106×170 lenslets, each with a uniform base of 1.016×1.016 mm. The full lens array we fabricated is even larger, comprising 420×420 lenslets over 426×426 mm; only a portion was used in this study. Our experimental results showed a remarkable enhancement in image quality by use of a high-resolution multiprojection technique and a corresponding lens array.

On the other hand, since IP/IV displays both horizontal and vertical parallax in space, the image quality is lower than that of binocular stereoscopy using the same 2-D image. Theoretically, when an ideal lens is utilized, IV can provide a 3-D display that is free from the discontinuous changes that occur when the observer moves and that has the same resolution as conventional 2-D displays [13]. The spatial resolution of the projected 3-D image is proportional to the ratio of the lens diameter in the lens array to the pixel pitch of the display. Thus the projected pixel pitch needs to be made much smaller, and a corresponding lens array is needed. At present, the IV display system has a pixel density of only ~300 dpi. One could increase the pixel density by using more projectors or projectors with resolution higher than SXGA; larger computational power would also be needed to cope with the larger projection system.

We selected APTi Vision AP-2000 projectors because of their unique optical technology for displaying high-quality images. By improving the peripheral lighting contrast, the difference in brightness between the center and periphery of the screen became smaller, and the display appeared sharp even at the four corners. The color quality and value were excellent, and more colors could be projected. Since the projector is configured to project images at eye level, there is no need to compensate for the trapezoidal distortion of the keystone effect, which is a problem with the conventional LCD system. This kind of projector also has color balancing features and the ability to finely adjust the lens position horizontally and vertically without moving the projector body.

DLP (digital light processing) technology projects a bright image because it reflects incoming light, whereas an LCD system filters light out. It also adopts a single-DMD (digital micromirror device) projection system that eliminates the color ghosting caused by overlap of the three primary colors. Without the color drifting to which an LCD system is susceptible, a single-color background can be imaged clearly and digitally controlled, and ample tonal gradation is possible.

In this study, the positions of the projected images were adjusted manually, and it is difficult to merge them into a seamless image without overlap or separation lines on the screen. Edge blending is necessary to remove the visible discontinuities between adjacent projectors: the edges of the projected, tiled images are overlapped, and the overlapped pixels are blended to smooth the luminance and chromaticity transition from one image to another.
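A common way to implement such edge blending is a linear alpha ramp across the overlap so that the two projectors' contributions always sum to unity. The sketch below illustrates the idea only; it is not the method of the forthcoming paper [20]:

```python
import numpy as np

def blend_weights(width, overlap):
    """Per-column blend weights for the left and right projector of a
    two-projector tile sharing `overlap` columns (a simple linear-ramp
    edge blend)."""
    left = np.ones(width)
    right = np.ones(width)
    ramp = np.linspace(1.0, 0.0, overlap)
    left[width - overlap:] = ramp   # fade the left image out
    right[:overlap] = ramp[::-1]    # fade the right image in
    return left, right

# In the overlap region the two weights sum to 1 column by column, so
# the combined luminance stays constant across the seam.
l, r = blend_weights(640, 32)
assert np.allclose(l[640 - 32:] + r[:32], 1.0)
```

In practice the ramp is usually raised to a power that compensates for the projector's gamma, so that the *displayed* luminance, not the pixel value, sums to unity.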

Tiling more projectors is a viable way to build a high-resolution display. However, as such a display system scales beyond several projectors, alignment becomes challenging. Projector geometric calibration was the first stumbling block we encountered: vibration, heat expansion, and lamp changes can cause several pixels of drift per week even with clamped projectors. To overcome both misalignment and image distortion, we use image-processing techniques to correct the source image before display [20].

In conclusion, we have developed a high-resolution, high-pixel-density device for an integral videography (IV) autostereoscopic display system using multiple projectors with a long-focal-length projection method. The feasibility study indicated that the multiple projectors with corresponding parallel rendering and displaying are satisfactory to make a high-quality 3-D display for surgical use. The main contribution of this study is the application and modification of the autostereoscopic technique so that it could be applied to a high-resolution multiprojector autostereoscopic IV display system.

Acknowledgments

The research of H. Liao was supported in part by Research Fellowships of the Japan Society for the Promotion of Science (15–11056). We thank Ichiro Sakuma of the Graduate School of Frontier Science, the University of Tokyo and Susumu Nakajima of the Graduate School of Medicine, the University of Tokyo, for helpful discussions.

References and links

1. M. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. de Phys. 7, 821–825 (1908).

2. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36, 1598–1603 (1997). [CrossRef]   [PubMed]  

3. Y. Igarishi, H. Murata, and M. Ueda, “3D display system using a computer generated integral photograph,” Jpn. J. Appl. Phys. 17, 1683–1684 (1978). [CrossRef]  

4. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, “Analysis of viewing parameters for two display methods based on integral photography,” Appl. Opt. 40, 5217–5232 (2001). [CrossRef]  

5. H. Liao, S. Nakajima, M. Iwahara, E. Kobayashi, I. Sakuma, N. Yahagi, and T. Dohi, “Intra-operative real-time 3-D information display system based on integral videography,” in Medical Image Computing and Computer Assisted Intervention MICCAI 2001, W. Niessen and M. Viergever, eds., LNCS2208, 392–400 (2001). [CrossRef]  

6. B. Lee, S. -W. Min, and B. Javidi, “Theoretical analysis for three-dimensional integral imaging systems with double devices,” Appl. Opt. 41, 4856–4865 (2002). [CrossRef]   [PubMed]  

7. J.-H. Park, S. Jung, H. Choi, and B. Lee, “Viewing-angle-enhanced integral imaging by elemental image resizing and elemental lens switching,” Appl. Opt. 41, 6875–6883 (2002). [CrossRef]   [PubMed]  

8. J. -S. Jang and B. Javidi, “Improvement of viewing angle in integral imaging by use of moving lenslet arrays with low fill factor,” Appl. Opt. 42, 1996–2002 (2003). [CrossRef]   [PubMed]  

9. S. Jung, J.-H. Park, H. Choi, and B. Lee, “Wide-viewing integral three-dimensional imaging by use of orthogonal polarization switching,” Appl. Opt. 42, 2513–2520 (2003). [CrossRef]   [PubMed]  

10. S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Study for wide-viewing integral photography using an aspheric Fresnel-lens array,” Opt. Eng. 41, 2572–2576 (2002). [CrossRef]  

11. S.-W. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. 42, 4186–4195 (2003). [CrossRef]   [PubMed]  

12. J.-H. Park, S. Jung, H. Choi, and B. Lee, “Integral imaging with multiple image planes using a uniaxial crystal plate,” Opt. Express 11, 1862–1873 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-16-1862. [CrossRef]   [PubMed]  

13. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15, 2059–2065 (1998). [CrossRef]  

14. L. Erdmann and K. J. Gabriel, “High-resolution digital integral photography by use of a scanning microlens array,” Appl. Opt. 40, 5592–5599 (2001). [CrossRef]  

15. K. Li et al., “Building and using a scalable display wall system,” IEEE Computer Graphics and Applications 20, 29–37 (2000). [CrossRef]  

16. G. Bresnahan, R. Gasser, A. Abaravichyus, E. Brisson, and M. Walterman, “Building a large-scale high-resolution tiled rear-projected passive stereo display system based on commodity components,” in Stereoscopic Displays and Virtual Reality Systems X , A. J. Woods, M. T. Bolas, J. O. Merritt, and S. A. Benton, eds., Proc. SPIE 5006, 19–30 (2003).

17. Y. Endo, M. Ono, T. Yamada, H. Kawamura, K. Kobara, and T. Kawamura, “A study of antireflective and antistatic coating with ultrafine particles,” Advances Powder Technol. 7, 131–140 (1996). [CrossRef]  

18. J. J. Gibson, The Perception of the Visual World (Houghton Mifflin, New York, 1950).

19. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging with nonstationary micro-optics,” Opt. Lett. 27, 324–326 (2002). [CrossRef]  

20. H. Liao, The University of Tokyo, 7-3-1 Hongo Bunkyo-ku, Tokyo 113-8656, Japan, and M. Iwahara, T. Koike, Y. Momoi, N. Hata, I. Sakuma, and T. Dohi are preparing a manuscript to be called “Scalable high-resolution integral videography autostereoscopic display by use of seamless multiprojection.”

Supplementary Material (2)

Media 1: MPG (1240 KB)     
Media 2: MPG (1333 KB)     

