Compact multi-projection 3D display system with light-guide projection

Open Access

Abstract

We propose a compact multi-projection based multi-view 3D display system using an optical light-guide, and perform an analysis of the characteristics of the image for distortion compensation via an optically equivalent model of the light-guide. The projected image traveling through the light-guide experiences multiple total internal reflections at the interface. As a result, the projection distance in the horizontal direction is effectively reduced to the thickness of the light-guide, and the projection part of the multi-projection based multi-view 3D display system is minimized. In addition, we deduce an equivalent model of such a light-guide to simplify the analysis of the image distortion in the light-guide. From the equivalent model, the focus of the image is adjusted, and pre-distorted images for each projection unit are calculated by two-step image rectification in air and the material. The distortion-compensated view images are represented on the exit surface of the light-guide when the light-guide is located in the intended position. Viewing zones are generated by combining the light-guide projection system, a vertical diffuser, and a Fresnel lens. The feasibility of the proposed method is experimentally verified and a ten-view 3D display system with a minimized structure is implemented.

© 2015 Optical Society of America

1. Introduction

State-of-the-art display technologies realize high-density pixel structures and a large capacity for data processing. In particular, the degree of maturity in the two-dimensional (2D) display industry is high enough to provide 2D images that reproduce natural colors and display real-world-like scenes vividly, and most companies are now able to manufacture high-quality products. However, the market requires a next-generation display device that can change the game, and engineers hope to discover unique properties for pioneering a novel display industry. The three-dimensional (3D) display has long been considered one of the most promising candidates for the next-generation display.

The stereoscopic 3D display is the most popular 3D display system and was first proposed by Wheatstone in the 19th century [1]. A pair of images, one for each eye, is delivered through polarization glasses or a goggle-shaped head-mounted display, and these images are synthesized in the brain. Through this synthesis, humans experience 3D effects, a principle known as binocular disparity. The stereoscopic method is widely used in theaters and 3D televisions. Many types of autostereoscopic 3D displays such as multi-view and integral imaging have also been proposed [2]. Multi-view 3D display systems, which spatially distribute view images by combining a 2D display and optical components, have already been commercialized, but the quality of the 3D image is not as good as that of 2D display systems. Likewise, it is difficult to represent a high-quality 3D image in integral imaging, which reconstructs a 3D image in space by rearranging a set of 2D information using a lens array, because massive amounts of spatial and angular information are required for reconstructing natural 3D images [3, 4]. Due to these limitations in image quality and the complexity of the systems, the autostereoscopic 3D display has not been as attractive to the public as expected of a next-generation display beyond the 2D display market, and the popularization of 3D display systems has lagged.

High-performance autostereoscopic 3D display systems applying multiple projectors have recently been reported [5, 6]. Since the use of a large number of projectors enables massive amounts of spatial and angular data of 3D objects to be provided, the technique leads to high-resolution 3D images and improved viewing parameters such as the viewing angle, depth expression range, and viewpoint density [7]. In spite of these advantages in image quality, it is difficult to implement multiple-projector based 3D display systems for general purposes due to the complexity and high cost of the system, including a projector array and associated equipment [6]. In addition, a larger space in both the horizontal and vertical directions is required compared to a 2D projection system. A free-form asymmetric mirror and light-guide projection have been proposed as technologies for reducing the projection space. The former performs well with fine imaging accuracy, but its design and manufacturing costs are high, and it is difficult to combine the projection units with free-form optics [8]. The latter usually uses the principle of total internal reflection (TIR): the optical path of the image is folded inside the light-guide, and the effective projection distance is reduced to the thickness of the light-guide [9–15]. The light-guide is made of acrylic, a relatively low-cost material. An attempt has been made to realize an autostereoscopic 3D display using light-guide projection [11]. However, the reported system provided only two views with two projectors; since the two views are fixed, it cannot express additional parallax of 3D objects, which is one of the most important depth cues. The design of the light-guide and the alignment of multiple projectors are complicated, since the geometry of the light-guide needs to be adjusted according to the specifications of the projector and the image. Also, severe image distortion, which degrades the 3D effects, occurs depending on the incidence conditions of the projection images and their orientations. In our previous study, we implemented a five-view multi-projection 3D display applying light-guide projection, but the errors from image distortion were neglected and the system could not provide the correct 3D effects [13].

In this paper, we propose a multi-projection based multi-view 3D display system with a reduced projection space by adopting light-guide projection. We deduce an equivalent model of the light-guide projection to clarify the relationship between the system parameters and the image distortion. Through a ray-tracing simulation of the equivalent model, the shapes of the images from each projector are estimated, and the distortion in the proposed system is compensated by image rectification that considers the image transformation between the air and the light-guide material. In the experiment, a ten-view multi-projection 3D display using the proposed method is implemented. We confirm that the projection distance in the horizontal direction is reduced from 266 mm in the conventional system to 48.5 mm in the proposed method. Multiple viewpoints for multiple viewers and correct view images for achieving a 3D effect are realized.

2. Principle

2.1 Multi-projection based multi-view 3D display

In a flat-panel based multi-view 3D display, the 2D image resolution for a monocular eye is inversely proportional to the number of views, because directivity is assigned for distributing the view images in space by combining lenticular lenses or parallax barriers. Therefore, the 3D image that is synthesized by binocular disparity is also degraded as the number of views increases. On the other hand, the per-view resolution in a multi-projection based multi-view 3D display is conserved because a single projection unit contributes to the formation of a single viewpoint. Since the resolution of the view image is the same as that of the projection unit, a 3D image with the same resolution is synthesized, as shown in Fig. 1. Furthermore, it is possible to increase the number of viewpoints and to enhance other viewing parameters by increasing the number of projectors in the lateral direction [7].
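
As a toy illustration of this resolution argument, the following Python snippet compares the per-view resolution of a flat-panel multi-view display with that of a multi-projection display; all resolutions used here are illustrative assumptions, not the specifications of any device used in this work.

```python
# Toy comparison of per-view resolution: a horizontal-parallax flat-panel
# multi-view display shares its horizontal pixels among the views, whereas a
# multi-projection display devotes one projector to each view. All resolutions
# below are illustrative assumptions.
panel_w, panel_h, views = 1920, 1080, 10
flat_panel_per_view = (panel_w // views, panel_h)   # horizontal pixels divided by views
proj_w, proj_h = 854, 480                           # assumed pico-projector resolution
multi_projection_per_view = (proj_w, proj_h)        # each view keeps full projector resolution
print("flat panel per view:      ", flat_panel_per_view)       # (192, 1080)
print("multi-projection per view:", multi_projection_per_view) # (854, 480)
```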

Fig. 1 Schematic diagram of a multi-projection based multi-view 3D display (top view).

The parameters of the system are defined by the geometrical relations. The position of the viewpoints, referred to as the optimal viewing position dv, and the interval of the viewpoints pe are calculated from the thin-lens equation as follows:

$d_v = \frac{f d_p}{d_p - f},$   (1)
$p_e = \frac{p_p d_v}{d_p},$   (2)
where f is the focal length of the collimation lens, dp is the projection distance between the projector and the collimation lens, and pp is the interval between adjacent projectors, which can be adjusted to set the interval of the viewpoints.

The viewing angle θview in the multi-projection based multi-view 3D system is determined by the number of projectors n. The angle between the chief rays of the outermost projectors represents the viewing angle of the system.

$\theta_{view} = 2\tan^{-1}\left(\frac{(n-1) p_p}{2 d_p}\right).$   (3)

As mentioned above, a large area must be assigned to provide a sufficient projection distance, as indicated by the yellow box in Fig. 1. Enlarging the image size or the number of projectors requires even more projection space. The required projection space Ap in air is equal to:

$A_p = (n-1) p_p d_p.$   (4)

From this point of view, the multi-projection based multi-view 3D display is impractical to implement in ordinary places such as homes, classrooms, and offices. To reduce the size of the projection space, special optical components for short-throw projection or a densely stacked projection array with thin projection devices are required.
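
As a numerical illustration of Eqs. (1)–(4), the short Python sketch below evaluates the viewing parameters of a hypothetical configuration; the focal length, projection distance, projector pitch, and projector count are assumed example values, not the parameters of the prototype described in Sections 3 and 4.

```python
# Numerical illustration of Eqs. (1)-(4); the parameter values below are
# assumed examples, not the specifications of the prototype.
import math

def viewing_parameters(f, d_p, p_p, n):
    """Return (d_v, p_e, theta_view in degrees, A_p) for a multi-projection
    multi-view display: collimation-lens focal length f, projection distance
    d_p, projector interval p_p (all in mm), and n projectors."""
    d_v = f * d_p / (d_p - f)                                            # Eq. (1): optimal viewing position
    p_e = p_p * d_v / d_p                                                # Eq. (2): viewpoint interval
    theta_view = 2 * math.degrees(math.atan((n - 1) * p_p / (2 * d_p)))  # Eq. (3): viewing angle
    A_p = (n - 1) * p_p * d_p                                            # Eq. (4): projection space (width x depth)
    return d_v, p_e, theta_view, A_p

d_v, p_e, theta_view, A_p = viewing_parameters(f=200.0, d_p=240.0, p_p=11.0, n=10)
print(f"d_v = {d_v:.0f} mm, p_e = {p_e:.1f} mm, "
      f"theta_view = {theta_view:.1f} deg, A_p = {A_p:.0f} mm^2")
```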

2.2 Light-guide projection

To reduce the projection distance, a light-guide with the shape of a wedge prism was adopted by Travis et al. [9–12]. The light-guide has a unique structure composed of three parts: an entrance part, a light-guide part, and an exit part, as shown in Fig. 2(a). The angle of the entrance part is determined to satisfy the TIR condition of the rays. When the diverging angle of the projector θv and the critical angle of the light-guide material θc are defined, the angle of the entrance part θa should satisfy Eq. (5) for the TIR of a marginal ray of the projection image:

$\theta_a \geq \theta_c + \frac{\theta_v'}{2},$   (5)
where θv′ is the diverging angle inside the light-guide, calculated by Snell's law. If the lower marginal ray satisfies the TIR condition, the other rays also experience TIR, since their angles to the normal at the bottom surface are larger than that of the marginal ray.
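
The following minimal sketch evaluates the entrance-part condition of Eq. (5), assuming an acrylic light-guide (refractive index about 1.49) and an illustrative projector diverging angle; both numbers are assumptions for demonstration only.

```python
# Minimal sketch of the entrance-part condition, Eq. (5); the refractive index
# (acrylic, ~1.49) and the projector diverging angle are assumed example values.
import math

def min_entrance_angle_deg(n_l, theta_v_deg):
    """Smallest entrance-part angle theta_a (deg) keeping the lower marginal ray
    of a projector with full diverging angle theta_v under TIR, per Eq. (5)."""
    theta_c = math.degrees(math.asin(1.0 / n_l))            # critical angle of the material
    # Snell's law for the marginal ray: sin(theta_v / 2) = n_l * sin(theta_v' / 2)
    theta_v_guide = 2 * math.degrees(
        math.asin(math.sin(math.radians(theta_v_deg / 2)) / n_l))
    return theta_c + theta_v_guide / 2                       # Eq. (5)

print(f"{min_entrance_angle_deg(n_l=1.49, theta_v_deg=30.0):.1f} deg")  # ~52.2 deg
```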

Fig. 2 Light-guide projection: (a) ray trajectory in wedge-shaped light-guide (side view), (b) multi-view 3D display system using light-guide (top view).

After entering the light-guide, the ray experiences multiple TIRs until it reaches the exit surface. In the light-guide part, the angle of the ray does not change as it propagates, and the projection distance inside the light-guide increases with the number of reflections. The angle to the normal of the guided ray changes in the exit part, because the exit part has the shape of a wedge prism with a constant slope angle; the angle to the normal is therefore reduced as the ray undergoes repeated reflections. The following equation represents the change in the angle to the normal and the exit condition of the ray in the light-guide:

$\theta_c \geq \theta_{exit} = \theta_i - 2N\theta_e,$   (6)
where θexit is the angle to the normal of the ray at the bottom interface of the exit part, θi is the initial incident angle of an arbitrary ray satisfying the TIR condition, and N is the total number of reflections at the inclined surface of the exit part. When the angle to the normal at the exit surface becomes smaller than the critical angle, the ray can emerge from the light-guide and contribute to the displayed image.
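
A small sketch of the exit condition of Eq. (6) follows; the initial guided angle and the exit-part slope used here are illustrative assumptions, not the design values of the fabricated light-guide.

```python
# Sketch of the exit condition, Eq. (6): a guided ray leaves the light-guide once
# theta_i - 2*N*theta_e falls below the critical angle. The angles used here are
# illustrative assumptions, not the design values of the fabricated light-guide.
import math

def reflections_until_exit(theta_i_deg, theta_e_deg, n_l):
    """Number of reflections N at the inclined exit-part surface needed so that
    theta_exit = theta_i - 2*N*theta_e <= theta_c, and the resulting theta_exit."""
    theta_c = math.degrees(math.asin(1.0 / n_l))
    N = max(math.ceil((theta_i_deg - theta_c) / (2.0 * theta_e_deg)), 0)
    return N, theta_i_deg - 2 * N * theta_e_deg

N, theta_exit = reflections_until_exit(theta_i_deg=55.0, theta_e_deg=3.0, n_l=1.49)
print(N, round(theta_exit, 1))   # 3 reflections; 37.0 deg, below the ~42.2 deg critical angle
```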

Assuming that the light-guide is extended in the lateral direction and that projectors are aligned along the entrance part of the light-guide, we can represent the multi-view images by combining collimation optics in front of the exit surface, as shown in Fig. 2(b). The effective projection distance deff is reduced to the scale of the thickness of the light-guide, thus enabling a multi-projection based multi-view 3D display with a reduced projection space.

2.3 Equivalent model of light-guide

A ray traveling toward the exit surface of the light-guide is refracted at the entrance part and undergoes multiple TIRs in the light-guide part. As the number of reflections in the system increases, the calculation for estimating the ray trajectories becomes more complex. In addition, the proposed system requires many projectors, which increases the analysis load when ray tracing is performed for a real light-guide.

To simplify the ray tracing for multiple projectors, an optically equivalent model of the light-guide is proposed. For rays transferred by TIR, we can trace their trajectories by stacking (unfolding) the light-guide at the reflecting interfaces [12, 15]. As an example, the bundle of rays undergoing four reflections (three in the light-guide part and one in the exit part) is represented in Fig. 3. When the TIRs in the light-guide part are converted to light-guide stacks, the light-guide is approximated by a simple prism structure. The apex angle of the prism θapex is calculated from the angles of the entrance and exit parts and the number of reflections at the inclined surface of the exit part.

Fig. 3 Equivalent model of light-guide.

$\theta_{apex} = \theta_a - 2N\theta_e.$   (7)

Detailed specifications of the equivalent prism model can be obtained by calculating the optical path length of the projected rays in the light-guide. In addition, the equivalent model provides a hint for suppressing the noise caused by the sudden change in angle at the boundary between the light-guide part and the exit part. As shown in Fig. 3, the rays inside the marginal components indicated by the yellow-dashed lines reach the exit surface without deflection, but rays outside the marginal components cannot pass through the stacked light-guide of Fig. 3, because they are abruptly deflected by the sudden change in angle. This unwanted deflection breaks the linearity of the image, and noise such as overlapping and discontinuity appears. To reduce this noise, masks that block the abruptly deflected rays are introduced in the calculation of the equivalent prism. The positions of the masks are calculated by intersecting the marginal ray paths with the corners of the light-guide stack. By combining the equivalent prism and the masks, it is possible to estimate the image for light-guide projection without noise. Since the reflections in the light-guide part are replaced by straight propagation in the prism material, the ray tracing of the proposed system is simplified and resources for image estimation are saved. In addition, the concept of the equivalent model can be extended to other multiple-reflection conditions.

To estimate the image characteristics at the exit surface, the projector should be focused on the exit surface. When the projected image is incident on the prism, the image is severely distorted by refraction, because the projection optics are designed to represent a correct image in air. Therefore, a process for correcting the image distortion and adjusting the focus is required.

When a ray (orange-colored) originating from the projection origin Oair at half of the diverging angle, θv/2, is incident on the prism, the refracted ray (red-colored) reaches an arbitrary point Pexit on the exit surface, as shown in Fig. 4. The refracted ray passes through the virtual point Pv on the virtual plane (red solid line), which is perpendicular to the optical axis and located at the projection distance dh′ from the prism entrance in the horizontal direction. We assume that the orange-colored ray propagating in air heads toward the virtual point Peq, which is located at a distance dh in the horizontal direction from the prism entrance at the same height as the point Pv. These two ray paths imply that a bundle of rays focused on the point Peq in air will be focused on Pv when the light-guide is inserted at the distance do from the projection origin. For the other points on the virtual plane, the position of Peq can be calculated as the equivalent imaging point. The relationship between these points in air and in the material is defined as follows:

$d_h \tan\left(\frac{\theta_v}{2}\right) = d_h' \tan\left(\frac{\theta_v'}{2}\right),$   (8)
$d = d_o + d_h = d_o + \frac{d_h' \cos(\theta_v/2)}{\sqrt{n_l^2 - \sin^2(\theta_v/2)}},$   (9)
$d \approx d_o + \frac{d_h'}{n_l} - d_h'\left(\frac{n_l^2 - 1}{2 n_l^3}\left(\frac{\theta_v}{2}\right)^2 - \frac{n_l^4 - 10 n_l^2 + 9}{24 n_l^5}\left(\frac{\theta_v}{2}\right)^4 + \text{higher-order terms}\right),$   (10)
$P_{eq}(x, y) = \left(d,\ d\tan(\theta_v/2)\right),$   (11)
where nl is the refractive index of the prism material. Since the heights of the virtual points in air (Peq) and in the prism (Pv) are equal for both rays, we can deduce Eq. (8), which relates the projection distances in air and in the prism. From Eq. (8), the position of the equivalent imaging point in air for a given ray angle is obtained by Eq. (9) in terms of the distance from the projection origin to the light-guide entrance do, the optical path in the prism dh′, and the refractive index of the prism nl. Equation (10) is the Taylor expansion of Eq. (9) about normal incidence. We find that, for paraxial rays, the projection distance d in the horizontal direction in air can be approximated by the constant value do + dh′/nl.

Fig. 4 Equivalent imaging curve of exit surface.

Figure 5 shows the locus of equivalent imaging points when the projection origin is located at −40 mm and the projection distance in the material, whose refractive index is 1.49, is 300 mm. The positions of the equivalent imaging points are calculated using Eq. (9) while the angle of the rays to the optical axis changes from −20° to 20°, giving the blue curve in Fig. 5. The virtual plane in the material corresponds to the locus of equivalent imaging points in terms of focus and image information, and a projected image focused on the equivalent imaging surface is represented on the virtual plane. However, it is difficult to focus the projected image on a curved surface, since the projection unit is designed to represent a correct image on a flat surface in air. For image registration with conventional projector optics, we can therefore simplify the locus of the equivalent imaging points to a plane located at do + dh′/nl, as deduced from Eq. (10). Since a conventional projection unit generally has a maximum diverging angle in the vertical direction of under 20°, the locus of the equivalent imaging points can be treated as a plane, and the approximation is valid for practical use. The image on the exit surface is then calculated by a projective transformation of the image on the virtual plane. In this way, we obtain a direct relation between the image at the equivalent imaging plane in air and the image at the exit surface in the material; in other words, it is possible to adjust both the focus of the projection unit and the image distortion by applying the concept of the equivalent imaging plane.
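
The short sketch below reproduces the locus of Fig. 5 from Eqs. (9)–(11), using the values quoted above (a 40 mm gap between the projection origin and the prism entrance, a 300 mm path in a material of refractive index 1.49, and ray angles up to 20°), and compares it with the paraxial plane of Eq. (10).

```python
# Locus of equivalent imaging points (Fig. 5) from Eqs. (9)-(11), with the
# values quoted in the text: 40 mm from the projection origin to the prism
# entrance, a 300 mm path in the material, refractive index 1.49.
import math

D_O, D_H_PRIME, N_L = 40.0, 300.0, 1.49

def equivalent_point(angle_deg):
    """Exact equivalent imaging point P_eq in air for a ray at angle_deg to the
    optical axis (the role of theta_v/2 in Eq. (9)); returns (distance, height)."""
    t = math.radians(angle_deg)
    d = D_O + D_H_PRIME * math.cos(t) / math.sqrt(N_L**2 - math.sin(t)**2)  # Eq. (9)
    return d, d * math.tan(t)                                               # Eq. (11)

paraxial_plane = D_O + D_H_PRIME / N_L        # Eq. (10), leading term only
for ang in (0.0, 10.0, 20.0):
    d, h = equivalent_point(ang)
    print(f"{ang:4.0f} deg: d = {d:6.1f} mm, height = {h:6.1f} mm")
print(f"paraxial plane at d = {paraxial_plane:.1f} mm")
```

Under these values the paraxial plane lies about 241 mm from the projection origin, and the exact locus deviates from it by only a few millimeters at 20°, consistent with the roughly 240 mm origin-to-plane distance adopted in Section 3.1.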

Fig. 5 Locus of equivalent imaging points.

2.4 Image compensation of proposed system

In a multi-projection based 3D display system, the projection units are physically separated and have different orientations in order to maximize the overlapped region of the images, as shown in Fig. 1. However, the image areas cannot be matched perfectly. When the active areas of the projectors are inconsistent or the images are distorted, the observer perceives 3D scenes accompanied by visual fatigue caused by the incorrect synthesis of the binocular images. To achieve a correct, distortion-free 3D scene, it is important to register all images to a common area where every projector can represent its image and to compensate for the image distortion [16, 17].

In conventional multi-projection 3D display systems, an auto-calibration method with a capturing device is widely used to match the imaging areas and correct image distortion [17, 18]. However, it is difficult to apply the same method in the proposed system, since the distortion originating from refraction in the material must also be considered. In the proposed system, two kinds of distortion exist. One is the keystone distortion due to the orientation of the projectors, which also appears in conventional systems. The other arises from the refraction of rays at the interface between air and the light-guide, which is not a factor in conventional systems. The former is simply compensated by a projective transformation, while compensating for the latter with the same image transformation is difficult, because the refracted angles of the rays are governed by Snell's law and the incident angle differs as a function of the projector orientation. The presence of the refractive material therefore makes the ray tracing complicated, and an image analysis that considers the distortion from refraction is required.

Figure 6 shows the image transformation process in the proposed system. The most important issue is to quantify the distortion between the original image and the distorted image area in the material. In the previous section, we defined the equivalent imaging plane so as to compare images in air and in the material directly. Thus, we can compensate for the image distortion by applying image transformation matrices.

Fig. 6 Image transformation in equivalent models: (a) image transformation of reference image, (b) image transformation of pre-distorted image.

Let Iref denote the reference image at the equivalent imaging plane, which corresponds to the input data; it undergoes two distortions and must be transformed three times to be presented correctly in the common area. The first transformation, H0, maps the reference image to the keystone-distorted image on the equivalent imaging plane. The second transformation, H1, caused by refraction, then maps this distorted image onto the exit surface. The last transformation, H2, maps the distorted image on the exit surface into the common area to correct the distortion. The transformation of each stage is defined as a 3 × 3 matrix and is calculated by comparing the positions of the four corner points of the original and transformed images [19]. The relationship between the reference image Iref and the modified image Icom is described as follows:

$I_{com} = H_2 H_1 H_0 I_{ref}.$   (12)

We denote by Ipre a pre-distorted image that is compensated so as to represent the correct image in the common area. Its image data are provided as the input of a projector and are therefore also subject to the keystone effect and the refraction. The correct image is displayed in the common area after these transformations, and the relationship between the pre-distorted image and the correct image in the common area is defined as

$I_{com} = H_1 H_0 I_{pre}.$   (13)

The transformation matrix between the reference image and the pre-distorted image is obtained by combining Eqs. (12) and (13) as follows:

$I_{pre} = H_0^{-1} H_1^{-1} H_2 H_1 H_0 I_{ref}.$   (14)

From the specifications of each projector and its image distortion data, the transformation matrices of each stage are obtained, and the pre-distortion of each view image is determined by Eq. (14).
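
A compact sketch of this compensation pipeline is given below. Each homography is estimated from four corner correspondences, as described above; the corner coordinates are placeholders standing in for the values that would come from the ray-tracing simulation of Section 3.

```python
# Sketch of the compensation pipeline of Eqs. (12)-(14). Each homography is
# estimated from four corner correspondences; the corner coordinates below are
# placeholders, not values from the ray-tracing simulation.
import numpy as np

def homography_from_corners(src, dst):
    """3x3 projective transform mapping four src corners onto four dst corners
    (direct linear transform with the bottom-right element fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pts):
    """Apply homography H to an (N, 2) array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Placeholder corner sets: reference image, keystone-distorted image on the
# equivalent imaging plane, refracted image on the exit surface, common area.
ref    = np.array([[0, 0], [100, 0], [100, 60], [0, 60]], float)
keyst  = np.array([[2, 1], [103, -2], [105, 63], [-1, 61]], float)
exit_s = np.array([[5, 3], [110, -1], [114, 70], [1, 66]], float)
common = np.array([[10, 8], [95, 8], [95, 55], [10, 55]], float)

H0 = homography_from_corners(ref, keyst)      # keystone distortion
H1 = homography_from_corners(keyst, exit_s)   # refraction at the interface
H2 = homography_from_corners(exit_s, common)  # registration into the common area
H_pre = np.linalg.inv(H0) @ np.linalg.inv(H1) @ H2 @ H1 @ H0   # Eq. (14)

# Eq. (13) check: projecting the pre-distorted corners through H1*H0 lands them
# on the common-area corners.
print(apply_h(H1 @ H0, apply_h(H_pre, ref)))
```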

3. Image simulation

3.1 Distortion estimation

To estimate the changes that occur in the images of the proposed system, the ray-tracing tool LightTools 7.3 was adopted. It allows the actual parameters of the projector and of the material to be used in the simulation. We implemented the equivalent model of the light-guide and the projector array of the proposed system, and the image characteristics at each stage were obtained.

Figure 7 shows the captured image of the simulation for constructing the equivalent model with the projector array. The equivalent model of the light-guide was calculated from the actual geometry of a light-guide made of acrylic. The projection unit is modeled as a point light source with a diverging angle corresponding to that of the pico-projectors (Vieway VPL-201, Oriental Electronics, Korea) and a mask plane where the input image is inserted. The projector array comprises six projectors: five are configured equi-angularly on the positive side of the y-axis and one is used as a reference at the center. Since the configuration of the proposed system is symmetric with respect to the y-axis, it is possible to estimate the ray-tracing results for ten projectors. All projectors are aligned on the arc of a circle with a diameter of 240 mm for the equi-angular configuration; the diameter of the circle is equal to the distance between the projection origin and the equivalent imaging plane. The angular interval between two adjacent projectors is 2.6 degrees. In the ray tracing, 100 million rays are sampled in order to obtain a reliable simulation with an error value of 3.62%.

Fig. 7 Ray-tracing of equivalent model: (a) geometry of light-guide, (b) side view, and (c) top view of projector array.

We investigated the estimated images under three different conditions in order to determine the transformation matrices and the common area on the exit surface. Figure 8(a) shows the estimated images at the equivalent imaging plane. The reference projector presents a clear image without any distortion, but the keystone distortion becomes more severe as the orientation angle of the projector increases. The image transformation matrix of the keystone effect, H0, is calculated for each projector by comparing the reference image with the keystone-distorted images. As shown in Fig. 8(b), the keystone-distorted images of Fig. 8(a) are distorted further by refraction at the interface. To obtain a precise estimation of the distortion, the TIR at the exit surface is ignored in this simulation, since rays satisfying the TIR condition at the exit surface cannot be detected. As in Fig. 8(a), the change in image shape becomes larger the farther the projector is located from the center. In addition, the image is elongated in the vertical direction, since the optical path of the lower part is shorter than that of the upper part at the exit surface, which is inclined with the slope of the apex angle of the prism. The image transformation matrix H1 is calculated by comparing the keystone-distorted image at the equivalent imaging plane with the distorted image at the exit surface.

Fig. 8 Image estimation results: (a) image at equivalent imaging plane, (b) image at exit surface without total internal reflection, (c) image at exit surface with total internal reflection.

Figure 8(c) shows the active area of each projector at the exit surface when the TIR is taken into consideration. Since only the rays emerging from the light-guide can contribute to displaying the view images, some parts of the image areas in Fig. 8(b) are excluded from the active areas. As the oblique angle of the projector increases, the difference in the optical paths of the rays corresponding to each corner of the image increases, resulting in additional distortion of the active area of the projector.

From the simulation results in Fig. 8(c), it is possible to calculate the common area of the system. When the active areas of the ten projectors are overlapped, the image can be represented on a large area, as shown in Fig. 9(a). However, all view images should be represented in the common area so that the image is shown at the same region of the exit surface as the observing position varies. The area bounded in gray in Fig. 9(b) shows the common area of the proposed system, calculated by intersecting the active areas. We restrict the common area to a rectangle, since conventional display devices and their image sources are rectangular. By calculating the image transformation matrix H2, which maps the distorted image on the exit surface into the common area, the proposed system provides the correct 3D effects.

Fig. 9 Active area and common area: (a) overlapped area of active areas, (b) common area.

3.2 Image compensation of proposed system

From the specifications of each projector and its image distortion data obtained in the image estimation simulation, the transformation matrices of each stage are calculated, and the pre-distorted images for each projector are obtained by means of Eq. (14). Figure 10 shows the series of pre-distorted images generated from the checkerboard image (reference image). Each pre-distorted image will undergo the keystone distortion and the refraction, so that correct checkerboard images are represented in the common area. To investigate the feasibility of the proposed system, we performed an image simulation with the pre-distorted images for the ten projectors.

Fig. 10 Calibrated checkerboard images for each projector.

Figure 11 shows the overlapped image on the exit surface when the original and the compensated checkerboard patterns are used as input data. Without compensation, as in Fig. 11(a), each projector image shows severe blurring in the upper and lower parts, where it is difficult to recognize the original pattern, and the entire image exceeds the common area of Fig. 9(b). These results mean that the observer cannot perceive correct view images. On the contrary, a clear overlapped checkerboard image appears after the compensation. The compensation evidently mitigates the inconsistencies between the view images, suggesting that the proposed method provides correct 3D information to multiple viewers.

Fig. 11 Overlapped image on the exit surface: (a) before compensation, (b) after compensation.

In the upper and lower parts of Fig. 11(b), slight blur and residual keystone distortion remain because the equivalent imaging curve is approximated as a plane. We expect that this mismatch could be removed by considering the equivalent imaging curve instead of a plane, and that its effect can be ignored when the diverging angle of the projector is as small as specified in Section 2. The residual keystone distortion in Fig. 11(b) can be reduced by increasing the number of sampling points in the transformation matrix calculation.

4. Experiments and results

To demonstrate the proposed system and its image compensation method, we implemented the experimental setup with ten projectors.

Figure 12(a) shows a schematic diagram of the proposed system. Each projector is aligned toward the center of the equivalent imaging plane in the equi-angular configuration, and its focusing plane is adjusted to coincide with the equivalent imaging plane. After the alignment of the projector array, the light-guide combined with the collimation optics is placed at the calculated distance from the projection origin. The experimental setup of the proposed method with ten projectors is shown in Fig. 12(b). Images passing through the light-guide by TIRs are collimated toward the viewpoints. As the collimation optics, a vertical diffuser with diffusing angles of 60° in the vertical direction and 1° in the horizontal direction is used to expand the viewpoints vertically, and a Fresnel lens with a focal length of 200 mm is employed. Detailed specifications of the experimental setup are given in Table 1.
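
As a rough consistency check on this geometry, the sketch below applies Eqs. (1) and (2), assuming that the effective projection distance dp equals the roughly 240 mm distance from the projection origin to the equivalent imaging plane found in Section 3.1 and that the projector pitch follows from the 2.6° angular interval; under these assumptions the predicted optimal viewing distance agrees with the 1200 mm at which the view images are observed below.

```python
# Consistency check on the viewing geometry, assuming the effective projection
# distance equals the ~240 mm origin-to-equivalent-imaging-plane distance of
# Section 3.1 and that the projector pitch follows from the 2.6 deg interval.
import math

f, d_p = 200.0, 240.0                          # Fresnel focal length, projection distance (mm)
p_p = d_p * math.tan(math.radians(2.6))        # implied projector pitch, ~10.9 mm
d_v = f * d_p / (d_p - f)                      # Eq. (1): optimal viewing distance -> 1200 mm
p_e = p_p * d_v / d_p                          # Eq. (2): viewpoint interval -> ~54 mm
print(f"p_p = {p_p:.1f} mm, d_v = {d_v:.0f} mm, p_e = {p_e:.1f} mm")
```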

Fig. 12 Experimental setup: (a) schematic diagram of ten-view multi-projection 3D display, (b) prototype of ten-view multi-projection 3D display with light-guide.

Table 1. Experimental conditions

Figure 13 shows the experimental results of the image compensation. A diffuser is attached on the exit surface to confirm that the correct images overlap in the common area. When the original checkerboard pattern of Fig. 13(a) is used as input data, severe blurring occurs in the upper and lower parts, as shown in Fig. 13(b), and the image exceeds the common area of the system. In this condition the view images are distorted and it becomes difficult to synthesize 3D images; even if the observer perceives a 3D image, it is accompanied by distorted depth and visual fatigue. When the original images are replaced by the pre-distorted images, each image overlaps well in the common area, and the captured image on the diffuser is similar to the original checkerboard pattern, as shown in Fig. 13(c). The size of the common area of the system is 110 mm (H) by 65 mm (W). To represent an image of the same height in a conventional multi-projection 3D display system, a projection distance of 266 mm in the horizontal direction is required when the same projectors are used. In the proposed system, a projection distance of only 48.5 mm, equal to the thickness of the light-guide, is required in the horizontal direction. These experimental results verify the feasibility of the image compensation based on the equivalent model and the equivalent imaging plane.

Fig. 13 Overlapped checkerboard pattern at the exit surface: (a) original checkerboard pattern, (b) overlapped image before compensation, (c) overlapped image after compensation.

By combining the light-guide projection system and the collimation optics, a ten-view 3D display is realized. Every original view image is calibrated through its corresponding transformation matrices, calculated by the image estimation method, and the pre-distorted view images traveling through the light-guide are focused onto the viewpoints. Figure 14 shows the view images of the proposed system captured at the viewpoints. The view images are represented in the common area, and the shape of the original image is conserved with reduced distortion. The view images are clearly separated with parallax at the optimal viewing distance of 1200 mm. A movie showing the changes in the view images is provided (Visualization 1).

Fig. 14 View images of ten-view 3D display: (a) 1st view, (b) 2nd view, (c) 3rd view, (d) 4th view, (e) 5th view, (f) 6th view, (g) 7th view, (h) 8th view, (i) 9th view and (j) 10th view (Visualization 1).

These findings verify that the proposed method provides high-resolution multi-view 3D images within a reduced projection space, and that the image compensation derived from the equivalent model of the light-guide yields correct view images in the common area.

5. Conclusion

We proposed a multi-projection 3D display system with a reduced projection space using a light-guide, together with an optical equivalent model that makes the system easier to handle. The size of the projection space of the conventional multi-projection based multi-view 3D display system is reduced by applying the light-guide projection system. Through experiments, we confirmed that the projection distance of the proposed system is reduced to about 18% of that of the conventional system for representing an image of the same height, and that the viewing zones are finely separated. The use of a multi-projection 3D display system with a reduced size would help spread such systems to places with limited space, such as homes and classrooms. In addition, the proposed method is applicable not only to multi-view systems but also to integral-imaging-based multi-projection 3D displays, which widens the choice of 3D display types. We expect that the projection distance of such multiple-projector based 3D displays can be reduced further by optimizing the structure of the light-guide and the projection optics.

Acknowledgment

This research was supported by “The Cross-Ministry Giga KOREA Project” of The Ministry of Science, ICT and Future Planning, Korea [GK15D0200, Development of Super Multi-View (SMV) Display Providing Real-Time Interaction]. The car 3D image used in the experiment for Fig. 14 was provided by aXel and is used under a Creative Commons Attribution 3.0 license.

References and links

1. C. Wheatstone, “Contributions to the physiology of vision. part the first. On some remarkable and hitherto unobserved phenomena of binocular vision,” Philos. Trans. R. Soc. Lond. 128(0), 371–394 (1838). [CrossRef]  

2. B. Lee, “Three-dimensional displays, past and present,” Phys. Today 66(4), 36–41 (2013). [CrossRef]  

3. S.-G. Park, J. Yeom, Y. Jeong, N. Chen, J.-Y. Hong, and B. Lee, “Recent issues on integral imaging and its applications,” J. Inf. Disp. 15(1), 37–46 (2014). [CrossRef]  

4. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef]   [PubMed]  

5. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010). [CrossRef]   [PubMed]  

6. J.-H. Lee, J. Park, D. Nam, S. Y. Choi, D.-S. Park, and C. Y. Kim, “Optimal projector configuration design for 300-Mpixel multi-projection 3D display,” Opt. Express 21(22), 26820–26835 (2013). [CrossRef]   [PubMed]  

7. S.-G. Park, J.-Y. Hong, C.-K. Lee, M. Miranda, Y. Kim, and B. Lee, “Depth-expression characteristics of multi-projection 3D display systems [invited],” Appl. Opt. 53(27), G198–G208 (2014). [CrossRef]   [PubMed]  

8. Ricoh Co., Ltd., “Ultra short throw projectors,” http://www.ricoh.com.

9. A. Travis, F. Payne, J. Zhong, and J. Moore, “Flat panel display using projection within a wedge-shaped waveguide,” in 20th International Display Research Conference of the Society for Information Display (2002) 292–295.

10. A. Travis, T. Large, N. Emerton, and S. Bathiche, “Collimated light from a waveguide for a display backlight,” Opt. Express 17(22), 19714–19719 (2009). [CrossRef]   [PubMed]  

11. M. Large, T. Large, and A. Travis, “Parallel optics in waveguide displays: a flat panel autostereoscopic display,” J. Disp. Technol. 6(10), 431–437 (2010). [CrossRef]  

12. A. R. L. Travis, T. A. Large, N. Emerton, and S. N. Bathiche, “Wedge optics in flat panel displays,” Proc. IEEE 101(1), 45–60 (2013). [CrossRef]  

13. S.-G. Park, C.-K. Lee, and B. Lee, “Compact multi-projection 3D display using a wedge prism,” Proc. SPIE 9391, 939113 (2015). [CrossRef]  

14. Y. K. Cheng, S. N. Chung, and J. L. Chern, “Aberration analysis of a wedge-plate display system,” J. Opt. Soc. Am. A 24(8), 2357–2362 (2007). [CrossRef]   [PubMed]  

15. C.-K. Lee, T. Lee, H. Sung, and S.-W. Min, “Analysis and design of wedge projection display system based on ray retracing method,” Appl. Opt. 52(17), 3964–3976 (2013). [CrossRef]   [PubMed]  

16. Y. M. Kim, J. Yim, Y.-K. Ahn, and S.-W. Min, “Compensation of elemental image using multiple view vectors for off-axis integral floating system,” Appl. Opt. 53(10), 1975–1982 (2014). [CrossRef]   [PubMed]  

17. K. Nagano, A. Jones, J. Liu, J. Busch, X. Yu, M. Bolas, and P. Debevec, “An autostereoscopic projector array optimized for 3D facial display,” in ACM SIGGRAPH 2013 Emerging Technologies (ACM, 2013), paper 1.

18. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

19. K. Hong, J. Hong, J.-H. Jung, J.-H. Park, and B. Lee, “Rectification of elemental image set and extraction of lens lattice by projective image transformation in integral imaging,” Opt. Express 18(11), 12002–12016 (2010). [CrossRef]   [PubMed]  

Supplementary Material (1)

Visualization 1: MOV (983 KB). View images of ten-view 3D display.
