
360-degree large-scale multiprojection light-field 3D display system

Open Access

Abstract

This paper proposes a 360-deg large-scale multiprojection light-field 3D display system that can reconstruct the light field of models in real space. The reconstructed content can be observed by multiple viewers simultaneously from different angles and positions. In this system, 360 projectors project images onto a cylindrical light-field diffusion screen 1.8 m high and 3 m in diameter. Viewers moving around the system see 3D scenes with smooth motion parallax at a frame rate of 30 fps and above. To achieve a large-scale display, we design a wide-field lens with cylindrical lenses to enlarge the projection image. To improve the efficiency of data transmission and render 3D content in real time, we apply computers equipped with multiple graphics cards, and the display data are divided by field-programmable gate array (FPGA) splitters. Finally, a 360-deg light-field autocalibration method based on a CCD and multiview sampling is proposed, whose effectiveness is confirmed by experimental results.

© 2018 Optical Society of America

1. INTRODUCTION

Three-dimensional (3D) display has developed rapidly in recent years, with the goal of accurately reconstructing large-scale, true-color 3D content that can be experienced without glasses. Common glasses-free techniques include multiview display, integral imaging, holography, and light-field display.

Multiview displays usually use a lenticular lens or parallax barrier to distribute the pixels of the display device to multiple viewpoints and show different views to the left and right eyes, thus offering a 3D effect. However, pixel multiplexing causes severe resolution reduction in each view zone [1]. Multiprojection systems do not reduce resolution in view zones [2], but viewpoints are still fixed, and serious cross talk exists between view zones [3,4]. Integral imaging can provide full parallax by using an array of small lenses (spherical, square, etc.), but limited viewing angle, resolution, and depth range are inherent drawbacks of the display device and the lens array itself [5,6]. Holography can theoretically recover all light information, and progress has been made in increasing display speed [7] and field of view [8] in recent years. However, generating holographic images requires so much information that practical implementation is mainly limited by the speed of data processing and transmission and by the development of spatial light modulators. Based on the analysis above, these techniques are currently not suitable for constructing large-scale, wide-viewing-angle 3D display systems.

By contrast, light-field display holds great promise for reproducing large-scale, high-quality 3D scenes because it reconstructs the light field of objects. It also discards phase information and records only light intensity, which greatly decreases the amount of data and thus enables real-time display. Existing light-field display systems, such as scanning systems [9–11] and multiprojection systems [12], have succeeded in providing a complete 360-deg viewing experience. However, these systems are small in size, which limits their applications.

To attain large-scale 3D display in space and allow for personal interaction, a preferred approach is a projection-type configuration. But previous large-scale multiprojection systems usually had a limited viewing angle because they used 2D rectangular screens [13–19]. To solve this problem, Zhong et al. [20] set up a multiprojection light-field display system with a cylindrical diffusion screen and optimized the light-field reconstruction algorithm for better display performance [21]. Yet that work did not address problems such as image calibration, data transmission, and real-time rendering. Image calibration is necessary because conventional optical imaging produces blurred and distorted images on a cylindrical screen. And to generate seamless horizontal autostereoscopic imagery, a dense array of projectors is needed, so the transmission and synchronization of the huge amount of display data also have to be improved. Regarding calibration, Chen et al. [22] proposed a method with no need to reconstruct the shape of the screen mathematically; instead, paper was pasted onto the screen to simulate its shape, which inevitably introduces calibration error caused by misalignment between the paper and the screen.

In this paper, a 360-deg large-scale multiprojection light-field 3D display system with 360 projectors and a cylindrical diffusion screen is proposed. The screen is 1.8 m high with a diameter of 3 m. We design a wide-field lens that produces a clear image on the cylindrical screen and enlarges the projection image. We use computers equipped with multiple graphics cards to process the huge amount of data and achieve synchronous real-time rendering at 30 fps. In addition, we propose a light-field calibration method based on a CCD and multiview fitting algorithms, whose effectiveness is confirmed by experimental results.

2. PRINCIPLE OF LIGHT-FIELD RECONSTRUCTION

The principle of the proposed display system is illustrated in Fig. 1. A dense array of projectors is arranged in a circle under a cylindrical anisotropic diffusion screen. The projectors are staggered horizontally, and the array circle is concentric with the screen. To reduce the amount of display data, the system offers only horizontal parallax but satisfies the perspective relationship in the vertical direction. This is a reasonable tradeoff, as human movement is dominated by horizontal motion. Accordingly, the anisotropic diffuser has a large diffusion angle in the vertical direction and a small diffusion angle in the horizontal direction. Through the screen, the pupils of the projectors are diffused into stripe images, and at each viewpoint the viewed image consists of several stripe images from multiple projectors. As shown in Fig. 1(a), at viewpoints V1 and V2, the two spatial points A and B are displayed by different projectors. When the projectors are arranged densely enough, a seamless image can be observed.

Fig. 1. (a) Schematic diagram of a 360-deg multiprojection light-field display system. (b) General structure of the light-field mapping principle. (c) On the x–z plane. (d) On the y–z plane.

Take the reconstruction of a spatial point Q(x,y,z) displayed by projector P(0,h_p,d_p) as an example; the mapping relationship is illustrated in Figs. 1(b)–1(d). Point S(x_s,y_s,z_s) is the corresponding projection point of Q on the screen of radius r_s. On the x–z plane, light through Q does not change its propagation direction because of the property of the screen, so x_s and z_s are calculated from the intersection of line PQ and the screen, as given in Eq. (1). On the y–z plane, since light from Q is strongly scattered in the vertical direction, y_s is calculated according to the viewpoint V(x_v,y_v,z_v), as given in Eq. (2).

\[
\begin{cases}
\dfrac{z_s - z}{d_p - z} = \dfrac{x_s - x}{-x},\\[4pt]
z_s^2 + x_s^2 = r_s^2,
\end{cases}
\tag{1}
\]
\[
y_s = \frac{(y_v - y)(z_s - z)}{z_v - z} + y. \tag{2}
\]
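To make the mapping concrete, the following Python sketch (our illustration, not code from the paper) solves Eqs. (1) and (2) for the projection point S. It assumes positions are expressed in the coordinate frame of Fig. 1 and takes the far-side intersection of line PQ with the screen circle, since in this geometry the projectors sit outside the screen radius and project across to the opposite wall.

```python
import numpy as np

def map_point_to_screen(q, proj, view, r_s=1.5):
    """Solve Eqs. (1)-(2): projection point S = (x_s, y_s, z_s) on the
    cylindrical screen of radius r_s for a spatial point Q = (x, y, z)
    displayed by projector P = (0, h_p, d_p) and seen from V = (x_v, y_v, z_v)."""
    x, y, z = q
    _, _, d_p = proj
    x_v, y_v, z_v = view

    # Eq. (1): intersect line PQ with the circle x^2 + z^2 = r_s^2 in the
    # x-z plane; parameterize the line as (x_s, z_s) = (0, d_p) + t*(x, z - d_p).
    dx, dz = x, z - d_p
    a = dx * dx + dz * dz
    b = 2.0 * d_p * dz
    c = d_p * d_p - r_s * r_s
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # far-side root (assumed)
    x_s, z_s = t * dx, d_p + t * dz

    # Eq. (2): the screen scatters strongly in the vertical direction, so the
    # height of S follows the viewpoint V.
    y_s = (y_v - y) * (z_s - z) / (z_v - z) + y
    return x_s, y_s, z_s

# Example: projector on the array circle (radius 1.7 m), hypothetical Q and V.
print(map_point_to_screen(q=(0.2, 1.0, 0.1), proj=(0.0, 0.0, -1.7),
                          view=(0.0, 1.6, -3.0)))
```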

3. PROJECTION OPTICAL SYSTEM

In the projection system, commercial DLP microprojectors are used, arranged in a full 360-deg circle under the cylindrical light-field screen. The projection lenses need to meet the following requirements:

  1. Since the projector is placed under the screen, the lens needs to be eccentric (off-axis) so that the image can be projected upward onto the screen.
  2. The target imaging surface is a cylinder with a diameter of 3.0 m and a height of 1.8 m. The aspect ratio of the projector is 4:3, so the projection image must be stretched more in the horizontal direction.
  3. The lateral projection image should cover as much of a 180-deg range on the screen as possible, as shown in Fig. 2(a).

Fig. 2. (a) Top view of lens imaging. (b) Structure of the projection lens. (c) MTF chart of the projection lens. The maximum on the abscissa is the cutoff frequency of the projection lens.

The structure of the designed projection lens is shown in Fig. 2(b). A group of spherical lenses ensures that the center part of the projection, where the imaging distance is relatively long, is clear on the screen, while two cylindrical lenses control the horizontal and vertical field angles, respectively, and ensure that the horizontal edges of the projection, where the imaging distance is relatively short, are also clear. The horizontal field angle of the projection lens is about 66.4 deg, and the vertical field angle is about 30 deg. Given that our projector has 800×600 resolution, the screen radius is 1.5 m, and the projector-array radius is 1.7 m, the horizontal cutoff frequency can be calculated as 0.107 lp/mm, which is the maximum on the abscissa of the MTF chart in Fig. 2(c). According to the figure, the MTF at the edge of the field at the cutoff frequency is greater than 0.6, meeting the imaging requirements.
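As a sanity check of that number, the following sketch (our own arithmetic, assuming the image edge lies at the edge of the 66.4-deg horizontal field and forms on the far side of the cylinder) reproduces the quoted cutoff frequency:

```python
import numpy as np

r_s, r_p = 1.5, 1.7                    # screen and projector-array radii (m)
half_fov = np.radians(66.4 / 2.0)      # half of the horizontal field angle

# Edge ray from a projector at (x, z) = (0, -r_p); take the far-side
# intersection with the screen circle, where the image actually forms.
dx, dz = np.sin(half_fov), np.cos(half_fov)
b, c = -2.0 * r_p * dz, r_p**2 - r_s**2            # a = dx^2 + dz^2 = 1
t = (-b + np.sqrt(b * b - 4.0 * c)) / 2.0
x_s, z_s = t * dx, -r_p + t * dz

arc_m = 2.0 * np.arctan2(x_s, z_s) * r_s           # image width on screen, ~3.75 m
cutoff = (800 / 2.0) / (arc_m * 1000.0)            # 400 line pairs over the arc
print(f"cutoff = {cutoff:.3f} lp/mm")              # -> ~0.107 lp/mm
```

Under these assumptions the edge ray lands about 71.6 deg around the cylinder on each side, so the full image spans roughly 143 deg of arc, consistent with requirement 3 above.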

The distortion graphs of the lens are shown in Fig. 3. In Fig. 3(a), the ordinate and abscissa represent the normalized vertical and horizontal field-of-view coordinates, denoted Hy and Hx, respectively. The red lines in this graph indicate the horizontal edge of the object, which corresponds to the horizontal edge of the field of view. In Fig. 3(b), the abscissa represents the horizontal field angle θ, and the ordinate represents the height of the image on the screen, denoted Y. The red lines in this graph indicate the horizontal edge of the field of view. To further demonstrate the performance of the projection lens, we projected a one-pixel-wide grid onto the screen and photographed multiple parts of the projection image, as shown in Fig. 3(c). We define the left- and right-edge fields of view of the projector's image on the screen as −1 and 1, and the bottom and top fields of view as 0 and 1. Pictures from left to right correspond to the projection image from the left edge to the right edge, and pictures in the same row are taken at the same height. In the grid pictures, adjacent pixels can be clearly distinguished, which verifies the image quality of the lens. These figures show that our projection lens introduces large distortion in the middle and small distortion at the edges, because the distance between the projection unit and the screen differs greatly from middle to edge. Thus, calibration is necessary for every projector in order to reconstruct the light field accurately and seamlessly.

Fig. 3. (a) Grid image in the projector. The red lines indicate the horizontal edge of the object that corresponds to the horizontal edge of the field of view. (b) Grid image on the screen. θ is the horizontal field angle; Y is the height of the image on the screen. The red lines indicate the horizontal edge of the field of view. (c) Real grid image on the screen. Numbers at the bottom of each subfigure represent the corresponding field in the complete projection image.

4. TRANSMISSION AND RENDERING OF LIGHT-FIELD DATA

To achieve a large-scale 360-deg 3D display, a large amount of light-field data needs to be updated in real time by computers. Suppose one computer needs to drive N projectors with resolution w×h and frame rate f; then the total display bandwidth is given by Eq. (3),

\[
W \geq w \times h \times f \times N. \tag{3}
\]

Taking a projector with 800×600 resolution and a 60 Hz frame rate as an example, the total display bandwidth needs to be at least 9.9 × 10³ MHz for 360 projectors. To distribute the display data and achieve maximum performance, we equip each computer with multiple graphics cards. Common high-performance graphics cards have four output interfaces: two dual-channel DVI interfaces with a bandwidth of 330 MHz each, an HDMI interface, and a DP interface. HDMI and DP can each be converted to single-channel DVI with a bandwidth of 165 MHz. Thus, one graphics card has an output bandwidth of 990 MHz and can theoretically drive 36 projectors.

Based on the analysis above, we set the data generation and transmission mode as follows: each computer is installed with four graphics cards, and four parallel computing computers are applied to drive the 360 projectors. As shown in Fig. 4(a), two kinds of splitters are designed for data distribution, and their specifications are listed in Table 1. Splitter 1 divides a dual-channel DVI into three 800×1800 resolution DVI interfaces, and splitter 2 divides a single-channel interface (HDMI or DP) or an 800×1800 resolution DVI into three 800×600 resolution VGA interfaces, so each splitter 2 drives three projectors. As a result, one computer can drive 96 projectors. The connection between graphics cards and splitters is shown in Fig. 4(b).
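The following back-of-the-envelope sketch (our arithmetic, based only on the figures quoted above) tallies the Eq. (3) requirement against the per-card budget and the splitter fan-out:

```python
# Display bandwidth per Eq. (3), ignoring blanking overhead.
w, h, f, n_proj = 800, 600, 60, 360
total_mhz = w * h * f * n_proj / 1e6        # ~1.0e4 MHz, i.e. at least ~9.9e3 MHz

# Output budget of one graphics card: 2 dual-channel DVI + HDMI + DP
# (the latter two converted to single-channel DVI).
card_mhz = 2 * 330 + 165 + 165               # 990 MHz

# Fan-out through the splitters of Fig. 4(b): each dual-channel DVI feeds a
# splitter 1 (3 x 800x1800 DVI), each of whose outputs feeds a splitter 2
# (3 x 800x600 VGA); HDMI and DP each feed one splitter 2 directly.
proj_per_card = 2 * 3 * 3 + 2 * 3            # 24 projectors per card
proj_per_computer = 4 * proj_per_card        # 96 projectors per computer
computers = -(-n_proj // proj_per_computer)  # ceil(360 / 96) = 4 computers

print(total_mhz, card_mhz, proj_per_card, proj_per_computer, computers)
```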

Fig. 4. (a) Working principles of splitter 1 and splitter 2. (b) Connection relationship between graphics cards and splitters.

Table 1. Specification of Splitters

When the splitters are connected to a computer, they are identified as independent monitors, and the arrangement of the monitors determines how display data are allocated to each projector. To guarantee that the images of adjacent projectors splice together at the correct angles, the monitor arrangement is set as in Fig. 5. Monitors in the same row are controlled by the same window, and the display data in each window are generated by one graphics card. There are thus four windows forming a render area on one computer, and the rendering content of each projector is determined by the projector's position in this area. The rendering algorithm runs as follows: models are processed on the CPU and then sent to the GPU, where the model information is converted to display data based on Eqs. (1) and (2). The specific projection image of each projector is then generated, and together these images form the final high-resolution image that one computer outputs.
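As an illustration of how a projector index might map to its region of the render area, here is a minimal sketch; the ordering of strips within a row is our assumption, loosely following Fig. 5 (two splitter-1 monitors of three strips each, then two splitter-2 strips, with three 800×600 sub-images stacked per 800×1800 strip):

```python
PROJ_W, PROJ_H = 800, 600           # one projector image
STRIPS_PER_ROW = 8                  # 2 splitter-1 monitors x 3 strips + 2 splitter-2 strips
PROJS_PER_STRIP = 3                 # three 800x600 images stacked per 800x1800 strip
PROJS_PER_ROW = STRIPS_PER_ROW * PROJS_PER_STRIP   # 24 projectors per card/window

def projector_rect(k: int):
    """Pixel rectangle (x, y, w, h) of projector k (0..95) in one computer's
    render area. Rows correspond to graphics cards; columns to splitter strips."""
    row, within = divmod(k, PROJS_PER_ROW)
    strip, sub = divmod(within, PROJS_PER_STRIP)
    x = strip * PROJ_W
    y = row * PROJS_PER_STRIP * PROJ_H + sub * PROJ_H
    return x, y, PROJ_W, PROJ_H

print(projector_rect(0), projector_rect(95))   # first and last projector regions
```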

Fig. 5. Monitor arrangement and rendering window setting. Larger monitors represent splitter 1 with resolution 2400×1800, and smaller ones represent splitter 2 with resolution 800×1800.

To achieve display synchronization, we apply the following techniques. First, within a single graphics card, synchronization is achieved through software: the frame update happens only when the images of all projectors have been generated. Second, because there is no direct communication between common graphics cards, we introduce a Quadro card and the WGL_NV_gpu_affinity extension to control the rendering of the graphics cards in one computer. The extension introduces the concept of an affinity DC, which directs OpenGL commands to a specific GPU or set of GPUs in the affinity mask. We also use multithreading to support parallel rendering, in which the main thread controls the synchronization of the subthreads, and each subthread controls the rendering of one graphics card. Lastly, synchronization between computers is achieved by broadcasting over the local area network: the current state of each model in the scene is sent by the host to the slaves. If the performance of the computers differs little, the delay between host and slaves is not noticeable and leads to at most one dropped frame in our system.
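A minimal sketch of the barrier-style frame synchronization described above (our illustration; the per-card OpenGL work bound through WGL_NV_gpu_affinity is stubbed out):

```python
import threading

NUM_CARDS = 4
barrier = threading.Barrier(NUM_CARDS + 1)   # 4 render subthreads + main thread

def render_all_projector_images(card_id: int) -> None:
    pass  # stub: per-card OpenGL rendering through its affinity DC

def swap_buffers_all_cards() -> None:
    pass  # stub: issue the buffer swaps for every card together

def subthread(card_id: int, frames: int) -> None:
    for _ in range(frames):
        render_all_projector_images(card_id)
        barrier.wait()   # report: this card's projector images are complete
        barrier.wait()   # hold until the main thread has swapped buffers

def main_loop(frames: int = 3) -> None:
    threads = [threading.Thread(target=subthread, args=(i, frames))
               for i in range(NUM_CARDS)]
    for t in threads:
        t.start()
    for _ in range(frames):
        barrier.wait()             # all cards finished rendering this frame
        swap_buffers_all_cards()   # frame update only once every image exists
        barrier.wait()             # release the subthreads into the next frame
    for t in threads:
        t.join()

main_loop()
```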

5. LIGHT-FIELD CALIBRATION METHOD

Due to distortion caused mainly by the projection lens and the installation deviation of the projectors, the light field from the projectors does not distribute as desired. To solve this problem, we propose an autocalibration method that calibrates both the positions and directions of rays. According to Ref. [23], a light field can be represented by two surfaces; in our method, we use the projector image plane P and the screen cylinder S. We set a coordinate system on each surface and parameterize rays by their intersections with these two surfaces. We establish a mapping relationship between the intersection coordinates on the two surfaces and then use this relationship to modulate models and reconstruct their light field in space.

The structure of the calibration system is shown in Fig. 6. To simplify calculation, we unroll the screen cylinder into a plane Scr and establish a normalized Cartesian coordinate system SOT. To guarantee calibration accuracy, we use the control points of a Bézier surface to represent the mapping relationship. A Bézier surface of degree N×M is defined by a set of (N+1)×(M+1) control points p_ij (i=0,1,…,N; j=0,1,…,M), as given in Eq. (4), where B_i^N(u) and B_j^M(v) are Bernstein polynomials; p_ij can be solved by substituting a group of samples (u,v), P(u,v) into Eq. (4). In our method, the coordinates of projector pixels and points on the screen are defined as (u,v) and P(u,v), respectively,

\[
P(u,v)=\sum_{i=0}^{N}\sum_{j=0}^{M}B_i^N(u)\,B_j^M(v)\,p_{ij};\quad u,v\in[0,1]. \tag{4}
\]
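Solving for the control points p_ij from a set of samples is a linear least-squares problem in the Bernstein basis. The sketch below is our illustration and can serve for both B_1 and B_2 later in this section; the degree N = M = 3 is our assumption, as the paper does not state the degree used:

```python
import numpy as np
from math import comb

def bernstein(i: int, n: int, u):
    """Bernstein polynomial B_i^n(u); u may be a scalar or a numpy array."""
    return comb(n, i) * u**i * (1.0 - u)**(n - i)

def design_matrix(uv, N: int, M: int):
    """One row per sample (u, v): all (N+1)(M+1) products B_i^N(u) B_j^M(v) of Eq. (4)."""
    u, v = uv[:, 0], uv[:, 1]
    cols = [bernstein(i, N, u) * bernstein(j, M, v)
            for i in range(N + 1) for j in range(M + 1)]
    return np.stack(cols, axis=1)

def fit_bezier(uv, targets, N: int = 3, M: int = 3):
    """Least-squares control points p_ij from samples (u, v) -> P(u, v)."""
    A = design_matrix(np.asarray(uv, float), N, M)
    p, *_ = np.linalg.lstsq(A, np.asarray(targets, float), rcond=None)
    return p                                   # shape ((N+1)(M+1), target dim)

def eval_bezier(p, uv, N: int = 3, M: int = 3):
    """Evaluate Eq. (4) at sample coordinates (u, v)."""
    return design_matrix(np.asarray(uv, float), N, M) @ p
```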

Fig. 6. (a) Structure of the automatic calibration system. Screen marks are attached to the screen to simulate its shape. (b) The normalized plane coordinate system SOT, established by unrolling the cylindrical screen. The start angle (0 deg) of the screen maps to 0 in SOT and the end angle (360 deg) to 1; the bottom of the screen maps to 0 and the top to 1.

Taking one projector as an example, the framework of our algorithm is as follows. First, to simulate the shape of the screen, we mark it with a rectangular lattice, as shown in Fig. 6; points in the lattice are denoted Scr(s,t), s,t ∈ [0,1]. Then we use a CCD to capture images of the screen marks. Since the CCD can capture only rays that propagate directly into its pupil, and due to the property of the screen, only a narrow stripe of the projection image can be captured. To sample the complete light field of a projector, we place the CCD at several locations and capture multiview images. When the CCD is located at position i (i=0,1,…,N), the screen marks in the captured image are denoted ScrCam_i(a,b). The mapping relationship B_1 between screen space and CCD image space can then be calculated by Eq. (5),

\[
\mathrm{Scr}(s,t)=B_1\{\mathrm{ScrCam}_i(a,b)\}. \tag{5}
\]

Second, the projector projects a 2D image displaying M lines distributed within the screen height range. This set of lines is horizontal on the projector image plane, with equal vertical intervals. As explained before, the CCD captures only a column of M points, denoted ProjCam_i(a,b). We design a search algorithm, based on color matching, to find the corresponding projector pixels of these points, denoted Proj_i(m,n): the projector image is divided into zones of different colors, and after color recognition Proj_i(m,n) can be located in a certain color zone. Within this zone, color division and recognition are repeated until the new color zone is small enough. Finally, we scan the pixels in the zone to find the target pixel. Based on the previously acquired data, we calculate the screen coordinates of the projection points at position i, denoted ProjScr_i(s,t),

\[
\mathrm{ProjScr}_i(s,t)=B_1\{\mathrm{ProjCam}_i(a,b)\}. \tag{6}
\]
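The color-matching search described above can be sketched as follows (our illustration; project_zones and observed_color are hypothetical stand-ins for the real projector output and CCD color recognition):

```python
# Repeatedly split the candidate zone of the projector image into uniquely
# colored sub-zones; the CCD reports which color it sees at the target point,
# shrinking the zone until a pixel-by-pixel scan is cheap.
COLORS = ["red", "green", "blue", "yellow"]

def find_projector_pixel(zone, project_zones, observed_color, min_size=8):
    """zone = (m0, n0, m1, n1), a pixel rectangle in the projector image."""
    m0, n0, m1, n1 = zone
    while (m1 - m0) > min_size or (n1 - n0) > min_size:
        mm, nm = (m0 + m1) // 2, (n0 + n1) // 2
        quads = [(m0, n0, mm, nm), (mm, n0, m1, nm),
                 (m0, nm, mm, n1), (mm, nm, m1, n1)]
        project_zones(quads, COLORS)          # light each quadrant in one color
        seen = observed_color()               # color captured at the CCD point
        m0, n0, m1, n1 = quads[COLORS.index(seen)]
    # Zone small enough: scan the remaining pixels one by one.
    for m in range(m0, m1):
        for n in range(n0, n1):
            project_zones([(m, n, m + 1, n + 1)], ["white"])
            if observed_color() == "white":
                return m, n
    return None
```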

After sampling the light field at multiple positions, the mapping relationship B_2 between the projector image and screen space can be calculated by Eq. (7),

\[
\mathrm{Proj}(m,n)=B_2\{\mathrm{ProjScr}(s,t)\}. \tag{7}
\]

Based on B_2, the mapping table between projector pixels and projection points on the screen can be calculated and used for display. In software, the calibration program is divided into three modules: (1) image reading and coordinate calculation of sample points, (2) projector pixel searching, and (3) mapping calculation. To improve the efficiency of data acquisition at one position, the CCD samples as many projectors as possible. The overall module flow chart is shown in Fig. 7.
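Once the control points of B_2 are fitted (p_b2 below, obtained with fit_bezier from the earlier sketch), tabulating the mapping is a matter of evaluating the surface on a grid; a minimal sketch, with the grid density our assumption:

```python
import numpy as np

def build_mapping_table(p_b2, eval_bezier, s_samples=512, t_samples=512):
    """Tabulate B2 of Eq. (7): normalized screen coordinates (s, t) -> projector
    pixel (m, n). `p_b2` are control points fitted with fit_bezier() above;
    only the portion of the (s, t) grid inside this projector's footprint is
    meaningful for display."""
    s, t = np.meshgrid(np.linspace(0, 1, s_samples),
                       np.linspace(0, 1, t_samples))
    st = np.stack([s.ravel(), t.ravel()], axis=1)
    return eval_bezier(p_b2, st).reshape(t_samples, s_samples, 2)
```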

Fig. 7. Flow chart of the software.

6. EXPERIMENTAL RESULTS AND DISCUSSION

Important parameters of the proposed display system and the specification of each computer are shown in Table 2, and the real system is shown in Fig. 8.

Table 2. Parameters of the Display System

Fig. 8. Structure of the real system. 360 projectors are arranged in a circle under the cylindrical light-field screen, and the circle is concentric with the screen.

In the calibration experiment, the CCD can be placed at arbitrary positions. At one sample position, the CCD first captures images of the screen marks and records the coordinates of the marks in both screen space and CCD image space. Afterwards, the unsampled projectors sequentially project lines as described above; the CCD captures the projection images of each projector and records the coordinates of the projection points in CCD image space. Finally, the corresponding projector pixels are searched according to the projection points in the CCD image. Typically, at one position the CCD samples about 30 projectors, and the pixel sampling intervals in the projectors range from 70 to 100.

The projected grid before and after calibration is shown in Fig. 9. Vertical lines in the grid are set parallel to the generatrix of the cylindrical screen, and horizontal lines parallel to the bottom of the screen. Figure 9(a) shows that before calibration, images from different projectors cannot splice together, and the distortion of the projector images is severe. After calibration, the distortion is eliminated and the images stitch well, as shown in Fig. 9(b). Models such as a static girl and a dynamic soldier are displayed, as shown in Fig. 10. When the soldier is displayed, the frame rate reaches at least 30 fps, as measured with the software Fraps. The complexity of the 3D contents and their corresponding frame rates are listed in Table 3.

Fig. 9. (a) Grid before calibration. (b) Grid after calibration.

Fig. 10. Static girl model and dynamic soldier model. (a) Girl pictures taken from different angles. (b) Real size of the girl; the screen is 1.8 m high. (c) Screenshot of the dynamic soldier.

Table 3. Comparison of Frame Rate Between Different Models

The display results show that our system can reconstruct the light field of models with high quality. The models are seamless and can be observed from different positions over a 360-deg range. Thanks to the wide-field lens, viewers can see models with a maximum height of 1.8 m and a maximum width of 2.78 m. However, the photographs show light and dark stripes on the models, which are even more obvious in the background. This is because the projector array is not dense enough: as explained above, the pupils of the projectors are diffused into stripe images, and the distance between adjacent light stripes is positively related to the exit-pupil distance between adjacent projectors. In addition, the horizontal diffusion angle of the screen is small, which restricts the width of the light stripes. Furthermore, brightness and color inconsistency among projectors also affect the uniformity of the 3D models. An effective way to alleviate these effects is to increase the density of the projectors.

7. CONCLUSION

In this paper, a large-scale multiprojection light-field 3D display system is proposed. We verify the principle of light-field reconstruction and generate consistent 3D models with smooth motion parallax. Our system accommodates multiple viewers and provides a compelling 360-deg viewing experience. Unlike previous 3D display systems, it achieves both large scale and a wide field of view. We design a wide-field lens to enlarge the projection image. We provide a method applying field-programmable gate array splitters and computers equipped with multiple graphics cards to distribute the display data; the rendering frame rate reaches 30 fps and above. We also propose a light-field autocalibration method based on Bézier surfaces and multiview sampling to eliminate distortion from the projection images. This method can also be adapted to display systems with other screen shapes. We envisage that our work will promote the popularity of multiuser 3D display systems and offer a reference for other 3D display techniques.

Funding

National Key R&D Program of China (2017YFB1002900); National Natural Science Foundation of China (NSFC) (61575175).

Acknowledgment

We thank the State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, for providing the necessary support.

REFERENCES

1. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, "Three-dimensional display technologies of recent interest: principles, status, and issues [Invited]," Appl. Opt. 50, H87–H115 (2011).

2. W. Matusik and H. Pfister, "3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes," ACM Trans. Graphics 23, 814–824 (2004).

3. N. A. Dodgson, J. Moore, and S. Lang, "Multi-view autostereoscopic 3D display," in International Broadcasting Convention (1999), Vol. 2, pp. 497–502.

4. F. Speranza, W. J. Tam, T. Martin, L. Stelmach, and C. H. Ahn, "Perceived smoothness of viewpoint transition in multi-viewpoint stereoscopic displays," Proc. SPIE 5664, 72–82 (2005).

5. J.-H. Park, K. Hong, and B. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt. 48, H77–H94 (2009).

6. T. Georgiev, K. C. Zheng, B. Curless, D. Salesin, S. K. Nayar, and C. Intwala, "Spatio-angular resolution tradeoffs in integral photography," in Eurographics Symposium on Rendering (2006), pp. 263–272.

7. H. Gao, J. Liu, Y. Yu, P. Liu, C. Zeng, Q. Yao, H. Zheng, and Z. Zheng, "Real-time holographic video display using holographic liquid crystals with extended response to future holographic 3D TV," in Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2015), paper DTh4A-6.

8. T. Inoue and Y. Takaki, "Table screen 360-degree holographic display using circular viewing-zone scanning," Opt. Express 23, 6533–6542 (2015).

9. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, "Rendering for an interactive 360° light field display," ACM Trans. Graph. 26, 40 (2007).

10. X. Xia, Z. Zheng, X. Liu, H. Li, and C. Yan, "Omnidirectional-view three-dimensional display system based on cylindrical selective-diffusing screen," Appl. Opt. 49, 4915–4920 (2010).

11. C. Su, Q. Zhong, L. Xu, H. Li, and X. Liu, "24.2: Real-time rendering 360° floating light-field 3D display," in SID Symposium Digest of Technical Papers (Wiley, 2015), Vol. 46, pp. 346–349.

12. S. Yoshida, "fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays," Opt. Express 24, 13194–13203 (2016).

13. T. Balogh, T. Forgács, T. Agács, O. Balet, E. Bouvier, F. Bettio, E. Gobbetti, and G. Zanetti, "A scalable hardware and software system for the holographic display of interactive graphics applications," in Eurographics (Short Presentations) (2005), pp. 109–112.

14. T. Agocs, T. Balogh, T. Forgacs, F. Bettio, E. Gobbetti, G. Zanetti, and E. Bouvier, "A large scale interactive holographic display," in Virtual Reality Conference (IEEE, 2006), p. 311.

15. T. Balogh, "The HoloVizio system," Proc. SPIE 6055, 60550U (2006).

16. J. A. I. Guitián, E. Gobbetti, and F. Marton, "View-dependent exploration of massive volumetric models on large-scale light field displays," Visual Comput. 26, 1037–1047 (2010).

17. P. T. Kovács, A. Boev, R. Bregović, and A. Gotchev, "Quality measurements of 3D light-field displays," in International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM), Chandler, Arizona, 2014.

18. J.-H. Lee, J. Park, D. Nam, S. Y. Choi, D.-S. Park, and C. Y. Kim, "32.1: Optimal projector configuration design for 300-Mpixel light-field 3D display," in SID Symposium Digest of Technical Papers (Wiley, 2013), Vol. 44, pp. 400–403.

19. J.-H. Lee, J. Park, D. Nam, and D.-S. Park, "Color and brightness uniformity compensation of a multi-projection 3D display," Proc. SPIE 9579, 95790N (2015).

20. Q. Zhong, Y. Peng, H. Li, C. Su, W. Shen, and X. Liu, "Multiview and light-field reconstruction algorithms for 360° multiple-projector-type 3D display," Appl. Opt. 52, 4419–4425 (2013).

21. Q. Zhong, H. Li, X. Liu, B. Chen, and L. Xu, "24.3: Adaptive optimization of rendering for multi-projector-type light field display," in SID Symposium Digest of Technical Papers (Wiley, 2015), Vol. 46, pp. 350–353.

22. B.-S. Chen, Q. Zhong, H.-F. Li, X. Liu, and H.-S. Xu, "Automatic geometrical calibration for multiprojector-type light field three-dimensional display," Opt. Eng. 53, 073107 (2014).

23. M. Levoy and P. Hanrahan, "Light field rendering," in 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1996), pp. 31–42.
