
Calibration method of multi-projector display system with extra-large FOV and quantitative registration accuracy analysis

Open Access

Abstract

The calibration of a multi-projector display with an extra-large field of view (FOV), together with a quantitative registration analysis for achieving visually seamless splicing, is crucial and difficult. In this paper, we present a novel calibration method to realize seamless splicing for a multi-projector display system with extra-large FOV. The display consists of 24 projectors covering 360 degrees in the longitude direction and 210 degrees in the latitude direction. A wide-angle camera fixed on a rotating optical system is used to scan the entire display scene and establish a point-to-point correspondence between projector pixels and spatial points using longitude and latitude information. Local longitude and latitude tables are established on the target of the wide-angle camera. A deterministic method is proposed to locate the North Pole of the display. The local tables corresponding to different camera views can be unified based on the image of the North Pole to form global longitude and latitude tables of an arbitrary free-form surface. The mapping between the projector pixels and the camera pixels is established by an inverse projection technique, and each pixel of each projector can then be assigned a unique pair of longitude and latitude values. A quantitative registration accuracy analysis method is proposed for the multi-projector display system, in which a three-frequency temporal unwrapping method based on coded longitude and latitude values is applied to calculate the registration accuracy. Experiments show that the registration error of the multi-projector system is less than 0.4 pixels.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Multi-projector displays with extra-large field of view (FOV) are becoming increasingly significant in aerospace engineering, entertainment and visualization applications because they provide an immersive viewing experience [1–4]. Specially installed commercial projector arrays [5,6] are used to build such seamless, high-resolution displays. At present, the widely used projection displays mainly include planar and curved displays, among which the large dome display is widely used because of its large field of view, high resolution and high immersion. To realize a seamless display on a curved screen with a large field of view, many issues need to be addressed, including eliminating the distortion of each single projector image and the discontinuity across projectors, compensating the color difference between projectors, and making the luminance of the overlapping areas of adjacent projectors transition smoothly. Among these problems, the geometric registration of the overlapping areas of adjacent projectors is one of the most important steps toward a seamless display.

For general displays comprising several projectors, a single camera is usually used to capture the projected image, from which the homography between the camera image and the projector's frame is computed [7] to calibrate the multi-projector display [8]. As display screens become larger, it becomes increasingly difficult for a single camera to capture the entire display surface. For a display system with extra-large FOV, either a multi-camera array or a single camera placed at different positions is needed to overcome the limitation of a single camera's resolution and view frustum [9]. Therefore, how to unify the FOVs of the camera at different positions is a key issue in calibrating a multi-projector display system with extra-large FOV.

Existing work is mostly concerned with the calibration of multi-projector display systems within a single camera's FOV, and much less with the calibration of extra-large-FOV displays covered by multiple camera fields of view. Early research focused mainly on the calibration of planar displays. For instance, Chen et al. [10] used a spanning tree of homographies automatically constructed from several camera images to accurately register arbitrarily mounted projectors to a global reference frame, calibrating a large-format multi-projector planar display wall. The average local and global alignment errors of their algorithm were less than 1 pixel and 3 pixels, respectively. More recent research has targeted the calibration of curved displays. However, most of the existing methods need to measure the shape of the curved display in advance. For example, Sun et al. [11] presented a multi-projector display solution on a parameterized curved surface based on geometric parameter estimation. The coordinates of three-dimensional (3D) points on the screen had to be measured in advance using a camera stereo pair with known intrinsics. The average alignment error of this method was below 2 mm. Wang et al. [12] proposed a 3D reconstruction algorithm based on Bezier surface models to calibrate multi-projector display walls with arbitrary continuous curved surfaces. The 3D mesh data of the display wall had to be reconstructed in advance to obtain a local model of the projection surface, and the approach could produce more outliers if the geometry of the scene was not recovered correctly. The average local and global alignment errors of their method in different environments were both less than 3 pixels. Bajestani et al. [13] proposed a calibration method for multi-projector displays that solves an optimization problem. The camera-projector pairs had to be calibrated to extract the 3D shape observed in their fields of view in order to perform the global registration. The local and global alignment accuracies of the method were less than 0.5 pixels and 1 pixel, respectively.

In this paper, we propose a novel calibration method for a multi-projector display system with extra-large FOV involving dozens of projectors and a single rotating camera. The entire display is covered by multiple camera fields of view. The proposed method is suitable for the seamless splicing of images displayed on an arbitrary free-form surface without measuring the surface in advance. A method for quantitative analysis of the registration accuracy of the multi-projector display is proposed as well. In the calibration process, each pixel of each projector is assigned a unique pair of longitude and latitude values, using the wide-angle camera fixed on a rotating optical system as a bridge. Local longitude and latitude tables are established on the target of the wide-angle camera. The local tables corresponding to different camera views are unified based on the image of the North Pole of the display to form the global longitude and latitude tables of the arbitrary free-form surface. The mapping relationship between the multiple projectors and the multiple camera positions is determined by the inverse projection technique. Our dome display system is composed of 24 projectors and covers 360 degrees in the longitude direction and 210 degrees in the latitude direction. The seamless splicing results of the multi-projector display system with extra-large FOV are displayed, and a quantitative registration error analysis is given.

The rest of the paper is organized as follows: Section 2 describes the principles used in this paper, including the system outline and the principle of calculating registration accuracy. Section 3 shows some experimental results to validate the proposed method. Finally, Section 4 presents the conclusions.

2. Principle

2.1 System outline

In a large-format, high-resolution display system with multiple projectors, achieving seamless splicing of the projected images is one of the key steps for exhibiting high-quality panoramic images. In order to perform panoramic projection without visual error, the mapping between the spatial points of the panoramic display and each pixel of each projector must be established. To the best of our knowledge, the depth information of an uneven display normally needs to be computed by three-dimensional sensing technology for panoramic splicing. In this paper, we discuss seamless splicing without measuring the shape of the display in advance and conduct a quantitative analysis of the registration accuracy. Our dome display system includes 24 projectors with a projection field of view covering 360 degrees in the longitude direction and 210 degrees in the latitude direction. A wide-angle camera fixed on a rotating optical system is used as a bridge to assign a unique pair of longitude and latitude values, corresponding to a spatial point, to each pixel of each projector. For clarity, a workflow for calibrating the multi-projector system is shown in Fig. 1.

Fig. 1. Workflow for calibrating the multi-projector system.

Here, for simplicity, a single projector is used to explain how the mapping between projector pixels and points on the display is set up through the camera. A point on the panoramic display can be “seen” by a pixel of the calibrated camera, and its longitude and latitude values can then be calculated from the image pixel coordinates of that point on the camera and the internal parameters of the camera. Without loss of generality, the local longitude and latitude tables of the partial display within the camera’s FOV are established on the camera target. Based on the located North Pole of the display, the global longitude and latitude tables of the display can be formed in a unified frame by registering the local longitude and latitude tables of the different camera views. The mapping between projector pixels and camera pixels is established by the inverse projection technique. Then each pair of longitude and latitude values of a point on the display designates a unique pixel of a specific projector. By assigning the information of the required projected image to the corresponding pixel of the projector, seamless splicing of the projected images can be conducted in theory. It is noted that after the mapping among projectors, camera and display surface is established, the projected view does not depend on the camera origin, but only on the projected images or the design of the projected content.
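As an illustration of this content-assignment step, the following Python sketch (our own illustrative code, not taken from the paper) samples an equirectangular panorama into a projector frame, assuming hypothetical per-projector longitude and latitude arrays lon_map and lat_map in radians and a full-sphere panorama layout:

    import numpy as np

    def render_projector_frame(pano, lon_map, lat_map):
        # pano:    equirectangular panorama, shape (H, W, 3); columns span longitude 0..2*pi,
        #          rows span latitude 0..pi measured from the North Pole (assumed layout)
        # lon_map: per-pixel longitude of one projector, shape (rows, cols), radians
        # lat_map: per-pixel latitude of the same projector, shape (rows, cols), radians
        # Returns the image that this projector should display (nearest-neighbour lookup).
        H, W = pano.shape[:2]
        cols = np.clip((lon_map / (2 * np.pi) * W).astype(int), 0, W - 1)
        rows = np.clip((lat_map / np.pi * H).astype(int), 0, H - 1)
        return pano[rows, cols]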

2.1.1 Establish local longitude and latitude tables using a calibrated wide-angle camera

Before the actual projection display, a unique pair of longitude and latitude values must be assigned to each pixel of each projector of the panoramic display. A calibrated wide-angle camera is used to establish local longitude and latitude tables for the points on the display system within the FOV of the camera. The specific steps are as follows:

  • (1) Camera calibration. Zhang’s planar pattern algorithm [14] is used to calibrate the wide-angle camera to obtain the intrinsic parameters, including the principal point (u0, v0), the focal length f and the distortion parameters k, etc. The undistorted coordinates of the camera pixels are calculated with Zhang’s distortion model, and the local longitude and latitude tables are established on these undistorted coordinates.
  • (2) Locate the North Pole of the display system. The North Pole is the intersection of the extension of the rotation axis of the rotating optical system with the display surface. It is a datum point used to unify the FOVs of the camera at multiple positions. In order to locate the North Pole accurately, the optical center of the camera must lie on the rotation axis of the turntable. Under this alignment condition, the image Nact of the North Pole on the camera target is immutable as the camera rotates, while the imaging positions of the other points on the display move on the camera target plane. The method in Ref. [15] is used to align the camera’s optical center with the rotation axis accurately. Based on Nact, the global longitude and latitude tables of an arbitrary free-form surface can be formed from the local tables corresponding to the different camera views. A simple method is proposed to determine Nact: a specially designed structured light image, such as a sinusoidal fringe, orthogonal fringes or a checkerboard, is projected onto the top area of the display screen, and an image sequence is recorded while the camera rotates with the turntable. The mean square deviation of the gray value of each homonymous pixel across the captured images is calculated. The pixel with the smallest mean square deviation is the image point Nact of the North Pole, because except for Nact, the intensities of the homonymous pixels change with the rotation of the camera. The specific principle is shown in Fig. 2.
  • (3) Set up local longitude and latitude tables on the camera target for the display system (a minimal numerical sketch of Eqs. (1) and (2) follows this list). First, the origin of the system is located at the optical center of the camera, as shown in Fig. 3(a), where the sphere denotes a space with longitude and latitude information, O-XYZ is the reference coordinate system, Oc-XcYcZc is the camera coordinate system, and (u0, v0) is the principal point of the image target of the camera. Taking (u0, v0) as the center, the conversion between the camera’s pixel coordinates (u, v) and the longitude and latitude (θ, φ) is given by Eq. (1):
    $$\left\{ \begin{array}{l} \theta = \tan^{-1}\dfrac{\sqrt{(u-u_0)^2+(v-v_0)^2}}{f}\\ \varphi = \tan^{-1}\dfrac{u-u_0}{v-v_0} \end{array} \right. ,$$
    where the latitude θ is the angle between the line connecting each pixel to the optical center and the optical axis of the system, and the longitude φ is determined by the position of each pixel on the CCD target plane relative to the principal point. Because of the limitation of the camera’s field of view (FOV = 112.3249°), in order to ensure that the camera can “see” the 210° vertical angle in our rotating optical system, the camera’s optical axis is rotated around the Y axis by an angle α, as shown by O-X1YZ1 in Fig. 3(a). The line connecting the principal point (u0, v0) and the optical center Oc is the Z1 axis. Assume (x, y, z, 1) is the homogeneous coordinate of a space point in O-XYZ, expressed in polar coordinates as (rsinθcosφ, rsinθsinφ, rcosθ, 1), and (x’, y’, z’, 1) is the homogeneous coordinate of the corresponding point in O-X1YZ1, expressed in polar coordinates as (rsinθ1cosφ1, rsinθ1sinφ1, rcosθ1, 1). Here, r is the distance between the point O and the space point. The relationship between (x, y, z, 1) and (x’, y’, z’, 1) is given by Eq. (2-a):
    $$(x',y',z',1) = (x,y,z,1)\left[ \begin{array}{cccc} \cos\alpha & 0 & -\sin\alpha & 0\\ 0 & 1 & 0 & 0\\ \sin\alpha & 0 & \cos\alpha & 0\\ 0 & 0 & 0 & 1 \end{array} \right],$$
where $\alpha = \tan^{-1}\left( \dfrac{\sqrt{(N_1-u_0)^2+(N_2-v_0)^2}}{f} \right)$ and Nact = (N1, N2). Then, the actual local longitude φ1 and latitude θ1 after rotating by the angle α are calculated according to Eq. (2-a):
$$\left\{ \begin{array}{l} \varphi_1 = \tan^{-1}\dfrac{\sin\theta\sin\varphi}{\cos\theta\sin\alpha + \sin\theta\cos\varphi\cos\alpha}\\ \theta_1 = \cos^{-1}(\cos\theta\cos\alpha - \sin\theta\cos\varphi\sin\alpha) \end{array} \right. .$$
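The following Python sketch (illustrative only; function and variable names are ours) evaluates Eqs. (1) and (2) numerically for undistorted pixel coordinates, assuming the intrinsics u0, v0, f and the located North Pole image Nact = (N1, N2) are known:

    import numpy as np

    def local_lon_lat(u, v, u0, v0, f, N1, N2):
        # Eq. (1): angles of a camera pixel with respect to the optical axis and principal point
        theta = np.arctan2(np.hypot(u - u0, v - v0), f)   # latitude w.r.t. the optical axis
        phi = np.arctan2(u - u0, v - v0)                  # longitude on the target plane
        # Tilt angle alpha between the optical axis and the rotation axis (text after Eq. (2-a))
        alpha = np.arctan2(np.hypot(N1 - u0, N2 - v0), f)
        # Eq. (2-b): local longitude/latitude after rotating the optical axis by alpha
        phi1 = np.arctan2(np.sin(theta) * np.sin(phi),
                          np.cos(theta) * np.sin(alpha)
                          + np.sin(theta) * np.cos(phi) * np.cos(alpha))
        theta1 = np.arccos(np.cos(theta) * np.cos(alpha)
                           - np.sin(theta) * np.cos(phi) * np.sin(alpha))
        return phi1, theta1

Because the expressions are elementwise, u and v may be full pixel-coordinate grids, so the whole local longitude and latitude tables are produced in one call.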

Fig. 2. The principle of calculating the image point Nact of the North Pole.

Fig. 3. (a) Schematic for establishing local longitude and latitude tables. (b) The local longitude and latitude information displayed as sinusoidal stripes.

With the help of the camera rotating around the Z axis, the local tables of the different camera views can be used to form the global longitude and latitude tables in a unified frame, so as to mark each spatial point on the display. In order to show the local longitude and latitude values within the camera’s FOV more intuitively, they are encoded with Eq. (3) to form sinusoidal stripes, as shown in Fig. 3(b).

$$I = 0.5 + 0.25\cos\left(\frac{2\pi J}{T_0}\right) + 0.25\cos\left(\frac{2\pi V}{T_0}\right),$$
where J and V represent the generated local longitude and latitude values, respectively. T0 is the set fringe period.
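A minimal sketch of Eq. (3), with assumed variable names, encoding a pair of local tables as a composite stripe image:

    import numpy as np

    def encode_tables_as_fringes(J, V, T0):
        # J, V: local longitude and latitude tables (same shape); T0: fringe period.
        # Returns the composite sinusoidal stripe image of Eq. (3), values in [0, 1].
        return 0.5 + 0.25 * np.cos(2 * np.pi * J / T0) + 0.25 * np.cos(2 * np.pi * V / T0)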

2.1.2 Assign longitude and latitude values for each pixel of each projector

In order to seamlessly display images on the panoramic screen (e.g. a dome) using the multi-projector system, the longitude and latitude values of each spatial point must be assigned to a pixel of a projector with the camera as a bridge. The reverse fringe projection technique [16–18] is used to establish the correspondence between camera pixels and projector pixels. The specific steps are listed as follows.

  • (1) Each projector projects sine-coded horizontal and vertical structured-light fringes onto the panoramic display in turn, and the camera of the rotating optical system captures the deformed fringes modulated by the screen sequentially. In order to obtain a high-precision transfer relationship between projector pixels and camera pixels, a three-frequency temporal phase unwrapping algorithm [19–22] is used in our work to obtain the absolute phases in the horizontal and vertical directions (a minimal sketch of the phase extraction and temporal unwrapping follows this list). For example, the intensity distributions of the captured phase-shifting fringes with period T can be expressed as:
    $$\begin{array}{l} I_{hi}(u,v) = A(u,v) + B(u,v)\cos\left[\dfrac{2\pi v}{T} + \dfrac{2\pi i}{N} + \varphi_{hT}(u,v)\right]\\ I_{vi}(u,v) = A(u,v) + B(u,v)\cos\left[\dfrac{2\pi u}{T} + \dfrac{2\pi i}{N} + \varphi_{vT}(u,v)\right] \end{array},$$
    where T denotes the fringe period of the 1-frequency, $\sqrt{s}$-frequency and s-frequency fringes, respectively. Ihi(u, v) and Ivi(u, v) are the fringe intensity distributions in the horizontal and vertical directions, respectively. A(u, v) and B(u, v) denote the average intensity and the intensity modulation, respectively. i = 0, 1, 2, …, N−1, where N is the number of phase-shift steps. φhT(u, v) and φvT(u, v) are the measured phases corresponding to the fringes with period T in the horizontal and vertical directions, respectively. The absolute phases φh(u, v) and φv(u, v) in the horizontal and vertical directions are obtained from φhT(u, v) and φvT(u, v) by the temporal unwrapping algorithm.
  • (2) The inverse projection technique is used to identify the projector pixel (l, m) corresponding to the camera pixel (u, v) using φh(u, v) and φv(u, v). A function f(.) abbreviates the relationship between the projector pixel (l, m) and the camera pixel (u, v), as expressed in Eq. (5):
    $$\left( \begin{array}{c} l\\ m\\ I_{pro} \end{array} \right) = f\left( \begin{array}{c} u\\ v\\ I_{cam} \end{array} \right),$$
    where Ipro and Icam represent the intensities of pixels (l, m) and (u, v), respectively. For a detailed introduction to the inverse projection technique, please refer to Ref. [17].
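The following sketch (our own, under stated assumptions) shows the two generic building blocks of this step: the N-step phase-shifting formula for the wrapped phase and one level of temporal unwrapping, in which a lower-frequency absolute phase unwraps a higher-frequency wrapped phase. The full three-frequency scheme of Refs. [19–22] chains such levels from the 1-frequency phase upward; the sign convention of the wrapped phase may differ from that written in Eq. (8).

    import numpy as np

    def wrapped_phase(frames):
        # frames: array of N phase-shifted images with shifts 2*pi*i/N, i = 0..N-1.
        # Returns the wrapped phase in (-pi, pi] per pixel (standard N-step formula).
        frames = np.asarray(frames, dtype=float)
        N = frames.shape[0]
        shifts = 2 * np.pi * np.arange(N) / N
        s = np.tensordot(np.sin(shifts), frames, axes=(0, 0))
        c = np.tensordot(np.cos(shifts), frames, axes=(0, 0))
        return -np.arctan2(s, c)

    def unwrap_with_lower_frequency(phi_high, phi_low_abs, freq_ratio):
        # Unwrap the wrapped phase phi_high using the already-absolute phase of a
        # lower-frequency fringe set; freq_ratio = f_high / f_low.
        k = np.round((phi_low_abs * freq_ratio - phi_high) / (2 * np.pi))
        return phi_high + 2 * np.pi * k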

Since each camera pixel (u, v) has been assigned a unique pair of longitude and latitude values, the corresponding projector pixel (l, m) can be assigned the same longitude and latitude values.
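A minimal sketch of this assignment (assumed data layout, not the authors' code): the camera's longitude and latitude tables are copied across the camera-to-projector mapping produced by the inverse projection step, leaving unassigned projector pixels marked as NaN.

    import numpy as np

    def assign_projector_tables(cam_lon, cam_lat, cam_to_proj, proj_shape):
        # cam_lon, cam_lat: longitude/latitude per camera pixel, shape (H, W)
        # cam_to_proj: integer array (H, W, 2) holding (l, m) for each camera pixel,
        #              with negative entries where no projector pixel was identified
        # proj_shape:  (rows, cols) of the projector
        proj_lon = np.full(proj_shape, np.nan)
        proj_lat = np.full(proj_shape, np.nan)
        valid = np.all(cam_to_proj >= 0, axis=-1)
        l = cam_to_proj[..., 0][valid]
        m = cam_to_proj[..., 1][valid]
        proj_lon[l, m] = cam_lon[valid]
        proj_lat[l, m] = cam_lat[valid]
        return proj_lon, proj_lat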

Figure 4 shows eight local longitude and latitude distributions for different fields of view of the camera, together with the global longitude and latitude distributions in stripe mode. Figure 4(a) shows the local longitude and latitude distributions, and Fig. 4(b) shows the global longitude and latitude distribution registered from the eight local tables based on Nact. For clarity, the global tables are displayed on a dome with a field of view of 360 degrees in the longitude direction and 210 degrees in the latitude direction. Figure 5 gives a brief summary of establishing the mapping between each projector pixel and a unique spatial point using the longitude and latitude tables.

Fig. 4. Schematic diagram of longitude and latitude values. (a) Eight local longitude and latitude tables. (b) Global longitude and latitude distribution displayed on a dome.

Fig. 5. Calibration process of the multi-projector system.

2.2 Analysis of calculating registration accuracy

In a multi-projector display system, the overlapping area illuminated by adjacent projectors should be assigned the same projection information; otherwise, obvious visual errors will appear. For instance, a feature point P in the overlapping area A should be illuminated by pixel p’ of Pro.1 and pixel p’’ of Pro.2. Therefore, pixels p’ and p’’ should be marked with the same longitude and latitude values. In our method, each pixel of each projector has been assigned a unique pair of longitude and latitude values, so a perfect panoramic display can be produced in theory. In practice, due to the influence of noise, pixels p’ and p’’ may be assigned different longitude and latitude values. On the display, the points P’ and P’’ illuminated by pixels p’ and p’’ then do not coincide, as shown in Fig. 6, where Area.1, Area.2 and Area.3 are illuminated by Pro.1, Pro.2 and Pro.3, respectively. The error Δp between the two points is the geometric registration error of that feature point of the display system. A similar analysis is conducted between Pro.1 and Pro.3. The average of the geometric registration errors of all points in all overlapping areas is defined as the geometric registration accuracy of the system. We encode the longitude and latitude values of each pixel of adjacent projectors as sinusoidal fringes and quantitatively calculate the registration errors with the phase-shifting algorithm [23].

Fig. 6. Schematic diagram of calculating registration accuracy.

Taking three adjacent projectors as an example, the specific steps for analyzing geometric registration accuracy are as follows:

  • (1) The specified adjacent projectors in the multi-projector system project encoded longitude and latitude fringes onto the screen, as shown in Fig. 6. The longitude and latitude values assigned to the projector pixels are generated by the method above (see Section 2.1.2). The projected fringes are expressed as
    $$\begin{array}{l} I^{p}_{Jkn}(x^p,y^p) = A + B\cos\left[2\pi\dfrac{J_k(x^p,y^p)}{d_j} + \dfrac{2\pi(n-1)}{N}\right]\\ I^{p}_{Vkn}(x^p,y^p) = A + B\cos\left[2\pi\dfrac{V_k(x^p,y^p)}{d_j} + \dfrac{2\pi(n-1)}{N}\right] \end{array},$$
    where the superscript p denotes the projector. $I^{p}_{Jkn}(x^p,y^p)$ and $I^{p}_{Vkn}(x^p,y^p)$ represent the intensity of the projector pixel (xp, yp). A and B are the average intensity and the intensity modulation. Jk and Vk are the longitude and latitude tables of the specified projectors, respectively. dj (j = 1, 2, 3) are the fringe periods of the three sets of fringes. N is the number of phase-shift steps and n = 1, 2, …, N. k = 1, 2, 3 is the projector serial number.
  • (2) The phase-shifting algorithm is used to extract phase information carrying the longitude and latitude values. The intensities of the captured images can be expressed as follows:
    $$\begin{array}{l} I^{c}_{Jkn}(x^c,y^c) = A' + B'\cos\left[2\pi\dfrac{J_k(x^c,y^c)}{d_j} + \dfrac{2\pi(n-1)}{N}\right]\\ I^{c}_{Vkn}(x^c,y^c) = A' + B'\cos\left[2\pi\dfrac{V_k(x^c,y^c)}{d_j} + \dfrac{2\pi(n-1)}{N}\right] \end{array},$$
    where the superscript c denotes the camera. $I^{c}_{Jkn}(x^c,y^c)$ and $I^{c}_{Vkn}(x^c,y^c)$ represent the intensity of the camera pixel (xc, yc). The wrapped phases can be retrieved as follows:
    $$\begin{array}{l} \varphi_{Jk}(x^c,y^c) = \tan^{-1}\dfrac{\sum\limits_{n=1}^{N} I^{c}_{Jkn}(x^c,y^c)\sin[2\pi(n-1)/N]}{\sum\limits_{n=1}^{N} I^{c}_{Jkn}(x^c,y^c)\cos[2\pi(n-1)/N]}\\ \varphi_{Vk}(x^c,y^c) = \tan^{-1}\dfrac{\sum\limits_{n=1}^{N} I^{c}_{Vkn}(x^c,y^c)\sin[2\pi(n-1)/N]}{\sum\limits_{n=1}^{N} I^{c}_{Vkn}(x^c,y^c)\cos[2\pi(n-1)/N]} \end{array}$$
  • (3) Calculate the absolute phases of the overlapping areas. In area A, select a point as the starting point, as shown in Fig. 6, to unwrap the wrapped phases of Pro.1 and Pro.2, respectively. The absolute phases of Pro.1 and Pro.2 in area A are denoted by ΦJ1, ΦJ2, ΦV1 and ΦV2, respectively; then the average values of the absolute phase differences over all points in area A are
    $$\begin{array}{l} mean_J(1,2) = \dfrac{\sum\limits_{1}^{M}(\Phi_{J1}-\Phi_{J2})}{M}\\ mean_V(1,2) = \dfrac{\sum\limits_{1}^{M}(\Phi_{V1}-\Phi_{V2})}{M} \end{array},$$
    where M represents the number of points in the overlapping area A, and meanJ(1, 2) and meanV(1, 2) represent the average values of the absolute phase difference in the longitude and latitude directions, respectively.
  • (4) Repeat step (3) to obtain the average values of the absolute phase differences between Pro.1 and Pro.3 in both directions in the overlapping area B,
    $$\begin{array}{l} mean_J(1,3) = \dfrac{\sum\limits_{1}^{M'}(\Phi_{J1}-\Phi_{J3})}{M'}\\ mean_V(1,3) = \dfrac{\sum\limits_{1}^{M'}(\Phi_{V1}-\Phi_{V3})}{M'} \end{array}.$$
    where M′ represents the number of points in the overlapping area B.
  • (5) Convert the phase differences into pixel differences by Eq. (11); the result is the geometric registration accuracy of the system (a minimal sketch of steps (3)–(5) follows this list).
    $$\begin{array}{l} error_J = \dfrac{X \cdot mean_J}{2\pi d_{\max}}\\ error_V = \dfrac{Y \cdot mean_V}{2\pi d_{\max}} \end{array},$$
    where X × Y is the resolution of the projector, and dmax is the maximum number of the period of the projected fringe.
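The following sketch (illustrative; the default d_max and the array names are assumptions, and the projector resolution default is taken from the experiment in Section 3) implements steps (3)–(5) for one overlapping area, averaging the absolute-phase differences and converting them to pixel units via Eq. (11):

    import numpy as np

    def registration_error(Phi_J_a, Phi_J_b, Phi_V_a, Phi_V_b, overlap_mask,
                           proj_res=(1920, 1200), d_max=64):
        # Phi_*: absolute longitude/latitude phases of two adjacent projectors (camera frame)
        # overlap_mask: boolean mask of the overlapping area
        # proj_res: projector resolution (X, Y); d_max: maximum number of fringe periods (assumed value)
        mean_J = np.mean(Phi_J_a[overlap_mask] - Phi_J_b[overlap_mask])   # Eqs. (9)/(10)
        mean_V = np.mean(Phi_V_a[overlap_mask] - Phi_V_b[overlap_mask])
        X, Y = proj_res
        error_J = X * mean_J / (2 * np.pi * d_max)                        # Eq. (11)
        error_V = Y * mean_V / (2 * np.pi * d_max)
        return error_J, error_V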

3. Experiment

Experiments have been conducted to show the splicing results and test the registration accuracy of the display. Our dome display covers 360 degrees in the longitude direction and 210 degrees in the latitude direction. The multi-projector system includes 24 projectors (SECO AP-DLU600, resolution: 1920×1200 pixels), and a partial scene of the projection display is shown in Fig. 7(a). The rotating optical system, shown in Fig. 7(b), includes a single-lens reflex (SLR) camera (Canon EOS 6D (W), resolution: 5472×3648 pixels), a wide-angle lens (LW-FX12MMF 2.8D-Dreamer, 12 mm), and a high-precision turntable (GCD-0401M, resolution: 0.005°). The system is set up on a tripod.

Fig. 7. The experimental system. (a) A partial scene of the multi-projector system. (b) The rotating optical system.

The SLR camera has been calibrated in advance with Zhang’s planar pattern algorithm, and the camera’s intrinsic parameters for calculating the local longitude and latitude tables are shown in Table 1.

Table 1. The camera’s intrinsic parameters

A statistical analysis method is used to locate the image of the North Pole of the dome before establishing the local longitude and latitude tables. The projector whose projection area covers the North Pole projects a specially designed structured-light image (a sinusoidal fringe here) onto the top of the dome. The turntable drives the camera through 360° in 5-degree intervals while images are taken. Figure 8 shows six frames captured during the camera rotation. The mean square deviations of the intensity values of the homonymous pixels across the captured images are calculated. As shown in Fig. 9, the pixel with the smallest mean square deviation is the image of the North Pole. In the experiment, the coordinates of the image of the North Pole are Nact = (1856, 5208) pixels.
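A minimal sketch of this statistical localization (our own illustrative code): stack the frames captured at the different turntable angles and pick the pixel whose intensity varies least across the stack.

    import numpy as np

    def locate_north_pole(frames):
        # frames: array of shape (num_angles, H, W), grayscale images captured while rotating.
        # Returns (row, col) of the pixel whose intensity varies least across the stack, i.e. Nact.
        frames = np.asarray(frames, dtype=float)
        msd = np.var(frames, axis=0)   # mean square deviation per homonymous pixel
        return np.unravel_index(np.argmin(msd), msd.shape)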

Fig. 8. Captured fringe patterns covering the North Pole.

Fig. 9. Distribution of mean square deviation of homonymous pixels.

Employing the coordinates of Nact, the internal parameters and the image pixel coordinates of the camera, the local longitude and latitude tables can be calculated using Eqs. (1) and (2); they are displayed as contours in Fig. 10, where Fig. 10(a) is the longitude map and Fig. 10(b) is the latitude map. Ten sets of local longitude and latitude tables, established by the rotated camera at ten different positions, form the global longitude and latitude tables that mark each spatial point on our display. Each pixel of each projector is then assigned a pair of longitude and latitude values employing the reverse fringe projection technique (Section 2.1.2).

Fig. 10. The local longitude and latitude maps. (a) Longitude map. (b) Latitude map.

Taking a line image projected on the display as an example, seamless splicing of the projected images is achieved by assigning the intensity of the required projected image to the corresponding projector pixel via the longitude and latitude tables. Figure 11(a) shows part of the images with checkerboard lines and Chinese words projected by different projectors after calibrating the display system. Visually, the information in the overlapping areas between adjacent projectors is well aligned and the splicing position error is almost negligible on the display wall. Figure 11(b) shows a projected real-world panoramic photograph. The red box indicates one of the 24 projectors in our projection system, and the viewpoint of the projected image is independent of the position of the camera.

Fig. 11. Projection splicing results. (a) A part of images with checkerboard lines and Chinese words and the zoomed display. (b) Projection splicing result of a real-world panoramic photograph.

The proposed quantitative registration error analysis is used to calculate the geometric splicing error. Projectors 1, 2 and 11 of the dome projection system were selected to project sinusoidal fringes encoded by their own local longitude and latitude values onto the display wall. A three-frequency, 6-step phase-shifting algorithm was used to obtain the absolute phases. The camera was aimed at the projection areas of the three projectors and captured the fringes projected by each of them in turn. In total, 108 fringe patterns were captured, 36 for each projector. The absolute phases of the overlapping areas illuminated by adjacent projectors were calculated, as shown in Fig. 12.

Fig. 12. The process of obtaining the absolute phases of the overlapping areas.

The first column of Fig. 12 shows the first-frame fringe patterns projected by the projectors in the horizontal and vertical directions in the three-frequency method. The second column shows the first captured frames with the different fringe periods. The third column shows the horizontal and vertical wrapped phases calculated from the horizontal and vertical fringes. The overlapping area A between projectors 1 and 2 and the overlapping area B between projectors 1 and 11 are marked with red boxes and green boxes, respectively. The overlapping areas are determined by superimposing the fringe patterns of two adjacent projectors; the fringe intensity within the overlapping areas is significantly higher than that in the other areas. The absolute phases of the overlapping areas are displayed in the fourth column of Fig. 12. Because each pixel of each projector is assigned unique longitude and latitude values, each point in an overlapping area illuminated by adjacent projectors should be assigned the same longitude and latitude values. Theoretically, if the multi-projector system is perfectly calibrated, the difference of the absolute phases in the overlapping areas is zero. As shown in the last column of Fig. 12, the phase distributions of adjacent projectors in the overlapping areas in the longitude and latitude directions are visually almost identical.

The results of the quantitative registration errors of the overlapping areas are shown in Fig. 13. The first and second columns of Fig. 13(a) display the profile lines and the enlarged parts of the absolute phases, respectively. The third column shows the differences of the absolute phases in the overlapping areas among the three projectors in the longitude direction. The fourth column displays the profile lines along the Y coordinate. Figure 13(b) shows the corresponding results for the latitude distribution.

Fig. 13. Results of quantitative registration errors. (a) Display phase difference in longitude direction. (b) Display phase difference in latitude direction.

Employing the obtained phase difference values, the registration accuracy of the three projectors in the longitude and latitude directions is calculated by Eqs. (9)–(11) and listed in Table 2. Because unified longitude and latitude information is used to evaluate the splicing error in the proposed method, the splicing accuracy of the overlapping areas of each pair of adjacent projectors is roughly equivalent. The average errors in the longitude and latitude directions are both less than 0.4 pixels.

Table 2. Registration accuracy

4. Conclusion and discussion

A novel calibration method for a multi-projector display system with extra-large FOV involving 24 projectors and a single rotating camera has been proposed, and a quantitative registration accuracy analysis of the system has been conducted. The method is suitable for the seamless splicing of images displayed on an arbitrary free-form surface without measuring the surface in advance. The advantage of our method is that a single camera fixed on a rotating optical system is sufficient to establish the correspondence between the projector pixels and the spatial points of the multi-projector display, so that seamless splicing of the projected images can be performed. In our method, establishing the corresponding relationship between camera pixels and projector pixels is essentially a self-calibrating procedure [24,25], performed by using high-accuracy phase information as a bridge. Camera calibration is required to obtain the local longitude and latitude tables, and a unique global longitude and latitude table is obtained by unifying these local tables from the different camera views.

In our work, the key steps for realizing a multi-projector display with extra-large FOV are: establishing local longitude and latitude tables on the target of the wide-angle camera, establishing global longitude and latitude tables in a unified frame from these local tables, assigning longitude and latitude values to the projector pixels, and analyzing the quantitative registration error. Seamless splicing has been demonstrated on a display system with 24 projectors covering 360 degrees in the longitude direction and 210 degrees in the latitude direction, and the splicing results are exhibited. The registration error of our multi-projector system is less than 0.4 pixels.

Funding

National Natural Science Foundation of China (62075143); Ministry of Science and Technology of the People's Republic of China (2013YQ490879).

Acknowledgments

This work is supported by National Key Scientific Apparatus Development Project of China (2013YQ490879) and National Natural Science Foundation of China (62075143).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Hereld, I. R. Judson, and R. L. Stevens, “Introduction to building projection-based tiled display systems,” IEEE Comput. Grap. Appl. 20(4), 22–28 (2000). [CrossRef]  

2. A. Raij, G. Gill, A. Majumder, H. Towles, and H. Fuchs, “Pixelflex2: A comprehensive, automatic, casually-aligned multi-projector display,” in 2003 IEEE International Workshop on Projector-Camera Systems, (2003), pp. 203–211.

3. C. Zoido, J. Maroto, G. Romero, and J. Felez, “Optimized methods for multi-projector display correction,” Int J Interact Des Manuf 7(1), 13–25 (2013). [CrossRef]  

4. S. Xing, S. Liu, and X. Sang, “Multi-projector three-dimensional display for 3D Geographic Information System,” Optik 139, 385–396 (2017). [CrossRef]  

5. S. Park, H. Seo, S. Cha, and J. Noh, “Auto-calibration of multi-projector displays with a single handheld camera,” in 2015 IEEE Scientific Visualization Conference (SciVis). IEEE, (2015), pp. 65–72.

6. M. Brown, A. Majumder, and R. Yang, “Camera-based calibration techniques for seamless multi projector displays,” IEEE Trans. Visual. Comput. Graphics 11(2), 193–206 (2005). [CrossRef]  

7. R. Raskar, J. Van Baar, and J. X. Chai, “A low-cost projector mosaic with fast registration,” in 2002 Asian Conference on Computer Vision (ACCV), (2002), 3(3).

8. Y. Chen, D. W. Clark, A. Finkelstein, T.C. Housel, and K. Li, “Automatic alignment of high-resolution multi-projector displays using an uncalibrated camera,” in Proceedings Visualization 2000. VIS 2000 (Cat. No. 00CH37145). IEEE, (2000), pp. 125–130.

9. H. Chen, R. Sukthankar, G. Wallace, and K. Li, “Scalable alignment of large-format multi-projector displays using camera homography trees,” in IEEE Visualization, 2002. VIS 2002. IEEE, (2002), pp. 339–346.

10. H. Chen, R. Sukthankar, G. Wallace, and T. J. Cham, “Calibrating scalable multi-projector displays using camera homography trees,” Computer Vision & Pattern Recognition. 19 (2001).

11. Y. Sun, S. Dai, C. Ren, and N. Chen, “Computer vision based geometric calibration in curved multi-projector displays,” 2010 3rd IEEE International Conference on Computer Science and Information Technology. 6, 342–348 (2010).

12. X. Wang, K. Yan, and Y. Liu, “Automatic Geometry Calibration for Multi-Projector Display Systems with Arbitrary Continuous Curved Surfaces,” IET Image Processing 13(7), 1050–1055 (2019). [CrossRef]  

13. S. A. Bajestani, H. Pourreza, and S. Nalbandian, “Scalable and view-independent calibration of multi-projector display for arbitrary uneven surfaces,” Machine Vision and Applications. 30(7-8), 1191–1207 (2019). [CrossRef]  

14. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

15. Y. Hou, X. Su, and W. Chen, “Alignment method of an axis based on camera calibration in a rotating optical measurement system,” Applied Sciences 10(19), 6962 (2020). [CrossRef]  

16. W. Li, T. Bothe, M. Kalms, C. Kopylow, and W. Jüptner, “Applications of inverse pattern projection,” in Optical Measurement Systems for Industrial Inspection III. International Society for Optics and Photonics, (2003), 5144: 492–503.

17. W. Li, T. Bothe, W. Osten, and M. Kalms, “Object adapted pattern projection—Part I: generation of inverse patterns,” Opt. Lasers Eng. 41(1), 31–50 (2004). [CrossRef]  

18. Y. Cai and X. Su, “Inverse projected-fringe technique based on multi projectors,” Opt. Lasers Eng. 45(10), 1028–1034 (2007). [CrossRef]  

19. J. Huntley and H. Saldner, “Temporal phase-unwrapping algorithm for automated interferogram analysis,” Appl. Opt. 32(17), 3047–3052 (1993). [CrossRef]  

20. J. Huntley, “Temporal phase unwrapping: application to surface profiling of discontinuous objects,” Appl. Opt. 36(13), 2770–2775 (1997). [CrossRef]

21. M. Servin, J. M. Padilla, A. Gonzalez, and G. Garnica, “Temporal phase-unwrapping of static surfaces with 2-sensitivity fringe-patterns,” Opt. Express 23(12), 15806–15815 (2015). [CrossRef]  

22. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

23. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: a review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]

24. R. Sukthankar, T. J. Cham, and G. Sukthankar, “Dynamic shadow elimination for multi-projector displays,” Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001. IEEE, 2: II (2001).

25. I. Khan, A. Bais, P. M. Roth, M. Akhtar, and M. Asif, “Self-calibrating laser-pointer's spotlight detection system for projection screen interaction,” 2015 12th International Bhurban Conference on Applied Sciences and Technology (IBCAST). IEEE, 175–180 (2015).
