
Design of AR system tracking registration method using dynamic target light-field

Open Access

Abstract

In the process of tracking registration for an augmented reality (AR) system, it is essential first to obtain the system's initial state, as its accuracy significantly influences the precision of subsequent three-dimensional tracking registration. At this stage, minor movements of the target can directly lead to calibration errors. Current methods fail to effectively capture the initial state of a dynamically deforming target in optical see-through AR systems. To tackle this issue, the concept of a static light-field is expanded to a four-dimensional dynamic light-field, and a tracking registration method for an optical see-through AR system based on the four-dimensional dynamic light-field is introduced. This method begins by analyzing the relationship between the components of the optical see-through AR system and studying the impact of a dynamic target on the initial state model. Leveraging the fundamental principle of light-field correlation, the theory and model for four-dimensional dynamic light-field tracking registration are developed. Extensive experiments confirm the accuracy and improved stability of the algorithm and demonstrate the superior performance of the three-dimensional tracking registration algorithm.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the advancement of artificial intelligence (AI) and the concept of the metaverse, virtual digital spaces can offer richer interaction and creative experiences. Immersive and social virtual digital spaces represent one of the future development directions. Augmented reality (AR) technology allows users to view virtual elements within the real world, significantly enhancing their perception and experience of information. It is widely applied in fields such as education, gaming, healthcare, electric power systems, industry, and more.

Augmented reality requires the real-time overlay of virtual information onto the real scene. Thus, it must determine the actual position of the virtual information overlay within the real scene through a visual algorithm. At this point, calibration of the augmented reality display device, virtual object, and world coordinate system becomes essential. Following precise calibration, the tracking camera captures the device's real-time transformed orientation, enabling the real-time transformation overlay of virtual information and enhancing the user's immersive experience [1].

Although many researchers have investigated the calibration of augmented reality systems, challenges such as low calibration accuracy and complex calibration methods persist, particularly in optical see-through AR systems. Optical see-through augmented reality refers to an optical system that projects virtual information directly in front of the human eye. Such a system provides the user with a clear view of the real scene but requires relatively high-precision calibration. Large movements can easily cause virtual registration failures, so solving the calibration problem is a prerequisite for stable tracking registration. Current calibration efforts primarily address static calibration, and calibration accuracy is compromised by even minor dynamic changes.

The main works of this paper are as follows.

  • (1) The four-dimensional static light-field is extended to a four-dimensional dynamic light-field. This paper analyzes the coordinate relationships between the various parts of the optical see-through display system, and the problem of obtaining the initial state for three-dimensional tracking registration is solved by using the coordinate relationships within the dynamic light-field. This method improves the accuracy of the initial state.
  • (2) A polynomial equation is used to describe the dynamic range change, and a tracking registration model is established that effectively reduces the initial state error under dynamic conditions.
  • (3) Tracking registration experiments are carried out, which show that the algorithm effectively improves registration accuracy in a dynamic initial environment and is more stable during long-term tracking registration.

2. Related works

In an optical see-through head-up display system, the purpose of calibration is to establish the relationship between the human eye and the virtual imaging plane of the optical see-through projection display. This ensures that virtual information can be correctly projected onto the real scene, allowing the human eye to observe the merged virtual-real environment. In traditional camera calibration, the relationship between the world coordinate system and the camera coordinate system is established to obtain the calibration parameters. In an optical see-through augmented reality system, however, calibration also involves the human eye and the optical display system, which introduces additional complexity [2]. The calibration model is more complex, and more factors must be considered during calibration. Typically, the calibration parameters within the system are determined by establishing a coordinate system and a calibration model, a process more involved than simple camera calibration [3,4]. The accuracy of the calibration directly influences the precision of subsequent three-dimensional tracking registration, which determines whether virtual information can be accurately overlaid on the corresponding position in the real scene [5–8].

Oishi et al. analyzed the causes of calibration errors and proposed a calibration method for optical see-through projective transformation [9]. The initial calibration was conducted using see-through head-mounted displays (STHMDs). By quantifying the parameter error of the projection transformation, the difference in the actual viewing position of the human eye is calibrated, and the minimum position error is identified through fitting, addressing the calibration problem of the optical see-through AR system. Genc et al. proposed a method that avoids decomposing the projection matrix of the tracking camera into internal and external parameters, given the instability of that decomposition [10]. This approach reduces the computation time and instability of the algorithm. Tuceryan and Navab put forward the single point active alignment method (SPAAM) to calibrate optical see-through augmented reality (AR) systems [11], which has become a classical calibration method. To reduce the influence of human factors on calibration error, Owen et al. proposed a two-phase calibration method that effectively improves calibration accuracy [12]. Gilson et al. simulated the position of the human eye with a camera and presented an optimization method for the reprojection error that requires no prior estimation and reduces computation time [13]. By establishing a correspondence between two-dimensional points and three-dimensional lines, Kellner et al. reduced the number of parameters that need to be measured to five, further accelerating the algorithm [14]. Plopski et al. suggested combining a pre-calibrated display screen with the position of the user's cornea, estimating the eye center for each image frame, and continuously recalibrating the system. This method achieves more accurate and stable calibration results and improves the distribution of projection error, although it is complex [15]. Hu et al. designed a special calibration tool for orthopedic surgery that calibrates both video see-through and optical see-through systems. This tool not only effectively reduces display error but also speeds up the algorithm. Building on this, a tracking registration experiment was conducted to achieve marker-free AR navigation. This algorithm is effectively applied in augmented reality computer-assisted orthopedic surgery [16].

According to the current research progress, calibration accuracy is affected by scene information, the calibration algorithm, and other factors. Improving the calibration accuracy of an optical see-through augmented reality system improves the accuracy of three-dimensional tracking registration. Traditional initial matrix acquisition methods are mostly based on the single point active alignment method, which does not consider the influence of light under scene changes. The direction of light propagating in the air can be recorded by the light-field, and the plenoptic function contains the necessary information about the light-field. Therefore, this paper uses the change of the light-field to obtain the initial matrix. To address the influence of calibration target motion on calibration accuracy, this study extends the four-dimensional light-field to a four-dimensional dynamic light-field to solve the calibration problem and further improve the accuracy of three-dimensional tracking registration.

3. Methodology

At present, the stability of the optical see-through tracking registration algorithm is compromised over prolonged registration processes. To acquire a more accurate initial matrix for tracking registration, this study examines the factors impacting registration accuracy, aiming to address the challenge of obtaining the initial matrix amid dynamic movement. The basic process of the algorithm is illustrated in Fig. 1.

Fig. 1. Algorithm principle block diagram.

Calibration is the initial step in an AR system, and its precision significantly affects the accuracy of subsequent three-dimensional tracking registration. Existing optical see-through AR systems are typically calibrated with static targets. However, when the calibrated target moves slightly, a dynamic calibration error occurs, which often goes unnoticed. In this study, we propose a new tracking registration method for the optical see-through AR system. This method accounts for the calibration error induced by dynamic deformation by analyzing the factors that affect the calibration accuracy of the optical see-through AR system. The calibration model is developed based on the theory of the four-dimensional light-field, transforming the four-dimensional static light-field problem into a dynamic light-field problem. Compared to traditional calibration methods, this approach yields more reliable calibration results. The overall flowchart of the method is presented in Fig. 2(a).

Fig. 2. The principle of the calibration algorithm. (a) Overall flow chart of dynamic calibration. (b) Schematic diagram of the four-dimensional dynamic light-field.

The calibration system is shown in Fig. 2(b) and consists of the following parts. To obtain calibration information, Itoh et al. [17] used tracking cameras to track the position of the human eye. In our experiment, to avoid introducing additional errors and to better capture the deformation of dynamic targets, we temporarily substitute a tracking camera, C1, for the position of the human eye. Another camera, C2, is used to recognize dynamic markers in the scene. The eye, the optical see-through AR system, and the calibration plane are considered as a single optical system. According to the definition of the light-field, this optical system can be represented by two parallel planes, constituting a four-dimensional light-field. The optical system propagates light information, with each ray carrying parameters such as direction, wavelength, and time. In general, the light-field is a seven-dimensional function, but for simplification, some researchers have reduced the seven-dimensional light-field to four dimensions. The underlying principle is that once light in space passes through the optical system, it can be approximately equated to light passing through two parallel planes. If the coordinates of the points where a ray intersects the two parallel planes are known, the line equation of the ray can be determined [18]. Therefore, this principle can be used to model the optical system of the optical see-through head-up display system. The entire optical system is thus equivalent to the configuration shown in Fig. 2(b).

Considering that the optical system can, in most cases, be reduced to a pair of parallel planes, assume that the plenoptic function is $l(x,y,\alpha ,\beta ,\gamma ,\lambda ,t)$; the seven-dimensional function is then reduced to a four-dimensional function as follows:

$$l(x,y,\alpha ,\beta ,\gamma ,\lambda ,t) \Rightarrow l(u,v,s,h), $$
where $(x,y)$ is the ray position parameter, $(\alpha ,\beta ,\gamma )$ is the ray direction parameter, $\lambda$ is the wavelength, and $t$ is the time. $l(u,v,s,h)$ is the simplified four-dimensional light-field function, whose four parameters are the coordinates of the intersections of a ray with the two parallel planes. This representation suppresses singular points in the function and is easier to calculate.
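To make the two-plane parameterization concrete, the following minimal sketch (plane positions, units, and the example eye/plate points are illustrative assumptions, not values from this work) intersects a ray with two parallel planes to obtain the coordinates $(u,v,s,h)$; the same construction later yields ${q_1}$ and ${q_2}$ from the eye position e and a calibration-plate point p.

```python
import numpy as np

def two_plane_coords(origin, direction, z1=0.0, z2=1.0):
    """Reduce a ray (3D origin + direction) to the 4D light-field coordinates
    (u, v, s, h): the (x, y) intersections with the parallel planes z = z1
    and z = z2.  The plane positions z1, z2 are illustrative assumptions."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if abs(direction[2]) < 1e-12:
        raise ValueError("ray is parallel to the parameterization planes")
    t1 = (z1 - origin[2]) / direction[2]      # ray parameter at plane z = z1
    t2 = (z2 - origin[2]) / direction[2]      # ray parameter at plane z = z2
    u, v = (origin + t1 * direction)[:2]      # intersection with the first plane
    s, h = (origin + t2 * direction)[:2]      # intersection with the second plane
    return u, v, s, h

# Example: a ray from a hypothetical eye position e through a plate point p.
e = np.array([0.02, -0.01, -0.30])
p = np.array([0.10,  0.05,  0.80])
print(two_plane_coords(e, p - e))
```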

Any ray propagating in space is defined in a spatial coordinate system, so a point on a ray is also a three-dimensional coordinate point. Since the light-field lies in three-dimensional space, its planes are three-dimensional planes. In the process of transforming the seven-dimensional light-field into a four-dimensional light-field, the two three-dimensional planes need to be reduced to two parallel two-dimensional planes; that is, three-dimensional coordinate vectors need to be reduced to two-dimensional coordinate vectors. The mapped two-dimensional coordinates can be derived step by step through coordinate system transformations. For the plane transformation, we adopt the method in Ref. [16]. Assuming that, before dimensionality reduction, the three-dimensional plane vectors are ${Q_1}$ and ${Q_2}$, and their corresponding two-dimensional vectors are ${q_1}$ and ${q_2}$, respectively, the transformation of a three-dimensional vector into a two-dimensional vector satisfies the following relation:

$${q_1} = T_{{Q_1}}^{ - 1}{Q_1}, $$
where $T_{{Q_1}}^{ - 1}$ is the dimension-reduction transformation matrix. The three-dimensional plane is converted from the three-dimensional space coordinate system to a two-dimensional coordinate system through coordinate translation, rotation, and scaling steps; in linear algebra, such a linear transformation from one coordinate system to another can be represented by a matrix. Although a point on the plane can be mapped one-to-one to a point in space, the reverse mapping of an arbitrary point in space to a point on the plane is not unique. Therefore, the transformation matrix ${T_{{Q_1}}}$ from two-dimensional coordinates to three-dimensional space coordinates is obtained first, and its inverse $T_{{Q_1}}^{ - 1}$ is then used to convert three-dimensional coordinates to two-dimensional coordinates. If ${Q_1}$ lies on the three-dimensional plane of the optical see-through display and ${Q_2}$ lies on the three-dimensional plane of the virtual image, then ${q_1}$ and ${q_2}$ lie on the corresponding two-dimensional mapping planes, respectively.
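As an illustrative sketch of this dimension reduction (the plane origin and in-plane axes below are assumed; the paper does not specify how ${T_{{Q_1}}}$ is constructed), the 2D-to-3D map is built from two orthonormal in-plane axes and inverted with a pseudoinverse, which plays the role of $T_{{Q_1}}^{ - 1}$:

```python
import numpy as np

def plane_basis(axis_u, axis_v):
    """Build the 3x2 matrix T mapping 2D plane coordinates to 3D offsets:
    Q = origin + T @ q.  axis_u and axis_v are assumed orthonormal in-plane axes."""
    return np.column_stack([axis_u, axis_v])

def to_plane_2d(Q, origin, T):
    """Map a 3D point lying on the plane back to its 2D plane coordinates using
    the pseudoinverse of T (the dimension-reduction transform)."""
    return np.linalg.pinv(T) @ (np.asarray(Q, dtype=float) - origin)

# Illustrative plane: an assumed origin and two orthonormal axes.
origin = np.array([0.0, 0.0, 0.5])
T = plane_basis(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
Q1 = np.array([0.12, -0.04, 0.5])       # a 3D point on the display plane
q1 = to_plane_2d(Q1, origin, T)         # its 2D mapping-plane coordinates
print(q1)                               # -> [ 0.12 -0.04]
```

The explicit origin term is included only to keep the sketch self-contained; it can be absorbed into ${T_{{Q_1}}}$ when homogeneous coordinates are used.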

The two tracking cameras are calibrated beforehand. Then the eye position e and any point p on the calibration plate can be determined, and the equation of the line l through them can be established. After the line passes through the two parallel planes, the coordinates of the two intersection points can be determined, yielding the values of ${q_1}$ and ${q_2}$. The coordinate transformation relationship between point ${q_2}$ on the virtual mapping plane and the tracking camera can be obtained from the camera calibration equation.

However, during the calibration process, the calibration plate must be moved many times to obtain more calibration data, which inevitably leads to dynamic deformation and a dynamic change of the light-field. Thus, the static light-field problem is transformed into a dynamic light-field problem. To represent a minor deformation, we first need to determine the position of point p after the dynamic change and then determine the changed direction m of l. We denote the three-dimensional coordinates of a checkerboard point as ${p_i} = {({x_i},{y_i},{z_i})^T}$. The dynamic deformation $\Delta {p_i}$ can be expressed by a polynomial equation:

$$\Delta {p_i} = f({p_i},{\alpha _i}) = (0,\,0,\,{a_{n0}}x_i^n + {a_{0n}}y_i^n + \ldots + {a_{11}}{x_i}{y_i} + {a_{10}}{x_i} + {a_{01}}{y_i} + {a_{00}}).$$

We usually take quadratic polynomials,

$$(0,\,0,\,{a_{20}}x_i^2 + {a_{02}}y_i^2 + {a_{11}}{x_i}{y_i} + {a_{10}}{x_i} + {a_{01}}{y_i} + {a_{00}}).$$
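As a concrete sketch of how the quadratic coefficients could be estimated (the data and solver below are illustrative assumptions, not the authors' implementation), the coefficients ${a_{20}},\ldots,{a_{00}}$ follow from a linear least-squares fit to the measured out-of-plane displacements of the checkerboard points:

```python
import numpy as np

def design_matrix(xy):
    """Monomial basis of the quadratic deformation surface in Eq. (4)."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])

def fit_quadratic_deformation(xy, dz):
    """Least-squares fit of dz ~ a20*x^2 + a02*y^2 + a11*x*y + a10*x + a01*y + a00.
    xy: (N, 2) checkerboard point coordinates; dz: (N,) out-of-plane displacements."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(xy), dz, rcond=None)
    return coeffs                                          # [a20, a02, a11, a10, a01, a00]

# Synthetic example with assumed ground-truth coefficients.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.1, 0.1, size=(64, 2))                  # checkerboard points
true = np.array([0.5, -0.3, 0.2, 0.01, -0.02, 0.001])      # hypothetical deformation
dz = design_matrix(xy) @ true + rng.normal(0.0, 1e-5, 64)
print(fit_quadratic_deformation(xy, dz))                   # recovers `true` closely
```

The fitted surface gives the third component of $\Delta {p_i}$, from which the deformed point ${p_i} + \Delta {p_i}$ and the changed line direction m follow.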

Small deformations can thus be fitted by a surface equation. We use the polynomial method because polynomial fitting has a high degree of freedom: more surface shapes can be fitted by adjusting the parameters, giving good deformation simulation performance. At this point, the equation of the deformed line m can be determined, and the corresponding ${q^{\prime}_2}$ coordinates can be obtained. Let ${q^{\prime}_2} = {(u,v)^T}$, whose corresponding point in the camera C2 coordinate system is ${{\boldsymbol p}_c} = {({x_c},{y_c},{z_c})^\textrm{T}}$. The relation between ${q^{\prime}_2}$ and points in the camera coordinate system is a rotation and translation transformation. The above values are substituted using the SPAAM method [11]. Assuming that the transformation matrix between the two is ${{\boldsymbol T}_\textrm{3}}( {\boldsymbol R} |{\boldsymbol t}) \in {\textrm{R}^{3 \times 4}}$, the relation between the homogeneous coordinates ${\tilde{q}^{\prime}_2}$ of ${q^{\prime}_2}$ and ${\tilde{{\boldsymbol p}}_c}$ of ${{\boldsymbol p}_c}$ can be expressed as follows:

$$s{(u,v,1)^T} = {{\textbf T}_3}{({x_c},{y_c},{z_c},1)^T}, $$
where ${{\boldsymbol T}_\textrm{3}}( {\boldsymbol R} |{\boldsymbol t}) = \begin{pmatrix} {a_{11}} & {a_{12}} & {a_{13}} & {a_{14}}\\ {a_{21}} & {a_{22}} & {a_{23}} & {a_{24}}\\ {a_{31}} & {a_{32}} & {a_{33}} & {a_{34}} \end{pmatrix}$ and $s$ is the scale factor. Solving the above equation for ${{\textbf T}_3}$ then yields the calibration result.
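For illustration, one standard way to solve this projection equation from a set of 2D–3D correspondences is a direct linear transform (a sketch under our assumptions; the paper follows the SPAAM procedure [11] but does not list its solver): each correspondence contributes two homogeneous linear equations in the twelve entries of ${{\textbf T}_3}$, and the solution is the right singular vector associated with the smallest singular value.

```python
import numpy as np

def solve_T3(pts2d, pts3d):
    """Estimate the 3x4 matrix T3 in s*(u, v, 1)^T = T3 @ (x, y, z, 1)^T from
    N >= 6 correspondences by the direct linear transform (DLT).
    pts2d: (N, 2) virtual-plane points q2'; pts3d: (N, 3) points in the C2 frame."""
    rows = []
    for (u, v), (x, y, z) in zip(pts2d, pts3d):
        X = np.array([x, y, z, 1.0])
        rows.append(np.concatenate([X, np.zeros(4), -u * X]))
        rows.append(np.concatenate([np.zeros(4), X, -v * X]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)           # T3, defined up to the overall scale s

# Hypothetical usage, with correspondences collected over several plate poses:
# T3 = solve_T3(q2_prime, p_c)
```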

Through the above process, the dynamic error of the initial matrix can be handled, and the influence of the initial error on the three-dimensional tracking registration estimation can be reduced. In previous research, we studied a tracking registration method [19] based on the traditional calibration method and conducted a tracking registration experiment. In this study, we improve the algorithm to obtain more accurate and stable tracking registration results.

4. Results and discussion

To evaluate the performance of the algorithm, we conducted a series of experiments. Initially, the proposed algorithm for initial parameter acquisition was tested, yielding calibration results for the optical see-through AR system that demonstrated a reduction in the dynamic calibration error. Subsequently, the calibration algorithm was integrated with the tracking registration algorithm to obtain tracking registration results, and these outcomes were then compared.

4.1 Calibration of optical see-through AR system

The calibration algorithm experiments are as follows. In the test experiment, the accuracy of the calibration algorithm was first evaluated by simulating the calibration parameters of the optical see-through head-up display system. The internal parameters of the tracking camera were set to fixed, pre-calibrated values. The resolution of the optical see-through display system was set to 680 × 520 pixels, and the resolution of the tracking camera was set to 1080 × 824 pixels. The Gaussian noise added to the projection image of the tracking camera was set with a mean of 0 and a variance of $\delta$.

In the original light-field, the three-dimensional point cloud of the calibration plate is shown in Fig. 3(a). After the calibration plate moves or deforms, the deformation of the calibration plate is simulated using the principle of the dynamic light-field, and the deformed point cloud is obtained. To build the dynamic three-dimensional point cloud transformation model of the calibration plate, the transformation equation of the calibration plate was fitted using Eq. (3) combined with the three-dimensional point cloud data to simulate the deformation of the calibration plate, and the fixed-point cloud in Figs. 3(b) and 3(c) was obtained. Figures 3(b) and 3(c) show the relationship and direction between the original and deformed points.

Fig. 3. Point cloud changes of the dynamic light-field. (a) Point clouds of the original and dynamically deformed planes. (b), (c) Part of the fixed-point cloud and their relationship.

Figure 4 shows the comparison of the average relative errors of the 12 calibration parameters in the ${{\textbf T}_3}$ matrix before and after dynamic correction. Our dynamic correction method significantly reduces the calibration error and effectively improves the calibration accuracy.

Fig. 4. The original error compared with the calibration error of the 4-D dynamic light-field.

The influence of the noise level on calibration accuracy was measured in the experiment, as shown in Fig. 5. The sum of noise from multiple natural sources in the system tends to be normally distributed. The variance of the Gaussian noise was increased from 0 to 2.0, and 50 independent experiments were conducted for each noise variance value. The average of their relative errors gives the calibration relative error results shown in Fig. 5, where $a_{11}$ to $a_{34}$ denote the calculated calibration parameter values. Figure 5 shows that the noise level has a significant influence on calibration accuracy: as the Gaussian variance increases, the calibration precision degrades, so ambient noise should be kept low during calibration experiments. When the noise variance is less than 2.0, the relative calibration error is less than 1%, which meets the calibration accuracy requirements of the optical see-through head-up display system.
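A sketch of this kind of noise sweep (the helper names and structure are assumptions for illustration, not the authors' simulation code): zero-mean Gaussian noise of increasing variance is added to the projected image points, calibration is repeated 50 times per variance, and the relative parameter errors are averaged.

```python
import numpy as np

def mean_relative_error(estimate, truth):
    """Mean relative error of the calibration parameters, in percent."""
    return 100.0 * np.mean(np.abs((estimate - truth) / truth))

def noise_sweep(calibrate, project, pts3d, T3_true, variances, runs=50, seed=0):
    """Add zero-mean Gaussian noise of each given variance to the projected
    points, re-run `calibrate`, and average the relative error over `runs`.
    `calibrate(pts2d, pts3d)` and `project(pts3d)` are hypothetical helpers."""
    rng = np.random.default_rng(seed)
    results = {}
    for var in variances:
        errs = []
        for _ in range(runs):
            noisy = project(pts3d) + rng.normal(0.0, np.sqrt(var), (len(pts3d), 2))
            errs.append(mean_relative_error(calibrate(noisy, pts3d), T3_true))
        results[var] = float(np.mean(errs))
    return results

# Example call, using the hypothetical solve_T3 above as the calibration step:
# print(noise_sweep(solve_T3, project_fn, pts3d, T3_true,
#                   variances=np.arange(0.0, 2.1, 0.5)))
```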

Fig. 5. The relation between noise quantity and mean relative error.

The relative error between the calibration result and the ground-truth calibration value of the optical see-through head-up display system is used as the evaluation standard for calibration accuracy. To evaluate the accuracy of the algorithm, the proposed algorithm was compared with the calibration algorithms in Refs. [14] and [20]. In this experiment, the algorithms of Refs. [14,20] and the proposed dynamic calibration algorithm were run under the same conditions, with the same number of calibration points and the same number of calibration images in each independent experiment. Taking the calibration parameter ${a_{21}}$ as an example, the average relative error between its true value and the calibrated value was computed for each algorithm, including the dynamic calibration algorithm. The experimental comparison results are shown in Figs. 6(a) and 6(b). As shown in Fig. 6(a), compared with Refs. [14] and [20], the calibration accuracy of the algorithm proposed in this study improves as the number of reference points increases. In Fig. 6(b), with the number of calibration points and the noise level held constant, the overall calibration accuracy of Refs. [14] and [20] is lower than that of the proposed algorithm as the number of calibration images increases. The above experimental comparison verifies that the proposed algorithm achieves higher calibration accuracy under different calibration conditions.

Fig. 6. Mean relative error under different conditions. (a) The relationship between the number of standard points and mean relative error. (b) Relation between image quantity and mean relative error.

To test the performance of the algorithm in practice, we carried out a calibration experiment with optical see-through AR glasses. To simulate the dynamic effect during the experiment, a person holding the calibration board moved slowly while the calibration was performed in the AR glasses. Figure 7(a) shows the system composition, and Fig. 7(b) shows the calibration results, in which the markers are correctly annotated in the AR glasses.

Fig. 7. Calibration experiment. (a) The system composition. (b) Calibration results.

4.2 Tracking registration experiment

After obtaining the initial matrix, we can calculate the tracking registration result of the system. In this experiment, the accuracy of tracking registration is evaluated, and the influence of static and dynamic calibration on the tracking registration results is compared. First, the correlation distance D between the true value and the registered value is calculated, where D1, D2, and D3 represent the correlation distances of Refs. [19], [21], and [22], respectively, and D4 represents the error of the algorithm in this paper. A total of 10 independent experiments were conducted.
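The exact definition of the correlation distance D is not given in the text; as a hedged stand-in, the sketch below uses the mean Euclidean distance between the registered and true 3D point positions, which captures the same kind of discrepancy.

```python
import numpy as np

def correlation_distance(registered_pts, true_pts):
    """Stand-in for the correlation distance D between registered and true values:
    the mean Euclidean distance between corresponding 3D points (an assumption;
    the paper does not state its exact formula for D)."""
    diff = np.asarray(registered_pts, float) - np.asarray(true_pts, float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))

# Hypothetical evaluation over the 10 independent experiments:
# D4 = [correlation_distance(reg[k], truth[k]) for k in range(10)]
```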

As illustrated in Fig. 8, the correlation distance between the real value and the registered value of the algorithm presented in this paper is smaller, indicating a higher accuracy of the algorithm.

Fig. 8. Correlation distance. (a) Correlation distance in the static field. (b) Correlation distance in the dynamic field.

To assess the stability of the algorithm, we utilized a depth camera to capture images over a period and conducted an error rate experiment. We calculated the inter-frame error for 1000 frames across several algorithms, which are categorized into static initial matrix acquisition and dynamic initial matrix acquisition in this study.
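A minimal sketch of the stability measurement, under the assumption that the inter-frame error is the change in the registered pose between consecutive frames (the metric and the 4x4 pose representation are our assumptions, not the authors' definition):

```python
import numpy as np

def inter_frame_errors(poses):
    """Inter-frame registration error for a sequence of 4x4 registered poses:
    the translation change between consecutive frames (one value per transition).
    The exact per-frame metric used in the paper is assumed here."""
    poses = np.asarray(poses, dtype=float)
    deltas = poses[1:, :3, 3] - poses[:-1, :3, 3]   # consecutive translation change
    return np.linalg.norm(deltas, axis=1)

# Hypothetical usage over the 1000 captured frames:
# errs = inter_frame_errors(registered_poses)       # shape (999,)
# print(errs.mean(), errs.max())
```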

As Fig. 9 demonstrates, traditional methods are susceptible to significant deviations after extended periods of tracking registration, which can directly result in errors in the virtual-real fusion. Although the method referenced in [19] employs an error correction technique to mitigate some errors, deviations still persist. One primary cause of these issues is the initial calibration accuracy. This is because, in the optical see-through display system, errors arise not only from the optical system's design but also from installation errors and errors encountered during the initial matrix acquisition process. As the initial matrix data is obtained, the relative position between the target and the entire system may change, leading to calculation errors in the initial matrix. Figure 9 illustrates that, by applying the algorithm proposed in this paper, the tracking registration error is effectively reduced, and the algorithm's stability is enhanced.

Fig. 9. Algorithm stability experiment results.

4.3 Performance summary

According to the results of this study, the following performance summary is obtained:

(1) In three-dimensional tracking registration scenarios, traditional static calibration algorithms can derive the initial registration matrix. However, during dynamic calibration, the calibration error escalates. By transforming the four-dimensional static light-field into a four-dimensional dynamic light-field, the issue of calibration modeling under dynamic conditions is addressed.

(2) During dynamic calibration, minor changes in the target can impact the calibration results. A polynomial equation is employed to approximate the change in the dynamic surface, which is then integrated into the initial matrix calculation model. This approach not only addresses the dynamic calibration challenge in light-field space but also effectively minimizes the calculation error of the initial matrix, thereby enhancing the accuracy of three-dimensional tracking registration.

5. Conclusions

In this paper, we propose a tracking registration method for optical see-through AR display systems based on the four-dimensional dynamic light-field. Existing methods do not address the issue of dynamic deformation of the calibration target in AR systems, and as a result the stability of their three-dimensional tracking registration is lacking. The method presented in this study addresses the impact of calibration target movement or deformation on system calibration accuracy. This paper extends the four-dimensional static light-field to a four-dimensional dynamic light-field and introduces a tracking registration model for the four-dimensional dynamic light-field by analyzing the components of the optical see-through AR display system. In experiments, the average relative error was measured, showing an effective improvement in tracking registration accuracy. The impact of environmental noise on the calibration result was assessed, and the factors contributing to this noise were analyzed. As a result, the stability of three-dimensional tracking registration is significantly enhanced.

Funding

Jilin Province Science and Technology Development Plan Project (20240304109SF).

Acknowledgments

The authors acknowledge the assistance of China Scholarship Council.

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. Fuhrmann, D. Schmalstieg, W. Purgathofer, et al., “Practical Calibration Procedures for Augmented Reality,” Virtual Environments 2000: Proceedings of the Eurographics Workshop in Amsterdam, 3–12, (2000).

2. Y. Itoh and G. Klinker, “Interaction-free calibration for optical see-through head-mounted displays based on 3D eye localization,” in Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI), Minneapolis, 75–82 (2014).

3. L. Shao, S. Yang, T. Fu, et al., “Augmented reality calibration using feature triangulation iteration-based registration for surgical navigation,” Computers in Biology and Medicine , 148, 105826 (2022). [CrossRef]  

4. S. Sheng, X. Chen, Chao Chen, et al., “Eigenvalue calibration method for dual rotating-compensator Mueller matrix polarimetry,” Opt. Lett. 46(18), 4618–4621 (2021). [CrossRef]  

5. Z. Zhang, D. Weng, Dongdong Weng, et al., “An accurate calibration method for optical see-through head-mounted displays based on actual eye-observation model,” in Proc. Int. Symp. Mixed Augmented Reality, pp. 245–250(2017).

6. Z. Zhang, D. Weng, Yue Liu, et al., “RIDE: Region-induced data enhancement method for dynamic calibration of optical see-through head-mounted displays,” in Proc. IEEE Virtual Reality, pp. 245–246(2017).

7. T. Langlotz, M. Cook, Holger Regenbrecht, et al., “Real-time radiometric compensation for optical see-through head-mounted displays,” IEEE Trans. Vis. Comput. Graph. 22(11), 2385–2394 (2016). [CrossRef]  

8. S. Liu, H. Hua, Dewen Cheng, et al., “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Visual. Comput. Graphics 16(3), 381–393 (2010). [CrossRef]  

9. T. Oishi, “Methods to calibrate projection transformation parameters for see-through head-mounted displays,” Presence: Teleoperators & Virtual Environments 5(1), 122–135 (1996). [CrossRef]  

10. Y. T. M. Genc and A. Khamene, “Optical See-through Calibration with Vision-Based Trackers: Propagation of Projection Matrices,” IEEE and ACM International Symposium, 147–156 (2001).

11. M. Tuceryan and N. Navab, “Single point active alignment method (SPAAM) for optical see-through HMD calibration for AR,” Presence: Teleoperators & Virtual Environments 11(3), 259–276 (2002). [CrossRef]

12. C. B. Owen and Z. Ji, “Display-Relative Calibration for Optical See-Through Head-Mounted Displays,” IEEE & Acm International Symposium on Mixed & Augmented Reality 1, 70–78 (2004). [CrossRef]  

13. S. J. Gilson, A. W. Fitzgibbon, Andrew Glennerster, et al., “Spatial calibration of an optical see-through head-mounted display,” J. Neurosci. Methods 173(1), 140–146 (2008). [CrossRef]  

14. F. Kellner, B. Bolte, G. Bruder, et al., “Geometric calibration of head-mounted displays and its effects on distance estimation,” IEEE Trans. Visual. Comput. Graphics 18(4), 589–596 (2012). [CrossRef]

15. A. Plopski, Y. Itoh, Christian Nitschke, et al., “Corneal-imaging calibration for optical see-through head-mounted displays,” IEEE Trans. Visual. Comput. Graphics 21(4), 481–490 (2015). [CrossRef]

16. X. Hu, F. R. Baena, Fabrizio Cutolo, et al., “Head-mounted augmented reality platform for markerless orthopaedic navigation,” IEEE J. Biomed. Health Inform. 26(2), 910–921 (2022). [CrossRef]  

17. Y. Itoh, “Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays,” IEEE Trans. Visual. Comput. Graphics 21(4), 471–480 (2015). [CrossRef]

18. J. Grubert, Y. Itoh, Kenneth Moser, et al., “A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays,” IEEE Trans. Visual. Comput. Graphics 24(9), 2649–2662 (2018). [CrossRef]  

19. Z. An and Y. Liu, “Tracking registration of optical see-through augmented reality based on the Riemannian manifold constraint,” Opt. Express 30(26), 46418–46434 (2022). [CrossRef]  

20. Z. An, “Research on virtual reality registration method of optical see-through HUD system,” Binggong Xuebao 39(5), 1006–1011 (2018). [CrossRef]  

21. R. Mur-Artal, J. Montiel, Juan D. Tardos, et al., “ORB-SLAM: a versatile and accurate monocular SLAM system,” IEEE Trans. Robot. 31(5), 1147–1163 (2015). [CrossRef]  

22. T. Fei, X. Liang, He. Zhi-Ying, et al., “A registration method based on nature feature with KLT tracking algorithm for wearable computers,” Proceedings of 2008 International Conference on Cyberworlds, 416–421 (2008). [CrossRef]  
