Optica Publishing Group

Photoelectric scanning-based resection method for mobile robot localization

Open Access

Abstract

Indoor localization is a key enabling technology for mobile robot navigation in industrial manufacturing. As a distributed metrology system based on multi-station intersection measurement, the workshop measurement positioning system (wMPS) is gaining increasing attention in mobile robot localization. In this paper, a new, to the best of our knowledge, wMPS-based resection localization method is proposed using a single omnidirectional transmitter mounted on a mobile robot with scanning photoelectric receivers distributed in the working space. Compared to the traditional method that requires multiple stationary transmitters, our new method provides higher flexibility and cost-effectiveness. The position and orientation of the mobile robot are then iteratively optimized with respect to the constraint equations. In order to obtain the optimal solution rapidly, two methods of initial value determination are presented for different numbers of effective receivers. The propagation of the localization uncertainty is also investigated using Monte-Carlo simulations. Moreover, two experiments of automated guided vehicle localization are conducted, and the results demonstrate the high accuracy of the proposed method.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

Mobile robots have found increasing applications in industrial manufacturing due to their high automaticity and flexibility [1]. In the field of mobile robotics, navigation is a fundamental problem to be solved [2]. There are four essential components of mobile robot navigation: perception, localization, cognition, and motion control [3]. Localization, which determines the position and orientation of a mobile robot, is the prerequisite for autonomous navigation and has consequently received considerable research attention [4], especially in indoor environments [5].

Numerous technologies have been applied to the indoor localization of mobile robots, including ultrasound, ultra-wideband (UWB), Bluetooth, radio frequency identification (RFID), and WiFi [6–11]. However, these technologies typically provide accuracies ranging from centimeter-level to meter-level, which can hardly meet the localization requirements of high-accuracy manufacturing processes (millimeter or even sub-millimeter) [12], such as aircraft assembly, shell drilling, and blade polishing [13–16]. As large volume metrology systems represented by laser trackers continue to mature [17], they open up more possibilities for high-accuracy localization of mobile robots.

As one of the most popular large volume metrology systems, the laser tracker has made impressive progress in mobile robot localization [18]. Wang et al. [19] presented a navigation method for mobile robots using real-time position information from a laser tracker. The accurate metrology guidance allowed the robot to perform surface scanning of a wind turbine blade section. Susemihl et al. [20] introduced different referencing strategies to maintain high positioning accuracy of a mobile machining robot with the laser tracker. The application-oriented test achieved an average positioning error of 0.17 mm, which yielded an effective solution for accurate machining of large aircraft components. Brillinger et al. [21] further developed the Mobile Laser Tracker system to increase the efficiency and accuracy of mobile robots in aircraft manufacturing. With the wide application of multi-robot cooperative systems, multi-robot localization has become an important criterion for indoor localization technology. However, the laser tracker supports single-target measurement only, which significantly lowers its efficiency [22]. Distributed metrology systems, for example, close-range photogrammetry systems and photoelectric scanning systems, can handle multi-target localization and working space extension much more effectively based on multi-station intersecting measurement [23].

Benefiting from the improvements in camera performance, close-range photogrammetry systems have also gained wide recognition in mobile robotics. Sargeant et al. [24] proposed a photogrammetric approach to achieve six-degrees-of-freedom tracking of a mobile robot platform. This work enabled high-accuracy measurements over a large volume without separate registration or alignment. Qin et al. [25] set up a mobile robot localization system based on motion capture technology, which ensured simultaneous positioning and collaboration of multiple robots. In addition, numerous localization methods using a single camera have been proposed [26–30]. Compared with multi-camera localization methods, the methods using a single camera can effectively reduce the equipment and maintenance costs. Royer et al. [31] presented a real-time localization system using a single camera and natural landmarks for mobile robots. Zhong et al. [32] proposed a self-localization scheme for indoor mobile robot navigation based on monocular vision and artificial landmarks, which showed high robustness of the binary ring-code design and bilayer recognition process to environmental illumination and landmark incompleteness. Chan-Ley et al. [33] presented a simple method for mobile robot localization using an uncalibrated camera based on projective properties and a coded-target recognition technique. However, the measurement range of a camera is limited by its depth of focus and field of view (FOV), which affects the applicability of close-range photogrammetry systems in industrial environments.

The workshop measurement positioning system (wMPS) is a novel coordinate measurement system based on photoelectric scanning technology. The wMPS realizes omnidirectional and uniform spatial scanning and, thus, provides a larger measurement range than close-range photogrammetry systems. With multi-target tracking, strong anti-interference capability, and excellent extensibility, wMPS has shown great potential in mobile robot navigation [34]. In previous research on wMPS-based mobile robot localization, the multi-station intersection measurement mode is mostly employed. The photoelectric receivers are rigidly fixed on the mobile robot, while the transmitters (at least two) are distributed in the working space and remain stationary. The 3D coordinates of the receivers are calculated by spatial intersection of the measured angles (the principle of triangulation), and the position and orientation of the mobile robot can subsequently be obtained. Typical applications of the multi-station intersection localization mode are shown in [35,36]. The intersection method can be simplified to realize 2D localization using a single transmitter in some specific applications [37], where the robot moves on flat ground and the height of the robot is not of concern. This approach also employs the configuration in which the receiver is fixed on the moving vehicle while the transmitter remains stationary. Based on the working principle of wMPS, one transmitter provides two angular constraint equations, which are enough to solve the 2D coordinates of the receiver in the horizontal directions. This method is actually a special case of the intersection measurement mode. Compared with the traditional multi-station intersection method, however, its localization accuracy declines dramatically.
In both the multi-station intersection method and the 2D localization method, more transmitters are needed to ensure continuous measurements when the moving range of the mobile robot expands, which obviously raises the costs. Moreover, since the photoelectric receiver has a limited FOV ($\pm55^\circ$), the laser signals emitted by the transmitter are frequently out of the FOV of the receiver or blocked by other objects, resulting in data loss. Therefore, it is essential to further optimize the system configuration and enhance the measurement reliability. Since the wMPS transmitter has the characteristics of omnidirectional coverage and uniform spatial resolution, resection localization using a single transmitter becomes feasible. The specific method of solving the position and orientation, as well as its adaptability and stability, still needs to be studied.

The term resection is the name given to the process in which the spatial position and orientation of a photograph are determined based on photogrammetric measurements of the images of ground control points appearing on the photograph [38]. Resection is a fundamental problem in photogrammetry and computer vision and is widely used in various fields, especially in visual localization [39,40]. Thomas et al. developed a measurement system to provide handheld camera position data using a number of markers in known positions [41]. AICON 3D Systems developed the ProCam System for industrial applications in which a mobile video camera is pointed at a pre-calibrated reference point field [42]. Azuma et al. used resection in an electro-optical tracking system for head-mounted displays [43]. Mostafa et al. discussed the process of direct georeferencing of aerial digital images using resection [44]. Kim et al. proposed a multi-sensor fusion navigation technology to determine the exterior orientation parameters by resection [45]. Chiang et al. proposed a resection-aided pedestrian dead reckoning system with georeferenced images of a previously mapped environment to enable seamless navigation [46]. Pizzo et al. designed an autonomous visual landing system based on resection to determine position and attitude with respect to a well-defined landing pattern [47]. However, the measurement range of a camera is limited by its depth of focus and FOV, and the measurement accuracy is affected by lens distortions. Benefiting from photoelectric scanning technology, wMPS provides an omnidirectional measurement range and uniform measurement accuracy. With multi-target automatic measurement, wMPS can give full play to the advantages of resection localization.

A large number of methods have been proposed for the resection or perspective-$n$-point (${\rm P}n{\rm P}$) problem, and they can be classified into two categories: iterative methods and non-iterative methods [48–50]. The non-iterative methods directly compute the position and orientation of the camera using polynomial equations that are derived from geometric constraints or first-order optimality conditions [51–53]. The well-known direct linear transformation (DLT) algorithm achieved relatively accurate results from a large number of points [54]. Schweighofer et al. proposed a globally optimal $O(n)$ solution for large-size point sets using a semi-definite positive program [55]. Lepetit et al. presented an efficient non-iterative algorithm with linear complexity in $n$ by expressing the solution as a weighted sum of null eigenvectors [56]. The non-iterative methods offer high computational efficiency and do not require an initial estimation. However, they have low accuracy and robustness with a small number of correspondence points, especially when $n \le 5$. With regard to the iterative methods, the error function in the image space or the object space is usually iteratively minimized by taking into consideration all nonlinear constraints [57]. Lu et al. developed an orthogonal iteration method to directly minimize the object space error [58]. Garro et al. offered an alternating minimization method to minimize an algebraic error defined in the image space [59]. Zeng established the mathematical model of resection based on the Rodrigues matrix and presented the solution by linearization and an iterative process [60]. Despite being sensitive to the initial estimation and computationally complex, the iterative methods are more accurate than the non-iterative ones. Hence, the iterative approach is employed in the proposed wMPS-based localization method to realize more accurate mobile robot localization in manufacturing processes.
The resection model based on the geometric constraints between receivers and laser planes is built to determine the position and orientation of the mobile robot. The initial value is analytically estimated and then iteratively optimized using the Levenberg–Marquardt algorithm that has fast convergence rate and high robustness.

In this paper, based on the omnidirectional measurement range and uniform spatial resolution of wMPS, we propose a new resection localization method for mobile robot navigation. Contrary to the intersection mode, a single transmitter is fixed on the mobile robot body, and photoelectric receivers are distributed in the working space. The mobile robot can then be localized in the global coordinate system as long as the laser signals of the transmitter are acquired by at least three receivers with known 3D coordinates. When the moving range of the mobile robot expands, we only need to add more receivers, whose costs are much lower than those of transmitters. The resection method also takes full advantage of the omnidirectional coverage of the transmitter, which minimizes the impact of signal blockage and ensures continuous measurements even when some of the receivers fail to acquire laser signals. Furthermore, benefiting from the uniform spatial resolution of the transmitter, the resection method can still provide accurate localization results under a poor layout of receivers. Hence, the proposed method greatly reduces costs and improves the adaptability of mobile robot localization in complex industrial environments. To the best of our knowledge, this is the first time the resection localization method for photoelectric scanning systems, including the indoor Global Positioning System (iGPS) [61–68], is comprehensively presented.

The remainder of this paper is organized as follows. In Section 2, the wMPS-based resection localization principle is elaborated, and the methods of initial value determination are presented. Section 3 investigates the uncertainty propagation law of the proposed method using Monte-Carlo simulation (MCS). Then, the experiment for performance evaluation is described in Section 4. Finally, Section 5 gives the conclusion and outlook.

2. LOCALIZATION PRINCIPLE

This section is divided into three parts to describe the wMPS-based resection localization method. The working principle of wMPS is first presented, including the system specification and the photoelectric scanning model. The nonlinear optimization strategy based on the geometric constraints is then developed for mobile robot localization. In the last part, two approaches are discussed to determine the initial value of the iterative optimization algorithm.

A. Working Principle of wMPS

The wMPS is mainly composed of transmitters, photoelectric receivers, and signal processors. As shown in Fig. 1, the transmitter consists of a rotating head whose speed can reach 3000 r/min and a stationary body. The rotating head emits two fanned laser planes that rotate counterclockwise at constant speed $\omega$ and scan the whole measurement space. The two laser planes that are regarded as ideal planes are inclined at 45° to the horizontal and offset by 90° to one another. Every time the transmitter rotates to the zero position, the stationary body emits omnidirectional laser pulses signaling the start signal of the measurement period.

Fig. 1. Components of wMPS and working mode of the transmitter.

After the synchronization pulse and two laser planes scan over the photoelectric receiver successively, the receiver processes the collected laser signals in conjunction with the signal processor and then converts them into the corresponding arrival times of synchronization pulse $({t_0})$, laser plane 1 $({t_1})$ and laser plane 2 $({t_2})$, as illustrated in Fig. 2. Subsequently, the scanning angles of the laser planes are calculated with the angular speed $\omega$:

$$\left\{{\begin{split}{\beta _1}&= \omega ({t_1} - {t_0}) ,\\{\beta _2}&= \omega ({t_2} - {t_0}) .\end{split}} \right.$$
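As a minimal sketch of Eq. (1), the arrival times can be converted to scanning angles as follows (the function name, variable names, and timing values are illustrative, not part of any wMPS software):

```python
import math

def scanning_angles(t0, t1, t2, omega):
    """Return (beta1, beta2) in radians, given the arrival times (s) of the
    synchronization pulse (t0) and laser planes 1 and 2 (t1, t2), per Eq. (1)."""
    return omega * (t1 - t0), omega * (t2 - t0)

# Rotating head at 3000 r/min -> omega = 2*pi*3000/60 = 100*pi rad/s
omega = 2 * math.pi * 3000 / 60
beta1, beta2 = scanning_angles(0.0, 1.0e-3, 3.0e-3, omega)
```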
Fig. 2. Measuring principle of the scanning angles.

During the measurement, the laser signals are the only sensing information carriers between the transmitter and the receiver. Based on the initial state of the transmitter, the transmitter coordinate system ${O_t}$-${X_t}{Y_t}{Z_t}$ is defined as follows: the axis of rotation is set as the ${Z_t}$ axis; the origin ${O_t}$ is located at the intersection of the ${Z_t}$ axis and laser plane 1; the ${X_t}$ axis lies in laser plane 1. The equations of the scanning laser planes in the transmitter coordinate system, which are key parameters of the transmitter, can then be written as

$$\left\{{\begin{split}&{a_1}x + {b_1}y + {c_1}z = 0 ,\\&{a_2}x + {b_2}y + {c_2}(z - d) = 0 .\end{split}} \right.$$

In the above equation, $d$ represents the offset of two laser planes along the ${Z_t}$ axis caused by assembly error. The coefficients $({a_i},{b_i},{c_i})$ are also normalized:

Fig. 3. Configuration of the wMPS-based resection method for mobile robot localization.

$$a_i^2 + b_i^2 + c_i^2 = 1,\quad i = 1,2.$$

The coefficients of laser planes at any scanning angle can be further determined:

$$\left[{\begin{array}{*{20}{c}}{{a_i}({\beta _i})}\\{{b_i}({\beta _i})}\\{{c_i}({\beta _i})}\end{array}} \right] = \left[{\begin{array}{*{20}{c}}{\cos {\beta _i}}&\quad{- \sin {\beta _i}}&\quad0\\{\sin {\beta _i}}&\quad{\cos {\beta _i}}&\quad0\\0&\quad0&\quad1\end{array}} \right]\left[{\begin{array}{*{20}{c}}{a_i}\\{b_i}\\{c_i}\end{array}} \right] .$$
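The update of Eq. (4) is simply a rotation of the calibrated plane coefficients about the ${Z_t}$ axis by the measured scanning angle; a minimal sketch (illustrative names, not the actual wMPS software):

```python
import numpy as np

def plane_coeffs_at_angle(abc, beta):
    """Rotate the calibrated plane coefficients (a, b, c) about the Zt axis
    by the scanning angle beta, as in Eq. (4)."""
    c, s = np.cos(beta), np.sin(beta)
    Rz = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    return Rz @ np.asarray(abc, dtype=float)

# Example: a plane normal along Xt rotated by 90 degrees points along Yt.
n = plane_coeffs_at_angle([1.0, 0.0, 0.0], np.pi / 2)
```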

Hence, the relationship between the 3D coordinates of receivers and the corresponding scanning angles is established. Multiple transmitters working cooperatively can realize multi-target intersection positioning and 3D coordinate measurement. The wMPS working principle elucidates its characteristics of omnidirectional measurement range and uniform spatial resolution, laying the foundation for the resection localization method.

B. Resection Localization Method

The configuration of the wMPS-based resection method for mobile robot localization is illustrated in Fig. 3. The photoelectric receivers are distributed in the working space, and their 3D coordinates in the global coordinate system ${O_g}$-${X_g}{Y_g}{Z_g}$ are calibrated in advance. A single transmitter is fixed on the mobile robot, and the transmitter coordinate system serves as the mobile robot coordinate system.

While moving with the mobile robot, the transmitter continuously scans the surrounding environment. The position $({x_0},{y_0},{z_0})$ and orientation $(\varphi ,\theta ,\psi)$ of the mobile robot satisfy the following equations:

$$\left[{\begin{array}{*{20}{c}}{x_g}\\{y_g}\\{z_g}\end{array}} \right] = {\boldsymbol R}\left[{\begin{array}{*{20}{c}}{x_t}\\{y_t}\\{z_t}\end{array}} \right] + \left[{\begin{array}{*{20}{c}}{{x_0}}\\{{y_0}}\\{{z_0}}\end{array}} \right] ,$$
$$\begin{split}{\boldsymbol R} &= {{\boldsymbol R}_\varphi} \cdot {{\boldsymbol R}_\theta} \cdot {{\boldsymbol R}_\psi}\\ &= \left[{\begin{array}{*{20}{c}}1&\quad0&\quad0\\0&\quad{\cos \varphi}&\quad{- \sin \varphi}\\0&\quad{\sin \varphi}&\quad{\cos \varphi}\end{array}} \right] \cdot \left[{\begin{array}{*{20}{c}}{\cos \theta}&\quad0&\quad{\sin \theta}\\0&\quad1&\quad0\\{- \sin \theta}&\quad0&\quad{\cos \theta}\end{array}} \right] \cdot \left[{\begin{array}{*{20}{c}}{\cos \psi}&\quad{- \sin \psi}&\quad0\\{\sin \psi}&\quad{\cos \psi}&\quad0\\0&\quad0&\quad1\end{array}} \right]\\ &= \left[{\begin{array}{*{20}{c}}{\cos \theta \cos \psi}&\quad{- \cos \theta \sin \psi}&\quad{\sin \theta}\\{\cos \varphi \sin \psi + \sin \varphi \sin \theta \cos \psi}&\quad{\cos \varphi \cos \psi - \sin \varphi \sin \theta \sin \psi}&\quad{- \sin \varphi \cos \theta}\\{\sin \varphi \sin \psi - \cos \varphi \sin \theta \cos \psi}&\quad{\sin \varphi \cos \psi + \cos \varphi \sin \theta \sin \psi}&\quad{\cos \varphi \cos \theta}\end{array}} \right],\end{split}$$
where the rotation angles $\varphi$, $\theta$, and $\psi$ represent the corresponding rotations about the $X$ axis, $Y$ axis, and $Z$ axis, respectively, and the rotation matrix ${\boldsymbol R}$ is defined by the successive application of three individual rotations in the order $\psi$-$\theta$-$\varphi$.
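The composition of the three individual rotations can be sketched as follows (a minimal illustration of Eq. (6); the function name is ours, not from the paper):

```python
import numpy as np

def rotation_matrix(phi, theta, psi):
    """R = R_phi @ R_theta @ R_psi: successive rotations about X, Y, Z,
    matching Eq. (6)."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cf, -sf], [0.0, sf, cf]])
    Ry = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
    Rz = np.array([[cp, -sp, 0.0], [sp, cp, 0.0], [0.0, 0.0, 1.0]])
    return Rx @ Ry @ Rz

R = rotation_matrix(0.1, 0.2, 0.3)
```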

Based on the working principle of wMPS, each receiver that acquires the laser signals provides two constraint equations corresponding to the two laser planes:

$$\left\{{\begin{split}{\delta _{j1}} &= \left[{\begin{array}{*{20}{c}}{{a_1}({\beta _{j1}})}&{{b_1}({\beta _{j1}})}&{{c_1}({\beta _{j1}})}\end{array}} \right]{{\boldsymbol R}^{T}}\left[{\begin{array}{*{20}{c}}{{x_{{gj}}} - {x_0}}\\{{y_{{gj}}} - {y_0}}\\{{z_{{gj}}} - {z_0}}\end{array}} \right],\\{\delta _{j2}} &= \left[{\begin{array}{*{20}{c}}{{a_2}({\beta _{j2}})}&{{b_2}({\beta _{j2}})}&{{c_2}({\beta _{j2}})}\end{array}} \right]{{\boldsymbol R}^{T}}\left[{\begin{array}{*{20}{c}}{{x_{{gj}}} - {x_0}}\\{{y_{{gj}}} - {y_0}}\\{{z_{{gj}}} - {z_0}}\end{array}} \right] - {c_2}({\beta _{j2}})d,\end{split}} \right.$$
where $j$ refers to the index of the receiver. Since there are in total six unknowns $({x_0},{y_0},{z_0},\varphi ,\theta ,\psi)$, no fewer than three receivers $(n \ge 3)$ are needed so that the number of constraint equations is not fewer than the number of unknowns $(2n \ge 6)$. In order to make the localization results as accurate as possible, an iterative optimization strategy is developed based on the constraint equations. When three or more receivers acquire the laser signals emitted by the transmitter, the position and orientation of the mobile robot can be determined using the following optimization objective function:
$$\min \sum\limits_{j = 1}^n (\delta _{j1}^2 + \delta _{j2}^2) .$$

The Levenberg–Marquardt algorithm [69] is employed to solve this nonlinear optimization problem and ultimately accomplish mobile robot localization.
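As a hedged sketch of this step (synthetic geometry, illustrative names, and SciPy's Levenberg-Marquardt backend standing in for the authors' implementation), the residuals of Eq. (7) can be stacked and minimized as in Eq. (8):

```python
import numpy as np
from scipy.optimize import least_squares

def rot(phi, theta, psi):
    """R = Rx(phi) @ Ry(theta) @ Rz(psi), as in Eq. (6)."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cf, -sf], [0.0, sf, cf]])
    Ry = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
    Rz = np.array([[cp, -sp, 0.0], [sp, cp, 0.0], [0.0, 0.0, 1.0]])
    return Rx @ Ry @ Rz

def residuals(params, receivers_g, normals, d=0.0):
    """Constraint residuals (delta_j1, delta_j2) of Eq. (7) for all receivers."""
    X0, angles = np.asarray(params[:3]), params[3:]
    R = rot(*angles)
    res = []
    for Pg, (n1, n2) in zip(receivers_g, normals):
        Pt = R.T @ (Pg - X0)              # receiver in the transmitter frame
        res.append(n1 @ Pt)               # delta_j1: point lies on plane 1
        res.append(n2 @ Pt - n2[2] * d)   # delta_j2: point lies on plane 2
    return np.array(res)

# Synthetic demo: a true pose, four receivers, and plane normals constructed
# to be exactly consistent with that pose (all numbers illustrative).
true = np.array([0.5, -0.3, 0.2, 0.02, -0.01, 0.4])
receivers_g = np.array([[2.0, 1.0, 0.0], [-1.0, 2.0, 0.5],
                        [1.0, -2.0, 0.2], [-2.0, -1.0, 0.8]])
normals = []
R_true = rot(*true[3:])
for Pg in receivers_g:
    Pt = R_true.T @ (Pg - true[:3])
    n1 = np.cross(Pt, [0.0, 0.0, 1.0]); n1 /= np.linalg.norm(n1)
    n2 = np.cross(Pt, n1); n2 /= np.linalg.norm(n2)
    normals.append((n1, n2))

sol = least_squares(residuals, true + 0.05, args=(receivers_g, normals),
                    method='lm')
```

Starting from an initial guess near the true pose, the solver drives all $2n$ residuals to zero and recovers the six pose parameters.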

C. Initial Value Determination

The initial value of the iterative optimization problem is the key to obtaining the optimal solution rapidly. Two methods of initial value determination are proposed, depending on the number of receivers that acquire the laser signals.

1. Initial Value for Three Receivers

As shown in Fig. 4, three receivers $({P_1},{P_2},{P_3})$ acquire the laser signals from the transmitter located at position ${O_t}$. The offset $d$ in Eq. (2) is very small compared with the measuring distance and can be ignored in the initial value estimation. Hence, when the scanning laser arrives at the receiver ${P_j}$, the vector $\overrightarrow {{O_t}{P_j}}$ is assumed to be perpendicular to the normal vector of the laser plane $\overrightarrow {{n_{{ji}}}} = ({a_i}({\beta _{{ji}}}),{b_i}({\beta _{{ji}}}),{c_i}({\beta _{{ji}}}))$. The unit vector in the direction of $\overrightarrow {{O_t}{P_j}}$ can, thus, be expressed as

$${\overrightarrow{{u_j}} } = \frac{{\overrightarrow {{n_{j1}}} \times \overrightarrow {{n_{j2}}}}}{{\left\| {\overrightarrow {{n_{j1}}} \times \overrightarrow {{n_{j2}}}} \right\|}} .$$
Fig. 4. Initial value estimation for the localization method with three receivers.

The angles between each two vectors are written as

$$\cos {\alpha _1} = \overrightarrow {{u_1}} \cdot \overrightarrow {{u_2}} ,\quad \cos {\alpha _2} = \overrightarrow {{u_2}} \cdot \overrightarrow {{u_3}} ,\quad \cos {\alpha _3} = \overrightarrow {{u_1}} \cdot \overrightarrow {{u_3}} .$$

Based on the law of cosines, the following equations are given:

$$\left\{{\begin{split}{{\overline {{P_1}{P_2}}}^2} &= {{\overline {{O_t}{P_1}}}^2} + {{\overline {{O_t}{P_2}}}^2} - 2\left({\overline {{O_t}{P_1}}} \right)\left({\overline {{O_t}{P_2}}} \right)\cos {\alpha _1} ,\\{{\overline {{P_2}{P_3}}}^2} &= {{\overline {{O_t}{P_2}}}^2} + {{\overline {{O_t}{P_3}}}^2} - 2\left({\overline {{O_t}{P_2}}} \right)\left({\overline {{O_t}{P_3}}} \right)\cos {\alpha _2} ,\\{{\overline {{P_1}{P_3}}}^2} &= {{\overline {{O_t}{P_1}}}^2} + {{\overline {{O_t}{P_3}}}^2} - 2\left({\overline {{O_t}{P_1}}} \right)\left({\overline {{O_t}{P_3}}} \right)\cos {\alpha _3} ,\end{split}} \right.$$
where $({\overline {{P_1}{P_2}} ,\overline {{P_2}{P_3}} ,\overline {{P_1}{P_3}}})$ are calculated with the coordinates of receivers in the global coordinate system. After variable transformation, this system of equations can be solved using Wu–Ritt’s zero decomposition algorithm [70]. There are up to four possible solutions, and the state of the mobile robot is checked to exclude the false solutions. When $({\overline {{O_t}{P_1}} ,\overline {{O_t}{P_2}} ,\overline {{O_t}{P_3}}})$ are determined, the coordinates of receivers in the transmitter coordinate system are given by
$${{\boldsymbol P}_{{jt}}} = \left({\overline {{O_t}{P_j}}} \right)\overrightarrow{{u_j}} .$$

According to the coordinate transformation in Eq. (5), the initial values of position and orientation are then obtained [71].

2. Initial Value for More Than Three Receivers

The global coordinates of each receiver ${{\boldsymbol P}_{{jg}}}$ can be expressed as a weighted sum of four non-coplanar virtual control points ${{\boldsymbol C}_{{kg}}}$:

$${{\boldsymbol P}_{{jg}}} = \sum\limits_{k = 1}^4 {\lambda _{{jk}}}{{\boldsymbol C}_{{kg}}},\quad \sum\limits_{k = 1}^4 {\lambda _{{jk}}} = 1.$$

The control points are usually selected based on the centroid and three principal directions of the receivers. The coordinates of each receiver in the transmitter coordinate system are then given by

$$\begin{split}{{\boldsymbol P}_{{jt}}} &= \left[{\begin{array}{*{20}{c}}{{{\boldsymbol R}^{T}}}&{- {{\boldsymbol R}^{T}}{{\boldsymbol X}_0}}\end{array}} \right]\left[{\begin{array}{*{20}{c}}{{{\boldsymbol P}_{{jg}}}}\\1\end{array}} \right]\\ &= \left[{\begin{array}{*{20}{c}}{{{\boldsymbol R}^{T}}}&{- {{\boldsymbol R}^{T}}{{\boldsymbol X}_0}}\end{array}} \right]\left[{\begin{array}{*{20}{c}}{\sum\limits_{k = 1}^4 {\lambda _{{jk}}}{{\boldsymbol C}_{{kg}}}}\\{\sum\limits_{k = 1}^4 {\lambda _{{jk}}}}\end{array}} \right]\\ &= \sum\limits_{k = 1}^4 {\lambda _{{jk}}}{{\boldsymbol C}_{{kt}}}.\end{split}$$

The measurement model of wMPS thus can be written as

$$\left\{{\begin{split}&\sum\limits_{k = 1}^4 \left({{\lambda _{{jk}}}{a_1}({\beta _{j1}}){x_{{kt}}} + {\lambda _{{jk}}}{b_1}({\beta _{j1}}){y_{{kt}}} + {\lambda _{{jk}}}{c_1}({\beta _{j1}}){z_{{kt}}}} \right) = 0,\\&\sum\limits_{k = 1}^4 \left({{\lambda _{{jk}}}{a_2}({\beta _{j2}}){x_{{kt}}} + {\lambda _{{jk}}}{b_2}({\beta _{j2}}){y_{{kt}}} + {\lambda _{{jk}}}{c_2}({\beta _{j2}}){z_{{kt}}}} \right) = {c_2}({\beta _{j2}})d.\end{split}} \right.$$

The offset $d$ is also ignored here, and the following linear system is formed by concatenating the measurements of all receivers:

$${\boldsymbol MX} = {\textbf 0} ,$$
$${\boldsymbol M} = {\left[{\begin{array}{*{20}{c}}{{\lambda _{11}}{a_1}({\beta _{11}})}&\quad{{\lambda _{11}}{b_1}({\beta _{11}})}&\quad{{\lambda _{11}}{c_1}({\beta _{11}})}& \quad\cdots &\quad{{\lambda _{14}}{a_1}({\beta _{11}})}&\quad{{\lambda _{14}}{b_1}({\beta _{11}})}&\quad{{\lambda _{14}}{c_1}({\beta _{11}})}\\{{\lambda _{11}}{a_2}({\beta _{12}})}&\quad{{\lambda _{11}}{b_2}({\beta _{12}})}&\quad{{\lambda _{11}}{c_2}({\beta _{12}})}& \quad\cdots &\quad{{\lambda _{14}}{a_2}({\beta _{12}})}&\quad{{\lambda _{14}}{b_2}({\beta _{12}})}&\quad{{\lambda _{14}}{c_2}({\beta _{12}})}\\ \vdots & \quad\vdots & \quad\vdots & \quad\ddots & \quad\vdots & \quad\vdots & \quad\vdots \\{{\lambda _{n1}}{a_1}({\beta _{n1}})}&\quad{{\lambda _{n1}}{b_1}({\beta _{n1}})}&\quad{{\lambda _{n1}}{c_1}({\beta _{n1}})}& \quad\cdots &\quad{{\lambda _{n4}}{a_1}({\beta _{n1}})}&\quad{{\lambda _{n4}}{b_1}({\beta _{n1}})}&\quad{{\lambda _{n4}}{c_1}({\beta _{n1}})}\\{{\lambda _{n1}}{a_2}({\beta _{n2}})}&\quad{{\lambda _{n1}}{b_2}({\beta _{n2}})}&\quad{{\lambda _{n1}}{c_2}({\beta _{n2}})}& \quad\cdots &\quad{{\lambda _{n4}}{a_2}({\beta _{n2}})}&\quad{{\lambda _{n4}}{b_2}({\beta _{n2}})}&\quad{{\lambda _{n4}}{c_2}({\beta _{n2}})}\end{array}} \right]_{2n \times 12}} ,$$
$${\boldsymbol X} = {\left[{\begin{array}{*{20}{c}}{{x_{1t}}}&{{y_{1t}}}&{{z_{1t}}}& \cdots &{{x_{4t}}}&{{y_{4t}}}&{{z_{4t}}}\end{array}} \right]^{T}} ,$$
where $n$ is the number of receivers and the unknown ${\boldsymbol X}$ belongs to the null space of ${\boldsymbol M}$ and can be computed with the EPnP algorithm [56]. Once the four control points in the transmitter coordinate system are determined, the coordinates of each receiver ${{\boldsymbol P}_{{jt}}}$ can be recovered, and ultimately the initial values of $({x_0},{y_0},{z_0},\varphi ,\theta ,\psi)$ can be obtained.
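Two steps of this procedure can be sketched compactly (a minimal illustration with hypothetical names; the scale-fixing and control-point-selection details of the full EPnP algorithm are omitted): the barycentric weights of Eq. (13) follow from a small linear solve, and the homogeneous system Eq. (16) is solved up to scale via SVD.

```python
import numpy as np

def barycentric_weights(Pg, Cg):
    """Weights lambda_jk of Eq. (13): Pg = sum_k lambda_k * Cg[k],
    with the weights summing to 1 (Cg holds four non-coplanar points)."""
    A = np.vstack([np.asarray(Cg, dtype=float).T, np.ones(4)])  # 4 x 4
    return np.linalg.solve(A, np.append(Pg, 1.0))

def null_space_vector(M):
    """Right singular vector of M with the smallest singular value:
    the solution of M X = 0 up to scale."""
    return np.linalg.svd(M)[2][-1]

# Example: with the unit simplex as control points, the weights are the
# receiver's coordinates plus the complementary weight on the origin.
Cg = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
lam = barycentric_weights([0.2, 0.3, 0.1], Cg)
```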
Fig. 5. Initial setup of the Monte-Carlo simulation for uncertainty analysis.

Fig. 6. Localization uncertainty with varying position uncertainty of receivers.

3. UNCERTAINTY SIMULATION

In the proposed resection method, the mobile robot localization uncertainty mainly arises from the angle measurement uncertainty of wMPS and the position uncertainty of receivers. It is also affected by the geometric configuration, such as the number of receivers, the lateral position of the transmitter, and the measuring distance. In order to investigate the relationship between the localization uncertainty and these parameters, MCS is applied to the numerical analysis of uncertainty propagation. Normally distributed noise with typical variance is added to the input data (scanning angles and global coordinates of each receiver), and the MCS then produces the resulting localization uncertainty of the transmitter.

Based on the initial setup illustrated in Fig. 5, the uncertainty behavior is observed by altering some major parameters. The default value of the position uncertainty of receivers is set to 0.3 mm. Since the angle measurement uncertainty is expected to be normally distributed with a standard deviation better than 2 arc sec [72], the effect of its variation is negligible, and thus it is not discussed in this paper. The following simulations are conducted with ${10^6}$ samples, thus ensuring accurate results.
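The MCS procedure can be sketched as follows (a hedged outline: `solve_pose` is assumed to wrap the resection optimization of Section 2.B, all names are illustrative, and $10^4$ samples are used instead of the paper's $10^6$ to keep the sketch fast):

```python
import numpy as np

def mcs_uncertainty(solve_pose, receivers_g, angles, sigma_p=0.3e-3,
                    sigma_a=np.deg2rad(2.0 / 3600.0), n=10_000, seed=0):
    """Perturb receiver positions (sigma_p, metres) and scanning angles
    (sigma_a, radians) with normally distributed noise, re-solve the pose,
    and return the standard deviation of (x0, y0, z0, phi, theta, psi)."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n):
        noisy_p = receivers_g + rng.normal(0.0, sigma_p, receivers_g.shape)
        noisy_a = angles + rng.normal(0.0, sigma_a, angles.shape)
        samples.append(solve_pose(noisy_p, noisy_a))
    return np.std(samples, axis=0)
```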

A. Position Uncertainty of Receivers

It can be expected that the localization uncertainty increases with increasing position uncertainty of the receivers. Noise levels varying from 0.1 mm to 1.0 mm are added to the global coordinates of each receiver in the initial setup, and the corresponding results are shown in Fig. 6. There is a clear linear correlation of all orientation and position uncertainties with the position uncertainty of the receivers. The orientation uncertainty around the $Y$ axis and the position uncertainty along the $Y$ axis are smaller than those in the other two directions. The results can provide guidance on the selection of an appropriate global coordinate measuring instrument to meet the requirements for localization accuracy.

B. Number of Receivers

More receivers are expected to lead to higher localization accuracy, but at a higher cost. In order to obtain an intuitive trend of the localization uncertainty, the number of receivers is varied between three and twelve while the receivers cover the same area of ${4}\;{\rm m} \times 4\;{\rm m}$. As shown in Fig. 7, there is a significant decrease in the localization uncertainty when the number of receivers increases from three to four. When more than four receivers are available, the orientation and position uncertainties become much less sensitive to additional receivers.

Fig. 7. Localization uncertainty with varying number of receivers.

C. Lateral Position of Transmitter

In this simulation, the lateral position of the transmitter along the $X$ axis varies from $-5\;{\rm m}$ to 5 m. As seen from Fig. 8, the orientation and position uncertainties are symmetric about $X = 0\;{\rm m}$. The orientation uncertainties around the $X$ axis and $Z$ axis decrease as the transmitter moves to either side, whereas the orientation uncertainty around the $Y$ axis shows a slight increase. With regard to the position uncertainty, the $Z$ axis is almost unaffected by the lateral position of the transmitter. Opposite trends in the position uncertainties along the $X$ axis and $Y$ axis can also be observed, and the $X$ axis shows higher accuracy than the $Y$ axis once the lateral movement of the transmitter exceeds 3 m.

Fig. 8. Localization uncertainty with varying lateral position of the transmitter.

D. Measuring Distance

A variation in the $Z$ coordinate of the transmitter illustrates the effect of the measuring distance on the localization uncertainty. Figure 9 shows the orientation and position uncertainties with the measuring distance ranging from 5 m to 10 m. The orientation uncertainties around the $X$ axis and $Z$ axis grow almost linearly with the measuring distance, while that around the $Y$ axis remains quite stable. Moreover, the position uncertainties along the $X$ axis and $Z$ axis both behave parabolically, indicating higher sensitivity at larger measuring distances, and the position uncertainty along the $Y$ axis grows linearly with the measuring distance.

Fig. 9. Localization uncertainty with varying measuring distance.

4. EXPERIMENT

In order to evaluate the performance of the proposed method, two experiments for automated guided vehicle (AGV) localization were conducted using laser tracker measurements as ground truth. A static experiment was first carried out to compare the experimental results with a simulation of the real configuration. Then, the resection localization accuracy of the AGV under operating conditions was studied in a dynamic experiment.

A. Static Experiment

In the previous section, the MCSs were conducted in a simple configuration to clearly reflect the effects of several configuration parameters. However, the real configuration is usually much more complex. Hence, the static experiment was designed to investigate the localization uncertainty in the real configuration. As shown in Fig. 10, a single transmitter was rigidly fixed on the AGV, and a total of 10 receivers were distributed in the ${12}\;{\rm m} \times 6\;{\rm m}$ room. Since the 1.5 in. spherically mounted retroreflector of the laser tracker is compatible with the target holder for the photoelectric receivers of wMPS, the global coordinates of the receivers were measured by the Leica Absolute Tracker AT901. The localization uncertainty of the transmitter at any location in the working space can then be estimated using MCS. In order to fully investigate the localization uncertainty distribution in the real configuration, we selected 25 locations (see Fig. 11) and simulated the corresponding localization uncertainties. The simulation results in Fig. 12 show that the localization uncertainties varied cyclically with the location of the transmitter. When the transmitter moved away from the receivers along the $X$ axis, the orientation and position uncertainties increased. Moreover, when the transmitter moved along the $Y$ axis, the localization accuracy in the middle area was higher than that on either side. These observations are consistent with the simulation results in Section 3.

Fig. 10. Setup of the static experiment.

Fig. 11. Setup of the Monte-Carlo simulations in the real configuration. The green dots represent the receivers distributed in the room. The red dots represent the selected locations of the transmitter for uncertainty simulation.

Fig. 12. Localization uncertainty with varying location of the transmitter.

The simulation results were then verified against the experimental results. Since the true position and orientation of the transmitter cannot be obtained directly, the localization accuracy was evaluated indirectly using laser tracker measurements as ground truth. The Leica T-Mac, which provides high-accuracy six-degree-of-freedom measurements, was also mounted on the AGV. When the AGV moved along a straight line, the transmitter and the T-Mac traveled the same distance, so the difference between their traveled distances represents the localization error of the proposed resection method. We tested 15 sets of distances around different locations in the order $(1,3,5,6,8, \ldots ,25)$ as depicted in Fig. 11. The errors of the measured distances are shown in Fig. 13, and the experimental results are consistent with the simulations in both variation trend and error magnitude.
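The key point of this evaluation is that a traveled distance is frame-independent, so the wMPS and laser tracker values can be compared without aligning their coordinate systems. A minimal sketch, with made-up coordinates for one straight-line run:

```python
import numpy as np

def traveled_distance(p_start, p_end):
    """Euclidean distance between the start and end points of one device."""
    return float(np.linalg.norm(np.asarray(p_end) - np.asarray(p_start)))

# Made-up start/end points for one straight-line run. The transmitter is
# localized in the wMPS global frame and the T-Mac in the laser tracker
# frame, but a traveled distance is frame-independent, so the two values
# are directly comparable without aligning the coordinate systems.
d_wmps = traveled_distance([0.000, 0.000, 0.0], [1.499, 0.001, 0.0])
d_tmac = traveled_distance([5.200, 2.100, 0.3], [6.700, 2.100, 0.3])
error_mm = (d_wmps - d_tmac) * 1e3
print(f"distance error: {error_mm:+.2f} mm")
```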

Fig. 13. Error of measured distance using the wMPS-based resection localization method with respect to the laser tracker in different locations.

Fig. 14. Setup of the AGV localization experiment with the wMPS-based resection method, using the laser tracker and T-Mac as reference device.

Fig. 15. Trajectories of the transmitter and T-Mac in the laser tracker coordinate system.

Fig. 16. Orientation and position of the transmitter with respect to the T-Mac.

B. Dynamic Experiment

A dynamic experiment for AGV localization under operating conditions was also conducted, as depicted in Fig. 14. A total of 12 receivers were distributed in the ${10}\;{\rm m} \times 6\;{\rm m}$ room, and their global coordinates were measured by the Leica Absolute Tracker AT901. The transmitter was fixed on the AGV to provide real-time localization information using the resection method:

$${{\boldsymbol X}_l} = {{\boldsymbol R}_{{tl}}}{{\boldsymbol X}_t} + {{\boldsymbol X}_{{tl} 0}} .$$

The Leica T-Mac, a key component of the laser tracker, was also mounted on the AGV, acting as the reference device for high-accuracy six degrees of freedom tracking:

$${{\boldsymbol X}_l} = {{\boldsymbol R}_{{ml}}}{{\boldsymbol X}_m} + {{\boldsymbol X}_{{ml} 0}} .$$

Consequently, the measurements of the transmitter and the T-Mac were unified in the laser tracker coordinate system.

In the experiment, the AGV was set to travel at 0.2 m/s, a typical speed for AGVs under operating conditions [73–75]. The transmitter was tested in advance, and its angle measurement accuracy was better than 2 arc sec. The measuring frequency of wMPS was set to 30 Hz. In addition, the heading angle of the AGV was altered within the observation range of the T-Mac. As the AGV moved, the number of effective receivers varied between three and twelve. The trajectories of the transmitter and the T-Mac are shown in Fig. 15. Since the transmitter and the T-Mac remained relatively stationary regardless of the motion state of the AGV, the position and orientation of the transmitter with respect to the T-Mac were expected to be constant:

$${{\boldsymbol X}_m} = {{\boldsymbol R}_{{tm}}}{{\boldsymbol X}_t} + {{\boldsymbol X}_{{tm} 0}} = {\boldsymbol R}_{{ml}}^{T}{{\boldsymbol R}_{{tl}}}{{\boldsymbol X}_t} + {\boldsymbol R}_{{ml}}^{T}({{\boldsymbol X}_{{tl} 0}} - {{\boldsymbol X}_{{ml} 0}}) .$$
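This relation can be checked numerically. The sketch below composes made-up transmitter and T-Mac poses that share a fixed mounting offset and verifies that the recovered relative pose stays constant as the vehicle moves (all rotation matrices and offsets are illustrative values, not experiment data):

```python
import numpy as np

def relative_pose(R_tl, t_tl, R_ml, t_ml):
    """Pose of the transmitter frame expressed in the T-Mac frame, following
    X_m = R_ml^T R_tl X_t + R_ml^T (X_tl0 - X_ml0)."""
    R_tm = R_ml.T @ R_tl
    t_tm = R_ml.T @ (t_tl - t_ml)
    return R_tm, t_tm

def rot_z(a):
    """Rotation about the Z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Both devices ride on the same rigid AGV, so the relative pose must stay
# constant as the vehicle moves (mounting offset and poses are made up).
R_rel, t_rel = rot_z(0.1), np.array([0.30, 0.05, 0.20])   # fixed mounting offset
poses = []
for yaw, pos in [(0.0, [0, 0, 0]), (0.7, [2, 1, 0]), (1.4, [4, 3, 0])]:
    R_ml, t_ml = rot_z(yaw), np.array(pos, float)          # T-Mac pose (reference)
    R_tl = R_ml @ R_rel                                    # implied transmitter pose
    t_tl = R_ml @ t_rel + t_ml
    poses.append(relative_pose(R_tl, t_tl, R_ml, t_ml))
for R_tm, t_tm in poses:
    assert np.allclose(R_tm, R_rel) and np.allclose(t_tm, t_rel)
print("relative pose constant across all AGV poses")
```

In the real experiment, any scatter in this relative pose is attributable to the resection localization error, which is exactly what the following analysis quantifies.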

Due to the resection localization error, however, the actually obtained relative poses fluctuate slightly around constant values, as presented in Fig. 16. In the processing pipeline, the data of the transmitter and the T-Mac are matched based on the recorded measuring time. Standard deviations of the 1726 sets of results, which indicate the accuracy of the wMPS-based resection localization method, are listed in Table 1. The AGV in the static state was also tested at 12 different positions, and the corresponding standard deviations are listed as well. The standard deviations of the orientation errors around each axis are less than 0.02°, and the standard deviations of the position errors along all three directions are under 1 mm. In addition, the position error in the dynamic state is larger than that in the static state; this is caused by the dynamic measurement error of wMPS and needs further improvement. The results demonstrate the feasibility and high accuracy of the proposed method for mobile robot localization.
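The time-based matching step can be sketched as a nearest-timestamp search over the two streams; the stream rates, clock offset, and 20 ms gating threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def match_by_time(t_a, t_b, max_dt=0.02):
    """For each timestamp in t_a, find the nearest timestamp in the sorted
    array t_b and keep the pairs whose gap is below max_dt (seconds).
    Returns index arrays (into t_a, into t_b) of the matched pairs."""
    t_a, t_b = np.asarray(t_a), np.asarray(t_b)
    idx = np.clip(np.searchsorted(t_b, t_a), 1, len(t_b) - 1)
    left, right = t_b[idx - 1], t_b[idx]
    nearest = np.where(t_a - left < right - t_a, idx - 1, idx)
    keep = np.abs(t_b[nearest] - t_a) <= max_dt
    return np.flatnonzero(keep), nearest[keep]

# 30 Hz wMPS stream against a faster reference stream over one second
# (rates and offset are illustrative).
t_wmps = np.arange(30) / 30.0
t_tmac = np.arange(100) / 100.0 + 0.003
ia, ib = match_by_time(t_wmps, t_tmac)
print(len(ia), "matched pairs")
```

The matched pairs are then what the relative-pose statistics in Table 1 are computed over.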

Table 1. Standard Deviations of the Obtained Orientations and Positions of the Transmitter with Respect to the T-Mac in Static State and Dynamic State

5. CONCLUSION

This paper presents a flexible and cost-effective method for mobile robot localization based on photoelectric scanning technology. A single transmitter is fixed on the mobile robot, and photoelectric receivers are distributed in the surrounding environment. The mobile robot is localized in real time by minimizing the objective function formed by the constraint equations of each receiver. In the iterative optimization, two different methods are introduced to determine the initial value, for the cases of exactly three and of more than three effective receivers. The relationships between the localization uncertainty and several configuration parameters are also investigated using MCSs, providing practical guidance for localization applications. Furthermore, the proposed method is tested on an AGV system, and high accuracy is demonstrated against reference measurements from the laser tracker and T-Mac.

As the dynamic performance of wMPS is limited by its low measurement rate, an inertial measurement unit will be utilized in future work. By fusing high-rate inertial measurements with the photoelectric scanning measurements, the wMPS-based resection localization method will cope better with fast movement and occlusion.

Funding

National Natural Science Foundation of China (51835007, 51775380, 51721003); Engineering and Physical Sciences Research Council (EP/P006930/1).

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. M. Schneier, M. Schneier, and R. Bostelman, Literature Review of Mobile Robots for Manufacturing (U.S. Department of Commerce, National Institute of Standards and Technology, 2015).

2. A. G. Gonzalez, M. V. Alves, G. S. Viana, L. K. Carvalho, and J. C. Basilio, “Supervisory control-based navigation architecture: a new framework for autonomous robots in industry 4.0 environments,” IEEE Trans. Ind. Inf. 14, 1732–1743 (2017). [CrossRef]  

3. T. T. Mac, C. Copot, D. T. Tran, and R. De Keyser, “Heuristic approaches in robot path planning: a survey,” Rob. Auton. Syst. 86, 13–28 (2016). [CrossRef]  

4. S. Guo, T.-T. Fang, T. Song, F.-F. Xi, and B.-G. Wei, “Tracking and localization for omni-directional mobile industrial robot using reflectors,” Adv. Manuf. 6, 118–125 (2018). [CrossRef]  

5. P. Lin, X. Hu, Y. Ruan, H. Li, J. Fang, Y. Zhong, H. Zheng, J. Fang, Z. L. Jiang, and Z. Chen, “Real-time visible light positioning supporting fast moving speed,” Opt. Express 28, 14503–14510 (2020). [CrossRef]  

6. R. F. Brena, J. P. García-Vázquez, C. E. Galván-Tejada, D. Muñoz-Rodriguez, C. Vargas-Rosales, and J. Fangmeyer, “Evolution of indoor positioning technologies: a survey,” J. Sens. 2017, 2630413 (2017). [CrossRef]  

7. F. Ma, F. Liu, X. Zhang, P. Wang, H. Bai, and H. Guo, “An ultrasonic positioning algorithm based on maximum correntropy criterion extended Kalman filter weighted centroid,” Signal Image Video Process. 12, 1207–1215 (2018). [CrossRef]  

8. Y. Xu, Y. S. Shmaliy, C. K. Ahn, G. Tian, and X. Chen, “Robust and accurate UWB-based indoor robot localisation using integrated EKF/EFIR filtering,” IET Radar Sonar Navig. 12, 750–756 (2018). [CrossRef]  

9. X. Hou and T. Arslan, “Monte Carlo localization algorithm for indoor positioning using Bluetooth low energy devices,” in 2017 International Conference on Localization and GNSS (ICL-GNSS) (IEEE, 2017), pp. 1–6.

10. A. Motroni, P. Nepa, V. Magnago, A. Buffi, B. Tellini, D. Fontanelli, and D. Macii, “SAR-based indoor localization of UHF-RFID tags via mobile robot,” in 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN) (IEEE, 2018), pp. 1–8.

11. L. Zhang, Z. Chen, W. Cui, B. Li, C. Chen, Z. Cao, and K. Gao, “Wifi-based indoor robot positioning using deep fuzzy forests,” IEEE Internet Things J. 7, 10773–10781 (2020). [CrossRef]  

12. H. Huang, B. Lin, L. Feng, and H. Lv, “Hybrid indoor localization scheme with image sensor-based visible light positioning and pedestrian dead reckoning,” Appl. Opt. 58, 3214–3221 (2019). [CrossRef]  

13. D. Zhu, X. Feng, X. Xu, Z. Yang, W. Li, S. Yan, and H. Ding, “Robotic grinding of complex components: a step towards efficient and intelligent machining–challenges, solutions, and applications,” Rob. Comput. Integr. Manuf. 65, 101908 (2020). [CrossRef]  

14. G. Barbosa, A. Hernandes, S. Luz, J. Batista, V. Nunes, M. Becker, and M. Arruda, “A conceptual study towards delivery of consumable materials to aircraft assembly stations performed by mobile robots based on industry 4.0 principles,” Int. J. Aeronaut. Aerosp. Eng. 6, 2 (2017). [CrossRef]  

15. S. Guo, Q. Diao, and F. Xi, “Vision based navigation for omni-directional mobile industrial robot,” Procedia Comput. Sci. 105, 20–26 (2017). [CrossRef]  

16. Z. Chong, F. Xie, X.-J. Liu, J. Wang, and H. Niu, “Design of the parallel mechanism for a hybrid mobile robot in wind turbine blades polishing,” Rob. Comput. Integr. Manuf. 61, 101857 (2020). [CrossRef]  

17. R. Schmitt, M. Peterek, E. Morse, W. Knapp, M. Galetto, F. Härtig, G. Goch, B. Hughes, A. Forbes, and W. Estler, “Advances in large-scale metrology–review and future trends,” CIRP Ann. 65, 643–665 (2016). [CrossRef]  

18. B. Tao, X. Zhao, and H. Ding, “Mobile-robotic machining for large complex components: a review study,” Sci. China Technol. Sci. 62, 1388–1400 (2019). [CrossRef]  

19. Z. Wang, M. Liang, and P. G. Maropoulos, “High accuracy mobile robot positioning using external large volume metrology instruments,” Int. J. Comput. Integr. Manuf. 24, 484–492 (2011). [CrossRef]  

20. H. Susemihl, C. Brillinger, S. P. Stürmer, S. Hansen, C. Boehlmann, S. Kothe, J. Wollnack, and W. Hintze, “Referencing strategies for high accuracy machining of large aircraft components with mobile robotic systems,” SAE Technical Paper (2017).

21. C. Brillinger, H. Susemihl, F. Ehmke, T. Staude, K. Deutmarg, M. Klemstein, C. Boehlmann, W. Hintze, and J. Wollnack, “Mobile laser trackers for aircraft manufacturing: increasing accuracy and productivity of robotic applications for large parts,” SAE Technical Paper 2019-01-1368 (SAE International, 2019).

22. Y. Juqing, W. Dayong, and Z. Weihu, “Precision laser tracking servo control system for moving target position measurement,” Optik 131, 994–1002 (2017). [CrossRef]  

23. Z. Wei and X. Liu, “Vanishing feature constraints calibration method for binocular vision sensor,” Opt. Express 23, 18897–18914 (2015). [CrossRef]  

24. B. Sargeant, S. Robson, E. Szigeti, P. Richardson, A. El-Nounu, and M. Rafla, “A method to achieve large volume, high accuracy photogrammetric measurements through the use of an actively deformable sensor mounting platform,” in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (International Society for Photogrammetry and Remote Sensing, 2016), Vol. 23, pp. 123–129.

25. Y. Qin, M. T. Frye, H. Wu, S. Nair, and K. Sun, “Study of robot localization and control based on motion capture in indoor environment,” Integr. Ferroelectr. 201, 1–11 (2019). [CrossRef]  

26. E. Krotkov, “Mobile robot localization using a single image,” in ICRA (1989), Vol. 89, pp. 978–983.

27. M. Betke and L. Gurvits, “Mobile robot localization using landmarks,” IEEE Trans. Robot. Autom. 13, 251–263 (1997). [CrossRef]  

28. O. A. Aider, P. Hoppenot, and E. Colle, “A model-based method for indoor mobile robot localization using monocular vision and straight-line correspondences,” Rob. Auton. Syst. 52, 229–246 (2005). [CrossRef]  

29. S.-Y. Hwang and J.-B. Song, “Monocular vision-based global localization using position and orientation of ceiling features,” in 2013 IEEE International Conference on Robotics and Automation (IEEE, 2013), pp. 3785–3790.

30. F. Liu, J. Zhang, J. Wang, and B. Li, “The indoor localization of a mobile platform based on monocular vision and coding images,” ISPRS Int. J. Geo-Inf. 9, 122 (2020). [CrossRef]  

31. E. Royer, M. Lhuillier, M. Dhome, and J.-M. Lavest, “Monocular vision for mobile robot localization and autonomous navigation,” Int. J. Comput. Vis. 74, 237–260 (2007). [CrossRef]  

32. X. Zhong, Y. Zhou, and H. Liu, “Design and recognition of artificial landmarks for reliable indoor self-localization of mobile robots,” Int. J. Adv. Rob. Syst. 14, 1729881417693489 (2017). [CrossRef]  

33. M. Chan-Ley, G. Olague, G. E. Altamirano-Gomez, and E. Clemente, “Self-localization of an uncalibrated camera through invariant properties and coded target location,” Appl. Opt. 59, D239–D245 (2020). [CrossRef]  

34. Z. Huang, L. Yang, Y. Zhang, Y. Guo, Y. Ren, J. Lin, and J. Zhu, “Photoelectric scanning-based method for positioning omnidirectional automatic guided vehicle,” Opt. Eng. 55, 034105 (2016). [CrossRef]  

35. S. Shi, L. Yang, J. Lin, Y. Ren, S. Guo, and J. Zhu, “Omnidirectional angle constraint based dynamic six-degree-of-freedom measurement for spacecraft rendezvous and docking simulation,” Meas. Sci. Technol. 29, 045005 (2018). [CrossRef]  

36. J. Lin, J. Chen, L. Yang, Y. Ren, Z. Wang, P. Keogh, and J. Zhu, “Design and development of a ceiling-mounted workshop measurement positioning system for large-scale metrology,” Opt. Laser Eng. 124, 105814 (2020). [CrossRef]  

37. S. Guo, Y. Ren, Z. Huang, Y. Chen, and T. Hong, “2D position guidance with single-station optical scan-based system,” Proc. SPIE 9623, 96230F (2015). [CrossRef]  

38. T. Shih and W. Faig, “A solution for space resection in closed form,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 27, 547–556 (1988).

39. N. A. Borghese and G. Ferrigno, “An algorithm for 3-D automatic movement detection by means of standard TV cameras,” IEEE Trans. Biomed. Eng. 37, 1221–1225 (1990). [CrossRef]  

40. Z. Zeng and X. Wang, “A general solution of a closed-form space resection,” Photogramm. Eng. Remote Sens. 58, 327 (1992).

41. G. A. Thomas, J. Jin, T. Niblett, and C. Urquhart, “A versatile camera position measurement system for virtual reality TV production,” in 1997 International Broadcasting Convention IBS 97 (IET, 1997), pp. 284–289.

42. R. Mautz and S. Tilch, “Survey of optical indoor positioning systems,” in 2011 International Conference on Indoor Positioning and Indoor Navigation (IEEE, 2011), pp. 1–7.

43. R. Azuma and M. Ward, “Space resection by collinearity: mathematics behind the optical ceiling head-tracker,” Technical Report (University of North Carolina at Chapel Hill, 1991).

44. M. M. Mostafa, K.-P. Schwarz, and P. Gong, “GPS/INS integrated navigation system in support of digital image georeferencing,” in Proceedings of the 54th Annual Meeting of the Institute of Navigation (1998), pp. 435–444.

45. S.-B. Kim, S.-Y. Lee, T.-H. Hwang, and K.-H. Choi, “An advanced approach for navigation and image sensor integration for land vehicle navigation,” in IEEE 60th Vehicular Technology Conference (IEEE, 2004), Vol. 6, pp. 4075–4078.

46. K.-W. Chiang, J.-K. Liao, S.-H. Huang, H.-W. Chang, and C.-H. Chu, “The performance analysis of space resection-aided pedestrian dead reckoning for smartphone navigation in a mapped indoor environment,” ISPRS Int. J. Geo-Inf. 6, 43 (2017). [CrossRef]  

47. S. Del Pizzo, U. Papa, S. Gaglione, S. Troisi, and G. Del Core, “A vision-based navigation system for landing procedure,” Acta IMEKO 7, 102–109 (2018). [CrossRef]  

48. A. Smith, “The explicit solution of the single picture resection problem, with a least squares adjustment to redundant control,” Photogramm. Rec. 5, 113–122 (1965). [CrossRef]  

49. E. Thompson, “Space resection: Failure cases,” Photogramm. Rec. 5, 201–207 (1966). [CrossRef]  

50. R. Hunt, “Estimation of initial values before bundle adjustment of close range data,” Int. Arch. Photogramm. Remote Sens. 25, 419–428 (1984).

51. Q. Ji, M. S. Costa, R. M. Haralick, and L. G. Shapiro, “A robust linear least-squares estimation of camera exterior orientation using multiple geometric features,” ISPRS J. Photogramm. Remote Sens. 55, 75–93 (2000). [CrossRef]  

52. S. Li, C. Xu, and M. Xie, “A robust O(n) solution to the perspective-n-point problem,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 1444–1450 (2012). [CrossRef]  

53. C. Meng and W. Xu, “ScPnP: a non-iterative scale compensation solution for PnP problems,” Image Vision Comput. 106, 104085 (2021). [CrossRef]  

54. Y. I. Abdel-Aziz, H. Karara, and M. Hauck, “Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry,” Photogramm. Eng. Remote Sens. 81, 103–107 (2015). [CrossRef]  

55. G. Schweighofer and A. Pinz, “Globally optimal O(n) solution to the PNP problem for general camera models,” in BMVC (2008), pp. 1–10.

56. V. Lepetit, F. Moreno-Noguer, and P. Fua, “EPnP: An accurate O(n) solution to the PnP problem,” Int. J. Comput. Vis. 81, 155 (2009). [CrossRef]  

57. A. M. G. Tommaselli and C. L. Tozzi, “A recursive approach to space resection using straight lines,” Photogramm. Eng. Remote Sens. 62, 57–65 (1996).

58. C.-P. Lu, G. D. Hager, and E. Mjolsness, “Fast and globally convergent pose estimation from video images,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 610–622 (2000). [CrossRef]  

59. V. Garro, F. Crosilla, and A. Fusiello, “Solving the PnP problem with anisotropic orthogonal Procrustes analysis,” in 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission (IEEE, 2012), pp. 262–269.

60. H. Zeng, “Iterative algorithm of space resection using Rodrigues matrix,” in 2010 The 2nd Conference on Environmental Science and Information Application Technology (IEEE, 2010), Vol. 1, pp. 191–194.

61. A. R. Norman, A. Schönberg, I. A. Gorlach, and R. Schmitt, “Validation of iGPS as an external measurement system for cooperative robot positioning,” Int. J. Adv. Manuf. Technol. 64, 427–446 (2013). [CrossRef]  

62. Z. Wang, L. Mastrogiacomo, F. Franceschini, and P. Maropoulos, “Experimental comparison of dynamic tracking performance of iGPS and laser tracker,” Int. J. Adv. Manuf. Technol. 56, 205–213 (2011). [CrossRef]  

63. R. Schmitt, S. Nisch, A. Schönberg, F. Demeester, and S. Renders, “Performance evaluation of iGPS for industrial applications,” in 2010 International Conference on Indoor Positioning and Indoor Navigation (IEEE, 2010), pp. 1–8.

64. J. Schwendemann, T. Müller, and R. Krautschneider, “Indoor navigation of machines and measuring devices with iGPS,” in 2010 International Conference on Indoor Positioning and Indoor Navigation (IEEE, 2010), pp. 1–8.

65. G. Mosqueira, J. Apetz, K. Santos, E. Villani, R. Suterio, and L. Trabasso, “Analysis of the indoor GPS system as feedback for the robotic alignment of fuselages using laser radar measurements as comparison,” Rob. Comput. Integr. Manuf. 28, 700–709 (2012). [CrossRef]  

66. M. de Campos Porath, L. A. F. Bortoni, R. Simoni, and J. S. Eger, “Offline and online strategies to improve pose accuracy of a Stewart platform using indoor-GPS,” Precis. Eng. 63, 83–93 (2020). [CrossRef]  

67. Q. Chen, L. Tang, Z. Yang, L. Wang, W. Feng, S. Zheng, and W. Yi, “Study and application of lunar rover tracking measurement technology with laser radar and iGPS,” IOP Conf. Ser. 592, 012147 (2019). [CrossRef]  

68. T. Konrad, T. Kamp, and D. Abel, “Indoor state estimation for multirotors using a loosely-coupled integration of inertial navigation,” in 2019 IEEE Conference on Control Technology and Applications (CCTA) (IEEE, 2019), pp. 776–782.

69. J. J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” in Numerical Analysis (Springer, 1978), pp. 105–116.

70. X.-S. Gao, X.-R. Hou, J. Tang, and H.-F. Cheng, “Complete solution classification for the perspective-three-point problem,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 930–943 (2003). [CrossRef]  

71. B. R. Harvey, “Transformation of 3D co-ordinates,” Aust. Surveyor 33, 105–125 (1986). [CrossRef]  

72. J. Zhao, Y. Ren, J. Lin, S. Yin, and J. Zhu, “Study on verifying the angle measurement performance of the rotary-laser system,” Opt. Eng. 57, 044106 (2018). [CrossRef]  

73. A. Böckenkamp, F. Weichert, Y. Rudall, and C. Prasse, “Automatic robot-based unloading of goods out of dynamic AGVS within logistic environments,” in Commercial Transport (Springer, 2016), pp. 397–412.

74. D. S. Schueftan, M. J. Colorado, and I. F. M. Bernal, “Indoor mapping using slam for applications in flexible manufacturing systems,” in 2015 IEEE 2nd Colombian Conference on Automatic Control (CCAC) (IEEE, 2015), pp. 1–6.

75. R. Liu, A. Koch, and A. Zell, “Mapping UHF RFID tags with a mobile robot using a 3D sensor model,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2013), pp. 1589–1594.
