
Bootstrap geometric ground calibration method for wide angle star sensors


Abstract

Wide angle star sensors are becoming more prevalent in aeronautics. A wide angle lens provides a greater field of view for star detection, but consequently incurs significant lens distortion. The effects of distortion complicate star identification, causing algorithms to fail or report false identifications. We address the issue of calibrating a wide angle star sensor without any specialized equipment, by analyzing two time-separated images captured from a static camera. An initial estimate of the focal length is obtained by observing the displacement of stars between the images. The focal length is subsequently used to build an initial estimate of camera intrinsics, and to identify stars in the image. A RANSAC-augmented Kabsch algorithm is implemented to determine camera orientation, while simultaneously removing false identifications. The identified stars are used to provide a precise estimate of camera focal length, before applying non-linear optimization in a radial search algorithm. The methodology was tested on two cameras, demonstrating the effectiveness of this algorithm in achieving a precise geometric calibration using real hardware, without any specialized calibration equipment.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

The night sky offers a potent array of calibration targets, whose positions are known to sub-arcsec levels of accuracy [1]. Star fields have been used for decades to calibrate stellar imaging systems [2]; however, these autonomous calibration methods are faced with a causality dilemma when their respective imaging systems contain non-linear distortions: to autonomously rectify the distortions, the system must be able to identify stars in the scene, but in order to identify stars in the scene, the system must be able to rectify the distortions.

Star sensors are commonplace in aeronautical and space navigation applications (e.g., [3–5]). The accuracy of a celestial navigation system is dependent upon the accuracy to which observations can be made. Many stabilized applications utilize narrow fields of view (e.g., [6]); however, non-stabilized star sensors require larger fields of view due to their inability to actively track a target. Consequently, calibration methods seen in the literature tend to be targeted towards narrow field of view sensors, which closely conform to the pinhole projection model. This study is focused on wide angle star sensors, which are more suited to aeronautical applications.

Narrow field of view calibration approaches tend to utilize non-dimensional approaches to star identification, using the inner angle of star triples to preclude the need for camera intrinsic parameters such as focal length or the optical center [7,8]. Examples of wide angle sensor calibration in the literature utilize the constraint that the center of radial distortion will be approximately coincident with the image center; therefore, the inner angles formed by star triples that are equidistant from the image center are approximately invariant to both the camera intrinsics and non-linear distortions [9]. Other approaches to this problem assume that the stars are already identified, and do not address the problem of identifying stars in a distorted image [10,11]. Furthermore, the scope of the studies was limited to a laboratory environment or simulation.

Contrary to the findings in [7–9], star triples are not invariant under transforms in the projective general linear group, and consequently are not appropriate for non-dimensional star identification. It has been shown that an asterism must contain a minimum of five points (of which no three are collinear) to meet the criteria for invariance when both the orientation and the calibration of the camera are unknown [12]. The use of five stars for matching, however, raises its own difficulties: each asterism has 120 permutations, leading to significantly increased database sizes and more complex matching algorithms. For this reason, the $J$-invariant or the ${p^2}$-invariant are typically used; however, this approach reduces a five-star asterism to two independent descriptors. Even using the $J$-invariant, the problem becomes intractable in the general lost-in-space case, with approximately 1.4 million asterisms in the database given a conservative magnitude threshold of 5.0 and a maximum intra-asterism angular separation of 15°.

In practice, lost-in-space star identification can be achieved with only the focal length of the imaging system, provided that the principal point may be closely approximated at the image center. We postulate that star identification in the presence of unknown intrinsics should not be a necessary task, as long as one can fix the location of the star sensor for a short duration (around 30 s). The intent of this study is to demonstrate a methodology that averts the need to perform star identification without camera intrinsics, and addresses the problem of practically acquiring the intrinsics and distortion parameters. Additionally, our proposed methodology addresses the need for an accurate method of geometrically calibrating wide angle star sensors, without the use of expensive equipment (e.g., [13]).

Some recent works have also addressed this need. For example, one approach utilizes a ring-laser gyroscope in a strapdown configuration to generate a large number of observations in a short period [14]. The compiled set of images was used to calibrate the imaging sensor in a conventional manner. Similar approaches are seen in [15,16]; however, these are focused on geometric alignment, with a particular emphasis on simultaneously calibrating the IMU through precise control of a turntable. Their respective observability analyses determine the optimal motions for the alignment of the star sensors and inertial units. The calibration method presented in [17] utilizes the motion of the Earth from a fixed configuration. Similarly, the low-cost methodologies presented in [18,19] utilize the motion of Earth as an effective turntable. These studies are truly independent of expensive equipment; however, they do not address the concern of bootstrapping the system, that is, performing star identification without a parameterized imaging system. Additionally, the time required to execute these methods is significantly greater than that of the method presented in this study.

We present a method that can be achieved without calibration equipment, and with basic variables that should be readily available to a star sensor, such as position, time, and optionally attitude. Such a method enables calibration in the field, which carries further benefits, such as the ability to calibrate the star sensor directly before operation, and the ability to calibrate in true environmental conditions. The intrinsic parameters of an optical system will change with respect to temperature and pressure; thus, they are best calibrated as close to operating conditions as possible, which is often not possible to achieve in a lab.

We present the methodology in Section 2, addressing the problems of star centroiding (Section 2.B), obtaining an initial estimate of focal length (Section 2.C), orientation estimation and star identification (Section 2.D), precise focal length estimation (Section 2.E), and non-linear optimization (Section 2.F). The results are presented in Section 3, a discussion is presented in Section 4, and we conclude in Section 5.

2. METHODS

Throughout this paper we refer to observed stars and database stars. Observed stars are detected through image processing, represented by their sub-pixel location in the image. Database stars are those that have been identified, and have a known location in the celestial-equatorial coordinate system, as provided by the Yale Bright Star Catalogue. The apparent right ascension, $\alpha$, and declination, $\delta$, of these database stars are calculated following the methodology in [20], taking into account proper motion, precession, nutation, and aberration. The star coordinates for a local observer are subsequently calculated, taking into account atmospheric refraction. We omit the details here, but refer the reader to [20] for a comprehensive description of the calculation.

A. Camera Projection

The projection of a star in north-east-down (NED) coordinates is given by the following equation:

$$\alpha \textbf{x} = \textbf{KRX},$$
where $\alpha$ is the scaling factor to ensure a homogeneous projection, $\textbf{X}$ is the vector containing the star location in NED world coordinates
$$\textbf{X} = \left[{\begin{array}{*{20}{c}}X\\Y\\Z\end{array}} \right],$$
$\textbf{x}$ is the vector containing the homogeneous pixel coordinates
$$\textbf{x} = \left[{\begin{array}{*{20}{c}}u\\v\\1\end{array}} \right],$$
$\textbf{K}$ is the matrix containing the camera intrinsics
$$\textbf{K} = \left[{\begin{array}{*{20}{c}}{{f_x}}&0&{{p_x}}\\0&{{f_y}}&{{p_y}}\\0&0&1\end{array}} \right],$$
and $\textbf{R}$ is the rotation matrix describing the camera orientation:
$$\textbf{R} = \left[{\begin{array}{*{20}{c}}{\cos (\theta)\cos (\psi)}&{\cos (\theta)\sin (\psi)}&{- \sin (\theta)}\\{- \cos (\phi)\sin (\psi) + \sin (\phi)\sin (\theta)\cos (\psi)}&{\cos (\phi)\cos (\psi) + \sin (\phi)\sin (\theta)\sin (\psi)}&{\sin (\phi)\cos (\theta)}\\{\sin (\phi)\sin (\psi) + \cos (\phi)\sin (\theta)\cos (\psi)}&{- \sin (\phi)\cos (\psi) + \cos (\phi)\sin (\theta)\sin (\psi)}&{\cos (\phi)\cos (\theta)}\end{array}} \right],$$
where $\phi$, $\theta$, and $\psi$ represent the Euler angles of roll, pitch, and yaw, respectively.

Stars are assumed to be infinitely far away; therefore, translation has no apparent effect on their location.
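For concreteness, the projection model above translates directly into code. The following is a minimal NumPy sketch of Eqs. (1) and (5); the function names are ours, not part of any library.

```python
import numpy as np

def euler_to_R(phi, theta, psi):
    """Rotation matrix of Eq. (5) from roll, pitch, and yaw (radians)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cth * cps, cth * sps, -sth],
        [-cph * sps + sph * sth * cps, cph * cps + sph * sth * sps, sph * cth],
        [sph * sps + cph * sth * cps, -sph * cps + cph * sth * sps, cph * cth],
    ])

def project_star(K, R, X):
    """Pinhole projection of Eq. (1): NED star vector X to (u, v) pixels."""
    x = K @ R @ X  # homogeneous pixel coordinates (alpha * x)
    return x[0] / x[2], x[1] / x[2]
```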

B. Star Centroiding

Star centroiding is used to determine the effective position of a star on the image plane. The position of a star is represented by a $(u,v)$ pixel coordinate, which is used for the geometric alignment with its predicted location in the star database. The star sensor is defocused by a small amount, to increase the apparent size of the star, enabling sub-pixel centroiding. The defocused star is closely approximated by an elliptical Gaussian point-spread function [21]. A region of interest (ROI) is defined for each star detected in the image, and for each ROI the parameters of the point-spread function are estimated.

The ROI for each star can be identified by use of a binary thresholding operation. Following [22], the threshold should be set at five standard deviations above the mean image intensity. Contours with a minimum area of 4 pixels are extracted from the binary image. The bounding box is chosen with the minimum area required to contain the contour, and subsequently expanded by 2 pixels in each direction, so as to include low-signal regions of the star that are below the binary threshold. The corresponding location of the bounding box is used to extract a ROI from the original image.
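As an illustration, the ROI extraction described above might be implemented as follows. This is a sketch using OpenCV, assuming an 8-bit grayscale image; the threshold multiplier, minimum area, and padding follow the values given in the text.

```python
import cv2
import numpy as np

def extract_star_rois(image, n_sigma=5.0, min_area=4, pad=2):
    """Candidate star ROIs via binary thresholding (sketch)."""
    thresh = float(image.mean()) + n_sigma * float(image.std())
    _, binary = cv2.threshold(image, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Count member pixels rather than using cv2.contourArea, which
        # underestimates the area of very small contours.
        if binary[y:y + h, x:x + w].sum() / 255 < min_area:
            continue
        # Expand the box to capture low-signal wings below the threshold.
        x0, y0 = max(x - pad, 0), max(y - pad, 0)
        x1 = min(x + w + pad, image.shape[1])
        y1 = min(y + h + pad, image.shape[0])
        rois.append(image[y0:y1, x0:x1])
    return rois
```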

We use a standard multivariate Gaussian distribution to fit the point-spread function of a star, as shown below:

$$p(\textbf{x};\textbf{u},{\boldsymbol \Sigma}) = \alpha \exp \!\left({- \frac{1}{2}{{(\textbf{x} - \textbf{u})}^T}{{\boldsymbol \Sigma}^{- 1}}(\textbf{x} - \textbf{u})} \right),$$
where $\textbf{x}$ represents each of the $(u,v)$ coordinates within the ROI, $\textbf{u}$ is the sub-pixel $({u_c},{v_c})$ location of the star centroid, $\alpha$ is the scale factor, and ${\boldsymbol \Sigma}$ is the covariance matrix. We use a numerical Levenberg–Marquardt method to find the optimal parameters to describe each star.

The efficiency and stability of the Levenberg–Marquardt algorithm may be improved by providing good initial approximations of the Gaussian parameters. Following [21], the initial covariance matrix may be assumed to be diagonal, with $\sigma = {\sigma _{\textit{xx}}} = {\sigma _{\textit{yy}}}$, whose initial value is estimated from the full width at half maximum ($fwhm$):

$$\sigma = \frac{{fwhm}}{{2\sqrt {2\ln (2)}}},$$
where the full width at half maximum is estimated by taking the square root of the number of pixels that are greater than half the maximum intensity.

A fast initial approximation for the centroid, $\textbf{u}$, may be estimated using a weighted average of the ROI [21],

$$({u_c},{v_c}) = \left({\frac{{\sum\nolimits_{i,j} {I_{i,j}}{u_i}}}{{\sum\nolimits_{i,j} {I_{i,j}}}},\frac{{\sum\nolimits_{i,j} {I_{i,j}}{v_j}}}{{\sum\nolimits_{i,j} {I_{i,j}}}}} \right),$$
for pixel intensity $I$ at position $(i,j)$ on the image plane.
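A sketch of the complete centroiding step, assuming the ROI has been background-subtracted: the weighted average and FWHM estimates seed a Levenberg–Marquardt fit of the elliptical Gaussian via SciPy.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_star_centroid(roi):
    """Fit an elliptical Gaussian PSF to a star ROI; return (u_c, v_c)."""
    I = roi.astype(float)
    v, u = np.mgrid[0:I.shape[0], 0:I.shape[1]]
    # Initial centroid: intensity-weighted average (Eq. (8)).
    u0 = (I * u).sum() / I.sum()
    v0 = (I * v).sum() / I.sum()
    # Initial sigma from the FWHM estimate (Eq. (7)).
    fwhm = np.sqrt((I > 0.5 * I.max()).sum())
    s2 = (fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))) ** 2

    def residuals(p):
        a, uc, vc, sxx, syy, sxy = p
        det = sxx * syy - sxy ** 2
        # Mahalanobis distance, with the 2x2 inverse covariance expanded.
        q = (syy * (u - uc) ** 2 - 2 * sxy * (u - uc) * (v - vc)
             + sxx * (v - vc) ** 2) / det
        return (a * np.exp(-0.5 * q) - I).ravel()

    p0 = [I.max(), u0, v0, s2, s2, 0.0]
    sol = least_squares(residuals, p0, method='lm')  # Levenberg-Marquardt
    return sol.x[1], sol.x[2]
```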

C. Focal Length—Initial Approximation

A camera’s focal length is critical for the identification of stars. Some approaches to camera calibration attempt to utilize the interior angles of star triples to perform identification without camera intrinsics. It has been shown, however, that the two unique numbers that define the interior angles are not invariant [12]. Thus, we present a simple method for obtaining an initial estimate of focal length. The focal length is later estimated with greater precision once the stars have been identified, and the camera orientation resolved.

Provided the camera remains stationary, the rotation of Earth can be observed in two time-separated images of the night sky. We utilize the fact that radial distortion is near-negligible at the image center to select appropriate stars and measure their displacement over a period of time. Provided the camera is directed towards the zenith, with knowledge of the time and location at which the images are captured, the focal length can be estimated. This estimate can be improved with precise knowledge of the camera’s attitude (using an inertial measurement unit, for example); however, in practice we have found that a spirit level is sufficient to align the optical axis with the zenith to within 1°. We note that for calibration in polar regions of the Earth, the camera should be directed away from the zenith, to avoid boresighting the camera towards the Earth’s rotation axis.

Given a single rotation of the Earth in 23 h, 56 min, and 4 s (one sidereal day), the angular rate of the Earth is calculated to be $\omega = 7.2921 \times 10^{-5}\;{\rm rad/s}$. To an observer at latitude $\phi$ directed towards the zenith, the apparent angular displacement of the stars between two images is given by

$$\theta = (\omega \cos \phi)\delta t,$$
where $\delta t$ is the elapsed time between images. A number of reference stars are selected within a threshold distance (in pixels) from the principal point. We use a threshold of 1/15th the image width to ensure that the selected reference stars are subjected to minimal radial distortion. The motion of each star is measured, and the average motion across all stars, $\delta p$, is used to calculate the dimensionless focal length of the lens, as seen in Fig. 1:
$$\hat f = \frac{{\frac{{\delta p}}{2}}}{{\tan \frac{\theta}{2}}}.$$
Fig. 1. Geometry of focal length estimation.
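A sketch of this initial estimate, including the intrinsics matrix of Eq. (11), follows; the camera is assumed level and zenith-pointing, and delta_p is the measured mean star displacement in pixels.

```python
import numpy as np

OMEGA_EARTH = 7.2921e-5  # rad/s: one rotation per sidereal day

def initial_focal_length(delta_p, latitude, delta_t):
    """Dimensionless focal length from mean star motion (Eqs. (9)-(10))."""
    theta = OMEGA_EARTH * np.cos(latitude) * delta_t  # apparent displacement
    return (delta_p / 2.0) / np.tan(theta / 2.0)

def initial_intrinsics(f_hat, width, height):
    """Initial K_hat; principal point assumed at the image center."""
    return np.array([[f_hat, 0.0, width / 2.0],
                     [0.0, f_hat, height / 2.0],
                     [0.0, 0.0, 1.0]])
```

As a rough illustrative check against the numbers in Section 3: at latitude $-34°$ with $\delta t = 30\;\rm s$, $\theta \approx 1.8 \times 10^{-3}\;\rm rad$, so an estimate of $\hat f \approx 3216$ pixels corresponds to a mean star displacement of about 5.8 pixels.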

This estimate of focal length may be used to generate an initial estimate of the camera intrinsics matrix. Assuming that the $x$ and $y$ components of focal length are equal, an initial estimate of the intrinsics matrix, $\hat{\textbf{K}}$, can be made:

$$\hat{\textbf{K}} = \left[{\begin{array}{*{20}{c}}{\hat f}&0&{{{\hat p}_x}}\\0&{\hat f}&{{{\hat p}_y}}\\0&0&1\end{array}} \right],$$
where the principal point $({\hat p_x},{\hat p_y})$ is assumed to be located at the center of the image.

D. Orientation Estimation

The aim of orientation estimation is to find a rotation matrix that minimizes the squared positional error between a set of observed stars in the camera frame, and a set of identified stars in the NED frame. Commonly referred to as Wahba’s problem [23], the aim is to find a rotation matrix $\textbf{R}$ that minimizes the residual

$${\cal E} = \sum\limits_{i = 1}^n \|{\textbf{u}_i} - \textbf{R}{\textbf{v}_i}\|^2,$$
provided a set of observations in the camera frame, ${\textbf{u}_i}$, and a set of corresponding identified stars in the local NED frame, ${\textbf{v}_i}$. It follows that in order to estimate orientation, we must first identify stars in the image.

Star identification algorithms require an appropriate magnitude threshold, so as to exclude unobservable stars from the database. Doing so both speeds up star identification, and reduces the chance of false identifications. According to [22], the relationship between the total number of stars in the full sky, $N$, and the star magnitude, $Mv$, is given by

$$N = 6.57{e^{1.08Mv}}.$$

We can utilize this formula to determine an appropriate magnitude threshold for star identification. Assuming that stars are projected onto a unit sphere, the full sky surface area is $4\pi$ units, and a spherical cap of half-angle $\theta$ has surface area $2\pi (1 - \cos \theta)$ units. Given that stars are chosen from the image plane within some radius of the principal point, we can compute the cap half-angle $\theta$ by simply taking

$$\theta = \arctan \left({\frac{r}{{\hat f}}} \right),$$
where $r$ is the radius, in pixels, within which stars are selected. Therefore, the portion of stars observed is given by
$$\zeta = \frac{1}{2}(1 - \cos \theta).$$

Thus, assuming a uniform distribution of stars, and by rearranging Eq. (13), the threshold for the visual magnitude of stars can be found:

$$Mv = \frac{{\ln\! \left({\frac{N}{{6.57\zeta}}} \right)}}{{1.08}}.$$
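As a concrete sketch: given the number of stars one wishes to match within radius $r$ of the principal point, the magnitude threshold follows directly from Eqs. (14)–(16).

```python
import numpy as np

def magnitude_threshold(n_stars, r, f_hat):
    """Magnitude cutoff so ~n_stars full-sky stars fall in the cap (sketch)."""
    theta = np.arctan(r / f_hat)        # cap half-angle (Eq. (14))
    zeta = 0.5 * (1.0 - np.cos(theta))  # observed fraction of the sky (Eq. (15))
    return np.log(n_stars / (6.57 * zeta)) / 1.08  # Eq. (16)
```

For instance, with $\hat f = 3200$ pixels and $r = 400$ pixels, $\zeta \approx 0.0039$, so expecting 30 stars within the radius implies a threshold near $Mv \approx 6.5$.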

The initial estimate of the camera intrinsics, $\hat{\textbf{K}}$, provides the requisite accuracy for star identification algorithms to function on the minimally distorted region of the image plane. A minimum of three correctly identified stars is required to determine the orientation of the camera system. We implemented a log-polar star identification algorithm for this study. The details of this algorithm are omitted, as star identification is well understood [24], and the algorithms vary greatly depending on the application. We note, however, that identification based on the initial estimate $\hat{\textbf{K}}$ is likely to incur a greater number of false identifications than a fully calibrated system. False identifications occur when the star identification algorithm misclassifies a star, giving it an incorrect label, and thus incorrect coordinates. In order to attain a robust estimate of orientation, we therefore require a method for rejecting false identifications.

We utilize an augmented approach to the Kabsch algorithm [25], in which we use random sample consensus (RANSAC) to simultaneously estimate the orientation of the camera system and detect false identifications. This approach prevents false identifications from biasing the orientation estimate. Consequently, a large number of false identifications may cause the algorithm to fail, depending on the heuristically selected inlier-to-outlier ratio. We provide a summary of the Kabsch algorithm in Section 2.D.1, with its RANSAC augmentation in Section 2.D.2.

1. Kabsch Algorithm

We provide a concise overview of the algorithm here in the context of star identification; however, a rigorous justification for the use of this algorithm is provided in [26]. We let $\textbf{A}$ be the matrix containing $n$ observations in the camera frame with dimension $[n \times 3]$, and let $\textbf{B}$ be the matrix of $n$ star unit vectors, computed from the database, paired to the unit vectors of $\textbf{A}$, with dimension $[n \times 3]$. Given centroids ${\bar{\boldsymbol a}}$ and ${\bar{\boldsymbol b}}$, respectively, the vectors in $\textbf{A}$ and $\textbf{B}$ are translated such that ${\textbf{a}_i} = {\textbf{a}_i} - {\bar{\boldsymbol a}}$ and ${\textbf{b}_i} = {\textbf{b}_i} - {\bar{\boldsymbol b}}$. The singular value decomposition is computed for the covariance matrix $\textbf{C} = {\textbf{A}^T}\textbf{B}$. The first two elements of the diagonal matrix ${\boldsymbol \Sigma}$ are set to ${+}{1}$, with the third element, corresponding to the $z$ component of rotation, set to either ${-}{1}$ or ${+}{1}$ to ensure a right-handed rotation. The rotation $\textbf{R}$ is computed by taking the transpose of the resultant decomposition. This procedure is shown in Algorithm 1.
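A minimal NumPy sketch consistent with this description follows. Sign and transpose conventions vary between references, so the direction of the returned rotation (here, minimizing $\|\textbf{u}_i - \textbf{R}\textbf{v}_i\|^2$ per Eq. (12), mapping the NED frame into the camera frame) should be checked against Algorithm 1.

```python
import numpy as np

def kabsch(A, B):
    """Rotation R minimizing sum ||a_i - R b_i||^2 (sketch).

    A: [n x 3] observed star unit vectors (camera frame).
    B: [n x 3] paired database unit vectors (NED frame).
    """
    A0 = A - A.mean(axis=0)              # translate to common centroids
    B0 = B - B.mean(axis=0)
    U, S, Vt = np.linalg.svd(A0.T @ B0)  # SVD of the covariance matrix C
    d = np.sign(np.linalg.det(U @ Vt))   # enforce a right-handed rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```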

2. Kabsch-RANSAC

The RANSAC algorithm utilizes the minimum required number of samples to generate a model, and tests this model against each of the remaining data points to determine whether they are within a specified tolerance. Data points that are within tolerance are considered inliers, whereas data points that are outside tolerance are considered outliers. An acceptable proportion of inliers to outliers is heuristically selected to balance the accuracy of the algorithm with the rate at which a return value is successfully attained. The Kabsch algorithm requires three pairs of star observations and identifications to estimate orientation. This orientation is then used to categorize the remaining pairs of observation-identification data points as inliers or outliers. The complete orientation estimation algorithm is outlined in Algorithm 2.

Algorithm 2. Kabsch-RANSAC

It is more useful to express the residual in terms of angular error. Assuming that all star locations are projected onto the unit sphere, the squared distance ${\cal E}$ is related to angular error $\theta$ by

$${\cal E} = 2 - 2\cos \theta ,$$
which can be seen in Fig. 2. An appropriate angular tolerance for the RANSAC inliers may be selected using this relationship. The number of iterations required to find an orientation is dependent on the proportion of false identifications in the sample set. Given $n$ identifications and a proportion ${p_f}$ of false identifications, the probability of a false identification being present in a random selection of three pairs is given by
$$P(F) = \sum\limits_{i = 1}^{i = 3} \frac{{\left({\begin{array}{*{20}{c}}{{p_f}n}\\i\end{array}} \right)\left({\begin{array}{*{20}{c}}{n(1 - {p_f})}\\{3 - i}\end{array}} \right)}}{{\left({\begin{array}{*{20}{c}}n\\3\end{array}} \right)}}.$$
Thus, the number of iterations required to achieve at least one RANSAC iteration with no outliers, with confidence ${p_s}$, is given by
$$k = \frac{{\log (1 - {p_s})}}{{\log (P(F))}}.$$
In practice, a better fit is achieved by choosing $k$ to ensure that multiple estimates are taken without outliers, giving the RANSAC algorithm an opportunity to optimize within the set of inliers. This allows for the minimization of measurement errors that are within tolerance, such as those introduced through distortion and centroiding.
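A sketch of the Kabsch-RANSAC loop (Algorithm 2) using the above relationships; the angular tolerance and iteration count are illustrative, and kabsch is the function sketched in Section 2.D.1.

```python
import numpy as np

def kabsch_ransac(A, B, tol_deg=0.1, n_iters=500, rng=None):
    """Estimate orientation while rejecting false identifications (sketch)."""
    rng = rng or np.random.default_rng()
    # Squared-distance tolerance on the unit sphere (Eq. (17)).
    tol = 2.0 - 2.0 * np.cos(np.radians(tol_deg))
    best_R, best_inliers = None, np.zeros(len(A), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(A), size=3, replace=False)
        R = kabsch(A[idx], B[idx])
        err = np.sum((A - B @ R.T) ** 2, axis=1)  # squared residuals
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            # Refit on the full inlier set to average down centroiding noise.
            best_R, best_inliers = kabsch(A[inliers], B[inliers]), inliers
    return best_R, best_inliers
```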
Fig. 2. Relationship between squared distance error and angular error in Kabsch-RANSAC algorithm.

E. Focal Length Estimation

The identification of stars and estimation of camera orientation, as described in Section 2.D, allow for a precise measurement of the lens focal length. For each identified star, the projection from NED coordinates onto the sensor plane is given by Eq. (1). Written in full, this can be seen in Eq. (20):

$$\left[{\begin{array}{*{20}{c}}{\alpha u}\\{\alpha v}\\\alpha \end{array}} \right] = \left[{\begin{array}{*{20}{c}}{{f_x}}&0&{{p_x}}\\0&{{f_y}}&{{p_y}}\\0&0&1\end{array}} \right]\left[{\begin{array}{*{20}{c}}{{R_1}}&{{R_2}}&{{R_3}}\\{{R_4}}&{{R_5}}&{{R_6}}\\{{R_7}}&{{R_8}}&{{R_9}}\end{array}} \right]\left[{\begin{array}{*{20}{c}}X\\Y\\Z\end{array}} \right],$$
which, given that $\textbf{R}$ and $\textbf{X}$ are known, allows us to evaluate $\textbf{RX}$ to obtain ${\textbf{X}^c}$, the star location expressed in the camera coordinate system:
$$\left[{\begin{array}{*{20}{c}}{\alpha u}\\{\alpha v}\\\alpha \end{array}} \right] = \left[{\begin{array}{*{20}{c}}{{f_x}}&0&{{p_x}}\\0&{{f_y}}&{{p_y}}\\0&0&1\end{array}} \right]\left[{\begin{array}{*{20}{c}}{{X^c}}\\{{Y^c}}\\{{Z^c}}\end{array}} \right].$$

It can be seen that

$$\begin{split} {\alpha u}& = {{f_x}{X^c} + {p_x}{Z^c},}\\{\alpha v}& = {{f_y}{Y^c} + {p_y}{Z^c},}\\\alpha & = {{Z^c}.}\end{split}$$

We rearrange for ${f_x}$ and ${f_y}$:

$$\left[{\begin{array}{*{20}{c}}{{X^c}}&0\\0&{{Y^c}}\end{array}} \right]\left[{\begin{array}{*{20}{c}}{{f_x}}\\{{f_y}}\end{array}} \right] = \left[{\begin{array}{*{20}{c}}{u{Z^c} - {p_x}{Z^c}}\\{v{Z^c} - {p_y}{Z^c}}\end{array}} \right],$$
which, for multiple observations, we vertically stack. For $n$ observations we have
$$\left[{\begin{array}{*{20}{c}}{X_1^c}&0\\0&{Y_1^c}\\ \vdots & \vdots \\{X_n^c}&0\\0&{Y_n^c}\end{array}} \right]\left[{\begin{array}{*{20}{c}}{{f_x}}\\{{f_y}}\end{array}} \right] = \left[{\begin{array}{*{20}{c}}{{u_1}Z_1^c - {p_x}Z_1^c}\\{{v_1}Z_1^c - {p_y}Z_1^c}\\ \vdots \\{{u_n}Z_n^c - {p_x}Z_n^c}\\{{v_n}Z_n^c - {p_y}Z_n^c}\end{array}} \right],$$
which, if we let
$$\textbf{A} = \left[{\begin{array}{*{20}{c}}{X_1^c}&0\\0&{Y_1^c}\\ \vdots & \vdots \\{X_n^c}&0\\0&{Y_n^c}\end{array}} \right]$$
and
$$\textbf{f} = \left[{\begin{array}{*{20}{c}}{{f_x}}\\{{f_y}}\end{array}} \right]$$
and
$$\textbf{b} = \left[{\begin{array}{*{20}{c}}{{u_1}Z_1^c - {p_x}Z_1^c}\\{{v_1}Z_1^c - {p_y}Z_1^c}\\ \vdots \\{{u_n}Z_n^c - {p_x}Z_n^c}\\{{v_n}Z_n^c - {p_y}Z_n^c}\end{array}} \right],$$
then the least squares approximation of $\textbf{f}$ is given by
$${\hat{\boldsymbol f}} = ({\textbf{A}^T}\textbf{A}{)^{- 1}}{\textbf{A}^T}\textbf{b},$$
where ${\hat{\boldsymbol f}}$ contains the ${f_x}$ and ${f_y}$ components of focal length in units of pixels.
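A sketch of this stacked least-squares solve (Eq. (28)); here X_c holds the rotated database vectors $\textbf{RX}$ for each identified star.

```python
import numpy as np

def refine_focal_length(uv, X_c, px, py):
    """Least-squares f_x, f_y (pixels) from identified stars (sketch).

    uv:  [n x 2] observed pixel coordinates.
    X_c: [n x 3] database star vectors in the camera frame (RX).
    """
    n = len(uv)
    A = np.zeros((2 * n, 2))
    b = np.zeros(2 * n)
    A[0::2, 0] = X_c[:, 0]                 # X^c entries in the f_x rows
    A[1::2, 1] = X_c[:, 1]                 # Y^c entries in the f_y rows
    b[0::2] = (uv[:, 0] - px) * X_c[:, 2]  # u Z^c - p_x Z^c
    b[1::2] = (uv[:, 1] - py) * X_c[:, 2]  # v Z^c - p_y Z^c
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f
```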

F. Iterative Optimization

Finally, we perform a non-linear optimization step to identify distortion parameters, and refine the estimate of the intrinsic parameters. At this stage of calibration, the camera properties are known only within the quasi-linear region of the image. The effects of lens distortion will cause star identification to fail outside of this region, as distortion can cause stars to shift by a significant amount on the image plane. We make two assumptions that help us to overcome this problem:

  • 1. the density of stars in the image is sufficient to characterize the lens distortion;
  • 2. the distortion is radially symmetric.

The identification of distortion-displaced stars can be achieved by iteratively identifying stars within the quasi-calibrated region of the image at radius $r$, determining the radial distortion parameters, and re-running star identification at an increased radius. Assuming radially symmetric distortion, this iterative process allows the extrapolation of distortion coefficients to greater radii, enabling the progressive identification of stars, while simultaneously estimating the distortion parameters.

Provided the camera intrinsic and extrinsic parameters, star identification at this stage of the calibration may be performed by a simple geometric thresholding operation whereby, if an observed star falls within a threshold radius of a database star's predicted location, it is considered a match. Observations that are within the radius of multiple database stars should be matched with whichever star minimizes the error between observation and database star.

For the purposes of this study, we use a radial distortion model with a tangential component, as described in [27]. The transformation from distorted normalized image plane coordinates $({x_d},{y_d})$ to undistorted normalized image plane coordinates, $({x_u},{y_u})$ is given by

$$\begin{split}& {{x_u} = {x_d}(1 + {k_1}{r^2} + {k_2}{r^4} + {k_3}{r^6}) + {p_1}({r^2} + 2x_d^2) + 2{p_2}{x_d}{y_d},}\\&{{y_u} = {y_d}(1 + {k_1}{r^2} + {k_2}{r^4} + {k_3}{r^6}) + {p_2}({r^2} + 2y_d^2) + 2{p_1}{x_d}{y_d},}\end{split}$$
where the image plane coordinates are derived from pixel coordinates, $x = (u - {p_x})/{f_x}$ and $y = (v - {p_y})/{f_y}$, and the radial component, $r$, is given by ${r^2} = x_d^2 + y_d^2$. The coefficients ${k_1},{k_2},{k_3}$ describe the radial distortion, and the coefficients ${p_1},{p_2}$ describe the tangential distortion.
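The correction of Eq. (29) is straightforward to implement; a sketch operating on normalized image plane coordinates:

```python
import numpy as np

def undistort_normalized(xd, yd, k1, k2, k3, p1, p2):
    """Distorted -> undistorted normalized coordinates (Eq. (29))."""
    r2 = xd ** 2 + yd ** 2
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xu = xd * radial + p1 * (r2 + 2 * xd ** 2) + 2 * p2 * xd * yd
    yu = yd * radial + p2 * (r2 + 2 * yd ** 2) + 2 * p1 * xd * yd
    return xu, yu
```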

Following [28], the initial radial distortion coefficients for each iteration are estimated using a least-squares approximation. The problem may be formulated as follows:

$$\left[{\begin{array}{*{20}{c}}{{x_d}{r^2}}&{{x_d}{r^4}}&{{x_d}{r^6}}\\{{y_d}{r^2}}&{{y_d}{r^4}}&{{y_d}{r^6}}\end{array}} \right]\left[{\begin{array}{*{20}{c}}{{k_1}}\\{{k_2}}\\{{k_3}}\end{array}} \right] = \left[{\begin{array}{*{20}{c}}{{x_u} - {x_d}}\\{{y_u} - {y_d}}\end{array}} \right],$$
and the system $\textbf{Dk} = \textbf{d}$ can be solved for the radial distortion parameters $\textbf{k}$. Parameters ${p_1}$ and ${p_2}$ are initialized at zero, as the tangential distortion tends to be minimal by comparison. Additionally, the initial rotation matrix at each iteration is estimated using the Kabsch-RANSAC algorithm, as described in Section 2.D.2. The non-linear numerical optimization is performed at each iteration using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm, as implemented in Python’s SciPy version 1.10. The intrinsic parameters and the distortion parameters are passed to the algorithm, in addition to the Euler angles describing the rotation matrix.
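A sketch of this least-squares seeding of the radial coefficients (Eq. (30)), stacking both coordinate rows over all matched stars:

```python
import numpy as np

def initial_radial_coeffs(xd, yd, xu, yu):
    """Seed (k1, k2, k3) by solving Dk = d in the least-squares sense."""
    r2 = xd ** 2 + yd ** 2
    D = np.vstack([np.column_stack([xd * r2, xd * r2 ** 2, xd * r2 ** 3]),
                   np.column_stack([yd * r2, yd * r2 ** 2, yd * r2 ** 3])])
    d = np.concatenate([xu - xd, yu - yd])
    k, *_ = np.linalg.lstsq(D, d, rcond=None)
    return k
```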

The optimization is formulated to minimize the mean squared error (MSE) between the pinhole projection $\alpha \textbf{x} = \textbf{KRX}$ of the database points, and the distortion-corrected observations. For $n$ pairs of observation and identification, we aim to minimize function $g$:

$$g({f_x},{f_y},{p_x},{p_y},{k_1},{k_2},{k_3},{p_1},{p_2},\psi ,\theta ,\phi) = \frac{1}{n}\sum\limits_{i = 1}^n |{\textbf{x}_i} - \textbf{x}_i^\prime {|^2},$$
where ${\textbf{x}_i}$ is the ideal database projection, and $\textbf{x}_i^\prime $ is the rectified observation. The complete iterative algorithm is described in Algorithm 3. To prevent overfitting, the order of the distortion function is limited. Initially, only ${k_1}$ is estimated. Once the search radius reaches one third of the image width, ${k_1}$ and ${k_2}$ are estimated. Once the search radius reaches one half the image width, all parameters are estimated.
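A sketch of the objective and its minimization with SciPy's BFGS implementation, reusing the euler_to_R and undistort_normalized helpers sketched earlier; the parameter packing is ours.

```python
import numpy as np
from scipy.optimize import minimize

def reprojection_mse(params, uv_obs, X_ned):
    """Objective g of Eq. (31): MSE between ideal database projections
    and distortion-corrected observations (sketch)."""
    fx, fy, px, py, k1, k2, k3, p1, p2, psi, theta, phi = params
    K = np.array([[fx, 0.0, px], [0.0, fy, py], [0.0, 0.0, 1.0]])
    R = euler_to_R(phi, theta, psi)
    proj = (K @ R @ X_ned.T).T
    proj = proj[:, :2] / proj[:, 2:3]  # ideal database projections x_i
    xd = (uv_obs[:, 0] - px) / fx      # normalize the observations
    yd = (uv_obs[:, 1] - py) / fy
    xu, yu = undistort_normalized(xd, yd, k1, k2, k3, p1, p2)
    rect = np.column_stack([xu * fx + px, yu * fy + py])  # rectified x_i'
    return np.mean(np.sum((proj - rect) ** 2, axis=1))

# res = minimize(reprojection_mse, p0, args=(uv_obs, X_ned), method='BFGS')
```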
Algorithm 3. Star Identification and Numerical Optimization

3. RESULTS

A. Focal Length Estimation

The initial estimate of focal length is critical for star identification. We test the methodology presented in Section 2.C using simulated distorted images. Pairs of time-separated images were generated following the methodology in [20], setting the observable threshold to Mv 6.5. The image size was set to $3280 \times 2464$, with dimensionless focal lengths ranging from 1000 (approximately 120° horizontal field of view) to 5000 (approximately 35° horizontal field of view). Radial distortion coefficients were randomly generated for each pair, and used to distort the pair of images, with $(- 1.0 \lt {k_1} \lt 1.0)$, $(- 0.5 \lt {k_2} \lt 0.5)$, and $(- 0.1 \lt {k_3} \lt 0.1)$. An example of the radial distortion may be seen in Fig. 3. The boresight declination was randomly selected in the range $(- {60^ \circ}{,60^ \circ})$, and the right ascension was randomly selected in the range $(- {180^ \circ}{,180^ \circ})$. A time difference of 20 s was used between the first and second images; 1000 pairs of images were generated, and the focal length was estimated for each pair.

The focal length error is measured as the difference between the true focal length and the estimated focal length. The error is normalized to be expressed as a percentage of the true focal length. The results from simulation testing are summarized in Table 1. The mean absolute error in focal length estimation was 1.07%, with 99.7% ($3\sigma$) of estimates accurate to within 4.52%. Figure 4 shows a histogram of the normalized focal length errors.

Fig. 3. Zoomed-in look at the effects of radial distortion on a simulation image.

B. Camera Calibration

We present here the calibration results from two hardware configurations. The first configuration consists of a standard Pi Camera HQ, with the official 6 mm wide angle lens. The second configuration consists of an Alvium 1800 U-240 monochrome sensor, fitted with a 6 mm lens. The focus of each lens was adjusted to maximize the intensity of visible stars, with a slight offset to blur the stars over multiple pixels. We followed the methodology described in Section 2 to estimate focal length, identify orientation, detect stars, and perform a geometric calibration.

Table 1. Simulated Focal Length Errors

Fig. 4. Histogram of errors between the true focal length and the estimated focal length using the methodology presented in Section 2.C.

Table 2. Camera Calibration Coefficients

Fig. 5. Scatter plot of reprojection errors after calibration of the Pi Cam HQ with wide angle 6 mm lens.

1. Raspberry Pi Camera HQ

Two images were captured 30 s apart on a clear night, with an exposure time of 3000 ms and a resolution of ${3280} \times {2464}\;{\rm pixels}$. The camera was resting on the ground at an approximate latitude of ${-}{34^ \circ}$, boresighted towards the zenith. The radial search algorithm took 40 iterations to identify 310 of the 350 stars observed in the image. The focal length prior to star identification was estimated at 3215.82 pixels. This value was refined to 3213.57 for ${f_x}$ and 3214.06 for ${f_y}$ after star identification and orientation estimation. The final parameters can be seen in Table 2. The RMSE reprojection error was 0.968 pixels after calibration, which translates to an angular error of 0.0172° at the image center. A scatter plot of the reprojection errors is shown in Fig. 5. The radial distortion parameters at each iteration can be seen in Fig. 6.

2. Alvium 1800 U-240m

Two images were captured from the Alvium, 20 s apart, with an exposure time of 5000 ms. The camera was resting on the ground at an approximate latitude of ${-}{34^ \circ}$, boresighted towards the zenith. The radial search algorithm took 12 iterations to identify 240 of the 259 stars in the image. The focal length prior to star identification was estimated at 1853.36 pixels. This value was refined to 1836.65 for ${f_x}$ and 1823.21 for ${f_y}$ after star identification and orientation estimation. The final parameters can be seen in Table 2. The RMSE reprojection error was 0.265 pixels after calibration, which translates to an angular error of 0.0083° at the image center. A visual representation of the radial search algorithm can be seen in Fig. 7, showing the iterative progression of star identification. A scatter plot of the reprojection errors is shown in Fig. 8.

Fig. 6. Radial distortion coefficients at each iteration of calibration for the Pi Cam HQ.

4. DISCUSSION

The results indicate that the proposed method provides an effective bootstrap solution for uncalibrated wide angle star sensors. In the absence of camera intrinsics or distortion parameters, a geometrically calibrated camera may be achieved without specialist equipment in a relatively short time period. This is significant, as the standard “checkerboard” approach to calibration tends to produce poor results, rendering star identification imprecise. For example, the Alvium camera was calibrated against 17 checkerboard images, printed on an A1 sheet of foam board. When using identified stars from the bootstrap method, the RMSE reprojection error of the checkerboard calibration was 10.86 pixels (${41} \times$ larger). This reiterates the need for high-precision low-cost alternatives to existing methods. The stars are, due to their functionally infinite distance, the most accurate calibration reference that is freely available.

While the position of stars is known to high levels of precision, this study was not intended to improve upon existing calibration models or formulas. Research continues on the improvement of camera precision using the star field (e.g., [29]). Rather, the aim was to formulate a rigorous calibration method while avoiding the need for non-dimensional star identification. As such, higher-order or alternative distortion models may offer a more precise calibration. In particular, the low-cost Pi Cam HQ suffered significant reprojection error, whose distortion may best be described by higher-order polynomial or rational models. In principle, once the stars in the calibration image have been identified, these observation-identification point pairs may be used to calibrate the camera against any feasible distortion model. Indeed, this may be a necessary step for sensors that are not well approximated by a radial distortion model.

Fig. 7. Iterative star identification and parameter estimation from the Alvium 1800 U-240m. Red stars indicate the location of observed stars in the image. Circles represent the identified stars, reprojected onto the image.

Fig. 8. Scatter plot of reprojection errors after calibration of the Alvium 1800 U-240m with 6 mm lens.

The selection of hardware for this study was based on the practical requirements for aeronautical star identification. Both imaging systems had an approximate 53.5° horizontal field of view, which is far from the field of view achieved by fisheye or catadioptric lenses, for example. Satellite-based star identification sensors tend to have very narrow fields of view by comparison (an example of a “medium” FOV star sensor has a FOV of 3° [6]); thus the term “wide angle” may be considered in practical terms, relative to current implementations.

5. CONCLUSION

We presented a method for calibrating a wide angle star sensor without any prior knowledge of the camera properties. By capturing two time-separated images, the focal length of the camera could be estimated to within 4.52% ($3\sigma$) accuracy, enabling the use of conventional lost-in-space star identification methods. Stars were identified, and the orientation of the camera was estimated, with tolerance to false identifications, using a RANSAC-augmented variant of the Kabsch algorithm. A least-squares approximation of the focal length was determined using the identified stars and orientation. An iterative algorithm was developed for identifying stars within a quasi-linear radius from the image center, while simultaneously identifying distortion parameters and refining the camera intrinsic and extrinsic parameters. The bootstrap calibration method was tested on a Pi Cam HQ with a 6 mm lens, and an Alvium 1800 U-240m with a 6 mm lens, achieving RMSE reprojection errors of 0.968 pixels and 0.265 pixels, respectively. This approach to camera calibration averts the need for non-dimensional star identification algorithms, which have been shown not to conform to the assumptions of projective invariance. Furthermore, the radial search algorithm enabled star identification from an uncalibrated camera in the presence of significant lens distortion.

Acknowledgment

This work was supported by Scope Global Pty Ltd. under the Commonwealth Scholarships Program, and the Commonwealth of South Australia under the Australian Government Research Training Program.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. J. Meeus, Astronomical Algorithms (Willmann-Bell, 1991).

2. T. A. Clarke and J. G. Fryer, “The development of camera calibration methods and models,” Photogramm. Rec. 16, 51–66 (1998).

3. B. Gou, Y.-M. Cheng, and A. H. de Ruiter, “INS/CNS navigation system based on multi-star pseudo measurements,” Aerosp. Sci. Technol. 95, 105506 (2019).

4. K. Chen, J. Zhou, F.-Q. Shen, et al., “Hypersonic boost–glide vehicle strapdown inertial navigation system/global positioning system algorithm in a launch-centered earth-fixed frame,” Aerosp. Sci. Technol. 98, 105679 (2020).

5. S. Teague and J. Chahl, “Strapdown celestial attitude estimation from long exposure images for UAV navigation,” Drones 7, 52 (2023).

6. H. Lei, B. Li, Q. Wei, et al., “Inertial information based star detection for airborne star sensor,” Opt. Laser Technol. 162, 109325 (2023).

7. M. A. Samaan, D. Mortari, and J. L. Junkins, “Nondimensional star identification for uncalibrated star cameras,” J. Astronaut. Sci. 54, 95–111 (2006).

8. F. Somayehee, A. A. Nikkhah, J. Roshanian, et al., “Blind star identification algorithm,” IEEE Trans. Aerosp. Electron. Syst. 56, 547–557 (2019).

9. M. J. Ajdadi, M. Ghafarzadeh, M. Taheri, et al., “Star identification algorithm for uncalibrated, wide FOV cameras,” Astron. J. 149, 182 (2015).

10. G. Yunhai, W. Shuang, and C. Binglong, “Calibration for star tracker with lens distortion,” in IEEE International Conference on Mechatronics and Automation (IEEE, 2012), pp. 681–686.

11. Y. Li, J. Zhang, W. Hu, et al., “Laboratory calibration of star sensor with installation error using a nonlinear distortion model,” Appl. Phys. B 115, 561–570 (2014).

12. J. A. Christian and J. L. Crassidis, “Star identification and attitude determination with projective cameras,” IEEE Access 9, 25768–25794 (2021).

13. X. Wei, G. Zhang, Q. Fan, et al., “Star sensor calibration based on integrated modelling with intrinsic and extrinsic parameters,” Measurement 55, 117–125 (2014).

14. L. Ma, S. Xiao, W. Tang, et al., “Attitude correlated frames based calibration method for star sensors,” Sensors 24, 67 (2023).

15. C. Song, C. Ji, S. Li, et al., “A decoupling three-position calibration method based on observability analysis for SINS/CNS integrated navigation,” IEEE Sens. J. 22, 15284–15295 (2022).

16. Z. Xu, Z. Zhou, Z. Chang, et al., “All-parameter system-level calibration for SINS/CNS based on global observability analysis,” IEEE Sens. J. 23, 10856–10870 (2023).

17. N. N. Vasilyuk, G. A. Nefedov, E. A. Sidorova, et al., “Calibration of the intrinsic parameters of the digital camera of a star tracker based on ground-based observations of stars, taking atmospheric refraction and aberration of light into account,” Meas. Tech. 66, 593–609 (2024).

18. H. Han, K. Baeck, J. Wi, et al., “Low-budget cubesat star tracker calibration using earth’s rotation,” Adv. Space Res. 70, 1880–1889 (2022).

19. H. Yoon, K. Baeck, and J. Wi, “Star tracker geometric calibration through full-state estimation including attitude,” Int. J. Aeronaut. Space Sci. 23, 180–191 (2022).

20. S. Teague and J. Chahl, “Imagery synthesis for drone celestial navigation simulation,” Drones 6, 207 (2022).

21. T. Delabie, J. D. Schutter, and B. Vandenbussche, “An accurate and efficient Gaussian fit centroiding algorithm for star trackers,” J. Astronaut. Sci. 61, 60–84 (2014).

22. C. C. Liebe, “Accuracy performance of star trackers - a tutorial,” IEEE Trans. Aerosp. Electron. Syst. 38, 587–599 (2002).

23. G. Wahba, “A least squares estimate of satellite attitude,” SIAM Rev. 7, 409 (1965).

24. D. Rijlaarsdam, H. Yous, J. Byrne, et al., “A survey of lost-in-space star identification algorithms since 2009,” Sensors 20, 2579 (2020).

25. W. Kabsch, “A solution for the best rotation to relate two sets of vectors,” Acta Crystallogr. A 32, 922–923 (1976).

26. J. Lawrence, J. Bernal, and C. Witzgall, “A purely algebraic justification of the Kabsch-Umeyama algorithm,” J. Res. Natl. Inst. Stand. Technol. 124, 124028 (2019).

27. Z. Tang, R. G. Von Gioi, P. Monasse, et al., “A precision analysis of camera distortion models,” IEEE Trans. Image Process. 26, 2694–2704 (2017).

28. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).

29. Z. Yinhu, C. Shaojie, C. Zhang, et al., “Rigorous and integrated self-calibration model for a large-field-of-view camera using a star image,” Chin. J. Aeronaut. 36, 375–389 (2023).
