Optica Publishing Group

In-orbit geometric calibration of multi-linear array optical remote sensing satellites with tie constraints

Open Access

Abstract

When some sub-images lack ground control points (GCPs) or the GCPs are unevenly distributed, the camera parameters estimated by in-orbit geometric calibration often deviate from their true values. In this study, a feasible in-orbit geometric calibration method for multi-linear array optical remote sensing satellites with tie constraints is presented. The presented method employs both GCPs and tie points. With the help of the tie constraints provided by the tie points, all charge-coupled devices (CCDs) are logically connected into a single complete CCD. The internal camera parameters of all CCDs can then be simultaneously and precisely estimated, even if sufficient evenly distributed GCPs are unavailable in some sub-images. Three GaoFen-6 images and two ZiYuan3-02 images were tested. Compared with the conventional method, the experimental results showed that the presented method effectively eliminated the deviations of the estimated camera parameters. The average geometric stitching accuracy of the adjacent sub-images of all the tested images was improved from approximately 0.5 pixel to 0.1 pixel, thereby improving the geometric quality of the stitched images.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Currently, optical remote sensing satellites (ORSSs) are often equipped with multi-linear array charge-coupled devices (CCDs) in order to obtain images with a wide swath. In in-orbit operations, each CCD collects a sub-image in push-broom mode, and adjacent sub-images have a small overlap. Sensor orientation, radiometric correction, geometric stitching, and band registration of all the sub-images are then performed in ground processing, and a standard optical remote sensing satellite image (ORSSI) is produced. To produce a standard ORSSI with optimal geometric quality, accurate camera parameters, such as the focal length, lens distortions, and camera installation angles, are indispensable.

In-orbit geometric calibration is a widely used method to obtain accurate camera parameters of ORSSs and has been successfully applied to many ORSSs, such as the SPOT-5 [1], OrbView-3 [2], IRS-P6 [3], ALOS [4], Deimos-2 [5], ZiYuan-3 [6,7], GaoFen-1/2 [8,9], and HaiYang-1C satellites [10]. In the early days after an ORSS's launch, in-orbit geometric calibration should be performed in a timely manner so that the geometric quality of ORSSIs can be improved. As the ORSS operates in orbit, the status of the satellite cameras and of the satellite position and attitude determination systems may change due to space environment changes and instrument degradation. In-orbit geometric calibration should therefore be performed aperiodically according to these instrument status changes.

Existing in-orbit geometric calibration methods for ORSSs can be mainly classified into two categories: field-dependent methods and field-independent methods. In the former, many ground control points (GCPs) are often taken as absolute controls and used to precisely estimate the camera parameters. GCPs can be manually surveyed in a calibration field with a global navigation satellite system (GNSS) receiver or automatically extracted from reference data. For example, Gachet performed internal calibration of the SPOT-5 HRG and HRS cameras with many GCPs extracted from reference data [1]. Mulawa performed geometric calibration of the OrbView-3 cameras with thousands of GCPs extracted from medium-scale aerial images [2]. Wang et al. extracted dense GCPs from reference digital orthophoto maps (DOMs) and digital elevation models (DEMs) and calibrated the external and internal parameters of the ZiYuan-1 02C, ZiYuan-3, and GaoFen-6 satellite cameras [6,11]. Cao et al. performed geometric calibration of the HaiYang-1C satellite cameras with dense GCPs extracted from reference DOMs and DEMs [10,12]. In the latter, tie constraints are often taken as relative controls and used to estimate the camera parameters. Tie constraints are mainly provided by tie points among multiple overlapped images. For example, Greslou et al. and Lebègue et al. calibrated the viewing reference frame biases of the Pleiades-HR camera with a pair of images collected in an auto-reverse mode and calibrated the focal plane with a pair of images collected in a cross mode [13,14]. Cheng et al. employed the tie constraints of three-view stereoscopic images to perform self-calibration of the GaoFen-2 internal parameters [8]. Pi et al. took full advantage of the tie constraints of a cross-image pair and multi-attitude images and presented a self-calibration method for ORSSs [15,16].

With the help of tie constraints, the field-independent calibration methods can theoretically estimate the external and internal camera parameters without GCPs. In practice, however, the external parameters can only be roughly estimated with tie constraints [13]. Previous studies show that truly field-independent calibration is actually very difficult to achieve; sparse GCPs are still necessary to reach an optimal external orientation accuracy [8,17]. Moreover, the field-independent methods usually require that ORSSs have strong imaging maneuverability (e.g., auto-reverse and cross imaging modes). Unfortunately, the majority of in-orbit ORSSs do not yet have such maneuverability. Hence, the field-dependent calibration methods remain popular and widely used at present. In the field-dependent methods, an image block of each sub-image should be covered by a calibration field. In practice, in order to reduce the negative influence of satellite position and attitude errors on the camera parameter estimation of different CCDs, the field-covered image blocks of all sub-images should be collected at almost the same time, as with the valid image blocks shown in Fig. 1. Meanwhile, the valid image blocks of all sub-images should be texture-rich, and the collection time difference between the image blocks and the field reference data should be as small as possible, so that highly precise GCPs can be extracted. Generally, as long as sufficient evenly distributed GCPs are available in all sub-image blocks, the camera parameters of each CCD can be precisely estimated. In particular, the left and the right edges of each sub-image should have GCPs (e.g., sub-image 4 in Fig. 1), so that all the adjacent sub-images can be seamlessly stitched in geometry in the subsequent ground processing.

Fig. 1. Sketch map of the field-dependent calibration methods.

In practice, due to some uncontrollable factors such as land surface changes, lack of textures, radiometric differences, and ground sampling distance (GSD) differences, we cannot always obtain sufficient evenly distributed GCPs in the valid blocks of all sub-images. Besides, in order to meet ORSSI users’ current application needs for wider image swath and higher resolution, more and more CCDs are placed on a camera focal plane. For example, a panchromatic camera of the Jilin-1 KF 01 satellite has 24 CCDs, and each CCD has 6144 detectors. The GSD of panchromatic images is smaller than 1.0 meter, and the total image swath reaches 136 kilometers [18]. Undoubtedly, the difficulty of in-orbit geometric calibration increases as the number of CCDs increases, because it is very difficult to guarantee that the valid image blocks of all sub-images have sufficient evenly distributed GCPs at almost the same time. In such cases, camera parameters of some CCDs often cannot be precisely estimated by the conventional field-dependent calibration methods. As a result, the geometric quality of standard ORSSIs cannot reach an optimal level.

In this study, we present a feasible in-orbit geometric calibration method for multi-linear array ORSSs with tie constraints. In the presented method, GCPs in the valid image blocks are taken as absolute controls, and tie points between the adjacent sub-images are taken as relative controls. With the help of both absolute and relative controls, the camera parameters of all CCDs can be precisely and simultaneously estimated. Compared with the conventional field-dependent calibration methods, the major innovation of the presented method is the introduction of tie constraints provided by tie points between the adjacent sub-images. The calibration procedures for multi-linear array ORSSs with both GCPs and tie points are designed. With the introduced tie constraints, all CCDs are logically connected into a single complete CCD, and the left and the right edges of all CCDs can be effectively constrained. Consequently, the camera parameters of all CCDs can be precisely estimated, even if sufficient evenly distributed GCPs are unavailable in some sub-images.

The remainder of this paper is organized as follows. Section 2 details the establishment of the in-orbit geometric calibration model, the conventional geometric calibration methods, and the presented geometric calibration method. Section 3 describes the use of three GaoFen-6 panchromatic images and two ZiYuan3-02 nadir images to analyze the feasibility and effectiveness of the presented method. Section 4 gives the conclusions.

2. Methodology

2.1 In-orbit geometric calibration model

An in-orbit geometric calibration model of a multiple linear-array ORSS is often established based on a physical sensor model. The physical sensor model should be able to precisely describe the geometric relationship between an image point and the corresponding ground point. In practice, different ORSSs perhaps have different physical sensor models due to different imaging instruments, different definitions of space coordinate systems, and different data processing procedures. Accordingly, different ORSSs perhaps have different in-orbit geometric calibration models. In this study, a look-angle-based in-orbit geometric calibration model is used, and its mathematical expression is as follows [6,10,19].

$$ \left[\begin{array}{c} \alpha_{0}+\alpha_{1} n+\alpha_{2} n^{2}+\alpha_{3} n^{3} \\ \beta_{0}+\beta_{1} n+\beta_{2} n^{2}+\beta_{3} n^{3} \\ 1 \end{array}\right]=\lambda\left(\mathbf{R}_{\mathrm{Camera}}^{\mathrm{ADS}}\right)^{\mathrm{T}} \mathbf{R}_{\mathrm{J} 2000}^{\mathrm{ADS}} \mathbf{R}_{\mathrm{WGS} 84}^{\mathrm{J} 2000}\left[\left[\begin{array}{c} \left(\frac{a}{\sqrt{1-e^{2} \sin ^{2} B}}+H\right) \cos B \cos L \\ \left(\frac{a}{\sqrt{1-e^{2} \sin ^{2} B}}+H\right) \cos B \sin L \\ \left(\frac{a}{\sqrt{1-e^{2} \sin ^{2} B}}\left(1-e^{2}\right)+H\right) \sin B \end{array}\right]-\left[\begin{array}{c} X_{S} \\ Y_{S} \\ Z_{S} \end{array}\right]\right] $$
where (α0, α1, α2, α3, β0, β1, β2, β3) are the model parameters of the CCD-detector look angle model; n is the CCD detector number; λ is the scale factor; ${\textbf R}_{\textrm{Camera}}^{\textrm{ADS}}$ represents the rotation matrix from the camera coordinate system to the attitude determination system (ADS) coordinate system; ${\textbf R}_{\textrm{J2000}}^{\textrm{ADS}}$ represents the rotation matrix from the J2000 celestial coordinate system to the ADS coordinate system; ${\textbf R}_{\textrm{WGS84}}^{\textrm{J2000}}$ represents the rotation matrix from the WGS 84 geocentric coordinate system to the J2000 coordinate system; ${(B,L,H)^\textrm{T}}$ are the latitude, longitude, and height coordinates of a ground point in the WGS 84 coordinate system; $({X_S},{Y_S},{Z_S})^\textrm{T}$ are the space coordinates of the satellite position in the WGS 84 coordinate system; and a and e are the semimajor axis and the first eccentricity of the Earth, respectively.
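As a concrete illustration of the two sides of Eq. (1), the following sketch evaluates the cubic look-angle model for a given detector and converts geodetic coordinates (B, L, H) to geocentric WGS 84 coordinates. The coefficient values passed to `look_angles` in any usage are illustrative placeholders, not calibrated parameters; the WGS 84 constants are the standard ellipsoid values.

```python
import math
import numpy as np

A_WGS84 = 6378137.0            # WGS 84 semimajor axis a (meters)
E2_WGS84 = 6.69437999014e-3    # WGS 84 first eccentricity squared e^2

def look_angles(n, alpha, beta):
    """Left-hand side of Eq. (1): cubic look-angle model for detector n.

    alpha = (a0, a1, a2, a3) and beta = (b0, b1, b2, b3) are one CCD's
    internal parameters.
    """
    p = np.array([1.0, n, n**2, n**3])
    return float(np.dot(alpha, p)), float(np.dot(beta, p))

def geodetic_to_ecef(B, L, H):
    """Right-hand side of Eq. (1): (B, L, H) in radians/meters to
    geocentric WGS 84 coordinates (X, Y, Z) in meters."""
    N = A_WGS84 / math.sqrt(1.0 - E2_WGS84 * math.sin(B) ** 2)
    X = (N + H) * math.cos(B) * math.cos(L)
    Y = (N + H) * math.cos(B) * math.sin(L)
    Z = (N * (1.0 - E2_WGS84) + H) * math.sin(B)
    return X, Y, Z
```

For example, a point on the equator at zero height maps to (a, 0, 0), and a point at the pole maps to (0, 0, b), where b is the semiminor axis.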

In in-orbit geometric calibration, (α0, α1, α2, α3, β0, β1, β2, β3) are used to represent the look angles (φx, φy) of each CCD detector and are often considered the internal camera parameters. These internal parameters can describe the comprehensive influences of principal point distance errors, focal length errors, linear-array rotation errors, and lens distortions of a satellite camera on the ORSSI sensor orientation accuracy. ${\textbf R}_{\textrm{Camera}}^{\textrm{ADS}}$ is constructed from the installation angles (pitch, roll, yaw) of the satellite camera, and (pitch, roll, yaw) are often considered the external camera parameters. These external parameters can describe the comprehensive influences of camera installation errors, ADS installation errors, and GNSS installation errors. It is noted that the look angles of all CCD detectors are defined in the same camera coordinate system Oc-XcYcZc, as shown in Fig. 2; that is, all CCDs share the same set of external parameters, while each CCD has its own internal parameters.
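The construction of ${\textbf R}_{\textrm{Camera}}^{\textrm{ADS}}$ from (pitch, roll, yaw) can be sketched as below. The rotation order and sign conventions are mission-specific, so the sequence used here is an illustrative assumption rather than the definition used by any particular satellite.

```python
import numpy as np

def installation_matrix(pitch, roll, yaw):
    """Build R_Camera^ADS from camera installation angles (radians).

    The rotation sequence chosen here, Ry(pitch) @ Rx(roll) @ Rz(yaw),
    is an assumption for illustration; real missions define their own
    order and sign conventions.
    """
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R_pitch = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    R_roll = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    R_yaw = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    return R_pitch @ R_roll @ R_yaw
```

Whatever the convention, the result must be a proper rotation (orthonormal, determinant +1), which is a useful sanity check in an implementation.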

Fig. 2. Sketch map of the look angles of CCD detectors.

2.2 Conventional geometric calibration

According to the established geometric calibration model in Eq. (1), the main procedures of the conventional field-dependent calibration methods are as follows [6,10,20].

  • (1) A narrow image block (e.g., 1000 image lines) in the valid image block of each sub-image is selected, as shown in Fig. 1. Dense GCPs in the selected narrow block of each sub-image are extracted from the reference DOMs and DEMs by dense image matching.
  • (2) The middle CCD of all CCDs is taken as the master CCD, and the other CCDs are taken as the slave CCDs. The internal parameters of all CCDs are initialized according to the designed or laboratory-calibrated CCD values. Then, the initialized internal parameters of the master CCD in Eq. (1) are considered free of errors, and the external parameters of the satellite camera are estimated with the extracted GCPs in the master sub-image.
  • (3) Taking the estimated external parameters as true values, the internal parameters of each CCD are separately estimated with the extracted GCPs in each sub-image.
In the above procedures, the geometric calibration accuracy of each sub-image and the geometric stitching accuracy of the adjacent sub-images are often taken as two geometric indicators to judge whether the in-orbit geometric calibration is successful. Only when both the calibration accuracies of all the sub-images and the stitching accuracies of all the adjacent sub-images reach a satisfactory level can the geometric calibration be considered successful. The calibrated external and internal parameters can then be used in the subsequent ground processing, such as sensor orientation, geometric stitching, and band registration.
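Step (3) amounts, for each CCD separately, to a least-squares fit of the cubic look-angle polynomials against the GCPs. A minimal sketch follows; the "observed" look angles `phi_x`, `phi_y` are hypothetical inputs assumed to have been recovered from the GCP geometry and the estimated external parameters.

```python
import numpy as np

def fit_internal_params(n, phi_x, phi_y):
    """Least-squares fit of one CCD's cubic look-angle model.

    n            : detector numbers of the GCPs (1-D array)
    phi_x, phi_y : 'observed' look angles at those detectors
    Returns (alpha, beta), the coefficient vectors (a0..a3, b0..b3).
    """
    # Design matrix with columns [1, n, n^2, n^3]
    A = np.vander(np.asarray(n, dtype=float), 4, increasing=True)
    alpha, *_ = np.linalg.lstsq(A, np.asarray(phi_x, float), rcond=None)
    beta, *_ = np.linalg.lstsq(A, np.asarray(phi_y, float), rcond=None)
    return alpha, beta
```

With noise-free synthetic observations, the fit recovers the generating coefficients, which is a convenient unit test for an implementation.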

2.3 Presented geometric calibration

In the conventional field-dependent calibration methods, only GCPs are used as control constraints. The geometric calibration accuracy of each sub-image is mainly determined by the GCP extraction accuracy and the internal accuracy of the reference DOMs and DEMs. The geometric stitching accuracy of the adjacent sub-images is mainly determined by the GCP distributions. Specifically, the CCD-detector look angle model in Eq. (1) is mathematically a polynomial model. In theory, GCPs in the narrow image block of a sub-image should be evenly distributed and the left and the right edges of the sub-image should have sufficient GCPs, so that the polynomial model parameters can be effectively constrained and estimated.

Generally, an accurate and robust image matching algorithm can improve the GCP extraction accuracy. Higher-resolution and highly accurate remote sensing images (e.g., aerial images or unmanned aerial vehicle images) can be used to produce the reference DOMs and DEMs, so that the external and internal accuracy of the reference DOMs and DEMs can be improved. In contrast, the reference DOMs and DEMs often cannot be updated in time, because doing so requires substantial manpower, materials, and funding. In such a case, many uncontrollable factors, such as land surface changes, lack of textures, radiometric differences, and GSD differences, have a negative influence on the GCP extraction. It is thereby very difficult to simultaneously guarantee that all sub-images have satisfactory GCPs in the selected narrow image blocks, especially when many CCDs are placed on the focal plane.

When the internal parameters in Eq. (1) are estimated, the internal parameters of a certain CCD are actually used to mathematically model the CCD-detector look angles of all the GCPs in the corresponding sub-image. When the sub-image does not have sufficient evenly distributed GCPs, a satisfactory calibration accuracy may still be achieved. However, the stitching accuracy of the adjacent sub-images is often worse than expected, because the internal parameters are constrained only by the GCPs. In image areas lacking GCPs, such as the left and the right edges, the internal parameters have no effective constraints; that is, the estimated internal parameters are deviated, even though the calibration accuracy may be satisfactory.

In the field-independent calibration methods, we can see from previous studies that tie constraints provided by tie points are the major geometric constraints used to estimate the camera parameters [8–16]. Here, tie constraints mean that each pair of tie points in the overlapped images should spatially intersect at the same ground point. In fact, such constraints can also be introduced in the field-dependent methods, since the adjacent sub-images have dense tie points. With the help of tie constraints, the left and the right edges of each sub-image can be theoretically constrained, and the stitching accuracy of the adjacent sub-images can then be improved. On the basis of this idea, we present a feasible in-orbit geometric calibration method for multi-linear array ORSSs with tie constraints. The main procedures are as follows.

  • (1) A narrow image block (e.g., 1000 image lines) in the valid image block of each sub-image is selected, as shown in Fig. 1. Dense GCPs in the selected narrow block of each sub-image are extracted from the reference DOMs and DEMs by dense image matching.
  • (2) Dense tie points in each overlapped area between the adjacent sub-images are matched by dense image matching, as shown in Fig. 1.
  • (3) The middle CCD of all CCDs is taken as the master CCD, and the other CCDs are taken as the slave CCDs. The internal parameters of all CCDs are initialized according to the designed or laboratory-calibrated CCD values. Then, the initialized internal parameters of the master CCD in Eq. (1) are considered free of errors, and the external parameters of the satellite camera are estimated with the extracted GCPs in the master sub-image.
  • (4) Taking the estimated external parameters as true values, the internal parameters of all CCDs and the ground coordinates of all tie points are simultaneously estimated with both the extracted GCPs in each sub-image and the matched tie points in each overlapped area.
In the above procedures, the external parameters can be estimated as performed in the conventional calibration methods, which can refer to [6]. The major difference lies in the internal parameter estimation. In the conventional methods, the internal parameters of each CCD are separately estimated with only GCPs. In the presented method, tie constraints provided by tie points between the adjacent sub-images are introduced, and the internal parameters of all CCDs are simultaneously estimated with both GCPs and tie points.

It is noted that the matched tie points in each overlapped area are weakly convergent. We cannot consider all three coordinate components (i.e., latitude, longitude, and height) of the tie points as unknowns; otherwise, the weak convergence will result in large estimation deviations. In order to overcome the weak convergence problem, DEM-supported parameter estimation is a preferred method, in which DEMs are used as a height constraint [21,22]. In this study, the main procedures of the DEM-supported internal parameter estimation are as follows.

  • (1) Eq. (1) is transformed into the following Eq. (2).
    $$\left\{ \begin{array}{l} {F_x} = \frac{{{a_{11}}({X - {X_S}} )+ {a_{12}}({Y - {Y_S}} )+ {a_{13}}({Z - {Z_S}} )}}{{{a_{31}}({X - {X_S}} )+ {a_{32}}({Y - {Y_S}} )+ {a_{33}}({Z - {Z_S}} )}} - ({{\alpha_0} + {\alpha_1}n + {\alpha_2}{n^2} + {\alpha_3}{n^3}} )\\ {F_y} = \frac{{{a_{21}}({X - {X_S}} )+ {a_{22}}({Y - {Y_S}} )+ {a_{23}}({Z - {Z_S}} )}}{{{a_{31}}({X - {X_S}} )+ {a_{32}}({Y - {Y_S}} )+ {a_{33}}({Z - {Z_S}} )}} - ({{\beta_0} + {\beta_1}n + {\beta_2}{n^2} + {\beta_3}{n^3}} )\end{array} \right.$$
    where $\left[\begin{array}{lll}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{array}\right]=\left(\mathbf{R}_{\text {Camera }}^{\mathrm{ADS}}\right)^{\mathrm{T}} \mathbf{R}_{\mathrm{J} 2000}^{\mathrm{ADS}} \mathbf{R}_{\mathrm{WGS} 84}^{\mathrm{J} 2000}$; $\left[ {\begin{array}{@{}c@{}} X\\ Y\\ Z \end{array}} \right] = \left[ {\begin{array}{@{}c@{}} {\left( {\frac{a}{{\sqrt {1 - {e^2}{{\sin }^2}B} }} + H} \right)\cos B\cos L}\\ {\left( {\frac{a}{{\sqrt {1 - {e^2}{{\sin }^2}B} }} + H} \right)\cos B\sin L}\\ {\left( {\frac{a}{{\sqrt {1 - {e^2}{{\sin }^2}B} }}({1 - {e^2}} )+ H} \right)\sin B} \end{array}} \right]$.
  • (2) According to Eq. (2), a set of error equations are established with all the extracted GCPs in all the sub-images as follows.
    $${{\textbf V}_g} = {{\textbf C}_g}{\textbf S} - {{\textbf L}_g}$$
    where ${{\textbf V}_g} = \left[ {\begin{array}{*{20}{c}} \vdots \\ {{v_{{F_{x,g,k,i}}}}}\\ {{v_{{F_{y,g,k,i}}}}}\\ \vdots \end{array}} \right]$ is the residual error matrix of GCPs;

    ${{\textbf C}_g} = \left[ {\begin{array}{*{20}{c}} \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \cdots &{\frac{{\partial {F_{x,g,k,i}}}}{{\partial {\alpha_{0,k}}}}}&{\frac{{\partial {F_{x,g,k,i}}}}{{\partial {\alpha_{1,k}}}}}&{\frac{{\partial {F_{x,g,k,i}}}}{{\partial {\alpha_{2,k}}}}}&{\frac{{\partial {F_{x,g,k,i}}}}{{\partial {\alpha_{3,k}}}}}&0&0&0&0& \cdots \\ \cdots &0&0&0&0&{\frac{{\partial {F_{y,g,k,i}}}}{{\partial {\beta_{0,k}}}}}&{\frac{{\partial {F_{y,g,k,i}}}}{{\partial {\beta_{1,k}}}}}&{\frac{{\partial {F_{y,g,k,i}}}}{{\partial {\beta_{2,k}}}}}&{\frac{{\partial {F_{y,g,k,i}}}}{{\partial {\beta_{3,k}}}}}& \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \end{array}} \right]$ is the partial derivative matrix of the unknown internal parameters;

${\textbf S} = {\left[ {\begin{array}{*{20}{c}} \cdots &{d{\alpha_{0,k}}}&{d{\alpha_{1,k}}}&{d{\alpha_{2,k}}}&{d{\alpha_{3,k}}}&{d{\beta_{0,k}}}&{d{\beta_{1,k}}}&{d{\beta_{2,k}}}&{d{\beta_{3,k}}}& \cdots \end{array}} \right]^\textrm{T}}$ is the correction matrix of the unknown internal parameters; and

${{\textbf L}_g} = \left[ {\begin{array}{*{20}{c}} \vdots \\ {\left( {{\alpha_0} + {\alpha_1}n + {\alpha_2}{n^2} + {\alpha_3}{n^3} - \frac{{{a_{11}}({X - {X_S}} )+ {a_{12}}({Y - {Y_S}} )+ {a_{13}}({Z - {Z_S}} )}}{{{a_{31}}({X - {X_S}} )+ {a_{32}}({Y - {Y_S}} )+ {a_{33}}({Z - {Z_S}} )}}} \right)_{g,k,i}^0}\\ {\left( {{\beta_0} + {\beta_1}n + {\beta_2}{n^2} + {\beta_3}{n^3} - \frac{{{a_{21}}({X - {X_S}} )+ {a_{22}}({Y - {Y_S}} )+ {a_{23}}({Z - {Z_S}} )}}{{{a_{31}}({X - {X_S}} )+ {a_{32}}({Y - {Y_S}} )+ {a_{33}}({Z - {Z_S}} )}}} \right)_{g,k,i}^0}\\ \vdots \end{array}} \right]$ is the constant matrix and can be obtained with the estimated external parameters and the initialized internal parameters. Here, the subscript g denotes GCPs; the subscript k = 1, 2, …, m denotes the kth sub-image; and the subscript i = 1, 2, …, ng denotes the ith GCP in the kth sub-image.

  • (3) For each pair of tie points, the two image points in the adjacent sub-images are respectively projected onto the DEMs, and two sets of ground point coordinates (i.e., latitude, longitude, and height) are obtained. The average values of the two sets of ground point coordinates are taken as the initial latitude, longitude, and height of the tie point. Then, in order to overcome the weak convergence problem, only the latitude and the longitude coordinates of each tie point are taken as unknowns. According to Eq. (2), a set of error equations is established with all the matched tie points in all the adjacent sub-images as follows.
    $${{\textbf V}_t} = {{\textbf C}_t}{\textbf S} + {{\textbf D}_t}{\textbf T} - {{\textbf L}_t}$$
where ${{\textbf V}_t}$, ${{\textbf C}_t}$, and ${{\textbf L}_t}$ have the same meanings with ${{\textbf V}_g}$, ${{\textbf C}_g}$, and ${{\textbf L}_g}$ in Eq. (3);

${{\textbf D}_t} = \left[ {\begin{array}{*{20}{c}} \vdots \\ {\begin{array}{*{20}{c}} \ldots &{\frac{{\partial {F_{x,t,k,j}}}}{{\partial {B_j}}}}&{\frac{{\partial {F_{x,t,k,j}}}}{{\partial {L_j}}}}& \ldots \end{array}}\\ {\begin{array}{*{20}{c}} \ldots &{\frac{{\partial {F_{y,t,k,j}}}}{{\partial {B_j}}}}&{\frac{{\partial {F_{y,t,k,j}}}}{{\partial {L_j}}}}& \ldots \end{array}}\\ \vdots \end{array}} \right]$ is the partial derivative matrix of the unknown latitudes and longitudes of tie points; and ${\textbf T} = {\left[ {\begin{array}{*{20}{c}} \cdots &{d{B_j}}&{d{L_j}}& \cdots \end{array}} \right]^{\textbf T}}$ is the correction matrix of the unknown latitudes and longitudes. Here, the subscript t denotes tie points; the subscript j = 1, 2, …, nt denotes the jth tie point in the kth sub-image.

  • (4) According to Eqs. (3) and (4), a set of normal equations are established as follows.
    $$\left\{ \begin{array}{l} {{\textbf N}_{11}}{\textbf S} + {{\textbf N}_{12}}{\textbf T} = {{\textbf M}_1}\\ {{\textbf N}_{21}}{\textbf S} + {{\textbf N}_{22}}{\textbf T} = {{\textbf M}_2} \end{array} \right.$$
where ${{\textbf N}_{11}} = {\textbf C}_g^\textrm{T}{{\textbf C}_g} + {\textbf C}_t^\textrm{T}{{\textbf C}_t}$, ${{\textbf N}_{12}} = {\textbf C}_t^\textrm{T}{{\textbf D}_t}$, ${{\textbf N}_{21}} = {\textbf D}_t^\textrm{T}{{\textbf C}_t}$, ${{\textbf N}_{22}} = {\textbf D}_t^\textrm{T}{{\textbf D}_t}$, ${{\textbf M}_1} = {\textbf C}_g^\textrm{T}{{\textbf L}_g} + {\textbf C}_t^\textrm{T}{{\textbf L}_t}$, and ${{\textbf M}_2} = {\textbf D}_t^\textrm{T}{{\textbf L}_t}$.

  • (5) In Eq. (5), the unknown ${\textbf T}$ is eliminated with the Gauss elimination method, and then the unknown ${\textbf S}$ is estimated according to the least squares adjustment method as follows.
    $${\textbf S} = {({{{\textbf N}_{11}} - {{\textbf N}_{12}}{\textbf N}_{22}^{ - 1}{{\textbf N}_{21}}} )^{ - 1}}({{{\textbf M}_1} - {{\textbf N}_{12}}{\textbf N}_{22}^{ - 1}{{\textbf M}_2}} )$$
  • (6) The internal parameters of all the CCDs are updated with the estimated ${\textbf S}$, and steps (2), (3), (4), and (5) are repeated until the iteration converges. Generally, when the estimated corrections (dα0, dβ0) of all the CCDs are smaller than 0.1 pixel, the iteration can be considered converged.
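Steps (4) and (5) above can be sketched as a generic reduced-normal-equation solve. The matrices below are small stand-ins for the real blocks built from GCP and tie-point observations, and the back-substitution for ${\textbf T}$ is an added illustration of how the eliminated tie-point corrections would be recovered.

```python
import numpy as np

def solve_reduced(N11, N12, N21, N22, M1, M2):
    """Solve the block normal equations of Eq. (5) for S via the
    Schur complement (Eq. (6)), then back-substitute for T.

    Eliminating T from  N11 S + N12 T = M1,  N21 S + N22 T = M2
    gives (N11 - N12 N22^-1 N21) S = M1 - N12 N22^-1 M2.
    """
    N22_inv = np.linalg.inv(N22)
    S = np.linalg.solve(N11 - N12 @ N22_inv @ N21,
                        M1 - N12 @ N22_inv @ M2)
    T = N22_inv @ (M2 - N21 @ S)  # tie-point corrections, if needed
    return S, T
```

On any well-conditioned block system this reproduces the solution of the full, unreduced system, which is the property the calibration relies on.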

In the presented calibration method, tie points are matched between the adjacent sub-images. Both the radiometric and geometric differences between the adjacent sub-images are often very small, so dense and highly precise tie points can be easily matched. Moreover, the matched tie points can be distributed across the whole overlapped area between the adjacent sub-images rather than only in the selected narrow image block, as shown in Fig. 1, because satellite position and attitude errors have almost the same influences on a pair of tie points when establishing Eq. (2). With the tie constraints provided by tie points, the left and the right edges of all CCDs can be effectively constrained, and all CCDs can be logically connected into a single complete CCD. The internal parameters of all CCDs can then be simultaneously and precisely estimated. It is expected that the estimation deviations of the internal parameters caused by the lack of GCPs and uneven GCP distributions can be effectively eliminated, and the geometric stitching accuracy of the adjacent sub-images can be accordingly improved.

3. Results and discussion

3.1 Experimental datasets

In this study, three GaoFen-6 panchromatic images and two ZiYuan3-02 nadir images were tested. The general characteristics of the tested images are listed in Table 1. In order to demonstrate the feasibility and effectiveness of the presented calibration method, two sets of reference DOMs and DEMs were used to evaluate the geometric calibration accuracies and the sensor orientation accuracies of images 1, 4, and 5, as listed in Table 1.

Table 1. General characteristics of the tested images.

3.2 Geometric calibration accuracy analysis

In this section, a GaoFen-6 image (i.e., image 1) and a ZiYuan3-02 image (i.e., image 4) in Table 1 were tested. In order to comparatively evaluate the feasibility of the presented calibration method, we designed two experiments as follows.

  • (1) Experiment E1: The conventional geometric calibration method was performed, as described in section 2.2; that is, the internal parameters of each CCD were separately estimated with only GCPs in the corresponding sub-image.
  • (2) Experiment E2: The presented geometric calibration method was performed, as described in section 2.3; that is, the internal parameters of all CCDs were simultaneously estimated with both GCPs and tie points in all sub-images.
In the tested GaoFen-6 and ZiYuan3-02 sub-images, we selected two narrow image blocks in the valid overlapped area between the tested sub-images and the reference DOMs and DEMs. The distributions of the extracted dense GCPs and the matched dense tie points in the tested sub-images are shown in Figs. 3 and 4, respectively. The red points denote GCPs, and the yellow points denote tie points. It is noted that the overlapped area between the adjacent sub-images of the GaoFen-6 and ZiYuan3-02 images is very small in the column direction; the tie points at the left and right edges of each sub-image in Figs. 3 and 4 are therefore not clearly visible. In both selected narrow image blocks, experiments E1 and E2 were both performed. After geometric calibration, the ground point of each GCP was projected onto the corresponding sub-image with the estimated external and internal camera parameters, and a projected image point was obtained. The root mean square errors (RMSEs) of the image-space coordinate residuals between the projected image points and the corresponding points in each sub-image were calculated and taken as the geometric calibration accuracy, as listed in Tables 2 and 3.

Fig. 3. GCP and tie point distributions in GaoFen-6 (a) sub-image 1, (b) sub-image 2, (c) sub-image 3, (d) sub-image 4, (e) sub-image 5, (f) sub-image 6, (g) sub-image 7, and (h) sub-image 8.

Fig. 4. GCP and tie point distributions in ZiYuan3-02 (a) sub-image 1, (b) sub-image 2, and (c) sub-image 3.

Table 2. Geometric calibration accuracies of the GaoFen-6 sub-images.

Table 3. Geometric calibration accuracies of the ZiYuan3-02 sub-images.

In experiment E1, the geometric calibration accuracies of all the GaoFen-6 and ZiYuan3-02 sub-images achieved by the conventional calibration method were better than 1.0 pixel, as listed in Tables 2 and 3. On the whole, the conventional calibration method is a mature method. The geometric calibration model and the calibration procedures used in the conventional method have already been proved feasible and effective by many ORSSs. As long as sufficient highly precise and evenly distributed GCPs are available in all sub-images, accurate external and internal parameters of all CCDs can often be estimated. The achieved calibration accuracy is mainly determined by the GCP extraction accuracy and the internal accuracy of the reference DOMs and DEMs. Here, the calibration errors of the GaoFen-6 and ZiYuan3-02 sub-images caused by the DOM errors were approximately 0.3 pixel, and the calibration errors caused by the DEM errors could be ignored. Moreover, land surface changes, lack of textures, radiometric differences, and GSD differences had a large negative influence on the GCP extraction. Overall, the geometric calibration accuracies achieved in experiment E1 were consistent with expectations.

In fact, when the internal parameters are estimated with GCPs in experiment E1, Eq. (1) shows that the internal parameters mathematically model the CCD-detector look angles of all the GCPs. The achieved calibration accuracies therefore mainly reflect the GCPs' accuracies, not their distributions. Even when an image area in the selected narrow image block lacks GCPs, a satisfactory calibration accuracy can often be obtained. However, the estimated internal parameters are then likely to be deviated, because they cannot be effectively constrained in the GCP-lacking area. Sub-image 3 of the GaoFen-6 image is an example: as shown in Fig. 3(c), the left edge of sub-image 3 had no GCPs in narrow image block 2, yet its calibration accuracy still reached 0.805 pixel. In such a case, the geometric stitching accuracy, in addition to the geometric calibration accuracy, should be used to judge whether the estimated internal parameters are deviated.
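The point about GCP-lacking areas can be illustrated with a least-squares cubic fit of the look-angle polynomial of Eq. (1). The detector indices and angles below are fabricated; the sketch only shows that the fitted polynomial is constrained where observations exist and must be extrapolated elsewhere.

```python
import numpy as np

# Fabricated detector indices n and per-GCP look angles (radians); in the
# paper these observations come from GCPs through Eq. (1)
n = np.array([0.0, 1000.0, 2000.0, 3000.0, 4000.0])
alpha_obs = 1e-4 * np.array([1.0, 1.2, 1.5, 1.9, 2.4])

# Fit alpha(n) = a0 + a1*n + a2*n^2 + a3*n^3 by least squares
coeffs = np.polyfit(n, alpha_obs, deg=3)   # highest-order coefficient first
alpha_fit = np.polyval(coeffs, n)
residuals = alpha_obs - alpha_fit          # tiny where GCPs exist

# Beyond the last observation (e.g. n = 6000, a GCP-lacking edge) the cubic
# is unconstrained: the extrapolated angle may deviate even though the fit
# residuals above are essentially zero.
alpha_extrap = np.polyval(coeffs, 6000.0)
```

This is why a small calibration RMSE alone cannot reveal a deviated look-angle model at an edge without GCPs.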

In experiment E2, both GCPs and tie points in all the sub-images were used to simultaneously estimate the internal parameters of all the CCDs. The calibration accuracies achieved in experiment E2 were almost the same as those achieved in experiment E1. This demonstrates that the presented method takes full advantage of the absolute constraints provided by GCPs, just like the conventional calibration method, and that the introduced tie constraints do not negatively affect the absolute constraints. The achieved calibration accuracies were thereby better than 1.0 pixel. The effects of the tie constraints are discussed in the next section.

3.3 Geometric stitching accuracy analysis

In this section, the GaoFen-6 image (i.e., image 1) and the ZiYuan3-02 image (i.e., image 4) used in section 3.2 continued to be tested. The geometric stitching accuracies of the adjacent sub-images achieved with the external and internal parameters estimated in experiments E1 and E2 were comparatively analyzed. Specifically, each tie point in the left sub-image of a pair of adjacent sub-images was first projected onto the ground, yielding a ground point. The ground point was then projected onto the right sub-image, yielding a projected image point. Finally, the RMSEs of the image-space coordinate residuals between the projected image points and the corresponding points were calculated and taken as the geometric stitching accuracy, as listed in Tables 4 and 5.
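The projection chain just described can be sketched as below. The `AffineCamera` class is a stand-in that replaces the rigorous sensor model of Eq. (2) purely for illustration; only the project-to-ground-and-back bookkeeping mirrors the text.

```python
import numpy as np

class AffineCamera:
    """Stand-in sensor model: an invertible affine map between image (pixel)
    and ground coordinates. A real implementation would use the rigorous
    model of Eq. (2) with a DEM; this class only illustrates the interface."""
    def __init__(self, A, t):
        self.A = np.asarray(A, float)
        self.t = np.asarray(t, float)
    def image_to_ground(self, xy):
        return xy @ self.A.T + self.t
    def ground_to_image(self, XY):
        return (XY - self.t) @ np.linalg.inv(self.A).T

def stitching_rmse(ties_left, ties_right, cam_left, cam_right):
    """Project left-image tie points to the ground, re-project them into the
    right sub-image, and return the RMSE against the matched positions."""
    ground = cam_left.image_to_ground(ties_left)
    projected = cam_right.ground_to_image(ground)
    res = projected - ties_right
    return float(np.sqrt(np.mean(np.sum(res ** 2, axis=1))))
```

Inconsistencies between the two CCDs' estimated parameters show up directly as a nonzero stitching RMSE, which is why this metric exposes deviations that the calibration RMSE hides.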


Table 4. Geometric stitching accuracies of the GaoFen-6 sub-images.


Table 5. Geometric stitching accuracies of the ZiYuan3-02 sub-images.

In experiment E1, the geometric stitching accuracies of the adjacent sub-images achieved by the conventional calibration method ranged from approximately 0.1 pixel to 1.1 pixels. This demonstrates that the estimated internal parameters of different CCDs were deviated to different degrees, although the calibration accuracies of all the sub-images reached a satisfactory level. In Table 4, a representative example is the stitching accuracy of GaoFen-6 sub-images 2 and 3 in narrow block 2. The left edge of sub-image 3 had no GCPs, as shown in Fig. 3(c); the estimated internal parameters of CCD 3 were therefore not effectively constrained at that edge and were likely to be deviated. As expected, the stitching accuracy was worse than 0.9 pixel. Another example is the stitching accuracy of GaoFen-6 sub-images 6 and 7 in narrow block 1. The left and right edges of both sub-images 6 and 7 had sufficient GCPs, but the stitching accuracy was still worse than 1.0 pixel, and the residual errors of the tie points showed an obvious systematic characteristic, as shown in Fig. 5. This result is likely associated with the GCP distributions: in practice, we cannot guarantee that the extracted GCPs are exactly evenly distributed, and it is very difficult to theoretically define a standard for judging whether they are. Therefore, in practice, we often select several narrow image blocks within the valid image block shown in Fig. 1, so that an optimal calibration result (i.e., optimal calibration accuracy and optimal stitching accuracy) can be obtained. Such a selection strategy is often feasible and effective when an ORSS has only a few linear-array CCDs. As the number of CCDs increases, however, this strategy is expected to become impractical, because it is difficult to guarantee that the calibration accuracies of all the sub-images and the stitching accuracies of all the adjacent sub-images can be simultaneously satisfied in the same selected image block.


Fig. 5. Residual error distributions of tie points between GaoFen-6 sub-images 6 and 7 in (a) x and (b) y directions in experiment E1.


In experiment E2, tie constraints provided by tie points were introduced in the presented calibration method. With the help of tie points between the adjacent sub-images, all the CCDs could be logically connected into a complete CCD; essentially, the look angle models of all the CCDs were connected into a piecewise polynomial model. Hence, the estimation deviations in the internal parameters could be effectively eliminated, and the geometric stitching accuracies of all the adjacent sub-images were improved to better than 0.2 pixel. Specifically, the stitching accuracy of GaoFen-6 sub-images 2 and 3 in narrow block 2 was improved from 0.911 pixel to 0.082 pixel, and that of GaoFen-6 sub-images 6 and 7 in narrow block 1 from 1.055 pixels to 0.105 pixel. This demonstrates that the introduced tie constraints can effectively constrain the left and right edges of a sub-image, even when the selected narrow image block lacks GCPs or the extracted GCPs are not evenly distributed. Moreover, the introduced tie constraints do not negatively affect the geometric calibration accuracies, as listed in Tables 2 and 3. Optimal calibration accuracies and optimal stitching accuracies of the GaoFen-6 and ZiYuan3-02 sub-images were thereby achieved in both selected narrow image blocks.

3.4 Orientation accuracy analysis of stitched images

In this section, in order to comprehensively demonstrate the feasibility of the presented method, the GaoFen-6 image (i.e., image 1) and the ZiYuan3-02 image (i.e., image 4) used in section 3.2 continued to be tested. First, the external and internal parameters estimated with narrow image block 1 were used to geometrically stitch all the sub-images, as performed in [12]. Then, dense GCPs in a narrow image block of the stitched image were extracted from the reference DOMs and DEMs; the GCPs extracted in the stitched GaoFen-6 image are shown as an example in Fig. 6. Finally, the bias compensation based on a rational function model (RFM) in [23] was performed, and the sensor orientation accuracy was obtained. For convenient comparison, the orientation accuracies achieved with the parameters estimated in experiments E1 and E2 are listed in Table 6, and the residual error distributions of GCPs in the stitched GaoFen-6 image are shown as an example in Figs. 7 and 8.


Fig. 6. GCP distributions in the stitched GaoFen-6 image in (a) experiment E1 and (b) experiment E2.


Fig. 7. Residual error distributions of GCPs in the stitched GaoFen-6 image in (a) x and (b) y directions in experiment E1.


Fig. 8. Residual error distributions of GCPs in the stitched GaoFen-6 image in (a) x and (b) y directions in experiment E2.


Table 6. Sensor orientation accuracies of the stitched GaoFen-6 and ZiYuan3-02 images.

From the experimental results in Table 6, the sensor orientation accuracies of the stitched GaoFen-6 and ZiYuan3-02 images achieved in experiment E1 were almost the same as those in experiment E2. It seems that the geometric stitching errors of the adjacent sub-images did not negatively affect the sensor orientation accuracies of the stitched images; even in the residual error distributions of GCPs in Figs. 7 and 8, the stitching errors are difficult to discern. The major reason should be that the GCP errors and the stitching errors were mixed together. Owing to the GCP extraction errors, the DOM errors, and the DEM errors, the extracted GCPs inevitably contained random errors, whose magnitude was close to or even larger than that of the stitching errors, as supported by the results in Tables 2 to 5. In such a case, the systematic stitching errors could not be separated from the random GCP errors.

In fact, the geometric stitching errors were undoubtedly propagated into the stitched images. Taking the stitched GaoFen-6 image as an example, Fig. 9 shows that the stitched image did contain stitching errors. To evaluate these errors clearly, a narrow image block in the stitched image in experiment E1 was first selected. Then, dense tie points between the stitched image in experiment E1 and that in experiment E2 were matched. Finally, relative orientation between the two stitched images was performed, and the relative orientation error distributions of the tie points are shown in Fig. 10. More specifically, each tie point in the stitched image in experiment E1 was first projected onto the ground, yielding an object point. The object point was then projected onto the stitched image in experiment E2, yielding a projected image point. An affine transformation model was used to describe the geometric relationship between the projected image points and the corresponding image points, and the errors remaining after the affine transformation compensation were taken as the relative orientation errors.
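The affine compensation step can be sketched as follows; the function name and the synthetic point sets in the test are illustrative, not the authors' implementation.

```python
import numpy as np

def affine_residuals(src, dst):
    """Fit dst ≈ [src 1] @ P (a 2-D affine transform) by least squares and
    return the residuals remaining after the compensation, i.e. the
    relative orientation errors described in the text."""
    G = np.hstack([src, np.ones((src.shape[0], 1))])   # design matrix [x y 1]
    P, *_ = np.linalg.lstsq(G, dst, rcond=None)        # 3x2 affine parameters
    return dst - G @ P
```

If `dst` is an exact affine image of `src`, the residuals vanish; any systematic stitching discontinuity survives the fit and appears in the residual pattern, as in Fig. 10.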


Fig. 9. Partial enlarged map of the stitched GaoFen-6 images in (a) experiment E1 and (b) experiment E2.


Fig. 10. Residual error distributions of tie points in (a) x and (b) y directions in the stitched GaoFen-6 image.


In Fig. 10, the relative orientation errors in the stitched GaoFen-6 images are obviously discontinuous. These orientation errors were theoretically caused by two factors: the stitching errors in the stitched image in experiment E1, and those in the stitched image in experiment E2. Comparing the relative orientation errors in Fig. 10 with the stitching accuracies in narrow block 1 in Table 4, the orientation errors in Fig. 10 were consistent with the stitching accuracies in experiment E1. We can thereby conclude that the vast majority of the relative orientation errors were caused by the stitching errors in experiment E1 rather than those in experiment E2. This demonstrates that the geometric stitching errors of the adjacent sub-images indeed propagated into the stitched images, even though these errors were difficult to detect directly. The propagated stitching errors undoubtedly decrease the geometric quality of the stitched images. With the presented method, the geometric stitching errors of the adjacent sub-images could be effectively eliminated, all the sub-images could be seamlessly stitched into a complete image, and the geometric quality of the stitched images could be improved accordingly.

3.5 Performance analysis of estimated camera parameters

The most important objective of in-orbit geometric calibration is to obtain accurate camera parameters. The estimated external and internal camera parameters should be able to improve the geometric quality of other images. On this basis, two GaoFen-6 images (i.e., images 2 and 3) and one ZiYuan3-02 image (i.e., image 5) in Table 1 were used to analyze the performance of the camera parameters estimated with narrow image block 1 in section 3.2. For convenient comparison, the experimental results achieved with the camera parameters estimated in experiments E1 and E2 are denoted as scenarios S1 and S2, respectively.

In this section, the performance of the estimated camera parameters was evaluated in two aspects: geometric stitching and sensor orientation. For the former, the estimated camera parameters were used to evaluate the geometric stitching accuracies of the tested GaoFen-6 and ZiYuan3-02 sub-images, as listed in Tables 7 and 8. The geometric stitching accuracies in Tables 7 and 8 were almost the same as those in Tables 4 and 5. This demonstrates that the camera parameters estimated in in-orbit geometric calibration directly affect the stitching accuracies of other sub-images. In experiment E1, the camera parameters estimated with the conventional calibration method were deviated, so the stitching accuracies achieved in scenario S1 were worse. In experiment E2, the deviations of the estimated camera parameters were effectively eliminated by the presented method, so the stitching accuracies achieved in scenario S2 were accordingly improved.


Table 7. Geometric stitching accuracies of the GaoFen-6 sub-images


Table 8. Geometric stitching accuracies of the ZiYuan3-02 sub-images

For the sensor orientation evaluation, the estimated camera parameters were first used to geometrically stitch all the sub-images. Dense GCPs in a narrow image block of the stitched image were then extracted from the reference DOMs and DEMs. Finally, the RFM-based sensor orientation was performed. Unfortunately, reference DOMs and DEMs covering the tested GaoFen-6 images (i.e., images 2 and 3) were unavailable, so only the ZiYuan3-02 image (i.e., image 5) was tested; the achieved sensor orientation accuracies are listed in Table 9. Similarly, in order to clearly demonstrate the negative influences of the geometric stitching errors, relative orientation between the two stitched images in scenarios S1 and S2 was performed, and the residual error distributions of the tie points are shown in Fig. 11. As shown in Table 9, the orientation accuracies achieved in scenario S1 were almost the same as those in scenario S2, which is consistent with the experimental results in Table 6. The results in Fig. 11 show that the geometric quality of the stitched images was likewise affected by the geometric stitching errors, and that the quality decrease was mainly caused by the stitching errors in scenario S1.


Fig. 11. Residual error distributions of tie points in (a) x and (b) y directions in the stitched ZiYuan3-02 image.


Table 9. Sensor orientation accuracies of the stitched ZiYuan3-02 images

From the above results, we can conclude that in-orbit geometric calibration plays a very important role in the ground processing of ORSSIs. Deviations in the camera parameters estimated during in-orbit geometric calibration act as systematic errors and propagate into other images, whose geometric quality then cannot reach an optimal level. Therefore, in order to improve the geometric quality of ORSSIs, both the geometric calibration accuracy and the geometric stitching accuracy in in-orbit geometric calibration should be improved as much as possible.

4. Conclusion

In in-orbit geometric calibration, field-dependent geometric calibration methods are currently popular and widely used in practice. In such methods, sufficient, highly precise, and evenly distributed GCPs in all sub-images are usually necessary to estimate accurate camera parameters of all CCDs. However, owing to uncontrollable factors such as land surface changes, lack of texture, radiometric differences, and GSD differences, sufficient evenly distributed GCPs in the valid image blocks cannot always be obtained for all sub-images, especially as more and more CCDs are placed on the camera focal plane. In this study, a feasible in-orbit geometric calibration method for multi-linear array ORSSs with tie constraints is presented. In the presented method, both GCPs in all sub-images and tie points between all adjacent sub-images are employed: GCPs provide absolute constraints, and tie points provide tie constraints. With the help of the tie constraints, the left and right edges of all sub-images can be effectively constrained, and the internal camera parameters of all CCDs can be precisely and simultaneously estimated, even if sufficient evenly distributed GCPs in some sub-images are unavailable.

The presented calibration method was tested on three GaoFen-6 images and two ZiYuan3-02 images. Compared with the conventional method, the experimental results showed that the presented method could effectively constrain the left and right edges of the CCDs when estimating the internal camera parameters. The deviations of the estimated internal parameters can then be eliminated, the geometric stitching accuracy of the adjacent sub-images can be effectively improved, and the geometric quality of the stitched images can be improved accordingly. Meanwhile, the introduced tie constraints have no negative influence on the absolute constraints provided by GCPs; that is, the geometric calibration accuracy achieved by the presented method remains the same as that achieved by the conventional method. The experimental results thus demonstrated the feasibility and effectiveness of the presented calibration method.

In this study, the tested GaoFen-6 and ZiYuan3-02 cameras are linearly designed. With the tested GaoFen-6 and ZiYuan3-02 images, the presented calibration method achieved satisfactory experimental results. Theoretically, the presented method is also suitable for noncollinear satellite cameras. Of course, more ORSSIs collected by noncollinear cameras are needed to evaluate the feasibility and effectiveness of the presented method in the future.

Funding

Scientific Research Foundation of Hubei University of Technology (BSQD2020055); National Natural Science Foundation of China (61801331); Northwest Engineering Corporation Limited Major Science and Technology Projects (XBY-ZDKJ-2020-08).

Acknowledgments

The authors would like to thank the anonymous reviewers and members of the editorial team for their comments and contribution.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. R. Gachet, “SPOT5 in-flight commission: inner orientation of HRG and HRS instruments,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 35(B1), 535–539 (2004).

2. D. Mulawa, “On-orbit geometric calibration of the OrbView-3 high resolution imaging satellite,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 35(B1), 1–6 (2004).

3. P. V. Radhadevi and S. S. Solanki, “In-flight geometric calibration of different cameras of IRS-P6 using a physical sensor model,” Photogramm. Rec. 23(121), 69–89 (2008). [CrossRef]  

4. P. V. Radhadevi, R. Müller, P. d‘Angelo, and P. Reinartz, “In-flight geometric calibration and orientation of ALOS/PRISM imagery with a generic sensor model,” Photogramm. Eng. Remote Sens. 77(5), 531–538 (2011). [CrossRef]  

5. S. Lee and D. Shin, “On-orbit camera misalignment estimation framework and its application to earth observation satellite,” Remote Sens. 7(3), 3320–3346 (2015). [CrossRef]  

6. M. Wang, B. Yang, F. Hu, and X. Zang, “On-orbit geometric calibration model and its applications for high-resolution optical satellite imagery,” Remote Sens. 6(5), 4391–4408 (2014). [CrossRef]  

7. J. Cao, X. Yuan, and J. Gong, “In-orbit geometric calibration and validation of ZY-3 three-line cameras based on CCD-detector look angles,” Photogramm. Rec. 30(150), 211–226 (2015). [CrossRef]  

8. Y. Cheng, M. Wang, S. Jin, L. He, and Y. Tian, “New on-orbit geometric interior parameters self-calibration approach based on three-view stereoscopic images from high-resolution multi-TDI-CCD optical satellites,” Opt. Express 26(6), 7475–7493 (2018). [CrossRef]  

9. Y. Cheng, S. Jin, M. Wang, Y. Zhu, and Z. Dong, “A new image mosaicking approach for the multiple camera system of the optical remote sensing satellite GaoFen1,” Remote Sens. Lett. 8(11), 1042–1051 (2017). [CrossRef]  

10. J. Cao, F. Wang, Y. Zhou, and Z. Ye, “In-orbit geometric calibration of HaiYang-1C coastal zone imager with multiple fields,” Opt. Express 29(12), 18950–18965 (2021). [CrossRef]  

11. M. Wang, Y. Cheng, B. Guo, and S. Jin, “Parameters determination and sensor correction method based on virtual CMOS with distortion for the GaoFen6 WFV camera,” ISPRS J. Photogramm. Remote Sens. 156, 51–62 (2019). [CrossRef]  

12. J. Cao, Z. Zhang, S. Jin, and X. Chang, “Geometric stitching of HaiYang-1C ultra violet imager with a distorted virtual camera,” Opt. Express 28(9), 14109–14126 (2020). [CrossRef]  

13. D. Greslou, F. de Lussy, J. M. Delvit, C. Dechoz, and V. Amberg, “Pleiades-HR innovative techniques for geometric image quality commissioning,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XXXIX-B1(B1), 543–547 (2012). [CrossRef]  

14. L. Lebègue, D. Greslou, F. deLussy, S. Fourest, G. Blanchet, C. Latry, S. Lachérade, J. M. Delvit, P. Kubik, C. Déchoz, V. Amberg, and F. Porez-Nadal, “Pleiades-HR image quality commissioning,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XXXIX-B1(B1), 561–566 (2012). [CrossRef]  

15. Y. Pi, B. Yang, M. Wang, X. Li, Y. Cheng, and W. Tang, “On-orbit geometric calibration using a cross-image pair for the linear sensor aboard the agile optical satellite,” IEEE Geosci. Remote Sens. Lett. 14(7), 1176–1180 (2017). [CrossRef]  

16. Y. Pi, B. Yang, X. Li, and M. Wang, “Study of full-link on-orbit geometric calibration using multi-attitude imaging with linear agile optical satellite,” Opt. Express 27(2), 980–998 (2019). [CrossRef]  

17. B. Yang, Y. Pi, X. Li, and Y. Yang, “Integrated geometric self-calibration of stereo cameras onboard the ZiYuan-3 satellite,” ISPRS J. Photogramm. Remote Sens. 162, 173–183 (2020). [CrossRef]  

18. L. Mao, “Jilin-1 KF 01 satellite,” Satellite Application 2, 1 (2020).

19. S. Liu, C. S. Fraser, C. Zhang, M. Ravanbakhsh, and X. Tong, “Georeferencing performance of THEOS satellite imagery,” Photogramm. Rec. 26(134), 250–262 (2011). [CrossRef]  

20. M. Wang, B. Guo, X. Long, L. Xue, Y. Cheng, S. Jin, and X. Zhou, “On-orbit geometric calibration and accuracy verification of GF-6 WFV camera,” Acta Geod. Cartogr. Sin. 49(2), 171–180 (2020). [CrossRef]  

21. T. Teo, L. Chen, C. Liu, Y. Tung, and W. Wu, “DEM-aided block adjustment for satellite images with weak convergence geometry,” IEEE Trans. Geosci. Remote Sens. 48(4), 1907–1918 (2010). [CrossRef]  

22. Y. Zhang, Y. Wan, X. Huang, and X. Ling, “DEM-assisted RFM block adjustment of pushbroom nadir viewing HRS imagery,” IEEE Trans. Geosci. Remote Sens. 54(2), 1025–1034 (2016). [CrossRef]  

23. C. S. Fraser and H. B. Hanley, “Bias-compensated RPCs for sensor orientation of high-resolution satellite imagery,” Photogramm. Eng. Remote Sens. 71(8), 909–915 (2005). [CrossRef]  




Figures (11)

Fig. 1. Sketch map of the field-dependent calibration methods.
Fig. 2. Sketch map of the look angles of CCD detectors.
Fig. 3. GCP and tie point distributions in GaoFen-6 (a) sub-image 1, (b) sub-image 2, (c) sub-image 3, (d) sub-image 4, (e) sub-image 5, (f) sub-image 6, (g) sub-image 7, and (h) sub-image 8.
Fig. 4. GCP and tie point distributions in ZiYuan3-02 (a) sub-image 1, (b) sub-image 2, and (c) sub-image 3.
Fig. 5. Residual error distributions of tie points between GaoFen-6 sub-images 6 and 7 in (a) x and (b) y directions in experiment E1.
Fig. 6. GCP distributions in the stitched GaoFen-6 image in (a) experiment E1 and (b) experiment E2.
Fig. 7. Residual error distributions of GCPs in the stitched GaoFen-6 image in (a) x and (b) y directions in experiment E1.
Fig. 8. Residual error distributions of GCPs in the stitched GaoFen-6 image in (a) x and (b) y directions in experiment E2.
Fig. 9. Partial enlarged map of the stitched GaoFen-6 images in (a) experiment E1 and (b) experiment E2.
Fig. 10. Residual error distributions of tie points in (a) x and (b) y directions in the stitched GaoFen-6 image.
Fig. 11. Residual error distributions of tie points in (a) x and (b) y directions in the stitched ZiYuan3-02 image.

Tables (9)

Table 1. General characteristics of the tested images.
Table 2. Geometric calibration accuracies of the GaoFen-6 sub-images.
Table 3. Geometric calibration accuracies of the ZiYuan3-02 sub-images.
Table 4. Geometric stitching accuracies of the GaoFen-6 sub-images.
Table 5. Geometric stitching accuracies of the ZiYuan3-02 sub-images.
Table 6. Sensor orientation accuracies of the stitched GaoFen-6 and ZiYuan3-02 images.
Table 7. Geometric stitching accuracies of the GaoFen-6 sub-images.
Table 8. Geometric stitching accuracies of the ZiYuan3-02 sub-images.
Table 9. Sensor orientation accuracies of the stitched ZiYuan3-02 images.

Equations (6)

$$\begin{bmatrix} \alpha_0+\alpha_1 n+\alpha_2 n^2+\alpha_3 n^3 \\ \beta_0+\beta_1 n+\beta_2 n^2+\beta_3 n^3 \\ 1 \end{bmatrix} = \lambda \left(\mathbf{R}_{\mathrm{Camera}}^{\mathrm{ADS}}\right)^{\mathrm{T}} \mathbf{R}_{\mathrm{J2000}}^{\mathrm{ADS}} \mathbf{R}_{\mathrm{WGS84}}^{\mathrm{J2000}} \left( \begin{bmatrix} \left(\dfrac{a}{\sqrt{1-e^{2}\sin^{2}B}}+H\right)\cos B\cos L \\ \left(\dfrac{a}{\sqrt{1-e^{2}\sin^{2}B}}+H\right)\cos B\sin L \\ \left(\dfrac{a\left(1-e^{2}\right)}{\sqrt{1-e^{2}\sin^{2}B}}+H\right)\sin B \end{bmatrix} - \begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix} \right) \tag{1}$$

$$\begin{cases} F_x = \dfrac{a_{11}(X-X_S)+a_{12}(Y-Y_S)+a_{13}(Z-Z_S)}{a_{31}(X-X_S)+a_{32}(Y-Y_S)+a_{33}(Z-Z_S)} - \left(\alpha_0+\alpha_1 n+\alpha_2 n^2+\alpha_3 n^3\right) \\[2ex] F_y = \dfrac{a_{21}(X-X_S)+a_{22}(Y-Y_S)+a_{23}(Z-Z_S)}{a_{31}(X-X_S)+a_{32}(Y-Y_S)+a_{33}(Z-Z_S)} - \left(\beta_0+\beta_1 n+\beta_2 n^2+\beta_3 n^3\right) \end{cases} \tag{2}$$

$$V_g = C_g S - L_g \tag{3}$$

$$V_t = C_t S + D_t T - L_t \tag{4}$$

$$\begin{cases} N_{11}S + N_{12}T = M_1 \\ N_{21}S + N_{22}T = M_2 \end{cases} \tag{5}$$

$$S = \left(N_{11}-N_{12}N_{22}^{-1}N_{21}\right)^{-1}\left(M_1 - N_{12}N_{22}^{-1}M_2\right) \tag{6}$$
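A minimal sketch of the elimination leading from Eq. (5) to Eq. (6): the tie-point unknowns T are reduced out of the block normal equations, the calibration parameter corrections S are solved first, and T is recovered by back-substitution. The block matrices in the test are random stand-ins, not values from the paper.

```python
import numpy as np

def solve_reduced(N11, N12, N21, N22, M1, M2):
    """Solve the block normal equations of Eq. (5) via the reduced system of
    Eq. (6): S = (N11 - N12 N22^-1 N21)^-1 (M1 - N12 N22^-1 M2)."""
    N22_inv = np.linalg.inv(N22)
    S = np.linalg.solve(N11 - N12 @ N22_inv @ N21,   # reduced normal matrix
                        M1 - N12 @ N22_inv @ M2)     # reduced right-hand side
    T = N22_inv @ (M2 - N21 @ S)   # back-substitute for the tie-point unknowns
    return S, T
```

Reducing T first keeps the system to be inverted at the size of the camera-parameter block, which is much smaller than the full system when many tie points are used.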