Optica Publishing Group

Sub-pixel target fine spatial feature extraction method based on aperture coding and micro-scanning imaging mechanism

Open Access

Abstract

The small imaging size of targets over long distances results in the loss of geometry and spatial features. Current methods are subject to sampling limitations and cannot accurately capture the spatial features of sub-pixel targets. This paper proposes a method to accurately locate and extract the fine spatial features of sub-pixel targets through aperture coding and micro-scanning imaging. First, the formation mechanism of imaging features for sub-pixel targets is analyzed. Second, the optical aperture is anisotropically coded in different directions to modulate the spreading spots of the target. The primary spreading direction and the center of the anisotropic spreading spots are extracted. The contour and the location of the target are determined from the spreading length and the intersections of the primary spreading directions. Then, the target is sampled by different detector units through various micro-scanning offsets. The pixel units containing different sub-pixel components of the target after offset are determined based on the location results. The fine spatial distribution of the sub-pixel target is reconstructed based on the intensity variations in the pixel units containing the target. Finally, the accuracy of the sub-pixel target fine spatial feature extraction method is validated. The results show a sub-pixel localization error of less than 0.02 and an effective improvement of the sub-pixel target spatial resolution. This paper provides significant potential for improving the ability to capture spatial features of targets over long distances.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Low recognition rates and high false alarm rates are bottlenecks that limit the performance of long-range infrared search and tracking [1]. In long-range imaging, targets typically occupy only a few pixels, making it difficult to capture detailed features such as shape, size, and texture [2]. Multi-spectral imaging is commonly used for feature extraction of small targets [3], but it suffers from the difficulty of spectral unmixing [4] and the lack of prior data [5]. Moreover, the spectral characteristics of small targets change dynamically in time and space.

The acquisition of fine spatial features of weak and small targets has significant practical value for accurate long-range recognition and tracking [6]. Methods such as spatial or frequency domain filtering, multi-frame correlation, interpolation, and fitting techniques [7,8] provide very limited spatial feature information about the target, with poor adaptability [9] and significant uncertainty [10]. Existing methods rely on pixel-level features and struggle to overcome the spatial limitation imposed by the sampling size. When the imaging size of the target is smaller than the spatial resolution of the imaging system, reconstructing sub-pixel spatial features can effectively capture the fine feature distribution of the target.

Deep learning methods have achieved significant success in detecting and recognizing small and weak targets, but they face challenges such as high computational cost, time consumption, inherent feature scarcity, and low signal-to-noise ratio [11]. Therefore, detailed spatial feature extraction is crucial for long-distance target recognition [12]. Su [13] and Mu [14] adopted variations of multi-frame spatial-temporal features to suppress clutter and detect small targets in complex backgrounds, but failed to capture detailed target features, which makes it difficult to recognize small targets. Zhang [15] achieved higher spatial resolution and improved recognition and positioning accuracy with a small-field-of-view detection system, but the obtained target features were not sufficiently detailed.

There are various methods for target spatial feature extraction, including filtering, edge detection, deep learning, super-resolution imaging, etc. The traditional methods face difficulties in extracting the sub-pixel spatial features of the target, while deep learning and super-resolution imaging methods obtain inaccurate or insufficient sub-pixel spatial features. To address the challenges in small target recognition, this paper proposes an innovative method to precisely locate and extract fine spatial features of sub-pixel targets by aperture coding and micro-scanning differential sampling. Sub-pixel localization of the target is achieved by anisotropic modulation of the point spread function (PSF) of the optical system. The intensity changes in the sampling results of the target under various micro-scanning offsets are then captured based on the localization results and used to reconstruct the detailed spatial characteristics of the target at the sub-pixel level. This provides a novel approach for long-range target localization and fine spatial feature extraction.

First, the formation mechanism of imaging features for sub-pixel targets is analyzed and the effect of the optical system on the target characteristics is discussed. Second, anisotropic aperture coding masks in different directions are placed on the optical pupil to obtain the diffraction spots of the target under different anisotropic optical PSFs. From the target diffraction spots, the longest spreading direction and length of the target in different coding directions are extracted. The intersection point of the major spreading directions determines the target centroid position, and the target contour is related to the spreading length in each direction. Then, the sampling results of the target under different sub-pixel offsets are obtained by multiple micro-scanning shifts. The pixel positions corresponding to different sub-pixel components of the target after each offset are determined based on the location results of the target. The variations of the sampling signals in the target position resulting from the sub-pixel offsets are transformed into spatial feature distributions of the sub-pixel target.

Finally, the extraction results of the sub-pixel target are simulated and calculated to verify the reliability of the proposed method. This paper provides significant potential for enhancing the ability to capture fine features of targets over long distances, and contributes greatly to the recognition and tracking of sub-pixel targets.

2. Analysis of target imaging characteristics over long range

At long ranges, the image size of the target is typically very small, and the target is concentrated in a few pixels. The spatial characteristics of the target are difficult to obtain due to the limitations of sampling size. Additionally, the imaging characteristics of targets vary significantly in time and space, leading to great uncertainty in feature extraction. To overcome the bottleneck of extracting fine spatial features from sub-pixel targets, this paper proposes a method for acquiring target features at the sub-pixel level.

First, it is essential to analyze the formation mechanism of sub-pixel target imaging features. At long distances, the imaging process of the target is shown in Fig. 1.

Fig. 1. Long-distance target detection and imaging process.

The signal from the target consists of self-radiation and reflections from the sun, sky, and surface [16,17]. The background mainly consists of sky and clouds. The atmospheric attenuation and turbulence, motion, optical diffraction, and aberration cause the target signal to be blurred or deformed during imaging. The spatial features of the target are weakened during long-range imaging. At the focal plane of the detector, the blur level of the spatial feature of the sub-pixel target is generally determined by the diffraction and aberration of the optical system.

According to the geometric imaging relation, the solid angles of the target and detector unit are expressed as,

$${\Omega _t} = \frac{{{A_t}}}{{{R^2}}},\qquad {\Omega _d} = \frac{{{A_d}}}{{{f^2}}}$$
where ${A_t}$ is the projection area of the target, ${A_d}$ is the area of the detector unit, $R$ is the imaging distance, and f is the focal length. In practice, the size and distance of the target are usually unknown. Assuming that $\alpha$ is the ratio of the target’s imaging area to the detector unit area, the solid angle of the target is expressed as,
$${\Omega _t} = \frac{{\alpha {A_d}}}{{{f^2}}} = \alpha {\Omega _d}$$

For sub-pixel targets, $\alpha$ is typically less than 1. Therefore, the radiation in the pixel that contains the sub-pixel target consists of the target radiation, the path radiation, and the nearby background radiation, expressed as,

$${\Phi _t}(\lambda ) = {\tau _a}{L_t}(\lambda ){A_{opt}}{\Omega _t} + {\tau _a}{L_b}(\lambda ){A_{opt}}({{\Omega _d} - {\Omega _t}} ) + {L_{path}}{A_{opt}}{\Omega _d}$$
where ${\tau _a}$ is the atmospheric transmittance, ${A_{opt}}$ is the pupil area of the optical system, ${L_t}$ is the target radiance, ${L_b}$ is the background radiance, and ${L_{path}}$ is the path radiance.
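As a check on Eqs. (1)–(3), the in-pixel flux can be sketched in a few lines of Python. This is an illustrative implementation; the function name and example values are ours, not the paper's.

```python
def target_flux(L_t, L_b, L_path, tau_a, A_opt, A_d, f, alpha):
    """Flux in the pixel containing a sub-pixel target, per Eq. (3).

    L_t, L_b, L_path: target, background, and path radiance [W/(m^2 sr)]
    tau_a: atmospheric transmittance; A_opt: pupil area [m^2]
    A_d: detector-unit area [m^2]; f: focal length [m]
    alpha: ratio of target imaging area to detector-unit area (< 1)
    """
    omega_d = A_d / f**2            # detector-unit solid angle, Eq. (1)
    omega_t = alpha * omega_d       # target solid angle, Eq. (2)
    return (tau_a * L_t * A_opt * omega_t
            + tau_a * L_b * A_opt * (omega_d - omega_t)
            + L_path * A_opt * omega_d)
```

Note that for $\alpha \to 1$ the background term vanishes and the pixel flux reduces to the attenuated target radiance plus the path term, as expected from Eq. (3).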

As the radiation signal from the target passes through the optical system, optical aberrations and diffraction blur the target distribution. The transmission characteristics of optical systems are typically described by the PSF, and the general expression of the PSF for a linear space-invariant optical system is [18],

$$psf(x,y) = \frac{4}{{\pi {\lambda ^2}{f^2}{D^2}}}{|{FT[{p(x,y)} ]} |^2}$$
where D is the pupil diameter, $p(x,y)$ is the pupil function, and FT denotes the Fourier transform.
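Eq. (4) is straightforward to evaluate numerically: the PSF is proportional to the squared magnitude of the Fourier transform of the pupil function. A minimal sketch with an assumed circular pupil on an arbitrary grid (the grid size, padding factor, and pupil radius are illustrative, not from the paper):

```python
import numpy as np

def psf_from_pupil(pupil, pad=2):
    """Incoherent PSF as |FT[p]|^2 (Eq. (4)), up to a constant scale.

    pupil: 2-D binary/real aperture function sampled on a grid.
    pad: zero-padding factor that sets the focal-plane sampling density.
    """
    n = pupil.shape[0] * pad
    field = np.fft.fftshift(np.fft.fft2(pupil, s=(n, n)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()          # normalize to unit energy

# illustrative circular pupil on a 64x64 grid
N = 64
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
circ = (x**2 + y**2 <= (N // 4) ** 2).astype(float)
airy = psf_from_pupil(circ)        # Airy-like pattern, peak at the center
```

The same routine accepts the anisotropic coded pupils introduced in Section 3, so the effect of aperture coding on the PSF can be previewed directly.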

Therefore, the target radiation after passing through the optical system is expressed as,

$$\Phi _t^{opt}(\lambda )= {\tau _o}(\lambda )[{{\Phi _t}(\lambda )\ast psf({x,y} )} ]$$
where ${\tau _o}$ is the transmittance of the optical system.

To accurately extract sub-pixel spatial features, the impact of optical blur on the target features must be considered; the blur may result from various sources such as diffraction, aberration, and turbulence. The optical PSF determines the level of target spatial feature preserved on the focal plane.

The imaging angle of the target is roughly the ratio of the target's equivalent size to the distance, expressed as,

$${\omega _t} = \frac{{\sqrt {{A_t}} }}{R} = \sqrt {{\Omega _t}}$$

Substituting Eq. (2) into Eq. (6) gives,

$${\omega _t} = \sqrt {\alpha {\Omega _d}} = \frac{{\sqrt {\alpha {A_d}} }}{f}$$

According to the Rayleigh criterion, the angular resolution of the optical system is [19],

$$\theta = 1.22\frac{\lambda }{D}$$

To describe the remaining sub-pixel spatial features of the target after passing through the optical system, the ratio of the angular resolution to the target's imaging angle is defined as the coefficient of blur level, expressed as,

$$\eta = \frac{\theta }{{{\omega _t}}} = 1.22\frac{{\lambda F}}{{\sqrt {\alpha {A_d}} }}$$
where $F = f/D$ is the F-number of the optical system.

The sub-pixel feature distribution of the target on the focal plane corresponding to different optical blur coefficients is shown in Fig. 2. The target is imaged within one detector unit and appears as a sub-pixel target under long-range imaging. The target's imaging size is assumed to be 0.3421 of the detector unit, calculated from the target size and the imaging distance, with a wavelength of 4.25 µm and a detector unit size of 15 µm.
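With the stated parameters, Eq. (9) can be evaluated directly. The F-number below is an assumed example value, since the paper does not specify the optical system's F-number:

```python
def blur_coefficient(wavelength, F, A_d, alpha):
    """Blur-level coefficient eta = 1.22 * lambda * F / sqrt(alpha * A_d), Eq. (9)."""
    return 1.22 * wavelength * F / (alpha * A_d) ** 0.5

# values from the text: lambda = 4.25 um, 15 um detector unit, alpha = 0.3421;
# F = 1.5 is an assumed example F-number, not given in the paper
eta = blur_coefficient(4.25e-6, 1.5, (15e-6) ** 2, 0.3421)
```

The resulting η can then be compared against the three feature-retention regimes discussed below (η ≤ 0.21, η ≤ 0.45, η > 0.45).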

Fig. 2. Sub-pixel target detail features on the focal plane at different optical blur coefficients.

The remaining sub-pixel spatial features of the target are classified into three levels by analyzing the results on the focal plane. When $\eta \le 0.21$, the fine spatial features of the target on the focal plane are available. When $0.21 < \eta \le 0.45$, the contour features of the target are preserved. When $\eta > 0.45$, the sub-pixel spatial features of the target are lost.

Even if the sub-pixel spatial features of the target survive the optical system, pixel-level imaging results, which are limited by sampling, struggle to capture the sub-pixel spatial information of the target. To overcome this limitation, this paper innovatively proposes a sub-pixel spatial feature and contour extraction method based on aperture coding and micro-scanning differential sampling.

3. Sub-pixel spatial feature extraction by aperture coding and micro-scanning

Existing imaging methods are subject to sampling limitations that prevent the acquisition of spatial features, positions, and contours of sub-pixel targets. To acquire the sub-pixel spatial features of the target, as shown in Fig. 3, the aperture of the optical system is anisotropically coded to obtain the exact position of the sub-pixel target. Then, according to the target position, the spatial features of the target are extracted by differential sampling under multiple sub-pixel micro-scanning offsets.

Fig. 3. The process of acquiring sub-pixel position and spatial features.

An accurate sub-pixel spatial feature and position extraction method is proposed. First, an aperture coding mask is placed on the optical pupil as shown in Fig. 4. Anisotropic coding patterns in different directions are set on the mask, and the spread spots of the target under the anisotropic optical PSF are obtained. The maximum spreading length and direction of the target in different coding directions are extracted based on the target diffraction spots. The intersection points of the primary spreading directions are used to determine the centroid position of the target, and the spreading lengths in different coding directions are related to the target contour.

Fig. 4. Method for acquiring sub-pixel target spatial features and position.

Then, according to the obtained target position, imaging results with different sub-pixel shifts of the target are obtained under different offsets of micro-scanning. In the sampling results of the target under sub-pixel offsets, multiple pixels sample different sub-pixel components of the target. The pixel signals of the target under different offsets are subtracted to remove overlapping sub-pixel areas, and the sub-pixel features of the target under each sub-pixel offset unit are obtained. The extraction results of the sub-pixel spatial features are shown in Fig. 4.

3.1 Precise sub-pixel location based on aperture coding

To acquire the fine sub-pixel spatial features of the target, the precise position of the target must be determined. In long-range imaging, the uncertainty of the target position in the imaging results is typically large, and previous approaches that determine the precise sub-pixel position of a target directly from the imaging result suffer from large errors.

Therefore, this paper proposes a sub-pixel location method based on anisotropic aperture coding. High-precision sub-pixel location of the target is achieved by enhancing the detail information through optical aperture coding.

After passing through the optical system, the diffraction spot of the target generally cannot reveal the geometric and spatial information of the target. However, the diffraction features of the target can be modulated by anisotropic coding of the optical aperture. In the anisotropic diffraction spots, the contour size of the target is amplified.

As shown in Fig. 5, a point source appears as a circular spreading spot after passing through an optical system with a circular aperture. When the optical aperture is squeezed in a certain direction and coded as an anisotropic aperture, the point source appears as a narrow elliptical spreading spot, and the length of the spread spot increases in the squeezed direction of the aperture. An irregular small target appears as a blurred spreading spot after passing through an optical system with a circular aperture. Irregular targets have different spreading lengths in different squeezed directions after passing through anisotropically coded optical apertures. The spreading spots are elongated perpendicular to the long axis of the coded aperture, and the length and position of the spreading spot vary with the squeezed direction. In the direction where the target size is larger, the spreading spot is also longer.

Fig. 5. Diffraction characteristics under different optical aperture coding.

Therefore, the length and position of the spreading spots are determined by the aperture mask and the target's shape and position. According to this principle, the contour shape and centroid position of the target can be obtained.

An aperture coding mask is placed in the imaging system, and the coding pattern is anisotropic to spread the target diffraction spots over more pixels, which is beneficial for reducing errors. The circular aperture of the optical system is coded as a band-shaped aperture function, and the pupil function is expressed as,

$${P_\textrm{c}}(x,y) = \left\{ {\begin{array}{ll} 1&{\left( {\sqrt {{x^2} + {y^2}} \le \frac{D}{2}} \right) \cap \left( {|y| \le \frac{{{d_s}}}{2}} \right)}\\ 0&{\textrm{otherwise}} \end{array}} \right.$$
where ${d_s}$ is the coding width of the band.

${P_\textrm{c}}$ is rotated to acquire anisotropic aperture functions in each direction, and the optical PSF size in the corresponding direction is amplified. To ensure sub-pixel location accuracy while reducing the amount of sampling and data processing for the aperture coding, the coding mask is rotated clockwise at angular intervals of 10 degrees, and the coding aperture in each direction is expressed as,

$${P_k}({x,y} )= {P_c}({x,y} )R({{\theta_k}} )$$
where $R({{\theta_k}} )$ is the rotation matrix, ${\theta _k}$ is the rotation angle corresponding to the k-th direction, and k is taken as 1, 2, …, 36.
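Eqs. (10) and (11) can be sketched by evaluating the band condition in rotated coordinates rather than rotating a rasterized mask, which avoids interpolation artifacts. This is an illustrative implementation; the grid size and physical units are assumptions:

```python
import numpy as np

def coded_pupil(N, D, d_s, theta_deg):
    """Band-shaped pupil P_c rotated by theta, per Eqs. (10)-(11).

    N: grid size (samples); D: pupil diameter; d_s: band width
    (same physical units as D); theta_deg: clockwise band rotation.
    """
    c = N / 2 - 0.5                      # symmetric grid center
    y, x = np.mgrid[0:N, 0:N] - c
    scale = D / N                        # physical units per grid cell
    xp, yp = x * scale, y * scale
    t = np.deg2rad(theta_deg)
    # evaluate |y'| <= d_s/2 in clockwise-rotated coordinates
    yr = -xp * np.sin(t) + yp * np.cos(t)
    inside = xp**2 + yp**2 <= (D / 2) ** 2
    return (inside & (np.abs(yr) <= d_s / 2)).astype(float)

p0 = coded_pupil(64, D=1.0, d_s=0.1, theta_deg=0)    # horizontal band
p90 = coded_pupil(64, D=1.0, d_s=0.1, theta_deg=90)  # vertical band
```

Rotating by 90° simply transposes the mask on a symmetric grid, which is a convenient sanity check for the rotation convention.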

The anisotropic optical PSF is derived from the anisotropic aperture function. Different coding apertures correspond to different optical transfer functions, and the optical transfer function becomes anisotropic after modification by anisotropic aperture coding [20]. The incoherent transfer function of the optical system is proportional to the autocorrelation of the scaled aperture stop, and the PSF is obtained by Fourier transforming the incoherent transfer function. Therefore, the optical point spread after aperture coding is expressed as,

$$ps{f_k}(x,y) = FT[{{P_k}({x,y} )\otimes {P_k}^ \ast ({x,y} )} ]$$
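Eq. (12) can be checked numerically against Eq. (4): by the autocorrelation theorem, the Fourier transform of the pupil autocorrelation equals the squared magnitude of the pupil's Fourier transform. A sketch using FFT-based circular autocorrelation (an illustration, not the paper's code):

```python
import numpy as np

def psf_autocorr(pupil):
    """PSF as the Fourier transform of the pupil autocorrelation (Eq. (12))."""
    P = np.fft.fft2(pupil)
    otf = np.fft.ifft2(np.abs(P) ** 2)   # circular autocorrelation of the pupil
    psf = np.fft.fft2(otf).real          # FT of the autocorrelation
    return psf / psf.sum()

def psf_direct(pupil):
    """PSF via Eq. (4): squared magnitude of the pupil transform."""
    psf = np.abs(np.fft.fft2(pupil)) ** 2
    return psf / psf.sum()
```

Both routes give the same normalized PSF for any real pupil, confirming that Eq. (12) is consistent with Eq. (4).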

According to Section 2, the original target distribution is ${\Phi _t}({x,y} )$; the target distribution after aperture coding in the k-th direction is then expressed as,

$${T_k}^{code}({x,y} )= {\tau _{opt}}{\Phi _t}({x,y} )\ast ps{f_k}(x,y)$$

Different coding patterns are applied to the aperture, and targets show different distribution characteristics under different coding directions, as shown in Fig. 6. The spreading length of the target’s diffraction spots in different directions correlates with the size of the target in the corresponding directions.

Fig. 6. The target distributions under different coding directions.

The stretched length of the target diffraction spot in different directions is determined by the contour and radiation distribution of the target in each direction. To extract the center position and maximum spreading direction of the diffraction spots, the diffraction spot distribution is converted into a feature vector, and the principal component analysis (PCA) method is used to determine the maximum spreading direction of the target after aperture coding by reducing the dimension of the feature vector.

The intersection point of the maximum spreading direction represents the centroid position of the target, and the maximum spreading length of the target in each direction is proportional to the contour length of the target in each direction.
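The PCA step described above can be sketched as an intensity-weighted covariance eigendecomposition over pixel coordinates. The function name and this particular weighting are our assumptions about the implementation:

```python
import numpy as np

def principal_spread(spot):
    """Centroid and principal spreading direction of a diffraction spot.

    spot: 2-D non-negative intensity array. Returns ((cx, cy), unit vector)
    where the vector is the largest-eigenvalue axis of the intensity-weighted
    covariance of pixel coordinates (PCA in 2-D).
    """
    ys, xs = np.nonzero(spot > 0)
    w = spot[ys, xs].astype(float)
    w /= w.sum()
    cx, cy = (w * xs).sum(), (w * ys).sum()      # intensity centroid
    dx, dy = xs - cx, ys - cy
    cov = np.array([[(w * dx * dx).sum(), (w * dx * dy).sum()],
                    [(w * dx * dy).sum(), (w * dy * dy).sum()]])
    vals, vecs = np.linalg.eigh(cov)
    return (cx, cy), vecs[:, np.argmax(vals)]    # largest-eigenvalue axis
```

For an elongated spot, the returned axis aligns with the long dimension of the spot, which is the "maximum spreading direction" used in Eq. (14).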

In the imaging result of the k-th aperture coding, the maximum spreading direction of the target is ${\vec s_k}$, and the center of the target diffraction spot is $C({x_k},{y_k})$. Then, the intersection point of the maximum spreading directions is given by,

$$\left\{ {\begin{array}{c} {{{\vec s}_1}{t_1} + C({x_1},{y_1}) = {C_t}(x,y)}\\ \vdots \\ {{{\vec s}_k}{t_k} + C({x_k},{y_k}) = {C_t}(x,y)} \end{array}} \right.$$
where ${C_t}({x,y} )$ is the intersection point of the maximum spreading directions, which is also the precise centroid position of the target. In practice, the intersection points are usually not unique due to sampling errors. Therefore, a practical approach is to calculate the intersection point of each pair of maximum spreading directions and then compute their cluster point. The envelope of the target contour is obtained from the maximum spreading length and direction.
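The pairwise-intersection approach described above can be sketched as a small linear-solve-and-average routine. A simple mean stands in for the paper's clustering step; the names are illustrative:

```python
import numpy as np

def pairwise_intersections(centers, dirs):
    """Intersections of each pair of spreading lines, then their mean
    as the cluster point estimating the target centroid (Eq. (14)).

    centers: (k, 2) spot centers; dirs: (k, 2) unit direction vectors.
    """
    pts = []
    k = len(centers)
    for a in range(k):
        for b in range(a + 1, k):
            # solve centers[a] + t*dirs[a] = centers[b] + u*dirs[b]
            A = np.column_stack([dirs[a], -dirs[b]])
            if abs(np.linalg.det(A)) < 1e-9:     # near-parallel pair: skip
                continue
            t, _ = np.linalg.solve(A, centers[b] - centers[a])
            pts.append(centers[a] + t * dirs[a])
    return np.mean(pts, axis=0)
```

With noiseless inputs, all pairwise intersections coincide and the mean recovers the common point exactly; with sampling noise, the mean acts as the cluster center.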

In summary, a precise sub-pixel target localization approach based on aperture coding is formed. The algorithm process and steps are given in Table 1.

Table 1. Sub-pixel targets centroid positioning algorithm based on aperture coding

The achievement of sub-pixel target localization relies on the anisotropic spreading of the PSF under aperture coding, so the design principles for anisotropic coding patterns should be considered. Aperture coding aims to increase the spreading size in one direction while maintaining the original spreading size in the other directions, achieving anisotropic spreading modulation; therefore, the slit pattern is chosen as the coding mask. The anisotropic spreading amplifies the target contour size in a certain direction. Anisotropic coding offers two advantages: it reduces the coupling between different directions, and it preserves enough target intensity to remain detectable.

3.2 Sub-pixel target spatial feature extraction based on micro-scanning imaging

The challenge for long-range imaging is to capture the fine spatial characteristics of the target. As discussed in Section 2, the optical blur determines the remaining sub-pixel feature level of the target on the focal plane. Although the target retains detailed texture information after passing through the optical system, the detector unit size is insufficient to sample the sub-pixel features. Based on the calculated target position, sub-pixel differential sampling via micro-scanning imaging achieves sub-pixel sampling of the target.

Typically, an appropriate detection band or technique is used to provide a temperature difference or contrast between the target and background. Thus, variations in the sub-pixel components of the target induce a signal variation within the target pixel, which provides the available sub-pixel information of the target.

When $\eta = 0.21$, sub-pixel offsets in micro-scanning imaging cause variations in the target sampling results, as shown in Fig. 7. The change in the imaging signal of the target reflects the signal intensity of the target's sub-pixel component corresponding to the offset size. Multiple micro-scanning sampling with varying sub-pixel offsets capture the fine feature distribution of each sub-pixel portion of the target.

Fig. 7. Differences in sampling results due to sub-pixel offsets of the target.

The pixel containing the target is divided into N × N sub-pixel units, and part of the target's sub-pixel components are sampled by different detector units through micro-scanning offsets. The signal variation under each 1/N offset corresponds to a sub-pixel unit feature of the target. Then, the cross-pixel sampling results of the target under different offsets are obtained after multiple micro-scanning imaging. Finally, the fine sub-pixel feature distribution of the target is calculated based on the differences of the multiple offset sampling results.

The target is sampled separately in the horizontal and vertical directions at different offsets. The resolution of N × N fine sub-pixel spatial features requires N × N micro-scanning samplings, and the offset step size for each micro-scanning is given by,

$$\left\{ {\begin{array}{{c}} {x_i^{offset} = \left( {n + \frac{i}{N}} \right)d}\\ {y_j^{offset} = \left( {m + \frac{j}{N}} \right)d} \end{array}} \right.$$
where d is the pixel spacing, and n and m are non-negative integers smaller than the number of pixels. $x_i^{offset}$ is the horizontal offset, and $y_j^{offset}$ is the vertical offset. The terms n and m avoid confusion caused by target overlap. For sub-pixel targets, n and m are usually set to 0 or 1.
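Eq. (15) defines a simple offset schedule, sketched below. This is illustrative; the paper does not prescribe a particular iteration order over (i, j):

```python
def offset_schedule(N, d, n=0, m=0):
    """Micro-scanning offsets of Eq. (15): N*N positions whose sub-pixel
    steps are d/N, optionally on top of an integer-pixel displacement (n, m).

    N: sub-pixel resolution per axis; d: pixel spacing;
    n, m: integer-pixel components of the offset.
    """
    return [((n + i / N) * d, (m + j / N) * d)
            for i in range(N) for j in range(N)]
```

For N = 4 and a 15 µm pixel, the schedule steps through 16 positions in 3.75 µm increments, matching the N × N samplings required for an N × N sub-pixel reconstruction.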

Finer sub-pixel features require a larger micro-scanning number N and thus smaller offsets. Achieving an accurate tiny offset leads to a more complex system design, and even a small control error is significant relative to the tiny micro-scanning step. Therefore, an offset size larger than the pixel size is selected, as shown in Fig. 8(a). The micro-scanning offset consists of two components: an integer multiple of the pixel size and various sub-pixel offsets. This strategy allows the fine target features to be captured without requiring an extremely small offset.

Fig. 8. Offset step length and mode for micro-scanning.

Figure 8(b) shows an optional micro-scanning mode in which the target is sampled along two micro-scanning directions to traverse all sub-pixel components.

The initial radiation distribution on the focal plane is $\Phi _t^{opt}({x,y} )$, and the sampling result after the i × j micro-scanning is expressed as [21],

$${S_{i \times j}}(x,y) = \Phi _t^{opt}(x,y) \ast comb\left( {\frac{{x - x_i^{offset}}}{d},\frac{{y - y_j^{offset}}}{d}} \right)$$
where $comb({x,y} )$ is the two-dimensional comb function. The N × N sampling results of the target are obtained by micro-scanning offset sampling.

The position of the target $({{x_t},{y_t}} )$ is calculated according to Section 3.1. After the i × j micro-scanning offset sampling, a portion of the target signal from the original pixel $({{x_t},{y_t}} )$ enters the pixel $({x_i^{\textrm{target}},y_j^{\textrm{target}}} )$, expressed as,

$$x_i^{\textrm{target}} = {x_t} - \left\lceil {\frac{{x_i^{offset}}}{d}} \right\rceil ,\qquad y_j^{\textrm{target}} = {y_t} - \left\lceil {\frac{{y_j^{offset}}}{d}} \right\rceil$$

The signal variation on that pixel unit by micro-scanning is expressed as,

$$D(i,j) = {S_{i \times j}}({x_i^{\textrm{target}},y_j^{\textrm{target}}} ) - {S_0}({x_i^{\textrm{target}},y_j^{\textrm{target}}} )$$
where ${S_0}$ is the initial sampling result. The variation in the sampling result represents the target component shifted into that pixel unit.

As shown in Fig. 9, the intensity of a sub-pixel unit region of the target is obtained by subtracting the overlapping region from the adjacent sampling results. Consequently, the differential sampling result for the current sub-pixel unit is given by,

$$P(i,j) = D(i,j) - D(i,j - 1) - D(i - 1,j) + D(i - 1,j - 1)$$

The intensity of the N × N sub-pixel units of the target pixel is calculated sequentially based on the pixel-level intensity variation. The variations in the sampling signals resulting from sub-pixel offsets are transformed into spatial feature distributions of the sub-pixel target. If the optical blur is sufficiently small and the sampling number N is large enough, the finer sub-pixel features of the target can be effectively extracted. This approach significantly enhances the capability to extract fine spatial features of the target.
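Eq. (18) is a discrete double difference, so it exactly inverts a 2-D cumulative sum. The sketch below assumes, for the sanity check only, that D(i,j) behaves as the cumulative target signal shifted across the pixel boundary; the full imaging chain of Eqs. (16)–(17) is not simulated here:

```python
import numpy as np

def reconstruct_subpixel(D):
    """Recover the sub-pixel intensity map P from the per-offset signal
    variations D via the double difference of Eq. (18):
    P(i,j) = D(i,j) - D(i,j-1) - D(i-1,j) + D(i-1,j-1),
    with D taken as 0 outside the sampled range."""
    Dp = np.pad(D, ((1, 0), (1, 0)))            # D(i,-1) = D(-1,j) = 0
    return Dp[1:, 1:] - Dp[1:, :-1] - Dp[:-1, 1:] + Dp[:-1, :-1]

# toy sub-pixel map (assumed example values, not from the paper)
truth = np.array([[0.0, 1.0, 0.0],
                  [2.0, 3.0, 1.0],
                  [0.0, 1.0, 0.0]])
# under the cumulative-signal assumption, D is the 2-D cumulative sum of truth
D = truth.cumsum(axis=0).cumsum(axis=1)
```

Applying `reconstruct_subpixel` to this D recovers `truth` exactly, illustrating that Eq. (18) removes all overlapping sub-pixel areas between adjacent offsets.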

Fig. 9. Overlap removal and extraction of target sub-pixel region.

4. Simulation and verification

To verify the acquisition method of the spatial features of the sub-pixel target, the aperture coding and sub-pixel micro-scanning mechanisms are introduced into the imaging transfer and conversion process. Then, the long-range imaging results are accurately simulated by the end-to-end imaging process. Finally, the proposed method is used to calculate the position and fine spatial features of the sub-pixel target, and the accuracy of the calculation results is analyzed.

4.1 Sub-pixel target positioning based on anisotropic aperture coding

To obtain the spreading distribution of the target in different aperture coding directions, the radiation distribution at the entrance pupil is first derived from the target and background characteristics, as shown in Fig. 10(a). The PSF of the optical system is calculated from the optical system parameters, and the radiation distribution on the focal plane is obtained after optical blur, as shown in Fig. 10(b). The focal plane radiation distribution is integrated based on the geometric sampling relations to obtain the sampling result for the original optical aperture, as shown in Fig. 10(c). Finally, the PSF under different optical aperture coding is substituted, and the sampling result under $0^\circ$ optical aperture coding is shown in Fig. 10(d).

Fig. 10. Imaging sampling results before and after aperture coding.

An aperture mask is placed on the pupil to achieve aperture coding in different directions for the original target radiation distribution in Fig. 10(a), and the coding pattern is modified accordingly to achieve anisotropic aperture coding in different directions. When the coding band width is large, the spreading width of the target is small, and significant computational errors are introduced after sampling the diffraction spots. Conversely, if the coding band is too narrow, the spreading size of the target is larger, but the small aperture results in insufficient target energy on the focal plane, which cannot be effectively detected. Therefore, the width of the coding band is set to 1/10 of the pupil diameter, with the coding directions rotated every 10 degrees within a full circle. The imaging results of the target after aperture coding in each direction are calculated. For clarity, the results for 9 major directions are given in Fig. 11, which shows the aperture-coded mask and the imaging results at rotation angles of 0°, 40°, 80°, 120°, 160°, 200°, 240°, 280°, and 320°.

Fig. 11. Imaging results of aperture coding and extraction of maximum spread direction.

According to the procedure shown in Table 1, the PCA method is used to extract the maximum spreading direction and length of the diffraction spot for the target in each encoding direction. Furthermore, the center position of the diffraction spot is determined based on the spatial distribution of the spreading spot, as shown in Fig. 11.

The optical aperture achieves anisotropic modulation of the target spread distribution. The anisotropically coded aperture is then rotated to determine the modulated target spread length in each direction, which contains the target contour size information in each direction.

The centroid position of the target is determined by the intersection of these maximum spreading directions. The calculated intersections of the spreading directions are shown in Fig. 12(a). The mean clustering method is used to obtain the cluster center of these intersection points, which represents the centroid position of the target. In Fig. 12(a), the red point represents the cluster center with pixel coordinates (28.5414, 28.3227), while the blue point represents the original centroid of the target with pixel coordinates (28.5409, 28.3395). The edge contour of the sub-pixel target is outlined based on the maximum spreading length, as shown in Fig. 12(b).

Fig. 12. Target positioning and contour extraction results based on aperture coding.

To validate the localization accuracy, the calculated target position is compared with the original target position. The average sub-pixel positioning error is less than 0.02 pixel units, a significant improvement in sub-pixel localization accuracy over traditional methods. The calculated sub-pixel target contour is close to the original target contour, and the computed contour exhibits central symmetry.

4.2 Sub-pixel target fine spatial feature extraction based on micro-scanning

Based on the centroid position of the sub-pixel target, its spatial features are extracted through the corresponding micro-scanning. The micro-scanning sampling results are accurately simulated using the end-to-end imaging procedure.

The characteristics of the target and the optical blur are coupled to determine the focal-plane radiation distribution, as shown in Fig. 13(a). Then, according to the micro-scanning mode described in Section 3.2 and the centroid position of the target, the sampling results of the target under different offsets are obtained, as shown in Fig. 13(b). Finally, the variation of the sampling results across the micro-scanning offsets is used to derive the features of each sub-pixel unit area. The reconstructed sub-pixel target spatial feature distribution is shown in Fig. 13(c).
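The differential reconstruction step, corresponding to P(i,j) = D(i,j) − D(i,j−1) − D(i−1,j) + D(i−1,j−1), can be illustrated with a toy one-pixel model in which each offset step (i, j) leaves the subcells target[i:, j:] inside the sampling pixel. This is a simplified, noise-free sketch, not the full end-to-end simulation:

```python
import numpy as np

def microscan_signals(target, N):
    """Signal of the pixel containing the target after each of the
    (N+1) x (N+1) sub-pixel offsets: at offset (i, j) only the subcells
    target[i:, j:] remain inside the sampling pixel (toy geometry)."""
    S = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            S[i, j] = target[i:, j:].sum()
    return S

def reconstruct(S):
    """Double difference of the offset signals, mirroring
    P(i,j) = D(i,j) - D(i,j-1) - D(i-1,j) + D(i-1,j-1)."""
    return S[:-1, :-1] - S[1:, :-1] - S[:-1, 1:] + S[1:, 1:]
```

In this model, reconstruct(microscan_signals(F, N)) returns any N×N sub-pixel distribution F exactly; in practice the sampled signals are additionally blurred by the optical PSF, which is why the optical blur coefficient limits the achievable fineness.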

Fig. 13. Process for extracting sub-pixel spatial features by micro-scanning sampling.

The extraction of finer sub-pixel spatial features requires a larger micro-scanning step number N and, consequently, more samplings (N² in total). Therefore, the influence of the number of micro-scanning samplings on the sub-pixel imaging resolution and the quality of the extracted spatial features is discussed. When η is 0.07, the extraction results for N values of 8, 10, 16, 20, 40, and 80 are shown in Fig. 14, corresponding to 64, 100, 256, 400, 1600, and 6400 micro-scanning samplings. An appropriate number of sub-pixel samplings must be selected to ensure that the spatial features of the sub-pixel target are sufficiently extracted.

Fig. 14. Sub-pixel feature extraction result under different number of micro-scanning steps.

The structural similarity index (SSIM) is used to measure the similarity of images in terms of detail and feature distribution. The quality and fineness of the extracted sub-pixel features are comprehensively reflected by calculating the SSIM between the extraction results and the original target distribution. The SSIM values of the extraction results under different micro-scanning step numbers are presented in Table 2. An SSIM value closer to 1 indicates finer detail features extracted by the differential micro-scanning imaging. The larger the number of sampling steps, the finer the extracted sub-pixel features.

Table 2. SSIM values of the extraction results under different micro-scanning step numbers

The sampling results of micro-scanning imaging are blurred by diffraction, aberration, and turbulence. Therefore, the resolution of the reconstructed sub-pixel target is limited by the characteristics of the optical blur, which sets the level of fineness of the acquired sub-pixel target features. According to the analysis results in Section 2, for $\eta \le 0.21$, the optical system preserves the detailed spatial features of the sub-pixel target, while for $0.21 < \eta \le 0.45$, only the profile information of the target is preserved. Consequently, when N is 40 and the number of micro-scanning samplings is 1600, the sub-pixel feature extraction results of the target for η values of 0.07, 0.30, and 0.50 are shown in Fig. 15.

Fig. 15. Sub-pixel feature extraction results under different optical blur characteristics.

The SSIM values of the extraction results under different optical blur coefficients are presented in Table 3. An SSIM value closer to 1 indicates finer detail features extracted by the differential micro-scanning imaging. The smaller the optical blur coefficient, the finer the extracted sub-pixel features.

Table 3. SSIM values of the extraction results under different optical blur coefficients

The results show that when $\eta = 0.07$, the extraction results provide fine sub-pixel spatial features. For $\eta = 0.30$, due to the blur of the optical system, only the target contour is available in the sub-pixel feature extraction results. For $\eta = 0.50$, the optical blur leads to the loss of target features, and the extraction result is only a blurred spot.

Therefore, when $\eta \le 0.21$, the optical system preserves the detailed spatial features of the target, and the proposed method can effectively extract the fine spatial features of sub-pixel targets. The lower the value of $\eta$, the higher the achievable sub-pixel target resolution. With sufficient optical system resolution, high-resolution sub-pixel target spatial features can be obtained by increasing the number of micro-scanning steps.

5. Conclusion

In response to the challenge of extracting fine spatial features of sub-pixel targets, this paper proposes a novel method for precise localization and fine spatial feature extraction of sub-pixel targets through aperture coding and micro-scanning imaging. Anisotropic modulation of the optical PSF is used to enhance target size differences in different directions, which enables accurate sub-pixel localization, and the spatial feature distributions of sub-pixel targets are associated with the variations of the sampling results under different micro-scanning offsets. First, the formation mechanism of imaging characteristics for long-range targets is analyzed. Second, a series of modulated target imaging results are obtained under different anisotropic PSF codings. The intersection points of the maximum spreading directions in the modulated imaging results are calculated, and the centroid position of the target is determined by clustering these intersection points. Then, sub-pixel micro-scanning provides sampling results in which different sub-pixel components of the target are sampled by different detector units. Based on the target location result, the variations in the target sampling signals caused by the sub-pixel offsets are transformed into the spatial feature distribution of the sub-pixel target. Finally, the accuracy of the proposed method is verified. The results show a sub-pixel positioning error of less than 0.02 pixel units and accurate extraction of fine sub-pixel spatial features. This work significantly improves the ability to capture fine spatial features of sub-pixel targets at long range.

Funding

National Natural Science Foundation of China (62005204, 62005206, 62075176); Fundamental Research Funds for the Central Universities.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. I. H. Lee and C. G. Park, “Infrared small target detection algorithm using an augmented intensity and density-based clustering,” IEEE Trans. Geosci. Remote Sens. 61, 1–14 (2023).

2. Z. Lin, Y. Ma, R. Ming, et al., “Deep asymmetric extraction and aggregation for infrared small target detection,” Sci. Rep. 13(1), 21017 (2023).

3. W. Han, J. Chen, L. Wang, et al., “Methods for small, weak object detection in optical high-resolution remote sensing images: A survey of advances and challenges,” IEEE Geosci. Remote Sens. Mag. 9(4), 8–34 (2021).

4. S. Sharifi Hashjin, A. Darvishi Boloorani, S. Khazai, et al., “Selecting optimal bands for sub-pixel target detection in hyperspectral images based on implanting synthetic targets,” IET Image Process. 13(2), 323–331 (2019).

5. C. Jiao, C. Chen, R. McGarvey, et al., “Multiple instance hybrid estimator for hyperspectral target characterization and sub-pixel target detection,” ISPRS J. Photogramm. Remote Sens. 146, 235–250 (2018).

6. Y. Wang, L. Cao, K. Su, et al., “Infrared moving small target detection based on space–time combination in complex scenes,” Remote Sens. 15(22), 5380 (2023).

7. M. Wan, G. Gu, E. Cao, et al., “In-frame and inter-frame information based infrared moving small target detection under complex cloud backgrounds,” Infrared Phys. Technol. 76, 455–467 (2016).

8. S. Qi, G. Xu, Z. Mou, et al., “A fast-saliency method for real-time infrared small target detection,” Infrared Phys. Technol. 77, 440–450 (2016).

9. H. Deng, X. Sun, M. Liu, et al., “Small infrared target detection based on weighted local difference measure,” IEEE Trans. Geosci. Remote Sens. 54(7), 4204–4214 (2016).

10. B. M. Quine, V. Tarasyuk, H. Mebrahtu, et al., “Determining star-image location: A new sub-pixel interpolation technique to process image centroids,” Comput. Phys. Commun. 177(9), 700–706 (2007).

11. X. He, Q. Ling, Y. Zhang, et al., “Detecting dim small target in infrared images via subpixel sampling cuneate network,” IEEE Geosci. Remote Sens. Lett. 19, 1–5 (2022).

12. W. Zhu, X. Yang, R. Liu, et al., “A new feature extraction algorithm for measuring the spatial arrangement of texture primitives: Distance coding diversity,” Int. J. Appl. Earth Obs. Geoinf. 127, 103698 (2024).

13. Y. Su, C. Chen, C. Cang, et al., “A space target detection method based on spatial–temporal local registration in complicated backgrounds,” Remote Sens. 16(4), 669 (2024).

14. J. Mu, J. Rao, R. Chen, et al., “Low-altitude infrared slow-moving small target detection via spatial-temporal features measure,” Sensors 22(14), 5136 (2022).

15. X. Zhang, Y. Liu, H. Duan, et al., “Weak spatial target extraction based on small-field optical system,” Sensors 23(14), 6315 (2023).

16. H. Yuan, X. Wang, Y. Yuan, et al., “Space-based full chain multi-spectral imaging features accurate prediction and analysis for aircraft plume under sea/cloud background,” Opt. Express 27(18), 26027–26043 (2019).

17. U. A. Zahidi, P. W. T. Yuen, J. Piper, et al., “An end-to-end hyperspectral scene simulator with alternate adjacency effect models and its comparison with CameoSim,” Remote Sens. 12(1), 74 (2019).

18. P. Y. Maeda, “Zernike polynomials and their use in describing the wavefront aberrations of the human eye,” Course Project, Applied Vision and Imaging Systems Psych 221, 362 (2003).

19. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Elsevier, 2013), Chap. 8.

20. B. Wang, C. Zuo, J. Sun, et al., “A computational super-resolution technique based on coded aperture imaging,” in Computational Imaging V (SPIE, 2020), pp. 55–63.

21. M. Sun and K. Yu, “A sur-pixel scan method for super-resolution reconstruction,” Optik 124(24), 6905–6909 (2013).



