Optica Publishing Group

Multiple-wavelength range-gated active imaging principle integrating spectral information for five-dimensional imaging

Open Access

Abstract

The combined multiple-wavelength range-gated active imaging (WRAI) principle can determine the position of a moving object in four-dimensional space and deduce its trajectory and speed independently of the video frequency. By combining two wavelength categories, it determines the depth of moving objects in the scene with the warm color category and the precise moment of each position with the cold color category. Since each object transmits information at these different wavelengths, related to its spectral reflectances, it becomes possible to identify its spectral signature from these reflectances. Using a conventional spectral classification method, we show that objects in a 3D scene can be identified from their a priori known spectral signatures and, thanks to this, that a fifth dimension can be revealed in WRAI imaging. Experimental tests confirmed that moving objects can be recorded in a five-dimensional space represented by a single image, thus validating this multiple-wavelength imaging method.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

Active imaging is a direct visualization technique using an image sensor array and its own illumination source (laser). In contrast to other methods such as laser scanning [1], this technique directly displays a two-dimensional image of the scene. In applications, active imaging is used to scan a scene in depth and restore this scene in three dimensions [2,3]. Each recorded space slice is selected according to the aperture delay of the camera and visualized according to the light pulse width and the aperture time of the camera [4]. Furthermore, as this method is range-gated, it is also possible to observe an object behind scattering environments [5]. Instead of using a single wavelength and depending on the video frequency by recording one visualized slice per image, the multiple-wavelength range-gated active imaging (WRAI) principle in the juxtaposed style allowed us to restore the 3D scene directly in a single image at the moment of recording with a video camera [6–8]. Thus, each wavelength corresponded to a visualized slice at a different distance in the scene. In the 3D restoration, the wavelengths are juxtaposed one behind the other in the scene. Recently, based on the rainbow volume velocimetry (RVV) method [9,10], this juxtaposed style was modified at the level of the scene illumination to obtain a better depth resolution [11]. The second style of the WRAI principle is the superimposed style [12], where the wavelengths correspond to precise moments. Each moment is directly recorded in a single image in which each wavelength freezes in time a position of the moving object, as in a stroboscopic recording system [13], but with the difference that every moment is recognized. By combining these two styles, it was shown that it is possible to know the position of a moving object in a four-dimensional space represented by a single image and to deduce its chronology, its trajectory, its speed [14], and its acceleration [11].
This combination also showed the capacity to avoid any confusion between the trajectories of several objects moving simultaneously in 3D space [11]. Moreover, since the principle is independent of the video frequency, the object speed can be relativistic [15]. To differentiate the wavelengths of each style in the combined WRAI method, they were classified into two color categories [16]: the warm colors (H) correspond to the depth, and the cold colors (C) correspond to the time. The principle of this combination is based on the fact that each wavelength used must be differentiable from the other wavelengths used for a given image. The objects in the illumination space of these wavelengths thus respond to these wavelengths. Therefore, each object has the ability to transmit information at these different wavelengths, related to its spectral reflectances, from which it should be possible to trace back to the spectral signature of this object. Consequently, we propose to study the possibility of extracting this fifth dimension of the WRAI principle from a single image independently of the video frequency.

In Section 2, we present the principle of spectral signature extraction from multiple-wavelength range-gated active imaging. In Section 3, the experimental results based on this extraction principle are presented and evaluated to highlight all five dimensions from a single image.

2. PRINCIPLE

Each style of the WRAI principle uses a different spectral band that does not overlap with the spectral band of the other style, which leaves no information in the central spectral band. Moreover, not all wavelengths are used in each spectral band. In the juxtaposed style, the number of wavelengths corresponds to the number of visualized slices crossed by the moving object, and in the superimposed style, the number of wavelengths corresponds to the number of exposures in time. For that reason, it is difficult to know the complete spectral signature of objects directly without any preliminary information. On the other hand, if the spectral signatures of objects are known beforehand, it is possible to identify these objects by matching their reflectance values at the used wavelengths against these signatures. In this case, the difficulty is to find the spectral signature of the object while avoiding any confusion with the spectral signatures of the other objects in the scene. There are several algorithms to identify and classify the spectral signature of pixels in a hyperspectral environment [17,18]. These algorithms can be divided into two main families: non-supervised classification and supervised classification. In non-supervised classification, the pixels are grouped according to their spectral signatures without any prior knowledge of the object's nature [19,20]. The object classes are labeled a posteriori. The advantage of these non-supervised methods is that they allow a first approach in an unknown situation, but they are not always very significant, and identification can be difficult. In supervised classification, the pixels are grouped according to previously known spectral signatures [21,22]. These signatures can come from a data library [23,24] or from an area defined in the image [25].
The advantage of this method is that the obtained classes are clearly significant, but the processing time is long, especially in the choice and delimitation of learning zones [17,18]. In our case, since the spectral information of objects is assumed to be known, the classification method is supervised. It is defined in terms of statistical values and described in relation to a training zone containing the spectral signatures of each class. To verify their accuracy, the definitions are controlled in test zones using the confusion matrix (or contingency table) [26], which is a comparison table between classified data and test data. It is used to calculate the total accuracy, which corresponds to the ratio between the sum of the correctly identified pixel numbers of each class and the total number of pixels. The algorithms used can be divided into two categories: parametric and non-parametric [17,18]. Parametric algorithms associate known statistical distributions with the spectral signatures; thus, each pixel has an object-class membership probability, as in the maximum likelihood method [27,28]. Non-parametric algorithms [29] do not use distributions, but only similarity (spectral distance [30,31]); this category is the closest to our case. As a result, two classes have been defined: the objects to be identified ${C_i}$ and the objects identified ${C_j}$. From there, we took into account ${r_{i,k}}$, the average reflectance of the object “$i$” to be identified at the wavelength $k$, and ${r_{j,k}}$, the average reflectance of the identified object “$j$” at the wavelength $k$. In relation to the spectral signature of the object to be identified, the complete reflectance in this spectral band belonging to this class is denoted:

$${{r}}_{{i}}\in {{C}}_{{i}}.$$

The identification condition is defined according to the spectral distance $d$ (the Manhattan distance [32]) between the average reflectance of the object to be identified and the average reflectance of the identified object, taking the lowest value:

$${{r}}_{{j}}\in {{C}}_{{i}}\quad {\rm if}\quad {d}\left({{r}}_{{i}},{{r}}_{{j}}\right)={\min}\left(\sum _{{k}=1}^{{N}}\left|{{r}}_{{i},{k}}-{{r}}_{{j},{k}}\right|\right).$$
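For illustration, the nearest-signature rule above can be sketched numerically. This is a minimal sketch assuming NumPy; all reflectance values are hypothetical:

```python
import numpy as np

def identify(r_unknown, signatures):
    """Return the index j of the known signature closest to r_unknown.

    r_unknown  : array of mean reflectances of the object to identify,
                 one value per wavelength k = 1..N.
    signatures : 2D array, one row per candidate object, same wavelengths.
    Uses the Manhattan (L1) spectral distance, as in the condition above.
    """
    d = np.sum(np.abs(signatures - r_unknown), axis=1)  # L1 distance per candidate
    return int(np.argmin(d))

# Hypothetical library of spectral signatures at N = 4 wavelengths
library = np.array([[0.80, 0.10, 0.20, 0.05],
                    [0.15, 0.70, 0.60, 0.30],
                    [0.05, 0.20, 0.75, 0.90]])
measured = np.array([0.12, 0.68, 0.55, 0.33])
print(identify(measured, library))  # → 1 (closest signature)
```

The object is assigned to the class whose signature minimizes the summed absolute differences over the used wavelengths.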

Two options were considered: with and without a white element in the scene used as a spectral reference with respect to the average gray level ${g_{j,k}}$ of the identified object at the wavelength $k$ in the image. Concerning the first option, with the average gray level of the white element ${g_{w,k}}$ at the wavelength $k$, the gray level values were normalized according to the white reference element to trace back to the reflectance of the identified object:

$${{g}}_{{{\rm Norm}}_{{j},{k}}}=\frac{{{g}}_{{j},{k}}}{{{g}}_{{w},{k}}}.$$

From there, a linear regression was performed to identify the director coefficient (slope) ${{{\alpha}}_{{k}}}$ and the y-intercept ${{{\beta}}_{{k}}}$ at the wavelength $k$, subsequently giving the equation

$${{r}}_{{j},{k}}={{g}}_{{{\rm Norm}}_{{j},{k}}}\cdot {{\alpha}}_{{k}}+{{\beta}}_{{k}}=\frac{{{g}}_{{j},{k}}}{{{g}}_{{w},{k}}}\cdot {{\alpha}}_{{k}}+{{\beta}}_{{k}}.$$

Thus, modifying the identification condition,

$$\begin{split}&{{r}}_{{j}}\in {{C}}_{{i}}\quad {\rm if}\quad {d}({{r}}_{{i}},{{r}}_{{j}})\\&= {\min}\left[\sum _{{k}=1}^{{N}}\left|{{r}}_{{i},{k}}-\left(\frac{{{g}}_{{j},{k}}}{{{g}}_{{w},{k}}}\cdot {{\alpha}}_{{k}}+{{\beta}}_{{k}}\right)\right|\right].\end{split}$$
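As a sketch of this white-reference option, the normalization and regression steps might look as follows; the gray levels and reflectances below are hypothetical values for a single wavelength:

```python
import numpy as np

# Hypothetical training data at one wavelength k: gray levels of the color
# patches, the gray level of the white patch, and the known reflectances
# of the corresponding spectral signatures.
g_patches = np.array([52.0, 120.0, 88.0, 30.0, 150.0])
g_white = 180.0
r_known = np.array([0.28, 0.66, 0.49, 0.16, 0.83])

# Normalization by the white reference element
g_norm = g_patches / g_white

# Least-squares fit r = alpha_k * g_norm + beta_k (director coefficient
# and y-intercept of the regression line)
alpha_k, beta_k = np.polyfit(g_norm, r_known, 1)

# Applying the line maps any normalized gray level to an estimated reflectance
r_est = g_norm * alpha_k + beta_k
```

The estimated reflectances can then be fed to the Manhattan-distance condition above to pick the closest library signature.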

In the second option, without the white reference element, the process was carried out in two steps, starting with the cold colors and then continuing with the warm colors. In the first step, since the wavelengths of the cold colors were always present in the image, the linear equation could be determined from the classification of the spectral signature reflectances as a function of the gray levels. To find the director coefficient ${{\alpha}}_{{{kc}}}^\prime$ and the y-intercept ${{\beta}}_{{{kc}}}^\prime$ at the cold color wavelength $kc$, the set of gray level values of the different objects identified at the wavelength $kc$ was sorted, as was the set of spectral signature reflectances of the objects to be identified at the wavelength $kc$. This approach relies on the fact that the progression in these two sets goes in the same direction across the objects. After the linear regression, the identification condition was defined:

$$\begin{split}&{{r}}_{{j}}\in {{C}}_{{i}}\quad {\rm if}\quad{d}({{r}}_{{i}},{{r}}_{{j}}) \\ &={\min}\left[\sum _{{k}{c}=1}^{{N}{c}}\left|{{r}}_{{i},{k}{c}}-\left({{g}}_{{j},{k}{c}}\cdot {{\alpha}}_{{k}{c}}^\prime+{{\beta}}_{{k}{c}}^\prime\right)\right|\right],\end{split}$$
with $Nc$ the number of cold colors.
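A minimal sketch of this first step, with hypothetical values at one cold color wavelength: both sets are sorted, the regression recovers ${{\alpha}}_{{{kc}}}^\prime$ and ${{\beta}}_{{{kc}}}^\prime$, and the line then maps gray levels to estimated reflectances:

```python
import numpy as np

# Hypothetical values at one cold color wavelength kc: measured gray levels
# of the objects in the image and the library reflectances of the signatures.
g_levels = np.array([40.0, 95.0, 70.0, 25.0])
r_library = np.array([0.22, 0.61, 0.45, 0.12])

# Both sets are sorted; their progressions follow the same order over the
# objects, so regressing one sorted set on the other recovers the line.
g_sorted = np.sort(g_levels)
r_sorted = np.sort(r_library)
alpha_kc, beta_kc = np.polyfit(g_sorted, r_sorted, 1)

# Estimated reflectance of each object at this wavelength, as used in the
# cold color identification condition
r_est = g_levels * alpha_kc + beta_kc
```

No white reference is needed here: the sorted correspondence between gray levels and library reflectances plays that role.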

To take the uncertainty of the results into account, a probability was also defined and added so that, among the $m$ smallest spectral distances obtained, the probability of finding the object to be identified is

$${P}({{r}}_{{i}})=\frac{1}{{m}}.$$

Thus, from these $m$ values, the $m$ identified objects were proposed. The second step, with the warm colors, was based solely on these proposed objects. To obtain a spectral reference, one object “$h$” among the others was assumed to be identified in order to determine the ratio ${K_{h,kw}}$ between the spectral signature reflectance and the gray level at a given warm color wavelength $kw$. This ratio was then applied to the gray levels of the other objects at this wavelength, giving, in addition to [Eq. (6)], the identification condition:

$$\begin{split}{{r}}_{{j}}\in {{C}}_{{i}}\quad {\rm if}\quad{d}({{r}}_{{i}},{{r}}_{{j}})&={\min}\left(\sum _{{k}{w}=1}^{{N}{w}}\left|{{r}}_{{i},{k}{w}}-{{g}}_{{j},{k}{w}}\cdot {K}_{{h},{k}{w}}\right|\right.\\&\quad+\left.\sum _{{k}{c}=1}^{{N}{c}}\big|{{r}}_{{i},{k}{c}}-\big({{g}}_{{j},{k}{c}}\cdot {{\alpha}}_{{k}{c}}^\prime+{{\beta}}_{{k}{c}}^\prime\big)\big|\right)\!,\end{split}$$
with $Nw$ being the number of warm colors.
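The combined warm/cold condition above can be sketched as a single distance function. All numeric values below are hypothetical; the ratio ${K_{h,kw}}$ would come from the one object assumed to be identified:

```python
import numpy as np

def combined_distance(r_i_warm, r_i_cold, g_j_warm, g_j_cold,
                      K_h, alpha_c, beta_c):
    """Spectral distance of the combined condition: warm color gray levels
    scaled by the ratio K_h taken from one assumed-identified object h,
    cold color gray levels mapped through the regression line
    (alpha'_kc, beta'_kc). Inputs are per-wavelength arrays."""
    warm = np.sum(np.abs(r_i_warm - g_j_warm * K_h))
    cold = np.sum(np.abs(r_i_cold - (g_j_cold * alpha_c + beta_c)))
    return warm + cold

# Example with Nw = 2 warm and Nc = 4 cold wavelengths (hypothetical values)
d = combined_distance(
    r_i_warm=np.array([0.55, 0.40]),
    r_i_cold=np.array([0.20, 0.35, 0.50, 0.30]),
    g_j_warm=np.array([110.0, 82.0]),
    g_j_cold=np.array([38.0, 66.0, 98.0, 55.0]),
    K_h=0.005,            # reflectance/gray-level ratio of object h
    alpha_c=0.0051, beta_c=0.01,
)
```

The candidate “$j$” minimizing this distance over the $m$ objects proposed by the cold color step is retained.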

3. EXPERIMENTAL RESULTS

To validate five-dimensional imaging through the combined WRAI principle, we used the spectral signature of objects, representing this fifth dimension, to identify them. The other dimensions were used to determine the trajectory and speed of moving objects in the scene from the space–time information provided by the combined method. For that, we used the optical setup of [11], with the only differences being that the illumination source for the warm colors was a supercontinuum laser and that the separation of wavelengths, limited to between 560 and 610 nm, was carried out with an Amici prism. Initially (training zone), the identification principle was tested and validated with a pattern of different color patches (Fig. 1) through two aspects. The first aspect took the white patch as the reference to normalize the results. The second aspect no longer considered the white patch, but instead took the spectral signature of each patch in turn as the reference. Subsequently (test zone), tests with moving balls of different colors were carried out to confirm the addition of this fifth dimension to the other dimensions.

Fig. 1. Reference pattern with different color patches.

A. Identification Principle

The image of the color pattern in the optical setup is multiplexed into four paths. Since two images are available per color category, the spectral signature of an object is obtained from the average of the spectral response of this object at each wavelength from the two images in each category. As a result, for each wavelength, two images were automatically processed before performing this average. The values given by the images are gray levels. In the test with the pattern, the spectral response of the white patch was assumed to be 1. Thus, the gray level of each color patch was divided by the gray level of the white patch, giving a normalized value between 0 and 1. After this, for each image at a given wavelength, a linear regression was carried out (Fig. 2) on the set of normalized values of the pattern color patches, giving a director coefficient and a y-intercept for each image used (Table 1).

Fig. 2. Linear regression from the set of normalized values of the pattern color patches as a function of the measured gray levels of the color patches at 570 nm.

Table 1. Director Coefficient and Y-Intercept for Linear Regressions from Normalized Values and Measured Gray Levels of Color Patches for Both Images of Each Color Category at Different Wavelengths

By applying the linear equation with the director coefficient and the y-intercept directly to the normalized images, the reflectances obtained for each patch corresponded well to the theoretical reflectances of the pattern as in Fig. 3.

Fig. 3. Theoretical reflectance of each pattern patch as a function of the calculated reflectance for each patch at 590 nm (a) for the color patches and (b) for the gray level patches.

From there, the reflectance values obtained at different wavelengths for each patch were used to try to identify the spectral signature of the corresponding patch among the other spectral signatures of the pattern. The aim was to know the minimum number of reflectance values needed to identify all spectral signatures. In any case, since four cold color illuminations were used to highlight the time, the identification test started with these four illuminations, but without any warm color illumination. To identify the correct spectral signature, the spectral distance was determined by subtracting, at each wavelength, the reflectance value derived from the gray levels from the reflectance value of the spectral signature of each patch. These differences were then summed for each spectral signature, and the spectral signature with the smallest sum was retained. Ten of the 18 color patches of the pattern were identified in this first configuration [Table 2 and Fig. 4(a)] with only the cold colors. This number corresponds to a success rate of 55.6% (10/18).

Table 2. Estimation Errors for Some Pattern Patches Using Only the Cold Colors, the Cold Colors with the Warm Color at 570 nm, and the Cold Colors with the Warm Color at 590 nm

Fig. 4. Misidentified patches (a) with only the cold colors, (b) with the cold colors and the warm color at 570 nm, and (c) with the cold colors and the warm color at 590 nm.

Concerning the warm color illumination, at least one reflectance value should be added to this number, since the object present in the scene, even if it does not move, still occupies a position in depth. Thus, a warm color illumination at 570 nm was added, enabling the identification of 16 of the 18 patches [Table 2 and Fig. 4(b)] and giving a success rate of 88.9%. Using another warm color illumination (590 nm), the results [Table 2 and Fig. 4(c)] gave the same success rate. Supposing that the moving object has changed locations in the scene, the new warm color on the object increases the number of reflectances taken into account. Thus, a set of six illuminations (430, 450, 470, 490, 570, and 590 nm) allowed six reflectances to be taken into account, and the success rate for identifying all the patches increased to 100%. Therefore, this first set of tests, normalizing the gray level value of the pattern patches according to the gray level value of the white patch, confirmed a 100% success rate in identifying all patches by using at least two warm color illuminations in addition to those of the cold colors. This means that at least six reflectance values per spectral signature must be known to ensure correct identification. However, the normalization requires a white element in the scene.

To correctly identify the pattern patches from their spectral signatures without using the white patch, the reflectances from the spectral signature of each patch were first ordered from largest to smallest for a given wavelength. Similarly, at the same wavelength, the gray levels of each patch were ordered from largest to smallest. From these two subsets, a linear regression of the reflectances on the gray levels was carried out to obtain a straight line whose director coefficient and y-intercept were retrieved (Fig. 5). By applying the gray levels in the equation of this line, the obtained reflectances should thus be close to those of the spectral signatures. The closer the gap is to zero, the better the agreement. Moreover, by increasing the number of wavelengths used, the different spectral signatures should be better discriminated.

Fig. 5. Linear regression of the ordered reflectances of color patches on the ordered gray levels from color patches at 490 nm.

The advantage in the cold color category is that all the used wavelengths are normally found in the object image of the scene. This situation facilitates the classification of the spectral signature reflectances according to the gray levels for each cold color wavelength and the determination of the corresponding linear equation. On the other hand, in the warm color category, since the position of the objects is often not identical, the determination of a valid linear equation from a linear regression becomes more difficult. The selection of spectral signatures was performed in two steps. The first step was carried out with the cold colors, giving a number of possibilities. Among these possibilities, the second step used the warm color information to confirm the identification of the spectral signatures. Concerning the selection with the cold colors using four illumination wavelengths, the identification of each patch was performed by subtracting the spectral signature reflectance of the patches from the result of the linear equation, with the average gray level of the patch image as the variable. The case giving the minimum gap should indicate the patch to be identified, which was not always observed. Since there were two images per wavelength in the cold color category, the best results were obtained when the smallest difference between the two images was taken into account, and not the average difference between the two images. This is because the smallest difference corresponds to the minimum possible gap being sought. By classifying, according to the gray levels, the patches from the smallest difference to the greatest difference, it was observed that the patch to be identified was always in the top three of the classification (Table 3). By comparing this classification with the reflectance differences of the spectral signatures (Table 3), some combinations appear identical in the two classifications for the same patch to be identified.
This could suggest a direct link between the obtained combination and the patch to be identified. However, this link was not always respected, and it even caused confusion, as in Table 3 between patch 3 and patch 8. The classification according to the reflectances, using the spectral distance, showed that its first column was identical to the column of the patches to be identified. This observation confirmed that the method for determining the spectral distance, which subtracts the reflectances of the spectral signatures from the reflectances obtained with the gray levels, allowed the patches to be correctly identified. The success rate in the first column of the classification according to the gray levels corresponded to 66.7% (12/18).

Table 3. Classification in Ascending Order of Differences with the First 3 Patches in the Cold Color Category According to the Gray Levels and to the Spectral Signature Reflectances

The different combinations given in the three classification columns for the gray levels served as the basis for the second step with the warm colors. As seen above, two warm colors are needed to correctly identify the spectral signatures. In these new tests, the reference white patch was not used to normalize the gray level values of the objects. Thus, in order to still have a spectral reference, a patch was assumed to be identified to determine the ratio between the spectral signature reflectance and the gray level value at a given wavelength. This ratio was then applied to the gray levels of the other objects illuminated at this wavelength. Following this requirement, the spectral signature of each patch was used in turn as a reference to determine the ratio between the spectral signature reflectance and the gray level value at a given wavelength. From there, each spectral signature was used and tested as a reference against all the patches. The identified patches were annotated with their numbers in Table 4, where two ways of interpreting them are possible. Considering the rows of the patches to be identified, the patches were identified in the majority of cases (except for patches 11 and 16) based on the maximum number of repetitions. Considering the columns of the identified patches, the maximum number of repetitions of the identified patches always appears at the level of the corresponding patch to be identified. This last interpretation gives a success rate of 100%.

Table 4. Identified Patches with their Repetition Numbers Compared to all Tests with the 18 Reference Spectral Signatures

To see the behavior of each spectral reference signature in detail, the number of classification errors was recorded (Fig. 6).

Fig. 6. Number of classification errors for each reference patch taking into account duplicates (direct identification) and after carrying out the optimization program (optimized identification).

It appeared that one of the main causes of these errors was the presence of duplicates. In our case, since there were as many different objects as there were different spectral signatures, the presence of a duplicate was an error. In other cases, if no information is given about the nature and number of objects in the scene, there can easily be several duplicates. To eliminate these duplicates, a computer program was implemented using the other positions in the rows of the duplicates of the classification according to the gray levels (Table 3). Although in most cases the duplicates could be eliminated, in three cases a duplicate remained, causing loops in the computer program (Table 5). Since this finding did not interfere with the rest of the study, this duplicate was left in place in the combinations (the last two lines of Table 5). The selected combination was the one that appeared most often (Table 5). It corresponded exactly to the original classification of the patches to be identified.
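The duplicate-elimination program is not given in the paper; a minimal sketch of one plausible scheme, using each object's ranked candidate list (as in Table 3) and demoting one member of a duplicated pair to its next-ranked candidate, could be:

```python
def resolve_duplicates(ranked, max_iter=50):
    """Assign each object the highest-ranked candidate not already taken.

    ranked : list of candidate lists, one per object, best first
             (e.g. the per-object top 3 of the gray-level classification).
    Repeatedly demotes a duplicated choice to its next candidate; a
    duplicate that cannot be removed (the loop case noted in the text)
    is simply left in place. Sketch only, not the author's program.
    """
    choice = [r[0] for r in ranked]           # start from the best candidate
    pos = [0] * len(ranked)
    for _ in range(max_iter):
        seen = {}
        moved = False
        for i, c in enumerate(choice):
            if c in seen:                     # duplicate: demote object i
                if pos[i] + 1 < len(ranked[i]):
                    pos[i] += 1
                    choice[i] = ranked[i][pos[i]]
                    moved = True
            else:
                seen[c] = i
        if not moved:
            break
    return choice

print(resolve_duplicates([[3, 8, 5], [3, 2, 7], [2, 4, 1]]))  # → [3, 2, 4]
```

An unresolvable pair (two objects whose remaining candidates coincide) is returned unchanged, mirroring the duplicate that was left in place in Table 5.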

Finally, from the information given by some wavelengths of the cold and warm colors, it was shown that it was possible to find the different spectral signatures of objects in the scene. Although some portions of the spectral signatures were similar, using spaced wavelengths still allowed information to be obtained outside these portions and the different spectral signatures to be distinguished. Even if having a white reference patch facilitated the identification of spectral signatures, it was shown that it was possible to do without it, provided that one of the objects in the scene was already identified a priori. The information provided by this object at the different wavelengths used, together with its a priori known spectral signature, served as a reference for the identification of the other objects.

Table 5. Number of Patches Having Obtained the Same Combination after the Optimization Program

B. Integration of Five-Dimensional Imaging

After this first series of tests, which allowed us to implement the entire identification process and validate it, tests with moving objects in the scene were carried out to highlight five-dimensional imaging. For that, three balls of different colors were mounted on a rotating axis at a certain distance from each other (Fig. 7). The colors were classically chosen as blue, green, and red. Before the tests, the calibration and control of the optical setup were performed as in [11], and the spectral signature of the different balls was recorded (Fig. 8).

Fig. 7. Mounting of the balls.

Fig. 8. Spectral signatures of the balls.

The result of the tests showed that, from a single image composed of the four paths (Fig. 9), it was possible to know the position in space of the different balls at precise moments and to identify them from their spectral signatures.

Fig. 9. Test results with the three balls.

The in-depth position of the balls could be deduced from the values of the warm color ratio. The XY coordinates were given by the position of the balls in the image. The recording moment of each position of the balls was deduced from the values of the cold color ratio. Finally, we found the spectral signature of each ball thanks to the information given by the used wavelengths, thus identifying each ball. This last step was done by applying the previous identification process without using a white reference color in the scene. By classifying the reflectances from the spectral signatures of each ball and the gray levels of each ball for a given wavelength in the cold colors, it appeared that the classification always remained identical (Table 6). This can be explained by the fact that, in contrast to the 18 patches of the previous reference pattern, there are only three different spectral signatures, which are clearly distinct from each other. Thus, the balls could be directly identified in the first phase of the process with the cold colors, with a 100% success rate for the classification relative to the gray levels. As a result, there was no longer any need to add the warm colors.

Table 6. Classification of Balls in the Cold Color Category According to the Gray Levels and to the Spectral Signature Reflectance

Thus, by adding all the obtained results, the trajectory and the speed of each identified ball could be determined (Fig. 10).

Fig. 10. Trajectory of the identified balls as a function of time giving their speeds in the 3D space of the scene.

The speed of each identified ball given with the WRAI principle is very close to the measured speed of the motor with an error smaller than 1% (${\rm{Er}}{{\rm{r}}_{\rm blue}} = {0.6}\%$, ${\rm{Er}}{{\rm{r}}_{\text{green}}} = {0.9}\%$, ${\rm{Er}}{{\rm{r}}_{\rm red}} = {0.1}\%$).

Consequently, the experimental results confirmed that it is possible to know at each moment both the position and the speed of moving objects from a single image in the 3D space, independently of the video frequency. Furthermore, these different objects can be identified by estimating their spectral signatures, thus adding a fifth dimension in the WRAI.

4. CONCLUSION

From the spectral information transmitted by the objects in the scene during the use of the combined multiple-wavelength range-gated active imaging (WRAI) principle, we showed the possibility of identifying the spectral signature of these objects from a single image, in addition to already knowing their trajectories, the chronology of their passages, and their speeds in the 3D space of the scene. By using only the warm color wavelengths for the depth determination and the cold color wavelengths to determine the number of exposures in time, it is not possible to know the full spectral signature of the objects. On the other hand, by knowing the spectral signature of the objects a priori, the reflectance values at the wavelengths used with the WRAI principle allow us to find these signatures and to identify the objects. To discern the different objects, a spectral classification method based on the minimization of the spectral distance was used. The aim of this paper was not to conduct a complete study of spectral classification but to show that, with a classical spectral classification method, it is possible to identify objects in a 3D scene from their spectral signatures. At the same time, we demonstrated five-dimensional imaging through the WRAI principle. The tests were carried out in two steps using the information from some wavelengths of the cold colors and of the warm colors. The first step was used to test and validate the identification principle with a pattern of different color patches, taking first the white color patch and then the spectral signature of each patch as a reference. Because the used wavelengths were spaced apart, identification errors were avoided, especially when some portions of different spectral signatures were similar.
Concerning the reference spectral signature, while initially the white patch facilitated the identification, by subsequently taking any color patch, the identification could still be achieved correctly as long as its spectral signature was known a priori. The second step of the tests was carried out with moving balls of different colors to reveal the full five dimensions. From the positions of each ball at different moments, their speeds and the chronology of their passages were evaluated, in addition to the identification of their spectral signatures. Note also that the difference between the speed assessed by the WRAI method and the measured speed of the motor remained below 1%.

Consequently, the results demonstrated the five-dimensional capability of this multiple-wavelength range-gated active imaging from a single image recorded independently of the video frequency.

Funding

Institut Franco-Allemand de Recherches de Saint-Louis.

Disclosures

The author declares no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the author upon reasonable request.

REFERENCES

1. B. W. Schilling, D. N. Barr, G. C. Templeton, et al., “Multiple-return laser radar for three-dimensional imaging through obscurations,” Appl. Opt. 41, 2791–2799 (2002). [CrossRef]  

2. M. A. Albota, R. M. Heinrichs, D. G. Kocher, et al., “Three-dimensional imaging laser radar with a photon-counting avalanche photodiode array and microchip laser,” Appl. Opt. 41, 7671–7678 (2002). [CrossRef]  

3. National Research Council, “Appendix C: laser sources and their fundamental and engineering limits,” in Laser Radar: Progress and Opportunities in Active Electro-Optical Sensing (The National Academies, 2014), p. 310.

4. D. Bonnier and V. Larochelle, “A range-gated active imaging system for search and rescue, and surveillance operations,” Proc. SPIE 2744, 134–145 (1996). [CrossRef]  

5. O. Steinvall, H. Olsson, G. Bolander, et al., “Gated viewing for target detection and target recognition,” Proc. SPIE 3707, 432–448 (1999). [CrossRef]  

6. A. Matwyschuk, “Direct method of three-dimensional imaging using the multiple-wavelength range-gated active imaging principle,” Appl. Opt. 55, 3782–3786 (2016). [CrossRef]  

7. A. Matwyschuk, “Multiple-wavelength range-gated active imaging principle in the accumulation mode for three-dimensional imaging,” Appl. Opt. 56, 682–687 (2017). [CrossRef]  

8. A. Matwyschuk, “Principe d’imagerie active à crénelage temporel multi-longueurs d’onde pour l’imagerie 3D,” Instrum. Mesure Métrol. 16, 255–260 (2017). [CrossRef]  

9. J. P. Prenel, Y. Bailly, and M. Gbamele, “Three dimensional PSV and trajectography by means of a continuous polychromatic spectrum illumination,” in 2nd Pacific Symposium on Flow Visualization and Image Processing (PSFVIP-2) (1999), p. 77.

10. T. J. McGregor, D. J. Spence, and D. W. Coutts, “Laser-based volumetric colour-coded three-dimensional particle velocimetry,” Opt. Laser Eng. 45, 882–889 (2007). [CrossRef]  

11. A. Matwyschuk and N. Metzger, “Multiple-wavelength range-gated active imaging applied to the evaluation of simultaneous movement of millimeter-size objects moving in a given volume,” Appl. Opt. 62, 2874–2882 (2023). [CrossRef]  

12. A. Matwyschuk, “Multiple-wavelength range-gated active imaging in superimposed style for moving object tracking,” Appl. Opt. 56, 7766–7773 (2017). [CrossRef]  

13. H. E. Edgerton, “Motion-picture apparatus,” U.S. patent 2186013A (9 January 1940).

14. A. Matwyschuk, “Combination of the two styles of the multiple-wavelength range-gated active imaging principle for four-dimensional imaging,” Appl. Opt. 59, 7670–7679 (2020). [CrossRef]  

15. A. Matwyschuk, “Doppler effect in the multiple-wavelength range-gated active imaging up to relativistic speeds,” J. Opt. Soc. Am. A 39, 322–331 (2022). [CrossRef]  

16. C. Hayter, “Some systematic directions for the application of colours,” in An Introduction to Perspective (Black, Parry and Co., 1813), pp. 142–143.

17. R. Caloz and C. Collet, Traitements Numériques d’Images de Télédétection, Précis de Télédétection (Presses de l’Université du Québec/AUF Agence Universitaire de la Francophonie Canada, 2001), Vol. 3.

18. J. A. Richards and X. Jia, Remote Sensing Digital Image Analysis: An Introduction, 4th ed. (Springer, 2006).

19. J. M. Bioucas-Dias and J. M. P. Nascimento, “Hyperspectral subspace identification,” IEEE Trans. Geosci. Remote Sens. 46, 2435–2445 (2008). [CrossRef]  

20. N. Jain and S. Ghosh, “An unsupervised band selection method for hyperspectral images using mutual information based dependence index,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) (2022), pp. 783–786.

21. V. Joevivek, T. Hemalatha, and K. P. Soman, “Determining an efficient supervised classification method for hyperspectral image,” in International Conference on Advances in Recent Technologies in Communication and Computing (2009), pp. 384–386.

22. S. Pattem and S. Thatavarti, “Hyperspectral image classification using machine learning techniques—a survey,” in IEEE International Students’ Conference on Electrical, Electronics and Computer Science (SCEECS) (2023), pp. 1–14.

23. W. L. Wolfe and G. J. Zissis, The Infrared Handbook (Environmental Research Institute of Michigan, 1989), Vol. 3.

24. F. Bonn, Applications Thématiques, Précis de Télédétection (Presses de l’Université du Québec/AUF Agence Universitaire de la Francophonie Canada, 1996), Vol. 2.

25. F. Goudail, N. Roux, I. Baarstad, et al., “Some practical issues in anomaly detection and exploitation of regions of interest in hyperspectral images,” Appl. Opt. 45, 5223–5236 (2006). [CrossRef]  

26. K. Pearson, On the Theory of Contingency and Its Relation to Association and Normal Correlation, Drapers’ Company Research Memoirs (Dulau and Co., 1904).

27. J. Xiuping and J. A. Richards, “Efficient maximum likelihood classification for imaging spectrometer data sets,” IEEE Trans. Geosci. Remote Sens. 32, 274–281 (1994). [CrossRef]  

28. J. R. Otukei and T. Blaschke, “Land cover change assessment using decision trees, support vector machines and maximum likelihood classification algorithms,” Int. J. Appl. Earth Observ. Geoinf. 12, S27–S31 (2010). [CrossRef]  

29. T. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE Trans. Inf. Theory 13, 21–27 (1967). [CrossRef]  

30. W. Sun, X. Zhang, B. Zou, et al., “Exploring the potential of spectral classification in estimation of soil contaminant elements,” Remote Sens. 9, 632 (2017). [CrossRef]  

31. Deepthi, B. M. Devassy, S. George, et al., “Classification of forensic hyperspectral paper data using hybrid spectral similarity algorithms,” J. Chemom. 36, e3387 (2022). [CrossRef]  

32. H. L. Garner and J. S. Squire, “Iterative circuit computers,” in Workshop on Computer Organization (1962), pp. 156–181.



Figures (10)

Fig. 1. Reference pattern with different color patches.
Fig. 2. Linear regression from the set of normalized values of the pattern color patches as a function of the measured gray levels of the color patches at 570 nm.
Fig. 3. Theoretical reflectance of each pattern patch as a function of the calculated reflectance for each patch at 590 nm (a) for the color patches and (b) for the gray level patches.
Fig. 4. Misidentified patches (a) with only the cold colors, (b) with the cold colors and the warm color at 570 nm, and (c) with the cold colors and the warm color at 590 nm.
Fig. 5. Linear regression of the ordered reflectances of color patches on the ordered gray levels from color patches at 490 nm.
Fig. 6. Number of classification errors for each reference patch taking into account duplicates (direct identification) and after carrying out the optimization program (optimized identification).
Fig. 7. Mounting of the balls.
Fig. 8. Spectral signatures of the balls.
Fig. 9. Test results with the three balls.
Fig. 10. Trajectory of the identified balls as a function of time giving their speeds in the 3D space of the scene.

Tables (6)

Table 1. Director Coefficient and Y-Intercept for Linear Regressions from Normalized Values and Measured Gray Levels of Color Patches for Both Images of Each Color Category at Different Wavelengths
Table 2. Estimation Errors for Some Pattern Patches Using Only the Cold Colors, the Cold Colors with the Warm Color at 570 nm, and the Cold Colors with the Warm Color at 590 nm
Table 3. Classification in Ascending Order of Differences with the First 3 Patches in the Cold Color Category According to the Gray Levels and to the Spectral Signature Reflectances
Table 4. Identified Patches with their Repetition Numbers Compared to all Tests with the 18 Reference Spectral Signatures
Table 5. Number of Patches Having Obtained the Same Combination after the Optimization Program
Table 6. Classification of Balls in the Cold Color Category According to the Gray Levels and to the Spectral Signature Reflectance

Equations (8)

$$r_i \in C_i.$$
$$r_j \in C_i \quad \text{if} \quad d(r_i, r_j) = \min\left(\sum_{k=1}^{N} \left|r_{i,k} - r_{j,k}\right|\right).$$
$$g_{\mathrm{Norm}\,j,k} = \frac{g_{j,k}}{g_{w,k}}.$$
$$r_{j,k} = g_{\mathrm{Norm}\,j,k}\,\alpha_k + \beta_k = \frac{g_{j,k}}{g_{w,k}}\,\alpha_k + \beta_k.$$
$$r_j \in C_i \quad \text{if} \quad d(r_i, r_j) = \min\left[\sum_{k=1}^{N} \left|r_{i,k} - \left(\frac{g_{j,k}}{g_{w,k}}\,\alpha_k + \beta_k\right)\right|\right].$$
$$r_j \in C_i \quad \text{if} \quad d(r_i, r_j) = \min\left[\sum_{k_c=1}^{N_c} \left|r_{i,k_c} - \left(g_{j,k_c}\,\alpha_{k_c} + \beta_{k_c}\right)\right|\right],$$
$$P(r_i) = \frac{1}{m}.$$
$$r_j \in C_i \quad \text{if} \quad d(r_i, r_j) = \min\left(\sum_{k_w=1}^{N_w} \left|r_{i,k_w} - \frac{g_{j,k_w}}{K_{h,k_w}}\right| + \sum_{k_c=1}^{N_c} \left|r_{i,k_c} - \left(g_{j,k_c}\,\alpha_{k_c} + \beta_{k_c}\right)\right|\right),$$
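As a minimal sketch of the combined warm/cold distance in the last equation above, assuming hypothetical calibration constants (the regression coefficients α, β for the cold color wavelengths and the normalization constants K_h for the warm color wavelengths are illustrative values, not those of the paper):

```python
import numpy as np

# Hypothetical calibration constants (illustrative values only).
alpha = np.array([1.1, 0.95])      # cold-color regression slopes
beta = np.array([0.02, 0.01])      # cold-color regression intercepts
K_h = np.array([200.0, 180.0])     # warm-color normalization constants

# Hypothetical reference signatures, ordered [warm wavelengths, cold wavelengths].
references = {
    "A": np.array([0.5, 0.5, 0.3, 0.2]),
    "B": np.array([0.1, 0.2, 0.8, 0.6]),
}

def combined_distance(r_i, g_warm, g_cold):
    """Combined spectral distance: warm-color gray levels are normalized by
    K_h, cold-color gray levels are converted to reflectance by the linear
    regression, then absolute differences to the reference are summed."""
    r_j = np.concatenate([g_warm / K_h, g_cold * alpha + beta])
    return np.abs(r_i - r_j).sum()

def classify(g_warm, g_cold, references):
    # r_j is assigned to the class C_i whose reference minimizes the distance.
    return min(references,
               key=lambda name: combined_distance(references[name], g_warm, g_cold))

print(classify(np.array([100.0, 90.0]), np.array([0.25, 0.2]), references))  # → A
```

The two sums are kept in the same units (reflectance) by the per-category normalizations, so warm and cold wavelengths contribute comparably to the decision.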