
Correction scheme for close-range lidar returns


Abstract

Because of the effects of defocusing and of the incomplete overlap between the laser beam and the receiver field of view, elastic lidar systems are unable to fully capture the close-range backscatter signal. Here we propose a method to empirically estimate and correct such effects, allowing the lidar signal to be retrieved in the region of incomplete overlap. The technique is straightforward to implement. It produces an optimized numerical correction through a simple geometrical model of the optical apparatus and the analysis of two lidar acquisitions taken at different elevation angles. Examples with synthetic and experimental data are shown to demonstrate the validity of the technique.

© 2011 Optical Society of America

1. Introduction

The elastic LIDAR (LIght Detection And Ranging) technique has long been used in the study of aerosol particles in the planetary boundary layer [1, 2, 3]. However, in the close range the laser beam is not completely within the field of view (FOV) of the telescope for common biaxial systems. Therefore, problems are often encountered in retrieving physical parameters of interest from the first tens to a few hundred meters from the instrument. In addition to this incomplete overlap problem, the laser signal backscattered from the close range is not focused on the focal plane of the telescope. This is also the case for coaxial systems. This defocusing of the close-range atmospheric targets additionally contributes to the nonlinearity of the lidar signal up to several hundreds of meters from the instrument, when typical telescope diameters and focal lengths are of the order of several tens of centimeters and the FOV is typically smaller than 1 mrad.

In this work we present a technique to retrieve an optimized correction for the incomplete overlap. We introduce an original method to experimentally determine the correction, and we then apply a numerical method to optimize the retrieved correction. The numerical optimization can also be applied to overlap corrections retrieved with the different methods proposed in the past [4, 5, 6, 7, 8]. In contrast to these methods, our experimental approach to retrieving the incomplete overlap correction does not require knowledge of any instrumental parameter and is easy to implement for scanning or portable lidar systems. The proposed technique is not bound to a particular lidar system, as long as the system can be oriented at different angles besides 90° and can be considered optically and mechanically stable during its operation at such different elevation angles.

Excluding [8], the previous methods that are broadly used [7] rely on horizontal acquisitions, assuming horizontal homogeneity [4, 7], assuming only statistical homogeneity [5], or making no assumptions on the atmospheric conditions [6]. Those methods can produce a correction for the unwanted incomplete overlap, but in general an optical system tilted by 90° experiences a strong deformation and a consequent modification of the overlap. Our method reduces the need for horizontal homogeneity to the area close to the instrument and introduces less mechanical deformation of the optics, because the system is tilted by smaller angles.

2. Close-Range Lidar Returns

Referring to Fig. 1 we can see the classical geometrical approach defining S0min and S0max as the extremes of the range interval of partial or incomplete overlap between the laser beam and the FOV of the receiving optics. At ranges closer than S0min we do not expect to collect any signal apart from some possible stray light. The effect of the incomplete overlap is an underestimation of the backscattered signal arriving from that region. To estimate the range interval in which the beam is entering the FOV we can use the formulas proposed in [9]:

S_{0\min} = \frac{2d - d_t - d_r}{\theta_r - 2\alpha + \theta_t}, \qquad (1)
S_{0\max} = \frac{2d + d_t - d_r}{\theta_r - 2\alpha - \theta_t}. \qquad (2)
The symbols used are the same as in Fig. 1: d is the separation between transmitter and receiver centers; d_t is the transmitter beam diameter at the output of the laser box; d_r is the receiver aperture diameter; θ_r is the receiver FOV; θ_t is the transmitter divergence; α is the angle between receiver and transmitter optical axes. In [9] a different convention is used for the sign of α, so their formulas are formally different. Our convention is that the angle α is positive when anticlockwise and negative otherwise. In general we look at the instrument so that the emitter stays at the left of the receiver, as in Fig. 1; in this way a negative angle α points toward the optical axis.
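For illustration, the following minimal Python sketch evaluates Eqs. (1) and (2); the numerical values in the example are placeholders and not the Table 1 specifications.

```python
# Sketch of Eqs. (1)-(2): ranges where the beam starts entering (S0min) and is
# fully contained in (S0max) the receiver FOV. All values are illustrative
# placeholders, not the Table 1 specifications.

def overlap_range(d, dt, dr, theta_r, theta_t, alpha=0.0):
    """Return (S0min, S0max); lengths in meters, angles in radians.
    alpha follows the sign convention of Fig. 1 (negative toward the optical axis)."""
    s0_min = (2.0 * d - dt - dr) / (theta_r - 2.0 * alpha + theta_t)
    s0_max = (2.0 * d + dt - dr) / (theta_r - 2.0 * alpha - theta_t)
    return s0_min, s0_max

# hypothetical biaxial system: 18 cm separation, 1 cm beam, 30 cm aperture,
# 1 mrad FOV, 0.5 mrad divergence, no tilt
print(overlap_range(d=0.18, dt=0.01, dr=0.30, theta_r=1e-3, theta_t=0.5e-3))
```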

In addition to the nonlinearities induced by this incomplete overlap between laser beam and FOV, one should also consider the effects of close-range defocusing. In fact, the light backscattered from portions of the atmosphere at finite distances from the receiver, and furthermore not along its optical axis, is focused neither on the telescope focal plane nor along the optical axis. It is rather displaced sideways and farther from the pinhole, which is usually placed on the focal plane of the telescope and acts as the field stop of the optical system. Consequently, the light backscattered from short distances is only partially intercepted by the pinhole because of its displaced and larger beam transverse section (BTS) on the focal plane. All these arguments are thoroughly discussed in the literature. In particular we refer to T. Halldorsson and J. Langerholc [10] and to the more recent [11] to develop a simple model of the lidar system.

2A. Defocusing

We model the telescope as a thin lens with a pinhole centered along the optical axis on the telescope focal plane, as described in Fig. 2. We introduce the concept of defocusing as a measure of the energy observed beyond the pinhole with respect to the total energy collected by the telescope. We define the defocusing of a point P in the object space, at distance S from the lens plane and displaced by a distance d from the optical axis, as the ratio between the portion of the lens area that forms the image of P in the image space and the total area of the lens. A visual representation of defocusing from different regions of the object space of the lens is provided in Fig. 2.

The effect of defocusing is to decrease the light intensity received from points that are at a finite distance and displaced from the optical axis. Such a reduction of the light collected by the receiving optics depends on the lidar construction parameters. For a laser beam section of finite size, the defocusing can be considered as an integral property over the points of the beam section, so that points that are not completely focused contribute less than those closer to the optical axis. The calculations of defocusing are straightforward and can be done considering the thin lens equation

\frac{1}{S} + \frac{1}{S_i} = \frac{1}{f}, \qquad (3)
where S is the distance of the object from the lens along the optical axis and S_i is the distance of its image from the lens. The magnification of the lens is given by
M = \frac{f}{S - f}. \qquad (4)
Using Eqs. (3, 4) we can find the image of a point. Using the coordinates of the image we can project the pinhole onto the plane of the lens. The overlap area of this projection with the lens, divided by the area of the lens, is the defocusing factor of the point. For an ideal pinhole placed on the focal plane and centered along the optical axis, its projection from the image point P_i = (S_i; d_i) of the object point P = (S; d) is described by the following equations:
y_c(S,d) = \frac{f}{S_i - f}\, d_i = d, \qquad (5)
R(S) = \frac{S_i}{S_i - f}\, r_h = \frac{S}{f}\, r_h, \qquad (6)
where y_c(S,d) is the position of the center, R(S) is the radius of the projection of the pinhole, and r_h is the pinhole radius. To better describe the formalism used we refer to Fig. 2. The position of the center depends both on the range and on the displacement from the optical axis of the object point. The radius of the projection depends only on the range. This information can be used in the equation proposed in the appendix of [10] for the intersection area of two circles, replacing r_1, r_2, and r from the conventions of [10], respectively, with R(S), d_r/2 (the lens radius), and |y_c(S,d)| as the distance between the two centers. The result is a function γ(S,d) that describes the defocusing of an object point P depending on its distances S and d from, respectively, the lens plane and the lens axis, as reported in the following equation:
\gamma(S,d) = \frac{A\big[d_r/2,\, R(S),\, |y_c(S,d)|\big]}{\pi d_r^2/4}, \qquad (7)
where A[d_r/2, R(S), |y_c(S,d)|] is the overlap area of the two circles, described in the appendix of [10].
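As an illustration, the following Python sketch evaluates γ(S,d) from Eqs. (5)-(7); the circle-circle intersection area is written out with the standard closed-form expression in place of the formula of [10], and all numerical values are placeholders rather than the Table 1 specifications.

```python
# Sketch of Eqs. (5)-(7): defocusing factor gamma(S, d) for a thin lens of
# diameter d_r and focal length f, with a pinhole of radius r_h on the focal
# plane. The circle-circle intersection area is the standard closed-form
# expression, used here in place of the formula from the appendix of [10].
import math

def circle_intersection_area(r1, r2, dist):
    """Intersection area of two circles of radii r1, r2 whose centers are `dist` apart."""
    if dist >= r1 + r2:
        return 0.0
    if dist <= abs(r1 - r2):                      # one circle entirely inside the other
        r = min(r1, r2)
        return math.pi * r * r
    a1 = r1 * r1 * math.acos((dist**2 + r1**2 - r2**2) / (2.0 * dist * r1))
    a2 = r2 * r2 * math.acos((dist**2 + r2**2 - r1**2) / (2.0 * dist * r2))
    a3 = 0.5 * math.sqrt((-dist + r1 + r2) * (dist + r1 - r2)
                         * (dist - r1 + r2) * (dist + r1 + r2))
    return a1 + a2 - a3

def defocusing(S, d, f, d_r, r_h):
    """gamma(S, d): fraction of the lens area imaging the point (S, d) through the pinhole."""
    y_c = d                      # Eq. (5): center of the pinhole projection on the lens plane
    R = S * r_h / f              # Eq. (6): radius of the projection
    lens_area = math.pi * d_r**2 / 4.0
    return circle_intersection_area(d_r / 2.0, R, abs(y_c)) / lens_area  # Eq. (7)

# example: hypothetical receiver, point 200 m away and 5 cm off axis
print(defocusing(S=200.0, d=0.05, f=1.0, d_r=0.30, r_h=0.5e-3))
```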

We computed this function for a system whose parameters are reported in Table 1. The result is depicted in Fig. 3. The geometric overlap is expected at a range of approximately 144 m according to Eq. (2), considering α = 0 rad. After these considerations it is clear that Eq. (2) should be reconsidered in order to obtain the real altitude at which we expect to collect the full backscattered energy. The results obtained are detailed in Appendix A with Eq. (A2). The resulting value is approximately 1345 m using α = 0 rad.

To get the integral defocusing at a given distance for a finite-size laser beam section of unevenly distributed energy density, we integrate the product of energy density and defocusing over each point of the beam section and divide the result by the full energy within the beam section. The situation is even more complicated in practice, since lidar systems may be imperfectly aligned or the pinhole may not be properly placed on the telescope focal plane, as we assumed in the ideal situation presented above. These experimental uncertainties add complexity to the study. Because of the difficulties inherent in finding an a priori correction that is always valid, an experimentally retrieved correction function should be pursued on a case-by-case basis. Different approaches have been reported in the literature [7]. We have elaborated a model that takes into account not only the incomplete overlap and defocusing effects for a perfectly aligned system but also the effects of a partial misalignment of the laser–telescope axes along the sagittal and meridional angles α and β, as well as a possible displacement dx of the pinhole along the optical axis, which might not be exactly placed on the focal plane.

2B. Lidar Model

Following [10, 11], we implemented a simple lidar model. In order to describe the formulation of the model we assume cylindrical symmetry around the optical axis. We also assume that the distribution of energy of the laser beam is symmetrical with respect to the center of the BTS and that it is zero outside it. Within the coordinate system of the BTS (r, θ) we express our distribution of energy dE(r,S) as a normalized distribution

dE(r,S) = \begin{cases} 1/\big(\pi R_d(S)^2\big), & \text{if } r \le R_d(S) \\ 0, & \text{if } r > R_d(S), \end{cases} \qquad (8)
(top-hat beam) where r is the distance from the center of the BTS and R_d(S) is the radius of the BTS at a range S. So we can express the normalized energy of the BTS as
E(S) = \int_0^{2\pi}\!\!\int_0^{\infty} dE(r,S)\, dr\, d\theta = 1. \qquad (9)
Equations (8, 9) are dimensionless because we are considering a normalized distribution. An idea of the errors introduced into the model by considering a uniform distribution instead of a real one can be found in [11]. For a point of coordinates (S, d) we can calculate the defocusing using the considerations developed in Section 2A, in particular Eq. (7). We recall that d_r is the diameter of the receiver, R(S) is the radius of the projection of the pinhole on the lens plane, and y_c(S,d) is the center of this projection. For the coordinates used, please refer to Fig. 2.

The defocusing for the BTS of the laser at range S can be calculated by integrating the product of the normalized energy distribution and the defocusing of each point:

\Gamma(S) = \int_0^{2\pi}\!\!\int_0^{R_d(S)} \gamma\big(S, d(r,\theta,S)\big)\, dE(r,S)\, dr\, d\theta. \qquad (10)

The function d(r,θ,S) expresses the distance from the optical axis of a point of the BTS at range S. Recalling the conventions used for defining the instrumental parameters and assuming small angles, we can express

R_d(S) \simeq \frac{d_t + \theta_t S}{2} \qquad (11)
as the radius of the BTS at a distance S. The distance of the center of the BTS at a range S can be obtained from the following equation:
d_c(S) \simeq \left[\left(\frac{d_0}{2} + \alpha S\right)^2 + S^2 \beta^2\right]^{1/2}, \qquad (12)
where d0 is the distance between the lens center and the laser beam when S=0 and β is the meridional angle. The distance from the optical axis of a point of the laser BTS in the coordinate system of the BTS can be expressed as
d(r,\theta,S)^2 = d_c(S)^2 - 2 r\, d_c(S) \cos(\theta) + r^2. \qquad (13)

In order to add further realism, let us consider a displacement of the pinhole along the optical axis of the telescope. This can be accounted for by modifying the results of Eqs. (5, 6): for a generic displacement d_x of the pinhole along the optical axis we can give a modified version of Eqs. (5, 6). We consider d_x such that f > |d_x|. We assume it to be positive when the pinhole is placed between the focal plane and the lens plane, and negative otherwise.

y_c[S, d(r,\theta,S)] = \frac{d(r,\theta,S)\,(f + d_x)}{\left| f - \dfrac{S\, d_x}{f + d_x} \right|}, \qquad (14)
R(S) = \frac{S\, r_h}{\left| f - \dfrac{S\, d_x}{f + d_x} \right|}. \qquad (15)
Substituting those results in Eq. (7) and then in Eq. (10), we obtain the lidar model used in this article. A displacement of the pinhole away from the optical axis is more complex to consider because the assumption of cylindrical symmetry is broken. An approach to such a problem can be seen in [12]. In general, the complexity of a lidar is not reduced to those few parameters. A more realistic model of a lidar system was proposed in [13], but for our purpose the idealized version of [11] is sufficient to fit the experimental data.
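As a concrete illustration of Eqs. (10)-(15), the following Python sketch evaluates the integral defocusing of the whole BTS on a polar grid. It reuses the circle_intersection_area() helper of the Section 2A sketch, includes the polar area element r dr dθ so that the top-hat distribution of Eq. (8) integrates to one, and all numerical values are illustrative placeholders.

```python
# Sketch of Eqs. (10)-(15): integral defocusing Gamma(S) of the beam transverse
# section, for a top-hat beam and free parameters alpha (sagittal tilt),
# beta (meridional tilt) and dx (pinhole shift along the optical axis).
# circle_intersection_area() is the helper from the Section 2A sketch.
import math

def bts_defocusing(S, f, d_r, r_h, d_t, theta_t, d0, alpha, beta, dx,
                   n_r=40, n_theta=72):
    """Numerical version of Eq. (10) on a polar grid over the beam section."""
    Rd = (d_t + theta_t * S) / 2.0                      # Eq. (11): BTS radius
    dc = math.hypot(d0 / 2.0 + alpha * S, beta * S)     # Eq. (12): BTS center offset
    denom = abs(f - S * dx / (f + dx))                  # Eqs. (14)-(15): displaced pinhole
    R_proj = S * r_h / denom
    lens_area = math.pi * d_r ** 2 / 4.0
    dE = 1.0 / (math.pi * Rd ** 2)                      # Eq. (8), top-hat beam

    dr_step = Rd / n_r
    dth = 2.0 * math.pi / n_theta
    total = 0.0
    for i in range(n_r):
        r = (i + 0.5) * dr_step                         # midpoint rule in r
        for j in range(n_theta):
            th = j * dth
            d_pt = math.sqrt(dc**2 - 2.0 * r * dc * math.cos(th) + r**2)  # Eq. (13)
            y_c = d_pt * (f + dx) / denom                                 # Eq. (14)
            gamma = circle_intersection_area(d_r / 2.0, R_proj, y_c) / lens_area  # Eq. (7)
            # polar area element r dr dtheta makes the top-hat dE integrate to one
            total += gamma * dE * r * dr_step * dth
    return total

# example: defocusing of the whole beam section at 500 m for a hypothetical setup
print(bts_defocusing(S=500.0, f=1.0, d_r=0.30, r_h=0.5e-3, d_t=0.01,
                     theta_t=0.5e-3, d0=0.36, alpha=0.0, beta=0.0, dx=0.0))
```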

3. Correction for Close-Range Returns

We propose here a technique that involves the acquisition of two consecutive profiles of the atmosphere with the lidar system tilted at two different elevation angles. We assume that the vertical stratification of the atmosphere remains constant over time between the two consecutive acquisitions and is homogeneous over horizontal distances comparable to the altitude range of our sampling. This relaxes the assumption of horizontal uniformity used in [4, 7]. The principal idea is that the profile taken at the smaller elevation angle samples the same stratified atmosphere but reaches the condition of full overlap at a lower altitude than the profile acquired at the greater elevation angle. Moreover, at any given altitude, the profile taken at the smaller elevation angle suffers less from defocusing than the profile acquired at the larger elevation angle. Thus, we use the acquisition at the lower elevation to correct the defocusing of the higher one and obtain a correction for the vertical profile. This correction is applied to the profile with lower elevation. The new corrected profile is used with the uncorrected vertical one to evaluate a new correction. By iteratively applying this method, we reconstruct the true atmospheric profile down to the minimum sampling altitude. In fact, this empirically determined correction often cannot be used directly because it is affected by noise that would introduce artifacts and spurious features into the reconstructed profiles. This noise results from horizontal inhomogeneities in the atmosphere, whose existence violates the assumptions made so far. We remove the need to invoke atmospheric horizontal homogeneity by using the empirically determined function to constrain the free parameters α, β, and dx in our model. The retrieved parameters are then used to calculate a modeled function Γt that is unaffected by small horizontal inhomogeneities.

3A. Description of the Method

A rigorous treatment of our technique is proposed in Appendix B. Here we present the operational description of the iterative method to retrieve the experimental correction for the close range. Let us consider two lidar range-corrected signals (RCSs) after background subtraction at two different elevation angles, and let us define X1 as the acquisition taken at the lowest elevation angle ω1 and X2 as the one at the highest, ω2. In the following example we take ω2 = 90° and thus consider X2 as a vertical profile. We refer to the range using the symbol S and to the altitude using the symbol Z. We remark that the altitudes of a profile can be obtained from the range and the elevation angle ω by multiplying the range by sin ω.

  1. Both acquisitions are calibrated to an assumed molecular profile at the same altitude range.
  2. For a well aligned system the two profiles should have the same values in the full overlap region, at altitudes above Z_0 = S_{1\max} \sin(\omega_2), with S_{1\max} obtained from Eq. (A2) in Appendix A.
  3. At altitudes between zero and Z_0, X1 is interpolated onto the altitude grid of X2, obtaining the regridded profile X_{11}^{0}, where the extra subscript 1 indicates that this is an interpolated profile and the superscript gives the order of iteration, which is zero here because no correction has yet been applied to X1.
  4. The ratio \Gamma_1 = X_{11}^{0}/X_2 will be our first-step correction. It should be equal to unity in the region of full overlap, while it starts increasing at lower altitudes. Deviations from unity in the full overlap region are due to fluctuations of the signals caused by small atmospheric variability and noise. The deviations can be both positive and negative, but they are small if the measurements are taken closely in time and angle.
  5. We apply the \Gamma_1 correction over the range of X1, in order to obtain a first corrected acquisition X_1^1 that we consider to correctly reproduce the close-range lidar returns from a lower altitude than the original X1.
  6. We again interpolate X_1^1 onto the altitude grid of X2 to obtain the quantity X_{11}^{1}. The ratio \Gamma_2 = X_{11}^{1}/X_2 is now our second-step correction. Again we use this ratio to calculate a correction function for X1 over its range.

By iterating steps 3 to 5, we progressively lower the altitude above which we consider X_1^i a reliable acquisition of the slant atmospheric profile. The number of iterations n needed to converge to a stable value for the profile depends on the angles ω1 and ω2. The closer they are, the more iterations are needed. The number of iterations can be calculated using Eq. (A2): it is the number of iterations needed to correct a hypothetical profile that reaches full overlap at S_0 down to a range equal to S_{0\min}:

n = \mathrm{int}\!\left[\frac{\ln\!\big(S_{0\min}/S_0\big)}{\ln\!\big(\sin\omega_1/\sin\omega_2\big)} + 1\right]. \qquad (16)

The range S_0 is generally unknown, but an estimate can be obtained using the nominal parameters and Eq. (A2). Performing more iterations than the needed number will not affect the function obtained. The Γn function is the empirical correction function for the overlap and defocusing effects. However, before using it to correct the profiles we need to clarify some points: it is important that the difference between the two angles be large enough to ensure that, when complete focus and overlap are reached for X2 at a given altitude, there is a significant portion of the vertical range over which the acquisition X1 can be considered reliable. On the other hand, if the angle between the two acquisitions is too large, the differences in the sampled air masses might be significant and would induce errors in the estimated correction. As noted earlier, such an experimentally retrieved correction function Γ will suffer from “noise” due to experimental errors and lack of horizontal homogeneity. We therefore use the experimentally retrieved correction function Γ to constrain the free parameters in our lidar model and thus find a new, modeled correction function Γt, unaffected by experimental errors or by a lack of horizontal homogeneity in the atmosphere.
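The following Python sketch summarizes the iterative procedure of this section and the iteration count of Eq. (16); array names, the interpolation choices, and the handling of the full-overlap region are illustrative assumptions rather than prescriptions of the method.

```python
# Sketch of the iterative correction of Section 3A.
import numpy as np

def iterations_needed(s0, s0_min, omega1_deg, omega2_deg):
    """Eq. (16): iterations needed to push the correction from S0 down to S0min."""
    ratio = np.sin(np.radians(omega1_deg)) / np.sin(np.radians(omega2_deg))
    return int(np.log(s0_min / s0) / np.log(ratio) + 1.0)

def iterative_correction(r1, x1, omega1_deg, r2, x2, omega2_deg, z_full, n_iter):
    """Return the correction Gamma_n sampled on the altitude grid of x2.
    x1, x2: background-subtracted RCS arrays calibrated to the molecular profile."""
    z1 = r1 * np.sin(np.radians(omega1_deg))   # altitudes of the slant profile
    z2 = r2 * np.sin(np.radians(omega2_deg))   # altitudes of the (quasi-)vertical profile
    x1_corr = x1.copy()
    gamma = np.ones_like(x2)
    for _ in range(n_iter):
        # steps 3-4: regrid the (corrected) slant profile onto the altitudes of x2
        x1_on_z2 = np.interp(z2, z1, x1_corr)
        gamma = np.where(z2 < z_full, x1_on_z2 / np.maximum(x2, 1e-12), 1.0)
        # step 5: correct x1 as a function of its own range, using Gamma at altitude = range
        x1_corr = x1 * np.interp(r1, z2, gamma)
    return gamma
```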

3B. Numerical Simulations

A numerical simulation and validation of this technique and of our model have been carried out by using a known atmospheric vertical profile and simulating two lidar acquisitions taken at two different elevation angles.

We used an atmospheric profile of molecular backscatter computed from the standard atmospheric profiles of [14]. The profile used is plotted in Fig. 4. We created two synthetic nighttime RCSs corresponding to ω1 = 40° and ω2 = 90°, which are displayed in Fig. 4. To each profile we applied random noise from a Gaussian distribution with a standard deviation of 15% of the signal at 1500 m. This noise was applied to both profiles after multiplication by the same overlap function.
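A minimal sketch of such a synthetic test is given below; the molecular profile, the overlap function, and the noise scaling are simplified stand-ins for the quantities actually used, intended only to show the structure of the simulation.

```python
# Sketch of the synthetic test of Section 3B: two noisy range-corrected signals
# built from the same molecular profile and the same overlap/defocusing function,
# at elevations of 40 and 90 degrees. beta_mol(), overlap() and the noise scaling
# are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def beta_mol(z):
    """Toy molecular backscatter profile, exponentially decaying with altitude [a.u.]."""
    return np.exp(-z / 8000.0)

def overlap(s, s_full=1345.0):
    """Toy overlap/defocusing function: 0 at the instrument, 1 beyond s_full."""
    return np.clip(s / s_full, 0.0, 1.0) ** 2

def synthetic_rcs(ranges, elevation_deg, rel_err_1500=0.15):
    z = ranges * np.sin(np.radians(elevation_deg))
    rcs = overlap(ranges) * beta_mol(z)
    # Gaussian noise whose standard deviation equals rel_err_1500 of the signal at 1500 m
    sigma = rel_err_1500 * np.interp(1500.0, ranges, rcs)
    return rcs + rng.normal(0.0, sigma, ranges.size)

ranges = np.arange(7.5, 5000.0, 7.5)       # 7.5 m range bins (illustrative)
x1 = synthetic_rcs(ranges, 40.0)
x2 = synthetic_rcs(ranges, 90.0)
```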

The algorithm proposed in Section 3A was applied to these two virtual acquisitions. The “experimental” correction function Γ that we obtained from our iterative algorithm is shown in Fig. 5 (black dots). It is compared to the function computed by using Eq. (7) with the nominal parameters of our system as reported in Table 1 (dashed line) and, alternatively, by choosing these parameters in Eq. (7) so as to fit the modeled correction function Γt to the “experimental” data (solid line). The fitting procedure is accomplished by a least-squares minimization using the Levenberg–Marquardt algorithm. It is noteworthy that the dashed and solid lines, originating from an “a priori” and an “a posteriori” assessment of the parameters in Eq. (7), superimpose well.
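The fitting step can be sketched as follows, assuming the bts_defocusing() model of the Section 2B sketch; the use of the reciprocal of the modeled overlap/defocusing factor as the quantity compared with the experimental Γ is an assumption of this sketch, not a prescription of the paper.

```python
# Sketch of the least-squares fit of (alpha, beta, dx) with the Levenberg-Marquardt
# driver of scipy. bts_defocusing() is the model sketched in Section 2B.
import numpy as np
from scipy.optimize import least_squares

def fit_model(S, gamma_exp, instrument, p0=(0.0, 0.0, 0.0)):
    """Constrain (alpha, beta, dx) so that the modeled correction matches the
    experimental Gamma. `instrument` holds the fixed nominal constants
    (f, d_r, r_h, d_t, theta_t, d0) expected by bts_defocusing()."""
    def residuals(p):
        alpha, beta, dx = p
        model = np.array([bts_defocusing(s, alpha=alpha, beta=beta, dx=dx, **instrument)
                          for s in S])
        # compare the reciprocal of the modeled overlap/defocusing factor with the
        # experimental correction; this convention is an assumption of the sketch
        return 1.0 / np.clip(model, 1e-6, None) - gamma_exp
    return least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt
```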

In order to assess the robustness of our estimation of the parameters α, β, and dx of the model described in Section 2B, we performed the calculations above up to 1500 times, applying noise equal to 15% of the signal at 1500 m. The results of our Monte Carlo simulations are shown in Fig. 6. The noise we applied was strong compared to typical systems with an integration time of 1 min. The average overlap curve superimposes well with the one used to generate the profiles. Its relative errors could be used as an estimate of the real errors of the fit. We can see from the scatter plots A, B, and C that different triplets of the parameters give the same overlap function, compatible within the noise limits with the one used to create the profiles. The reason for this is that the problem is ill-posed, because the solution is not unique: similar overlap functions can be obtained with different experimental setups. The aim of the inversion of Eq. (7) is to obtain a realistic overlap function that can correct the signal more effectively. The use of such a method for estimating the state of alignment of the system needs further studies.

3C. Experimental Results

To check the validity of our correction algorithm we should apply it to a measured lidar profile and compare it with a real profile measured by an instrument that does not suffer from defocusing effects: for example, an in situ backscatter sonde on board an ascending balloon. However, this would be very difficult and expensive to set up. An alternative way to check the correctness of our approach is to use a system that can also measure the atmospheric N2 Raman signal and to use it to calibrate the elastic signal [8]. Since both acquisitions suffer from the same incomplete overlap and defocusing, the ratio of the two quantities is unaffected. Unfortunately not many systems, especially the more portable ones, are powerful enough to acquire a Raman signal. In our case, our system is capable of performing Raman measurements only during nighttime, while the high sky background does not allow such measurements during daylight. Hence, we tested our method in the field during nighttime and compared the results of our correction with the Raman calibration. The characteristics of the system used in the present work are summarized in Table 1.

We acquired three successive lidar profiles at elevation angles of 90°, 38.9°, and again 90°, with an acquisition time of 60 s, to ensure the constancy of the atmospheric characteristics in the time frame of our test. With those profiles we performed the calculations described in Section 3 for both measured pairs. Finally, we also acquired a Raman and an elastic profile over a period of 10 min with the aim of verifying the correction obtained with our method.

Figure 7 displays the three lidar acquisitions as RCSs after background subtraction, calibrated on the expected molecular profile as a function of altitude. The calibration was performed by correcting for the extinction using a lidar ratio of 40 sr. This choice was made taking into account the most likely characteristics of the aerosol present at our measurement site according to [15]. Different choices of the lidar ratio did not alter the outcome of our test.

In Fig. 7, a comparison of the RCS acquired at 38.9° with the ones taken at 90°, before and after it, is shown. The profiles superimpose well from 1300 m upward, suggesting that full overlap was reached at that altitude. This result was further confirmed by Eq. (A2) in Appendix A, which yields a nominal value of 1342 m. Considering the nominal parameters of our system, we chose an altitude of 1500 m as the point from which to start our iterative algorithm.

Figure 8 shows our correction functions Γn obtained by applying the iterative algorithm using the two profiles taken at 90° and the one taken at 38.9°. The gray solid line represents their average.

It is clear from Fig. 8 that the two correction functions differ significantly and do not have the smooth shape that would be desirable, because of the lack of horizontal homogeneity in the atmospheric stratification during our test. We used their average to fit our model, using the angles α, β, and the displacement dx as free fitting parameters. As noted before, the inversion of Eq. (10) is an ill-posed problem. Hence, the parameters obtained from the fitting procedure are not unique and may not represent the system realistically; hence, we are not particularly interested in them. What is of importance is that the fitting procedure produces an estimate of the overlap function that is effective in describing the behavior of the system in the region of incomplete overlap. The result of the fitting is displayed in Fig. 9 as a dotted gray line. In the same figure we plotted the overlap function estimated from the Raman channel and the nominal curve obtained by using the lidar specifications reported in Table 1. The Raman channel was used as proposed in [8], and we expect it to give the true overlap function of the system. It departs from the nominal curve, as we would expect for a real system. Our retrieval reproduces the Raman curve rather than the nominal curve. Therefore, we can assume that our method provides a good approximation of the overlap function.

3D. Error Estimations

To estimate the systematic error present in our retrieval we compared our overlap function with the one obtained from the simultaneous use of the Raman channel, which allowed us a Raman calibration, considered hereafter as the reference. The solid black line reported in Fig. 10 represents the relative systematic error, computed as the absolute value of the difference between our proposed correction and the Raman calibrated signal, divided by the Raman calibrated signal. We can see from the figure that the systematic errors remain below 10% almost down to the bottom of the incomplete overlap range but increase quickly as the retrieved signal approaches the range S_{0\min} \simeq 32 m. In the same figure, the gray line represents the relative random error of the lidar detected signal itself, computed as suggested in [16]. We observe that it has a minimum around 0.2 km; this is due to the fact that the non-range-corrected signal has a maximum where the beam enters the FOV of the telescope, which produces a minimum of the relative error. We can see that the systematic error induced by our correction scheme remains at acceptable levels throughout the reasonable range of exploitation of the lidar detected signal, i.e., approximately down to 100 m. Appendix B reports a detailed computation of the propagation of the random errors affecting the experimentally retrieved correction function Γ (such as the two reported as gray dotted and dashed lines in Fig. 8). The results there presented, obtained by using up to nine iterations to correct the signal down to 100 m, are reported in Fig. 10. There, the dashed black line represents the errors to be attributed to the experimentally retrieved correction function Γ, calculated using Eq. (B7) by propagating the random errors of the lidar detected signal reproduced by the solid gray line. Finally, the dashed gray line in Fig. 10 represents the number of iterations needed, step by step, to obtain the results presented. It is interesting to note that the discontinuities in the error profile (dashed black line) are related to the number of iterations (dashed gray line). Each subsequent iteration keeps memory of the relative errors of the previous iterations, as can be seen from Eq. (B1). If the percentage error were constant, the error would be a series of equal steps. However, the relative errors in a lidar profile are inversely related to the magnitude of the signals, and this leads to the sawtooth shape we can observe in Fig. 10. Such a determination of the errors affecting the experimentally retrieved correction function Γ can be used as a weighting factor when constraining Γ to the modeled Γt.

4. Conclusions

Following the approach outlined in [10] we obtained Eq. (A2) as a definition of the range at which a lidar signal can be considered not underestimated by incomplete overlap effects. This equation can be used in the evaluation of the minimum altitude of full overlap for lidar systems. A new iterative scheme employing lidar acquisitions at two elevations was proposed to experimentally estimate an overlap correction function Γ. Such a scheme was described and applied to retrieve Γ. It has then been shown that such an empirical correction suffers from drawbacks due to the lack of horizontal homogeneity in the atmosphere. A simple geometric optical model of a lidar was then introduced, taking into account both the incomplete overlap and close-range defocusing effects, as well as possible misalignments of the lidar system. The application of the model to constrain the experimentally determined overlap function Γ produced a modeled Γt, which does not suffer from the problems affecting Γ. The proposed method, employing two lidar acquisitions at different elevations, an iterative procedure to retrieve an experimental correction, and a fitting procedure to obtain a modeled correction, was validated using a Monte Carlo approach, leading to promising results. Finally, the method to correct for incomplete overlap and defocusing in the detection of close-range lidar returns has been experimentally implemented and validated on a real lidar system with Raman N2 detection capabilities. The method appears to be robust and easy to implement. It allows the lidar profiles to be extended well below the altitude of full overlap. The estimated errors showed that the correction of the signal in the incomplete overlap range can be considered reliable (i.e., with systematic errors below 10%) for a large part of that range. In principle, the method can also give information about the state of alignment of the system. Subsequent developments should be pursued in order to use it as a diagnostic tool.

Appendix A

Equations (1, 2) describe the ranges where the laser BTS starts entering the FOV and where it is fully inside it. On the basis of the considerations on defocusing, we define here the ranges where the beam enters the region of full focus. In other words, considering Fig. 3, we want to obtain the range where the beam is completely within the black region. To obtain this result we can use simple geometrical arguments. Let us consider Eqs. (5, 6). We can estimate the lowest range s_0 in the FOV where γ(S,d) is 1. This range can be estimated using Eq. (6), assuming that the radius of the pinhole projection from the point of the image space onto the lens plane is equal to the lens radius:

s_0 = \frac{d_r f}{2 r_h}. \qquad (A1)

The point of the object space lying on the optical axis at a distance s_0 from the receiver is the vertex of the black cone in Fig. 3, located at 300 m. The intersection of this cone with the beam defines the ranges between which the beam is entering the completely focused region of the FOV.

Assuming small angles, we can use the approximation tan α ≈ α. The maximum range S_{1\max} at which the external part of the beam enters the fully focused region is given by the following equation:

S_{1\max} = \frac{2d + d_r + d_t}{\theta_r - \theta_t - 2\alpha}. \qquad (A2)

For a system as in Table 1, this equation gives S_{1\max} \simeq 1343 m, and it can be used when the angles of the optical system satisfy the following relation:

\theta_r < 2\alpha + \theta_t + \frac{2 d_r}{f\,(d + d_t/2)}\, r_h. \qquad (A3)
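A minimal sketch evaluating Eqs. (A1) and (A2) as reconstructed above is given below; the numerical values are placeholders and not the Table 1 specifications.

```python
# Sketch of Eqs. (A1)-(A2): vertex of the full-focus cone and the range where
# the external edge of the beam is fully focused. All values are placeholders.

def full_focus_ranges(d, dt, dr, f, rh, theta_r, theta_t, alpha=0.0):
    s0 = dr * f / (2.0 * rh)                                          # Eq. (A1)
    s1_max = (2.0 * d + dr + dt) / (theta_r - theta_t - 2.0 * alpha)  # Eq. (A2)
    return s0, s1_max

print(full_focus_ranges(d=0.18, dt=0.01, dr=0.30, f=1.0, rh=0.5e-3,
                        theta_r=1e-3, theta_t=0.5e-3))
```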

The entrance of the internal part of the beam within the full overlap region is more complex and beyond the scope of this work.

Appendix B

Here we present the analytical treatment of the iterative technique described in Section 3A, considering X2(r) as the signal from the vertical profiling and X1(r) as the one with elevation angle ω < 90°, both functions of the range r and range corrected. In order to relate this appendix to Section 3A, the reader must take into account that the previously described method is applied to gridded data; here, since r is considered a continuous variable, no interpolations are needed in order to obtain the corresponding values at grid points. We simply note that the range r for X2 and the range r/sin ω for X1 correspond to signals from the same atmospheric altitude. In general we can say that

X_2(r) \le X_1\!\left(\frac{r}{\sin\omega}\right), \qquad (B1)
with equality when r \ge S_{0\max}. The first correction is
\Gamma_1(r) = \frac{X_1(r/\sin\omega)}{X_2(r)}. \qquad (B2)
This correction is used to correct X1:
X_1^1(r) = X_1(r)\,\Gamma_1(r) = \frac{X_1(r)\, X_1(r/\sin\omega)}{X_2(r)}; \qquad (B3)
then we implement our second-order correction:
\Gamma_2(r) = \frac{X_1^1(r/\sin\omega)}{X_2(r)} = \frac{X_1(r/\sin\omega)\,\Gamma_1(r/\sin\omega)}{X_2(r)} = \frac{X_1(r/\sin\omega)\, X_1(r/\sin^2\omega)}{X_2(r)\, X_2(r/\sin\omega)}. \qquad (B4)

The generalization of this formula for a correction of order n is

\Gamma_n(r) = \prod_{i=0}^{n-1} \frac{X_1\!\big(r/\sin^{i+1}\omega\big)}{X_2\!\big(r/\sin^{i}\omega\big)}. \qquad (B5)

By reordering the terms we get the most convenient form:

\Gamma_n(r) = \frac{X_1\!\big(r/\sin^{n}\omega\big)}{X_2(r)} \prod_{i=1}^{n-1} \frac{X_1\!\big(r/\sin^{i}\omega\big)}{X_2\!\big(r/\sin^{i}\omega\big)}. \qquad (B6)
We estimated the errors for our correction scheme. We assumed that the angle and the range have no errors, in order to simplify the calculation. It is clear that the relative error of our corrected signal at each range can be expressed as the sum of all the relative errors used to bring the correction to that specific range. These arguments finally produce the formula to estimate the relative error:
\epsilon_{\Gamma_n}(r) = \sum_{i=0}^{n-1} \left[\epsilon_{X_1}\!\big(r/\sin^{i+1}\omega\big) + \epsilon_{X_2}\!\big(r/\sin^{i}\omega\big)\right]. \qquad (B7)
We have to remark that \epsilon_{X_1}(r) = 0 when r \ge S_{0\max}\sin\omega and that \epsilon_{X_2}(r) = 0 when r \ge S_{0\max}. An example of the results of this formula is shown in Fig. 10 as the analytical error curve.
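The following sketch propagates the signal errors into ε_Γn according to Eq. (B7); the function and argument names, and the callable form of the error profiles, are illustrative assumptions.

```python
# Sketch of Eq. (B7): relative error of Gamma_n from the relative errors of the signals.
import math

def gamma_relative_error(r, n, omega_deg, eps_x1, eps_x2, s0_max):
    """Relative error of Gamma_n at range r.
    eps_x1, eps_x2: callables giving the relative errors of X1 and X2 at a range."""
    s = math.sin(math.radians(omega_deg))
    total = 0.0
    for i in range(n):                        # i = 0 ... n-1, as in Eq. (B5)
        r1 = r / s ** (i + 1)                 # argument of X1
        r2 = r / s ** i                       # argument of X2
        total += 0.0 if r1 >= s0_max * s else eps_x1(r1)
        total += 0.0 if r2 >= s0_max else eps_x2(r2)
    return total
```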

The measurements were performed in the framework of the project AEROCLOUDS (Study of the Direct and Indirect Aerosol Effects on Climate) supported by the Italian Ministry of Education, University and Research (MIUR).

Table 1. Lidar System Specifications

Fig. 1 Conventional lidar design approach. Taken from Roberts and Gimmestad [9].

Fig. 2 Three schematic views of a lidar system are proposed. In all views, the lens, the pinhole, the lens plane, and the optical axis are represented. On the right side of the lens the regions of different defocusing are presented and on the left side, using the same colors, their images. The region focused totally outside the pinhole is filled with diagonal lines. The region focused partially within the pinhole is filled with horizontal lines. The region focused inside the pinhole is filled with vertical lines. In the different views, points and their images are considered. From each point we projected the pinhole extremes and the center onto the lens plane to show the intersection with the lens: in A there is full intersection, in B there is partial intersection, and in C there is no intersection.

Fig. 3 Description of the defocusing effect within the FOV using the lidar parameters collected in Table 1. The solid lines represent the laser beam borders (light gray) and the FOV borders (dark gray). The nonwhite area is the geometrical FOV; the gray scale from 0 (white) to 1 (black) shows the region where the effect of defocusing is present, so that points in that area are only partially imaged through the pinhole. In this example, defocusing effects affect the laser returns from near the ground up to 1.34 km.

Fig. 4 Simulation of molecular backscatter coefficient profiles measured by a lidar at two different elevation angles (solid gray line, 90° elevation angle; solid black line, 40° elevation angle), using the nominal overlap function depicted in Fig. 5. A random noise is applied to each profile according to the model of noise plotted in Fig. 10 and discussed in Appendix B. The noise level has been chosen to produce a relative error of 5% at 1500 m of range. The profiles corrected according to our method have also been plotted as dashed gray lines. The thicker solid gray line represents the atmospheric backscatter coefficient from the standard atmosphere model.

Fig. 5 Comparison of the theoretical function (dashed gray) used to generate the profiles in Fig. 4, the result of our algorithm of Section 3A (black dots), and the overlap function estimated by fitting with Eq. (10) (solid black).

Fig. 6 Distributions of the results obtained using our technique on 1500 couples of synthetic profiles generated with elevations 40° and 90°, applying noise as in Fig. 4 and using a relative error of 15% at 1500 m. A: angles α and β; B: angle α and pinhole displacement dx; C: sagittal angle β and pinhole displacement dx; D: distribution of the 1500 overlap curves. The solid gray line represents the average overlap function, while the dashed gray line represents its relative error.

Fig. 7 Calibration of the three acquired range corrected signals over molecular backscatter coefficient profiles for 532nm. The calibration was performed using an iterative scheme to correct for extinction assuming a lidar ratio value of 40 sr.

Fig. 8 Different overlap functions Γ: the black solid line results from the lidar optical model, with the system parameters as in Table 1, i.e., the nominal overlap function. The gray dotted and dashed lines are two different experimental determinations of Γ obtained by our iterative technique. The light gray solid line is the average of the two experimental ones.

Fig. 9 Comparison between the different overlap functions: the black solid line is the nominal overlap, as in Fig. 8; the black dashed line is the reference overlap correction curve obtained from the Raman calibration; the dotted gray line is Γt, the result of the fitting procedure of the free parameters of our lidar optical model, constrained by the average of two experimental Γ, depicted as a light gray solid line in Fig. 8.

Fig. 10 The solid black line is the relative systematic error, computed as the absolute value of the differences between our proposed corrections and the Raman calibrated signal, divided by the latter. The solid gray line is the relative random error of the lidar signal. The dashed black line is the relative uncertainty attributable to the experimentally retrieved correction function Γ, computed by propagating the relative random error of the lidar signals in the iterative procedure as described in the text. The dashed gray line represents the number of iterations needed, step by step, to obtain the results presented.


1. S. H. Melfi, J. D. Spinhirne, S.-H. Chou, and S. P. Palm, “Lidar observations of vertically organized convection in the planetary boundary layer over the ocean,” J. Climate Appl. Meteor. 24, 806–821 (1985). [CrossRef]  

2. D. Cooper and W. Eichinger, “Structure of the atmosphere in an urban planetary boundary layer from lidar and radiosonde observations,” J. Geophys. Res. 99, 22937–22948 (1994). [CrossRef]  

3. V. Matthias and J. Bösenberg, “Aerosol climatology for the planetary boundary layer derived from regular lidar measurements,” Atmos. Res. 63, 221–245 (2002). [CrossRef]  

4. Y. Sasano, H. Shimizu, N. Takeuchi, and M. Okuda, “Geometrical form factor in the laser radar equation: an experimental determination,” Appl. Opt. 18, 3908–3910 (1979). [CrossRef]   [PubMed]  

5. K. Tomine, C. Hirayama, K. Michimoto, and N. Takeuchi, “Experimental determination of the crossover function in the laser radar equation for days with a light mist,” Appl. Opt. 28, 2194–2195 (1989). [CrossRef]   [PubMed]  

6. S. W. Dho, Y. J. Park, and H. J. Kong, “Experimental determination of a geometric form factor in a lidar equation for an inhomogeneous atmosphere,” Appl. Opt. 36, 6009–6010 (1997). [CrossRef]   [PubMed]  

7. T. A. Berkoff, E. J. Welton, V. S. Scott, and J. D. Spinhirne, “Investigation of overlap correction technique for the micro-pulse lidar NETwork (MPLNET),” in Proceedings of IEEE Geoscience and Remote Sensing Symposium (IGARSS) (IEEE, 2003), Vol. 7, pp. 4395–4397. [CrossRef]  

8. U. Wandinger and A. Ansmann, “Experimental determination of the lidar overlap profile with Raman lidar,” Appl. Opt. 41, 511–514 (2002). [CrossRef]   [PubMed]  

9. D. W. Roberts and G. G. Gimmestad, “Optimizing lidar dynamic range by engineering the crossover region,” Proc. SPIE 4723, 120–129 (2002). [CrossRef]  

10. T. Halldorsson and J. Langerholc, “Geometrical form factors for the lidar function,” Appl. Opt. 17, 240–244 (1978). [CrossRef]   [PubMed]  

11. K. Stelmaszczyk, M. Dell'Aglio, S. Chudzyński, T. Stacewicz, and L. Wöste, “Analytical function for lidar geometrical compression form-factor calculations,” Appl. Opt. 44, 1323–1331 (2005). [CrossRef]   [PubMed]  

12. G. M. Ancellet, M. J. Kavaya, R. T. Menzies, and A. M. Brothers, “Lidar telescope overlap function and effects of misalignment for unstable resonator transmitter and coherent receiver,” Appl. Opt. 25, 2886–2890 (1986). [CrossRef]   [PubMed]  

13. R. Velotta, B. Bartoli, R. Capobianco, L. Fiorani, and N. Spinelli, “Analysis of the receiver response in lidar measurements,” Appl. Opt. 37, 6999–7007 (1998). [CrossRef]  

14. U.S. Standard Atmosphere, 1976, U.S. Government Printing Office, Washington, D.C. (1976).

15. C. Cattrall, J. Reagan, K. Thome, and O. Dubovik, “Variability of aerosol and spectral lidar and backscatter and extinction ratios of key aerosol types derived from selected Aerosol Robotic Network locations,” J. Geophys. Res. 110, D10S11 (2005). [CrossRef]  

16. P. B. Russell, T. J. Swissler, and M. P. McCormick, “Methodology for error analysis and simulation of lidar aerosol measurements,” Appl. Opt. 18, 3783–3797 (1979). [CrossRef]   [PubMed]  
