Abstract
It is well known to system engineers that speckle imposes a limitation on active-tracking performance, but scaling laws that quantify this limitation do not currently exist in the peer-reviewed literature. Additionally, existing models lack validation through either simulation or experimentation. With these points in mind, this paper formulates closed-form expressions that accurately predict the noise-equivalent angle due to speckle. The analysis separately treats both well-resolved and unresolved cases for circular and square apertures. When compared with the numerical results from wave-optics simulations, the analytical results show excellent agreement to a track-error limitation of $(1/3)\lambda /D$, where $\lambda /D$ is the aperture diffraction angle. As a result, this paper creates validated scaling laws for system engineers who need to account for active-tracking performance.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. INTRODUCTION
Finding the centroid of an image-plane irradiance pattern is necessary in many cases of object tracking and wavefront sensing. In an idealized noiseless detection scheme, this calculation represents the true geometric center of a resolved object or unresolved spot (assuming uniform illumination and object reflectivity). The presence of any source of noise, however, introduces uncertainty in this measurement that is typically quantified as a noise-equivalent angle (NEA). The general definition of NEA is an offset in angular position that produces unit signal-to-noise ratio (SNR), such that an actual offset by this angle would be indistinguishable from noise. With that in mind, another term for NEA is one-axis, one-sigma track error (denoted mathematically as ${\sigma _\theta}$).
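As a concrete (and deliberately minimal) sketch of this first-moment calculation, the following Python snippet estimates the centroid of a sampled irradiance pattern; the grid size and sample pitch are arbitrary placeholder choices.

```python
import numpy as np

def centroid(irr, dx=1.0):
    # First moment of a sampled image-plane irradiance pattern.
    # irr: 2D array of nonnegative irradiance samples
    # dx:  sample pitch (the centroid is returned in the same units)
    n_y, n_x = irr.shape
    x = (np.arange(n_x) - n_x // 2) * dx
    y = (np.arange(n_y) - n_y // 2) * dx
    total = irr.sum()
    cx = (irr.sum(axis=0) * x).sum() / total
    cy = (irr.sum(axis=1) * y).sum() / total
    return cx, cy

# A spot symmetric about the optical axis centers at (0, 0); any offset in
# the pattern shifts the first moment by the same amount.
img = np.zeros((65, 65))
img[30:35, 30:35] = 1.0  # uniform 5x5 patch centered on the grid
cx, cy = centroid(img)   # both ~0
```

In an idealized noiseless scheme this returns the geometric center exactly; speckle makes `irr` random, and the NEA quantifies the resulting one-axis spread of `cx` over many realizations.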
Tyler and Fried’s foundational work on passive NEA offers a gauge for the positional uncertainty of emissive or reflective objects under incoherent illumination [1]. Their treatment assumes the photon and/or sensor noise is what limits performance, giving rise to an NEA that grows without bound for increasingly large well-resolved objects. Though they were studying quad-cell detectors rather than centroid trackers, their results turn out to provide reasonable estimates of the NEA in either scenario [2,3].
Active electro-optical systems must further contend with the effects of scintillation and speckle associated with coherent-light propagation. Scintillation refers to a spatially varying irradiance return arising from distributed-volume phase aberrations. This nonuniformity leads to centroid-tilt/gradient-tilt anisoplanatism, inducing a jitter term that negatively impacts Strehl ratios. Holmes has quantified this track error as a function of log-amplitude variance, among other system-level parameters [4]. Speckle is an unrelated nonuniformity that arises from scattering off an extended, optically rough, and coherently illuminated object. As its effect on received irradiance is statistically independent from that of scintillation, it gives rise to a separate jitter term that is the focus of this paper—namely, the NEA due to speckle.
Fried previously studied the NEA due to speckle in an unpublished technical report [5], which has led over time to an engineering rule of thumb that tracking precision cannot exceed $({1/2})\lambda /D$ in the presence of fully developed speckle. A common mistake, however, is to treat this metric as a one-axis track error when in fact it describes two-axis track error. Shellan later carried out similar analysis for a Shack–Hartmann wavefront sensor, which is essentially a collection of centroid trackers distributed over a lenslet array of square subapertures [6]. Fried and Shellan both relied on the assumption that laser power is sufficiently scalable for other noise sources to become negligible, and we make that same working assumption here. We also follow suit in defining NEA as a function of the object Fresnel number [7–10], which is a normalization of the object angular extent that gives rise to two distinct imaging conditions: well-resolved and unresolved objects. Only the former condition is considered in the Fried and Shellan reports.
Since neither document is published in the peer-reviewed literature, we set out to provide a rederivation of the NEA due to speckle complete with validation through wave-optics simulations. Along the way, we make several other notable improvements: (1) the results are greatly simplified by leveraging the irradiance correlation coefficient that we have previously reported for dynamic speckle [11,12]; (2) no radiometry is required to arrive at a mathematically complete result; (3) additional theory is needed to properly account for unresolved objects; and (4) the Strehl ratio due to Gaussian jitter derived by Merritt et al. links the NEA due to speckle to an intuitive scaling law [2,13]. We find in the well-resolved regime that our results agree with Baribeau’s asymptotic analysis [14]. We also take a related approach to Allan et al. in the unresolved regime [15] but make modified statistical arguments that further simplify our expressions. Before validating our theory through wave-optics simulations, we fit saturation curves to the numerical integration of our analytical results and report closed-form expressions for NEA that depend only on the object Fresnel number when speckle is fully developed.
Our end goal in deriving and validating these expressions is to provide scaling laws that predict a track-error limitation due to fully developed speckle. These scaling laws represent both circular- and square-aperture imaging geometries, as commonly found in object-tracking and wavefront-sensing systems, respectively. They are also valid over a full range of object Fresnel numbers, starting from the unresolved into the well-resolved limit. With these goals in mind, Section 2 first analyzes NEA in the well-resolved and unresolved limits, and then bridges the gap by fitting a single curve to both asymptotic limits. The resultant closed-form expressions lead to the proposed scaling laws in each geometric scenario. Section 3 then introduces a wave-optics simulation framework for validation of our proposed scaling laws, and Section 4 discusses the agreement between theory and Monte Carlo simulation trials. Section 5 summarizes the key findings of this paper, while Appendices A and B contain step-by-step derivations of the analytical overlap integrals introduced in Section 2.A.
2. THEORETICAL ANALYSIS
The subsections that follow are concerned with active tracking of both circular and square object–aperture pairings, with the objects being either well resolved or unresolved from a diffraction standpoint. In each case the ultimate goal is to develop an expression for NEA (${\sigma _\theta}$) along one dimension. As we will see, however, even a single-axis centroid depends upon the two-dimensional geometry of both the object and aperture. The analysis thus begins in two dimensions and downconverts in later steps, with the two solutions known to differ by a factor of $\sqrt 2$ [16].
Going forward, a handful of simplifying assumptions help to guide the analysis toward closed-form solutions (cf. Fig. 1): first, a black-box optical system is fully described by its entrance- and exit-pupil sizes and positions relative to the object and image planes; second, the optical system is in focus such that imaging condition $1/{Z_1} + 1/{Z_2} = 1/f$ is satisfied; third, all analysis takes place in the image plane with transverse magnification ratio $M = {Z_2}/{Z_1}$ exactly relating object and image sizes; fourth, the paraxial approximation holds true, such that angular measurements are related to lateral displacements via ${\boldsymbol \theta} = {\boldsymbol r}/{Z_2}$; fifth, active illumination is purely monochromatic and linearly polarized (in addition to being uniform over the entire object just before reflection and backscattering from its rough surface); and sixth, well-resolved objects subtend multiple diffraction angles while unresolved objects subtend less than one diffraction angle at range (i.e., the imaging system is diffraction limited rather than detector-sampling limited).
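The second through fourth assumptions can be sketched numerically; the function below solves the imaging condition for $Z_2$ and forms the magnification $M = Z_2/Z_1$ and a paraxial angle $\theta = r/Z_2$ (the numeric distances are illustrative placeholders only).

```python
# Thin-lens imaging geometry used throughout the analysis: solve the imaging
# condition 1/Z1 + 1/Z2 = 1/f for Z2, then form the transverse magnification
# M = Z2/Z1. Distances below are illustrative placeholders.
def image_geometry(z1, f):
    z2 = 1.0 / (1.0 / f - 1.0 / z1)
    return z2, z2 / z1

z2, m = image_geometry(z1=2.0, f=1.0)  # symmetric case: z2 = 2, M = 1
theta = 0.05 / z2                      # paraxial: theta = r / Z2, here r = 0.05
```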
Before moving on, we define the following special functions for reference throughout our analysis [17]:
We also point out to the reader that, although analytical curves are referenced throughout this section, they do not appear as plots until Section 4, where we present our wave-optics simulation results concurrently.
A. Well-Resolved Objects
An optically rough object that is well resolved by an imaging system presents many independent phase contributions to its entrance pupil, giving rise to irradiance fades called speckles upon propagation to the image plane. A centroid tracker integrates over this entire irradiance pattern, so the image-plane speckle statistics determine uncertainty in the measurement. With reference to Fig. 1, the analytical vector expression for an ordinary image-plane intensity centroid (i.e., the first moment of the image) is [18]
Our immediate goal is to compute a variance (i.e., the second central moment) of the random intensity centroid position, which in general is
As the two irradiance factors are the only randomly varying quantities in Eq. (8), the orders of integration and averaging are interchangeable [19], such that
We recognize the quantity enclosed in angle brackets as the statistical autocorrelation function
$U(\circ)$ being a complex optical field component and ${U^*}(\circ)$ its complex conjugate. Assuming the rough-surface scattering process generates enough independent phase contributions that the central-limit theorem is satisfied, $U({{\boldsymbol r}_{1}})$ and $U({{{\boldsymbol r}_2}})$ obey circular complex Gaussian statistics and
For any uniform irradiance pattern that is both finite in extent and symmetric about the optical axis, the first term in Eq. (12) integrates to zero and thus
This result makes the additional assumption of a linear shift-invariant (LSI) system such that the correlation coefficient depends only on scalar differences between vector magnitudes [21].
To proceed, a flat circular object of diameter $W$ and mean irradiance $\bar I$ gives
Making this substitution, along with $M = {Z_2}/{Z_1}$, $\theta = r/{Z_2}$ and
the final normalized result for a circular object–aperture pairing is

Here, we have also divided the result by 2 for one-axis variance and taken its square root for standard deviation. In Eq. (17), $\lambda /D$ is the full-width-at-half-maximum (FWHM) aperture diffraction angle and ${N_{{\rm obj}}}$ is the object Fresnel number, which not only normalizes the object angular extent but also counts the estimated number of speckles across one dimension of the aperture. Moreover, ${N_{{\rm obj}}} \gt 1$ indicates that the object is well resolved with multiple resolution cells spanning the object at range. Equation (18) has an asymptotic limit of $1/\pi \approx 0.318$ as ${N_{{\rm obj}}}$ tends to infinity.
Pairing a square object of width $W$ with a square aperture of width $D$ gives
Referring the reader to Appendix B and making the appropriate substitutions into Eq. (B12), we find that our final normalized result for a square object–aperture pairing is
Equations (18) and (22) result from integration only over the windowed image irradiance, which assumes a full field of view (FOV) equal to the object angular extent. In turn, any presence of background noise or clutter would naturally increase uncertainty in the measurement. The dashed curves of Fig. 4 plot these results (i.e., the well-resolved NEA due to speckle for both circle and square objects) over a range of object Fresnel numbers. We note that the asymptotic well-resolved limits of these curves (where ${N_{{\rm obj}}} \gg 1$) remain constant with object Fresnel number due to aperture-limited speckle sizes [12], while their linear predictions are unphysical in the unresolved limit (for which ${N_{{\rm obj}}} \ll 1$) because, as an object decreases in angular subtense, diffraction prevents its image from shrinking indefinitely. This understanding calls for separate treatment of unresolved objects in the analysis that follows.
B. Unresolved Objects
When an extended, optically rough, and coherently illuminated object is not resolved by the imaging system, it becomes an effective point source producing a single diffraction spot in the image plane. The phase remains random in the pupil plane, however, which means the centroid position still fluctuates to some degree. Equation (17) predicts that speckle width exceeds aperture diameter in the unresolved regime, so the pupil effectively sees a constant phase slope in one direction [22] and the image-plane diffraction spot shifts accordingly. The path forward, then, is to describe this behavior by studying the phase statistics of a fully developed speckle field.
The localized 1D phase slope (${\phi ^\prime} = {\rm d}\phi /{\rm d}x$) of a fully developed speckle field follows the probability-density function (PDF) [20]
In the above, $g$ is a scalar value that depends on the object geometry. Equation (23) is defined in theory over an infinitely wide domain, but setting bounds of integration from ${-}\infty$ to $\infty$ produces the unphysical result that variance is also infinite. Instead truncating its support to a more realistic interval gives rise to a variance calculation of
Assuming a circular object [20], $g = 2\lambda {Z_1}/({\pi W})$ and $\phi _{{\rm max}}^\prime = 21.3W/({\lambda {Z_1}})$. Noting that division by wavenumber $k = 2\pi /\lambda$ converts a phase slope to a tilt angle under the paraxial approximation, this is equivalent to integrating over roughly $7 \times$ the object angular extent. Equation (25) then evaluates to $\sigma _{{\phi ^\prime}}^2 = 5.70{[{W/({\lambda {Z_1}})}]^2}$. From here we simply take the square root for standard deviation and divide by $k$ for an angular uncertainty of
in terms of the aperture diffraction angle and object Fresnel number.

Assuming a square object [20], $g = \sqrt 3 \lambda {Z_1}/({\pi W})$ and $\phi _{{\rm max}}^\prime = 24.6W/({\lambda {Z_1}})$ or roughly $8 \times$ the object extent in angular space. In turn, $\sigma _{{\phi ^\prime}}^2 = 7.61{[{W/({\lambda {Z_1}})}]^2}$ and
by the same approach. Comparing Eqs. (26) and (27) to Eqs. (18) and (22), we see that the NEA is greater for square geometry in the unresolved limit but greater for circular geometry in the well-resolved limit.

The dotted curves of Fig. 4 plot these results (i.e., the unresolved NEA due to speckle for both circle and square objects) over the same range of object Fresnel numbers as in Section 2.A. Here, we observe a linear dependence on object Fresnel number in the unresolved limit (where ${N_{{\rm obj}}} \ll 1$), while a constant slope is now unphysical in the well-resolved limit (for which ${N_{{\rm obj}}} \gg 1$) because an image does in fact grow larger with increasing object size and pointwise measurements of phase in the aperture no longer apply.
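The truncated-variance calculations above can be reproduced numerically. In the sketch below, the slope PDF is taken to have the heavy-tailed form $p(\phi ^\prime) = (g/2){[{1 + {g^2}{{({\phi ^\prime})}^2}}]^{- 3/2}}$ (an assumption on our part, chosen because it recovers the quoted variances), with the computation normalized so that $W/(\lambda {Z_1}) = 1$.

```python
import numpy as np

def truncated_slope_variance(g, slope_max, n=200_001):
    # Variance of the phase-slope PDF truncated to |phi'| <= slope_max.
    # Assumed form: p(phi') = (g / 2) * (1 + (g * phi')**2)**(-3/2),
    # whose variance diverges over an unbounded domain.
    phi = np.linspace(-slope_max, slope_max, n)
    integrand = phi**2 * 0.5 * g * (1.0 + (g * phi) ** 2) ** -1.5
    # Composite trapezoidal rule on the uniform grid
    return float(np.sum(integrand[:-1] + integrand[1:]) * (phi[1] - phi[0]) / 2)

# Normalized units with W / (lambda * Z1) = 1:
var_circ = truncated_slope_variance(g=2 / np.pi, slope_max=21.3)            # ~5.7
var_square = truncated_slope_variance(g=np.sqrt(3) / np.pi, slope_max=24.6)  # ~7.6
```

Under these assumptions the quadrature reproduces $\sigma _{{\phi ^\prime}}^2 \approx 5.70$ and $7.6$ for the circular and square cases, respectively.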
C. Scaling Laws
In an effort to develop scaling laws for active tracking that include the NEA due to speckle, we bridge the linear lower limits defined in Section 2.B with the constant upper limits defined in Section 2.A through curve fitting. Synthesizing the lower limit of Eq. (26) with the upper limit of Eq. (18) in TableCurve 2D yields a saturation-curve fit of
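For readers without access to TableCurve 2D, one simple rational form that bridges a linear lower asymptote with a constant upper asymptote is sketched below. The plateau is the circular well-resolved limit $1/\pi$ from Eq. (18); the slope value is a placeholder, not the paper's fitted coefficient, and the functional form itself is only illustrative of the saturation behavior.

```python
import numpy as np

def saturation_nea(n_obj, slope, plateau):
    # One possible saturation form: behaves as slope * n_obj for n_obj << 1
    # and approaches the constant plateau for n_obj >> 1. The published fits
    # use TableCurve 2D coefficients; this rational form is illustrative only.
    return slope * n_obj / (1.0 + (slope / plateau) * n_obj)

# Circular geometry: plateau 1/pi (~0.318, in units of lambda/D); the
# unresolved slope of 0.4 is a hypothetical placeholder value.
n = np.logspace(-2, 2, 401)
nea = saturation_nea(n, slope=0.4, plateau=1 / np.pi)
```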
Of further interest is quantifying how the NEA due to speckle will ultimately impact system performance. Defining the Strehl ratio according to Merritt’s formulation for Gaussian jitter as [2,13]
3. NUMERICAL SIMULATION
To produce numerical results that accurately represent active-centroid tracking with coherent illumination, we take the standard wave-optics approach of propagating from plane to plane via the Fresnel diffraction integral as a solution to the Helmholtz wave equation. After selecting a realistic illumination wavelength $\lambda$, propagation distance ${Z_1} = {Z_2} = Z$ (for unit magnification), and aperture diameter $D$, we vary the object width $W$ by controlling the object Fresnel number [cf. Eq. (17)]. The next step is to define either a square or circular shape of this width in the object plane with constant amplitude and $\delta$-correlated random phase distributed uniformly over $[{- \pi ,\pi})$. Propagating this field a distance ${Z_1}$ to the pupil plane, applying a thin lens of focal length $f = {Z_1}$ to collimate the entrance-pupil field, applying a second thin lens of focal length $f = {Z_2}$ to focus the exit-pupil field, propagating by a second distance ${Z_2}$ to the image plane, and taking the field’s squared magnitude produces the image-plane irradiance for analysis (cf. Fig. 1). Although our simulations represent a two-lens imaging system for the sake of simplicity [24], we remind the reader that any black-box system with known pupil positions relative to the object and image planes would give equally valid results. After windowing out a region of interest that is consistent with the image size, we calculate an $x$-axis centroid estimate as
An example speckled and windowed image is shown in Fig. 3 with red crosshairs marking the estimated centroid position. A Monte Carlo average of this estimate over 100 independent speckle realizations for each object Fresnel number increases the robustness of the estimates, and finally dividing the average results by $\lambda {Z_2}/D$ allows for comparison to the appropriate scaling law [cf. Eq. (29) or (28)]. These results are plotted as circles in Fig. 4, noting that the general approach remains valid over the full range of object sizes. Because a minimum of ${\sim}10$ phase samples are required across the object width to generate proper speckle statistics, sampling requirements become much more constrained in the unresolved limit where object sizes grow smaller against the aperture size and propagation distance [25]. With that said, Table 1 highlights the critical inputs to an example simulation in the well-resolved limit with unity scaling between each pair of planes.
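A compressed version of this pipeline can be sketched as follows. For brevity, the two-lens Fresnel chain is collapsed into a single-FFT Fourier-imaging shortcut with a square object–aperture pairing, and all grid dimensions are placeholder choices rather than the values of Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256      # grid samples per side
obj_w = 64   # square-object width in samples (well resolved here)
ap_d = 32    # square-aperture width in samples

y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
obj_mask = (np.abs(x) < obj_w // 2) & (np.abs(y) < obj_w // 2)
ap_mask = (np.abs(x) < ap_d // 2) & (np.abs(y) < ap_d // 2)

def ft(u):  # centered 2D Fourier transform
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u)))

cents = []
for _ in range(100):  # Monte Carlo loop over independent speckle realizations
    # Constant amplitude with delta-correlated phase uniform over [-pi, pi)
    field = obj_mask * np.exp(1j * rng.uniform(-np.pi, np.pi, (N, N)))
    pupil = ft(field) * ap_mask      # aperture-clipped pupil field
    irr = np.abs(ft(pupil)) ** 2     # image-plane irradiance (inverted copy)
    irr = irr * obj_mask             # window an FOV matching the image size
    cents.append((x * irr).sum() / irr.sum())  # x-axis centroid estimate

# One-axis track error normalized by the diffraction scale lambda*Z2/D,
# which maps to N / ap_d samples on this double-FFT grid
sigma_norm = np.std(cents) / (N / ap_d)
```

With ${N_{\rm obj}} \approx 8$ here, the normalized track error lands near the square-geometry saturation value of ${\sim}0.289$, subject to Monte Carlo scatter.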
4. RESULTS AND DISCUSSION
Figure 4 plots all circular and square integral expressions, curve fits, and simulation data for NEA over a wide range of object Fresnel numbers assuming fully developed speckle. It is clear that the curve fits in general provide decent estimates of their rigorous analytical counterparts, and the numerical results provide validation with strong agreement. Furthermore, rigorous analysis cannot account for the transition region from unresolved to well-resolved objects where curve fitting offers the only viable closed-form solutions. Lower object Fresnel numbers tend to show greater variation in the data, which stands to reason since the centroid of a well-resolved object that projects many phase slopes across the aperture represents more of an ensemble average than does a single phase contribution from an unresolved object. Monte Carlo averaging of more than 100 datasets would help to reduce this noise, but overall trends in the data provide meaningful insight nonetheless.
Recalling that our circular and square scaling laws saturate at ${\sim}0.318$ and ${\sim}0.289$, respectively, we propose a new rule of thumb that says tracking precision of well-resolved objects cannot exceed $({1/3})\lambda /D$. We make the argument that a one-axis definition is more intuitive when considering the idea of an NEA. If one considers instead a two-axis definition, the estimate more closely approaches the familiar $({1/2})\lambda /D$ metric: $\sqrt 2 /\pi \approx 0.450$ and $1/\sqrt 6 \approx 0.408$, respectively, for circular and square geometries. We also point out that any discrepancy in the transition region from unresolved to well-resolved objects is a conservative overestimate of the NEA, though this discrepancy appears somewhat exaggerated on the log–log scale of Fig. 4.
As a matter of interest, comparing the scaling laws presented here to the passive SNR-limited results of Ref. [1] reveals essentially opposite trends. Our active NEA due to speckle increases linearly with object Fresnel number in the unresolved limit. As the object Fresnel number ${N_{{\rm obj}}}$ increases, the phase ramps grow steeper over decreasing speckle sizes, until multiple speckles appear across the aperture with ${N_{{\rm obj}}} \gt 1$ and the effect of ensemble averaging over image-plane speckles of constant size (as set by the exit pupil) saturates the NEA to a constant value. Conversely, in the passive case an incoherent point-spread function fixes the NEA at a constant value in the unresolved regime. Such is the case until the object becomes well resolved for ${N_{{\rm obj}}} \gt 1$ and the NEA increases linearly with object Fresnel number while the image grows without bound (as allowed by the FOV), with no total destructive interference to limit the centroid calculation area. This comparison highlights an inherent tradeoff between active and passive tracking, subject to available SNR from natural illumination.
In order to quantify the on-axis intensity reduction associated with speckle-induced jitter, Eq. (30) together with Eqs. (28) and (29) leads to the plots of Strehl ratio in Fig. 5. These plots also illustrate the benefit of reducing NEA through speckle averaging, with Eqs. (29) and (28) divided by the square root of ${N_{{\rm avg}}} = 2$ and 4 before substitution into Eq. (30). In addition to the $({1/3})\lambda /D$ tracking limit for well-resolved objects, a key result of this paper from Fig. 5 is that the one-axis jitter Strehl ratio falls below the Maréchal criterion for nominally diffraction-limited imaging ($\langle S \rangle \gtrsim 80\%$) in the well-resolved limit (without speckle averaging). In particular, $\langle {{S_{\rm j}}} \rangle = 2/3$ and ${\sim}0.709$ for circular and square geometries, respectively. A two-axis definition would decrease these numbers, respectively, to 1/2 and ${\sim}0.549$, potentially overestimating the severity of unmitigated speckle and its impact on performance in an active-tracking system.
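These Strehl values follow from a quadratic-loss form of the Gaussian-jitter Strehl ratio, $\langle {S_{\rm j}} \rangle = {[{1 + ({\pi ^2}/2){{({\sigma _\theta}/(\lambda /D))}^2}}]^{- 1}}$. This expression is our reading of the Merritt formulation (Eq. (30) itself is not reproduced in this excerpt), and it recovers every value quoted in this section, as sketched below.

```python
import numpy as np

def jitter_strehl(sigma_norm):
    # Gaussian-jitter Strehl ratio for a track error sigma expressed in
    # units of lambda/D (assumed quadratic-loss form of Merritt's result):
    # S = 1 / (1 + (pi^2 / 2) * sigma_norm^2)
    return 1.0 / (1.0 + 0.5 * np.pi**2 * sigma_norm**2)

sigma_circ = 1 / np.pi            # well-resolved circular saturation (~0.318)
sigma_sq = 1 / (2 * np.sqrt(3))   # well-resolved square saturation (~0.289)

s_circ = jitter_strehl(sigma_circ)                   # 2/3
s_sq = jitter_strehl(sigma_sq)                       # ~0.709
s_circ_2ax = jitter_strehl(np.sqrt(2) * sigma_circ)  # 1/2
s_sq_2ax = jitter_strehl(np.sqrt(2) * sigma_sq)      # ~0.549
```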
5. CONCLUSION
Rough-surface scattering from coherently illuminated objects introduces an interference phenomenon known as speckle, which randomly shifts the centroid of the image. As such, we formulated closed-form expressions that accurately predict the NEA due to speckle. In an effort to make our results broadly applicable, we separately treated the cases of well-resolved and unresolved objects for both circular and square apertures. Overall, the analytical results showed excellent agreement when compared with the numerical results from wave-optics simulations. Both sets of results also showed a track-error limitation of $(1/3)\lambda /D$, where $\lambda /D$ is the aperture diffraction angle. Because the Strehl ratio due to Gaussian jitter links the NEA to an intuitive scaling law, system engineers can now use these validated, closed-form expressions to account for active-tracking performance in a straightforward way.
APPENDIX A: FULL DERIVATION FOR WELL-RESOLVED OBJECTS WITH CIRCULAR GEOMETRY
Starting from Eq. (13), we define sum and difference vectors as
and, respectively, for a Jacobian determinant of 1. This choice implies that

Notice that the use of sum and difference vectors here provides us with an iterated integral that we can treat with respect to one vector quantity at a time. A circular object of diameter $W$ now gives us
Equation (A4) then becomes
Decomposing ${\boldsymbol u}$ into Cartesian $a$ and $b$ coordinates while choosing to align ${u_a}$ with ${\boldsymbol v}$ allows us to assign a single component of magnitude $v$ to vector ${\boldsymbol v}$. In other words, ${\boldsymbol u} = \langle {{u_a},{u_b}} \rangle$ and ${\boldsymbol v} = \langle {v,0} \rangle$. Then according to Eq. (1)
Similarly,
Figure 6 shows that our overlap integral calls for bounds on ${u_b}$ in Eq. (A9) from the lower bound of Eq. (A10) to ${u_a} = 0$, as well as those on ${u_b}$ in Eq. (A12) from ${u_a} = 0$ to the upper bound of Eq. (A13). Thus, the interior integral of Eq. (A8) becomes
Plugging back into Eq. (A8),
Converting to polar coordinates, integrating azimuthally, and setting appropriate radial bounds on $v$ [cf. Eq. (A6)],
Bearing in mind that a circular aperture gives us
and that $v = q/({MW})$, the abbreviated analysis of Section 2.A picks up from this point with Eq. (16).

APPENDIX B: FULL DERIVATION FOR WELL-RESOLVED OBJECTS WITH SQUARE GEOMETRY
For the square case, we convert from polar to Cartesian coordinates using the relationships $x = r\cos (\theta)$ and $y = r\sin (\theta)$. By invoking separability, this conversion allows us to work in only one dimension and arrive at
Then we have
With a square object of width $W$ we now have
Establishing limits of integration,
Given that both ${v_x}$ and ${v_y}$ can range from ${-}1$ to 1, Fig. 7 provides a visual for this set of integral bounds when $0 \lt \{{{v_x},{v_y}} \} \lt 1$. In turn, Eq. (B5) becomes
Acknowledgment
The authors thank A. McDaniel and M. Cubillos at the Air Force Research Laboratory (AFRL), Directed Energy Directorate, for their guidance on the statistical methods that this paper employs. D. Burrell thanks the AFRL Scholars Program for their internship support. M. Spencer thanks the Air Force Office of Scientific Research for sponsoring this research under the auspices of an AFRL Science and Engineering Early Career Award. Approved for public release; distribution is unlimited. Public Affairs release approval #AFRL-2022-5874. The views expressed are those of the authors and do not necessarily reflect the official policy or position of the Department of the Air Force, the Department of Defense, or the U.S. government.
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
REFERENCES
1. G. A. Tyler and D. L. Fried, “Image-position error associated with a quadrant detector,” J. Opt. Soc. Am. 72, 804–808 (1982). [CrossRef]
2. P. Merritt and M. Spencer, Beam Control for Laser Systems, 2nd ed. (Directed Energy Professional Society, 2018).
3. D. Burrell, J. Garretson, J. Vorenberg, and R. Driggers, “Active vs. passive tracking: when to illuminate?” Proc. SPIE 12106, 121060F (2022). [CrossRef]
4. R. B. Holmes, “Scintillation-induced jitter of projected light with centroid trackers,” J. Opt. Soc. Am. A 26, 313–316 (2009). [CrossRef]
5. D. L. Fried, “Speckle effects in target position measurement,” Tech. Rep. TR-452 (Optical Sciences, 1982).
6. J. B. Shellan, “An analysis of the impact of speckle on the reconstructed phase for a Hartmann wavefront sensor,” Tech. Rep. TR-1645 (Optical Sciences, 2004).
7. N. R. Van Zandt, J. E. McCrae, M. F. Spencer, M. J. Steinbock, M. W. Hyde, and S. T. Fiorino, “Polychromatic wave-optics models for image-plane speckle. 1. Well-resolved objects,” Appl. Opt. 57, 4090–4102 (2018). [CrossRef]
8. N. R. Van Zandt, M. F. Spencer, M. J. Steinbock, B. M. Anderson, M. W. Hyde, and S. T. Fiorino, “Polychromatic wave-optics models for image-plane speckle. 2. Unresolved objects,” Appl. Opt. 57, 4103–4110 (2018). [CrossRef]
9. N. R. Van Zandt, M. F. Spencer, and S. T. Fiorino, “Speckle mitigation for wavefront sensing in the presence of weak turbulence,” Appl. Opt. 58, 2300–2310 (2019). [CrossRef]
10. N. R. Van Zandt and M. F. Spencer, “Improved adaptive-optics performance using polychromatic speckle mitigation,” Appl. Opt. 59, 1071–1081 (2020). [CrossRef]
11. D. J. Burrell, M. F. Spencer, N. R. Van Zandt, and R. G. Driggers, “Wave-optics simulation of dynamic speckle: I. In a pupil plane,” Appl. Opt. 60, G64–G76 (2021). [CrossRef]
12. D. J. Burrell, M. F. Spencer, N. R. Van Zandt, and R. G. Driggers, “Wave-optics simulation of dynamic speckle: II. In an image plane,” Appl. Opt. 60, G77–G90 (2021). [CrossRef]
13. P. H. Merritt and J. R. Albertine, “Beam control for high-energy laser devices,” Opt. Eng. 52, 021005 (2012). [CrossRef]
14. R. Baribeau and M. Rioux, “Centroid fluctuations of speckled targets,” Appl. Opt. 30, 3752–3755 (1991). [CrossRef]
15. G. W. Allan, R. Allured, J. Ashcom, L. Liu, and K. Cahoy, “Temporally averaged speckle noise in wavefront sensors for beam projection in weak turbulence,” Appl. Opt. 60, 4723–4731 (2021). [CrossRef]
16. T. J. Brennan and P. H. Roberts, AOTools: The Adaptive Optics Toolbox for Use with MATLAB: User's Guide (The Optical Sciences Company, 2010).
17. J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics, 1st ed. (Wiley, 1978).
18. R. K. Tyson, Principles of Adaptive Optics, 3rd ed. (CRC Press, 2010).
19. J. W. Goodman, Statistical Optics, 2nd ed. (Wiley, 2015).
20. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications, 2nd ed. (SPIE, 2020).
21. J. W. Goodman, Introduction to Fourier Optics, 4th ed. (W. H. Freeman, 2017).
22. H. Helmers and J. Burke, “How knowledge about speckle intensity and phase gradients can improve electronic speckle pattern interferometry,” Proc. SPIE 3749, 216–217 (1999). [CrossRef]
23. J. L. Miller, E. J. Friedman, J. N. Sanders-Reed, K. Schwertz, and B. K. McComas, Photonics Rules of Thumb, 3rd ed. (SPIE, 2020).
24. M. F. Spencer, “Spatial heterodyne,” in Encyclopedia of Modern Optics, B. D. Guenther and D. G. Steel, eds., 2nd ed. (Elsevier, 2018), pp. 369–400.
25. D. Burrell, J. Beck, M. Beason, and B. Berry, “Wave-optics sampling constraints in the presence of speckle and anisoplanatism,” Proc. SPIE 11836, 1183603 (2021). [CrossRef]