Optica Publishing Group

Polychromatic wave-optics models for image-plane speckle. 2. Unresolved objects

Open Access

Abstract

Polychromatic laser light can reduce speckle noise in many wavefront-sensing and imaging applications. To help quantify the achievable reduction in speckle noise, this study investigates the accuracy of three polychromatic wave-optics models under the specific conditions of an unresolved object. Because existing theory assumes a well-resolved object, laboratory experiments are used to evaluate model accuracy. The three models use Monte Carlo averaging, depth slicing, and spectral slicing, respectively, to simulate the laser–object interaction. The experiments involve spoiling the temporal coherence of laser light via a fiber-based, electro-optic modulator. After the light scatters off of the rough object, speckle statistics are measured. The Monte Carlo method is found to be highly inaccurate, while depth-slicing error peaks at 7.8% but is generally much lower in comparison. The spectral-slicing method is the most accurate, always producing results within the error bounds of the experiment.

1. INTRODUCTION

Optical-speckle effects occur when partially coherent laser light reflects off of a surface which is rough compared to the light’s wavelength. Here, the rough surface randomizes the light’s phase, and after propagation, regions of constructive and destructive interference form, causing bright and dark spots known as speckles. Typically, only laser light is coherent enough to create such speckles, and these speckles are generally undesirable, because they cause noise in the measurements.

Because a wide variety of systems use laser illumination and suffer from speckle noise, many researchers have studied speckle mitigation. Some applications for laser illumination include optical metrology [1–3], remote sensing [4–8], laser projectors [9], active target tracking [10–13], and adaptive optics [14–16]. Methods to mitigate speckle in such applications include angle/spatial diversity [17–19], polarization diversity [19,20], spatial and temporal integration [19–22], and polychromatic illumination [23–34]. The angle diversity and polychromatic illumination methods often use partially coherent sources, and the generation of such sources remains an active area of research [35–39]. This work focuses on speckle reduction via polychromatic illumination, which is also known as wavelength diversity, linewidth broadening, and temporal coherence reduction. The polychromatic light can be created in many ways, and one common method is phase modulation via an electro-optic modulator (EOM) [40–42].

It is the combination of polychromatic illumination and object depth that causes speckle mitigation. Such mitigation happens if the object’s depth within a single resolution cell (the area seen by one pixel) is comparable to or greater than the coherence length [43]. In that case, multiple coherence regions form within the resolution cell, as shown in Fig. 1. Each coherence region contributes an independent speckle pattern, and those patterns add incoherently to reduce speckle strength [43]. Though not rigorous, this description concisely explains polychromatic speckle mitigation.


Fig. 1. Polychromatic illumination of a planar object (a.k.a. target) at 45° slope. Image-plane speckle is mitigated due to the multiple coherence regions within a single resolution cell on target.


While some researchers have treated polychromatic speckle mitigation analytically, the resulting solutions are limited. For example, they do not allow for atmospheric turbulence effects or fine-scale variations in illumination, object shape, or object reflectance. Numerical wave-optics simulation can overcome those limitations. Several wave-optics methods can handle polychromatic light, including Monte Carlo averaging, depth slicing, and spectral slicing; however, these methods are not well documented in the literature.

This paper, which is part two of a two-part study, investigates the accuracy of three polychromatic wave-optics methods for unresolved objects (a.k.a. targets). So, this work is relevant to adaptive optics and wavefront-sensing applications such as remote sensing, target tracking, free-space optical communications, and power beaming. Because the theory used in Part 1 (this issue; see Ref. [44]) does not apply when the object is unresolved, we now turn to a laboratory experiment to quantify accuracy. This work is largely based on a previous conference paper [43], which outlined some challenges and preliminary results. Now, we report full results, including many additional data points over a wider range of conditions. These full results more clearly define the accuracy of the three wave-optics methods. They show that the accuracy of the Monte Carlo method decreases rapidly as the target Fresnel number decreases, becoming inadequate when the object is unresolved. Further, while the depth-slicing method is usually accurate, it does exhibit some error when the speckle is strong. The spectral-slicing method proves to be the most accurate model, because it produces results within the error bounds of the experiment in all cases tested. In addition to the new results, we also discuss several details of the final experiment, including object material selection, source spectrum, beam shapes, vibration reduction, and postprocessing techniques.

Section 2 starts with a discussion of the metric known as the target Fresnel number and its relevance to wavefront sensing. That section also gives a brief review of experimental methods for speckle reduction with a focus on electro-optic modulators, the type of device used in this paper to reduce temporal coherence. Section 3 then explores the details of the experimental setup. Last, Section 4 presents results which quantify the accuracy of the simulation methods.

2. BACKGROUND

In adaptive-optics and wavefront-sensing applications, the system typically relies on a beacon to provide a reference wave for wavefront measurement [45,46]. Often, that beacon is a point source. However, when the object is uncooperative (i.e., it does not provide a point-source beacon), then the system creates an extended beacon by focusing a laser onto the object, producing an illumination spot of finite size which introduces both anisoplanatism and speckle noise [47].

Perhaps the most common type of wavefront sensor is the Shack-Hartmann wavefront sensor (SHWFS) [46,48]. It uses an array of small subapertures within the full aperture to break up the incoming light [49]. Each subaperture contains a lens which properly images the object onto a portion of the focal plane. In that layout, the subapertures define the entrance and exit pupils. Because the system keeps the beacon size as small as possible to minimize anisoplanatism, each subaperture typically does not resolve the extended beacon on target. Such unresolved imaging scenarios are the focus of this paper. In what follows, we discuss target Fresnel number and polychromatic illumination as they relate to wavefront sensing.

A. Target Fresnel Number

The target Fresnel number, NF, provides insight into both the speckle noise and the resolvability of the extended beacon. In particular,

NF = DT/(λR), (1)
where D is the aperture (or subaperture) diameter, T is the effective target width, λ is the wavelength, and R is the range (distance to the target). Notably, T is either the physical target width or the width of the illumination beam on target, whichever is smaller. The target Fresnel number defines both the number of diffraction-limited resolution cells across T and the number of speckles across D, as we showed in Part 1 (see [44]).

Because it defines the number of diffraction-limited resolution cells across T, it measures the theoretical limit of the system’s ability to resolve T. For NF ≫ 1, T is well resolved. However, the focus here is on unresolved T, for which NF < 1.
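As a concrete check of the definition above, the short Python sketch below evaluates the target Fresnel number using the experiment's aperture, range, and beam widths (parameter values quoted later in Section 3); it is an illustration, not the authors' code.

```python
# Target Fresnel number N_F = D*T/(lambda*R), evaluated for the experiment
# described in Section 3 (parameter values taken from the text).
wavelength = 1064e-9   # m, MO laser wavelength
D = 0.5e-3             # m, pinhole aperture diameter
R = 4.35               # m, range to the target
for T in (0.545e-2, 1.27e-2):   # m, the two FWHM beam widths on target
    N_F = D * T / (wavelength * R)
    print(f"T = {T * 100:.3f} cm -> N_F = {N_F:.2f}")
# -> N_F = 0.59 and 1.37, matching the values reported in Section 3
```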

The target Fresnel number also helps to gauge the strength of two types of speckle noise. The first type involves drop outs in signal strength. For NF < 1, each speckle is larger than the aperture. If the speckle covering the aperture happens to be a dark speckle, then signal strength will be far below average, potentially causing a poor signal-to-noise ratio (SNR) condition known as a drop out. On the other hand, when NF ≫ 1, there are many speckles across the aperture, and the probability of low signal strength is greatly diminished. So, frequent drop outs due to speckle correspond to low NF.

The other source of speckle noise involves the phase measurement of the wavefront sensor. The sensor attempts to measure phase distortions due to turbulence. However, speckle causes phase distortions which are essentially noise in the measurements. When NF ≫ 1, the numerous speckles across the aperture can lead to strong noise in the phase measurements. In contrast, with NF < 1, the phase due to speckle is nearly constant across the aperture. In other words, the speckle phase is mostly piston phase. Because a SHWFS does not measure piston phase, such speckle has little impact on the measurements. As such, the target Fresnel number provides insight into the strength of speckle noise in phase measurements.

B. Polychromatic Illumination

Adaptive-optics and wavefront-sensing systems keep the beacon size as small as possible to minimize anisoplanatism. Thus, speckle mitigation methods which increase the minimum spot size (i.e., spatial integration and angle/spatial diversity) are undesirable. Further, temporal integration reduces bandwidth, which is undesirable for high-speed applications, such as target tracking. The remaining methods are polarization diversity and polychromatic illumination. They neither increase spot size nor reduce bandwidth, making them good for use in wavefront sensing. However, polarization diversity can only reduce speckle strength by a factor of 2 at most, while polychromatic illumination can provide a much greater reduction [11]. Thus, polychromatic illumination is the best approach for many applications, and it is our focus in this work.

Sources of beacon illumination must achieve high spatial coherence to allow a very small beam size on target. Further, applications involving large distances between the target and the imaging system require relatively high power levels. Thus, lasers are often used as beacon illuminators. However, many lasers inherently have narrow linewidths, resulting in long coherence lengths of 1 cm or greater [50]. To reduce speckle, the coherence length should be as short as possible. Some lasers naturally produce short coherence lengths, such as fiber and diode lasers which operate with many longitudinal modes or which experience certain nonlinear effects [50].

It is also possible to broaden the linewidth of the seed light before it enters the laser-gain media. One common method is phase modulation via an electro-optic modulator (EOM). High-speed phase modulation via an EOM grew out of a need to suppress the nonlinear effects of stimulated Brillouin scattering (SBS) in optical fibers. Much research was done toward that end in the 1980s, when power levels available for long-range communication fibers rose high enough to induce SBS [51,52]. EOMs allow electronic control of the spectrum of the light, and recent research showed that modulation schemes such as pseudorandom bit sequence (PRBS) and pseudo-rectangular lineshape offer certain advantages over the more straightforward schemes of white noise modulation and modulation with a series of pure frequencies [40–42]. However, in this work, we use white noise modulation, because it is easy to implement in the laboratory.

Throughout this work, we use Goodman’s definition of coherence length. It relates coherence length, lc, to the complex degree of coherence, γ(τ), according to

lc = c ∫ |γ(τ)|² dτ. (2)

The complex degree of coherence is itself related to the normalized power spectral density, G(ν), through

γ(τ) = ∫ G(ν) exp(j2πντ) dν, (3)
where G(ν) is normalized so as to integrate to unity, and both integrals extend over all values of their arguments [20].
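These two relations let the coherence length be computed directly from a measured or modeled spectrum: by Parseval's theorem, ∫|γ(τ)|² dτ = ∫G(ν)² dν. The sketch below does this numerically, assuming a Gaussian lineshape for illustration (the experiment's spectrum is nearly triangular), and compares against the Gaussian closed form lc = c/(2σ√π).

```python
import numpy as np

# Coherence length from a normalized power spectral density G(nu):
# l_c = c * integral(|gamma(tau)|^2 dtau) = c * integral(G(nu)^2 dnu),
# via Parseval's theorem. Gaussian lineshape assumed for illustration.
c = 2.998e8                         # m/s, speed of light
sigma = 10e9                        # Hz, assumed rms spectral width
nu = np.linspace(-8 * sigma, 8 * sigma, 200_001)
dnu = nu[1] - nu[0]
G = np.exp(-nu**2 / (2 * sigma**2))
G /= G.sum() * dnu                  # normalize so G integrates to unity
l_c = c * (G**2).sum() * dnu        # numerical coherence length
l_c_exact = c / (2 * sigma * np.pi**0.5)  # closed form for a Gaussian
print(l_c * 1e3, l_c_exact * 1e3)   # both about 8.5 mm
```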

3. EXPERIMENTAL SETUP

All existing theory assumes a resolved object. So, it does not apply when the object is unresolved, as is the case in most wavefront-sensing scenarios. To validate the wave-optics models for such scenarios, we conducted a laboratory experiment. In what follows, we discuss the layout, the amplification of the light, and the scaling of the target Fresnel number associated with this laboratory experiment.

A. Layout

Figure 2 shows the experimental layout. Here, the master oscillator (MO) laser (a 1064 nm JDSU 126 nonplanar ring oscillator laser) provides highly coherent light. For coherent measurements, that light illuminates the rough target. Alternately, for polychromatic measurements, we couple the light into a single-mode fiber where an EOM (an EOSpace PM-0K1-12-PFU-PFU-106-UL) broadens the linewidth, reducing the coherence length to 6.5 mm. That polychromatic light then illuminates the target. Next, we take speckle measurements in the image plane (using an Anacapa InGaAs SWIR 640-25), while the 0.5 mm pinhole aperture ensures unresolved imaging conditions.


Fig. 2. Validation experiment layout. A highly coherent master oscillator (MO) laser provides light for illumination. Optionally, an electro-optic modulator broadens the linewidth to 29.5 GHz. The light then scatters off of the rough target, which we image through a 0.5 mm aperture such that the beam on target is unresolved. We then measure the speckle statistics for comparison with results from several different wave-optics models.


In this experiment, we drive the EOM using a white Gaussian noise generator (Noisecom NC1128A) operating at 10 GHz. That signal passes through a 5 GHz low pass filter (Mini-Circuits VLF-5000+) before entering two low-power amplifiers (Mini-Circuits ZX60-14012L-S+) in series to boost the voltage and power. Next, the signal enters a medium-power amplifier (Pasternack PE15A3007) to further boost the power before it passes through another low pass filter (Mini-Circuits VLF-5000+). It then drives the EOM, which ultimately produces a nearly triangular spectrum of 29.5 GHz width (as shown in Fig. 3).


Fig. 3. Measured illuminator spectra. Plot (a) shows the spectrum immediately after the EOM, while (b) shows that after the fiber amplifier. The similarity between (a) and (b) indicates that the spectrum hardly changes during amplification. Also, the two curves in (b) show the spectrum 2 min after turn on and 10 min after turn on, respectively. They are almost identical, indicating that the spectrum is stable over time.


1. Parameter Space

We designed the experimental layout to mimic the conditions typically found in wavefront sensing. To do this, we matched two key parameters of the experiment to those found in a realistic system, namely the target Fresnel number and target depth. The target Fresnel number, NF, quantifies both the resolvability of the target and the maximum speckle noise, as discussed in Section 2.A, while target depth is a key factor in polychromatic speckle mitigation (see Section 1).

Table 1 compares those two parameters between a nominal system and the experiment. Here, the nominal system involves an aperture of 30 cm diameter, a range to target of 5 km, a wavelength of about 1 μm, and a linewidth of 30 GHz. Notably, that bandwidth is near the limit of modern EOMs. We assume that the system uses the full aperture to focus the beacon laser onto the target, which allows a minimum spot size of 2.5 cm if we also assume that the effective diffraction-limited width is 1.5 λR/D. To compute the maximum depth, we use a slope angle of 60°, which produces about 9 cm of depth across a diffraction-limited resolution cell after accounting for the double-pass of the reflection geometry. To compute the resolution cell width, we again assume an effective diffraction-limited width of 1.5 λR/D. We base the smallest target Fresnel number on the minimum spot size of 2.5 cm. Further, we assume that there are 50 SHWFS subapertures across the full aperture, yielding an NF of 0.03. To compute the largest NF of 1.5, we assume that there are only five subapertures across the full aperture and that the beacon is five times larger than the diffraction limit due to uncompensated turbulence effects.
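The nominal-system numbers above follow directly from the stated assumptions; the short sketch below (our illustration, not the authors' code) reproduces them.

```python
import math

# Reproduce the nominal-system values discussed above (illustrative only;
# all parameter values are taken from the text).
lam, D_full, R = 1e-6, 0.30, 5e3           # wavelength (m), full aperture (m), range (m)
spot_min = 1.5 * lam * R / D_full          # minimum beacon spot, 1.5*lambda*R/D
depth_max = 2 * spot_min * math.tan(math.radians(60))  # double-pass depth at 60 deg slope

N_F_min = (D_full / 50) * spot_min / (lam * R)       # 50 subapertures, minimum spot
N_F_max = (D_full / 5) * (5 * spot_min) / (lam * R)  # 5 subapertures, 5x spot size

print(spot_min, depth_max, N_F_min, N_F_max)
# spot_min = 2.5 cm, depth_max ~ 8.7 cm ("about 9 cm"), N_F from 0.03 to 1.5
```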


Table 1. Comparison of a Nominal System with the Experiment

The experiment’s parameters compare favorably with this nominal system. The beam width on target is either 0.545 cm or 1.27 cm full width at half-maximum (FWHM). Those sizes, coupled with the experiment’s 0.5 mm aperture diameter and 4.35 m range to target, produce NF values of 0.59 and 1.37, respectively. Both of those values fall within the regime of the nominal system. We could achieve smaller NF values by reducing the spot size, but we avoid this option due to possible irradiance damage to the target material, which is Spectralon. Regarding target depth, the larger spot size of 1.27 cm achieves a depth of 4.4 cm at 60° slope, which is about half the maximum depth of the nominal system. We could further increase the spot size to increase the maximum depth, but we would then need a smaller aperture to achieve unresolved imaging. A smaller aperture reduces the SNR, which increases the size of the error bounds on the final results. Because our goal is experimental validation of the models, we desire to keep the error bounds as small as possible. So, we decided on an aperture diameter of 0.5 mm, yielding a mean SNR of 80. Although an SNR of 80 is quite high, the SNR is greatly reduced in some measurements when a dark speckle covers the aperture. Thus, while the lab experiment’s target depth and NF are both well within realistic ranges, they do not span the whole of those ranges.

2. Target and Mount

The target is a 12-inch square made out of Spectralon, which is a highly Lambertian material with up to 99% reflectance. Because we need 60° slope to achieve 4.4 cm of depth, the target must be very diffuse. Otherwise, the signal levels will drop off too severely as the slope increases. Spectralon is among the most diffuse of the widely available high-reflectance materials. Unfortunately, it introduces volume scattering, which we must quantify and account for when comparing the experimental results to the simulation results. It also completely depolarizes the light, so we add a linear polarizer in front of the aperture.

We mount the target on a biaxial rotation stage. One axis allows precise control of the slope angle relative to the illumination and imaging vectors, thus controlling the depth. The other axis rotates the target about its normal. Such rotation causes the speckle field at the aperture to move. When the illuminated portion of the target moves by one aperture width, a new portion of the speckle field illuminates the aperture, thus randomizing the image-plane speckle. By illuminating swaths near the edges of the target, we achieve a new speckle realization with every 0.25° of rotation, thus allowing 1,440 speckle realizations per 360° rotation. Depending on the width of the illumination beam, we fit up to seven different swaths within 4 cm of the target’s edges, producing up to 10,080 independent speckle realizations for estimation of speckle statistics.

Because the imaging system cannot resolve the beam on target, the image from each rotational position provides only one speckle measurement. Thus, we only use the measurements from the on-axis pixel of the imaging system to estimate statistics. We compute both the mean and the standard deviation of the measurements from the on-axis pixel. Then, we quantify speckle strength using the metric known as speckle contrast, C, which is defined as

C = σI / Ī, (4)
where σI is the irradiance standard deviation, and Ī is the mean irradiance [19].
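The contrast metric is easy to exercise on synthetic data. In the sketch below (an illustration, not the paper's processing code), fully developed polarized speckle gives exponentially distributed irradiance and C ≈ 1, while incoherently averaging N independent speckle patterns drives C toward 1/√N.

```python
import numpy as np

# Speckle contrast C = sigma_I / mean(I). Fully developed, polarized,
# coherent speckle has exponentially distributed irradiance, so C -> 1.
# Incoherently averaging N independent patterns reduces C toward 1/sqrt(N).
rng = np.random.default_rng(0)

def contrast(I):
    return I.std() / I.mean()

I_coh = rng.exponential(1.0, size=200_000)                     # one degree of freedom
I_poly = rng.exponential(1.0, size=(4, 200_000)).mean(axis=0)  # N = 4
print(contrast(I_coh), contrast(I_poly))   # roughly 1.0 and 0.5
```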

3. Vibration Reduction and Postprocessing

Due to the small aperture and relatively long distance to the target, vibration is quite significant in this experiment. To reduce vibration, we floated the optical table, removed all moving parts from the table (except the target mount), disabled the ventilation system during measurements, added vibration isolation washers between the target and its mount, improved mount rigidity, and reduced the detector’s integration time to 500 μs. Even so, vibration effects significantly reduced the measured speckle contrast.

We remove the effects of vibration (a form of temporal integration) by taking measurements with both coherent and polychromatic light. First, we set the target to a particular slope angle and illuminate it with highly coherent light (i.e., the EOM is not activated) while taking a full set of measurements to estimate speckle contrast in the presence of vibration. Next, we activate the EOM and take a second set of measurements, now with polychromatic light. Then, we convert the two measured contrast values to degrees of freedom according to

N = 1/C², (5)
where N is the number of degrees of freedom [19]. We assume that the vibration effects are independent of polychromatic speckle mitigation, allowing us to compute the vibration-removed degrees of freedom, NVibFree, as
NVibFree = Npoly / Ncoh, (6)
where Npoly is the polychromatic result, and Ncoh is the coherent result [19]. We then convert NVibFree back into contrast by applying the inverse form of Eq. (5). Because the vibration levels vary depending on the slope angle of the target, we repeat this process at each slope angle.
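The conversion chain in Eqs. (5) and (6) is simple to implement; a minimal sketch follows (the function name and example contrast values are ours, not the paper's). Note that the chain algebraically reduces to CVibFree = Cpoly / Ccoh.

```python
# Vibration removal via degrees of freedom: N = 1/C^2 (Eq. (5)),
# N_VibFree = N_poly / N_coh (Eq. (6)), then invert back to contrast.
# Minimal sketch of the postprocessing described in the text.
def vibration_free_contrast(C_poly, C_coh):
    N_poly = 1.0 / C_poly**2        # degrees of freedom, polychromatic run
    N_coh = 1.0 / C_coh**2          # degrees of freedom, coherent run
    N_vib_free = N_poly / N_coh     # vibration effects divided out
    return 1.0 / N_vib_free**0.5    # inverse form of Eq. (5)

# Example: measured contrasts of 0.50 (polychromatic) and 0.90 (coherent)
print(vibration_free_contrast(0.50, 0.90))   # -> 0.5555... (= 0.50 / 0.90)
```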

Notably, such postprocessing not only removes the vibration effects, but it also removes any other mitigation effects which do not rely on polychromatic illumination, including polarization diversity, angle diversity, and spatial integration. Polarization diversity is negligible due to the linear polarizer in front of the aperture. Further, because the light which exits the single-mode fiber is highly spatially coherent, the angle diversity is also negligible. However, the 25 μm width of the detector’s pixels does cause some spatial integration which is subsequently removed by Eq. (6). In view of this fact, we run the simulations with very small numerical grid spacing to produce negligible spatial averaging, thus providing a fair comparison between the numerical and experimental results.

Finally, we account for the effects of volume scattering in the Spectralon target. To do so, we make use of the coherent measurements taken for the removal of vibration effects. At 0° slope, the macroscopic target depth is zero. Therefore, any difference between the coherent and polychromatic measurements is due to the depth caused by microscopic roughness and volume scattering. So, when the slope is 0°, NVibFree is in fact the degrees of freedom due to volume scattering (which dominates over roughness for this target). We assume that the volume scattering introduces optical path differences (OPDs) which follow a Gaussian distribution. This allows us to compute the equivalent Gaussian-distributed surface roughness empirically by using either the depth-slicing or spectral-slicing method in simulation. Doing so reveals that a surface roughness standard deviation of 1.46 mm produces the correct speckle reduction. So, the simulations use that level of roughness to allow direct comparisons between numerical and experimental results. Notably, we use this approach due to the dependence between volume scattering and macroscopic depth for speckle mitigation. Because of that dependence, we cannot simply remove the volume scattering effects in postprocessing by using an equation similar to Eq. (6).

B. Amplification

We create the polychromatic light using phase modulation via an EOM. However, the fiber-based EOM can only handle tens of milliwatts of power, while we need about 1 W to take low-noise measurements. So, we use a two-stage fiber amplifier to boost the power to 1.3 W after the EOM.

We measured the light’s spectrum at multiple points using an optical spectrum analyzer (OSA). Figure 3 shows several of those measurements. In particular, Fig. 3(a) shows the spectrum just after the EOM, while Fig. 3(b) shows the spectrum after the amplifier stages. Due to equipment availability, we took the data in Fig. 3(a) with a Yokogawa AQ6317C, while we took that in Fig. 3(b) with an AQ6370B model. Even so, the two plots agree quite well, indicating that the spectrum hardly changes during the amplification process. Further, Fig. 3(b) shows the spectrum both 2 min after turn on and 10 min after turn on. The two curves agree almost exactly, indicating that the spectrum is stable over time, as is necessary to allow repeatable measurements.

C. Scaling Target Fresnel Number

In this experiment, we scale the target Fresnel number to two different values by adjusting the width of the beam on target. Figure 4 shows the irradiance patterns associated with the two target Fresnel numbers. The pattern in Fig. 4(a) produces an NF of 1.37, and it is roughly Gaussian. On the other hand, the pattern in Fig. 4(b) is far from Gaussian. So, not only does it provide a smaller NF of 0.59, but it also provides an example of non-Gaussian illumination. Later, we will see that the simulations agree well with the experiments in both cases. To measure these patterns, we moved the detector to only 0.500 ± 0.005 m from the target, while we increased the aperture size to 1.0 cm. The shorter distance and larger aperture allowed higher resolution imaging of the patterns on target.


Fig. 4. Irradiance patterns on the target. Image (a) is the pattern used for the larger target Fresnel number, while (b) is used for the smaller one. Note that (a) is close to Gaussian, but (b) is quite different.


4. RESULTS AND DISCUSSION

Figure 5 compares the results of the simulations to those of the experiment for the larger target Fresnel number of 1.37, which corresponds to a marginally resolved target. It shows the speckle contrast as a function of the macroscopic target depth, which does not include microscopic surface roughness and volume scattering. In fact, that volume scattering causes the speckle contrast to stay below unity even when the macroscopic target depth is zero. Because the imaging system only marginally resolves the beam on target, each pixel sees the entire macroscopic target depth across the whole beam. Thus, the macroscopic target depth within each resolution cell is equal to the total macroscopic depth. The bars about the experimental results represent the 95% confidence intervals (CIs). The primary factor affecting the size of those intervals is the number of independent speckle measurements taken for each data point. For the spectral-slicing and depth-slicing simulation curves, the 95% CIs are indicated by the size of the markers at the data points. Those sizes are dominated by the number of simulation realizations. The Monte Carlo results do not include markers because their confidence intervals are too small to be visible in the plot.


Fig. 5. Comparison of experimental and simulation results for the larger beam size. Results from all three polychromatic wave-optics methods are shown. The marker size on those curves indicates the 95% confidence intervals (CIs) due to the finite number of simulation realizations. The bars about the experimental results also represent 95% CIs, which are now dominated by the number of independent speckle measurements. The Monte Carlo method appears to be inaccurate, while both spectral-slicing and depth-slicing results are accurate.


We selected the settings for the numerical simulations such that the error due to any single cause is less than 1%. To achieve that goal, we considered a number of sampling requirements, including some which depend on the particular polychromatic simulation method. The interested reader may wish to consult Part 1 of this two-part study (see [44]) for the details of those sampling requirements. Further, to provide the best possible match between the simulations and the experiment, we used the measured spectrum and irradiance profiles from Figs. 3 and 4 in the simulations.

Figure 5 indicates that the Monte Carlo method may be inaccurate, which is not surprising given that it assumes a resolved target [44]. On the other hand, both the depth-slicing and spectral-slicing methods produce results which fall within the confidence intervals of the experiment at all points. That said, the two methods disagree slightly when the depth is less than 1 cm. In Part 1, we found that the depth-slicing method sometimes exhibits small errors when the target depth is either about equal to or smaller than the coherence length, so we expect that the spectral-slicing method is more accurate in that regime [44].

Figure 6 shows the results for the smaller target Fresnel number of 0.59. This case is well within the unresolved regime. Now, we clearly see that the Monte Carlo method is inaccurate. By comparing the Monte Carlo curves in Fig. 5 to those in Fig. 6, we see that the Monte Carlo method quickly loses accuracy as the target Fresnel number drops. In Fig. 6, depth slicing also appears to be slightly inaccurate at a depth of 2.93 mm, where its result falls outside of the 95% confidence intervals of the experiment. At that point, its error is 7.8% ± 5.4%. This result provides further support to the theory that depth slicing is sometimes inaccurate when the target depth is small. On the positive side, the spectral-slicing results again fall within the confidence intervals of the experiment at all points. So, we conclude that these experiments validate the spectral-slicing method over the range of conditions tested. They also validate the depth-slicing method over most of that range, with the exception of small target depth.


Fig. 6. Comparison of experimental and simulation results for the smaller beam size. The format matches that of Fig. 5. Again, the spectral-slicing method is accurate, while the Monte Carlo method is not. Unlike before, one depth-slicing result now falls outside of the experiment confidence intervals, indicating that the depth-slicing method may be slightly inaccurate when the target’s depth is small.


5. CONCLUSION

Previously, in Part 1 of this two-part study, we investigated the limitations, sampling requirements, and efficiencies of three polychromatic wave-optics models, namely Monte Carlo averaging, depth slicing, and spectral slicing. We also used theory to quantify the accuracy of the models for cases involving well-resolved targets. Now, in Part 2, we investigated model accuracy for unresolved imaging scenarios by using a laboratory experiment. Such scenarios are outside of the regime of the existing theory.

The experimental results presented here cover a wider range with a much larger number of data points than we reported previously (in a recent conference paper [43]). The additional data better defined the accuracy of the methods. They conclusively showed that the Monte Carlo method is inaccurate for unresolved targets. On the other hand, the spectral-slicing results always fell within the confidence intervals of the experiment, which validates that model over the range of conditions considered here. Finally, the depth-slicing results were usually accurate, but they did indicate some error when the target depth was about equal to or less than the coherence length. So, that method is also valid unless the depth is small.

Additionally, we provided some key details regarding the target material selection, vibration reduction, postprocessing techniques, source spectrum, and beam shapes used in the experiments. Future experiments could expand on this work by including the effects of atmospheric turbulence. Future work could also utilize the validated models to quantify the benefits of polychromatic illumination for wavefront sensing. Overall, the validated models should find use in a number of areas, including remote sensing, target tracking, free-space optical communication, and power beaming.

Acknowledgment

The authors thank Dan Marker and Trevor Moore of the Air Force Research Laboratory (AFRL) for many useful discussions and varied assistance. The views expressed in this paper are those of the authors and do not reflect the official policy or position of the U.S. Air Force, the Department of Defense, or the U.S. Government.

REFERENCES

1. I. Markhvida, L. Tchvialeva, T. K. Lee, and H. Zeng, “Influence of geometry on polychromatic speckle contrast,” J. Opt. Soc. Am. A 24, 93–97 (2007).

2. C. M. P. Rodrigues and J. L. Pinto, “Contrast of polychromatic speckle patterns and its dependence to surface heights distribution,” Opt. Eng. 42, 1699–1703 (2003).

3. J. M. Huntley, “Simple model for image-plane polychromatic speckle contrast,” Appl. Opt. 38, 2212–2215 (1999).

4. V. Molebny, P. McManamon, O. Steinvall, T. Kobayashi, and W. Chen, “Laser radar: historical prospective–from the East to the West,” Opt. Eng. 56, 031220 (2016).

5. P. F. McManamon, “Review of ladar: a historic, yet emerging, sensor technology with rich phenomenology,” Opt. Eng. 51, 060901 (2012).

6. S. Sahin, Z. Tong, and O. Korotkova, “Sensing of semi-rough targets embedded in atmospheric turbulence by means of stochastic electromagnetic beams,” Opt. Commun. 283, 4512–4518 (2010).

7. Y. Cai, O. Korotkova, H. T. Eyyuboglu, and Y. Baykal, “Active laser radar systems with stochastic electromagnetic beams in turbulent atmosphere,” Opt. Express 16, 15834–15846 (2008).

8. R. D. Richmond and S. C. Cain, Direct-Detection LADAR Systems (SPIE, 2010).

9. J. G. Manni and J. W. Goodman, “Versatile method for achieving 1% speckle contrast in large-venue laser projection displays using a stationary multimode optical fiber,” Opt. Express 20, 11288–11315 (2012).

10. J. Riker, “Requirements on active (laser) tracking and imaging from a technology perspective,” Proc. SPIE 8052, 805202 (2011).

11. N. R. Van Zandt, J. E. McCrae, and S. T. Fiorino, “Modeled and measured image-plane polychromatic speckle contrast,” Opt. Eng. 55, 024106 (2016).

12. D. Dayton, J. Allen, R. Nolasco, G. Fertig, and M. Myers, “Comparison of fast correlation algorithms for target tracking,” Proc. SPIE 8520, 85200G (2012).

13. R. S. Pierre, G. Holleman, M. Valley, H. Injeyan, J. Berg, G. Harpole, R. Hilyard, M. Mitchell, M. Weber, J. Zamel, T. Engler, D. Hall, R. Tinti, and J. Machan, “Active tracker laser (ATLAS),” in Advanced Solid State Lasers, C. Pollock and W. Bosenberg, eds., OSA Trends in Optics and Photonics Series (Optical Society of America, 1997), Vol. 10, paper HP4.

14. R. K. Tyson, Introduction to Adaptive Optics (SPIE, 2000).

15. M. F. Spencer, R. A. Raynor, M. T. Banet, and D. K. Marker, “Deep-turbulence wavefront sensing using digital-holographic detection in the off-axis image plane recording geometry,” Opt. Eng. 56, 031213 (2016).

16. N. R. Van Zandt, S. J. Cusumano, R. J. Bartell, S. Basu, J. E. McCrae, and S. T. Fiorino, “Comparison of coherent and incoherent laser beam combination for tactical engagements,” Opt. Eng. 51, 104301 (2012).

17. M. Laurenzis, Y. Lutz, F. Christnacher, A. Matwyschuk, and J. Poyet, “Homogeneous and speckle-free laser illumination for range-gated imaging and active polarimetry,” Opt. Eng. 51, 061302 (2012).

18. T. Iwai and T. Asakura, “Speckle reduction in coherent information processing,” in Proceedings of the IEEE (IEEE, 1996), Vol. 84, pp. 765–781.

19. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts & Company, 2007).

20. J. W. Goodman, Statistical Optics (Wiley, 1985).

21. J. C. Dainty and W. T. Welford, “Reduction of speckle in image plane hologram reconstruction by moving pupils,” Opt. Commun. 3, 289–294 (1971).

22. J. Bures, C. Delisle, and A. Zardecki, “Détermination de la Surface de Cohérence à Partir d’une Expérience de Photocomptage,” Can. J. Phys. 50, 760–768 (1972).

23. M. Elbaum, M. Greenbaum, and M. King, “A wavelength diversity technique for reduction of speckle size,” Opt. Commun. 5, 171–174 (1972).

24. R. A. Sprague, “Surface roughness measurement using white light speckle,” Appl. Opt. 11, 2811–2816 (1972).

25. N. George and A. Jain, “Speckle reduction using multiple tones of illumination,” Appl. Opt. 12, 1202–1212 (1973).

26. H. Pedersen, “On the contrast of polychromatic speckle patterns and its dependence on surface roughness,” Opt. Acta 22, 15–24 (1975).

27. K. Nakagawa and T. Asakura, “Average contrast of white-light image speckle patterns,” Opt. Acta 26, 951–960 (1979).

28. H. Pedersen, “Second-order statistics of light diffracted from Gaussian, rough surfaces with applications to the roughness dependence of speckles,” Opt. Acta 22, 523–535 (1975).

29. G. Parry, “Speckle patterns in partially coherent light,” in Laser Speckle and Related Phenomena, J. C. Dainty, ed. (Springer-Verlag, 1975), pp. 78–120.

30. T. S. McKechnie, “Image-plane speckle in partially coherent illumination,” Opt. Quantum Electron. 8, 61–67 (1976).

31. Y.-Q. Hu, “Dependence of polychromatic-speckle-pattern contrast on imaging and illumination directions,” Appl. Opt. 33, 2707–2714 (1994).

32. L. Tchvialeva, T. K. Lee, I. Markhvida, D. I. McLean, H. Lui, and H. Zeng, “Using a zone model to incorporate the influence of geometry on polychromatic speckle contrast,” Opt. Eng. 47, 074201 (2008).

33. L. Tchvialeva, I. Markhvida, and T. K. Lee, “Error analysis for polychromatic speckle contrast measurements,” Opt. Lasers Eng. 49, 1397–1401 (2011).

34. N. R. Van Zandt, M. W. Hyde IV, S. Basu, D. G. Voelz, and X. Xiao, “Synthesizing time-evolving partially-coherent Schell-model sources,” Opt. Commun. 387, 377–384 (2017).

35. Y. Chen, F. Wang, J. Yu, L. Liu, and Y. Cai, “Vector Hermite-Gaussian correlated Schell-model beam,” Opt. Express 24, 15232–15250 (2016).

36. X. Chen, C. Chang, Z. Chen, Z. Lin, and J. Pu, “Generation of stochastic electromagnetic beams with complete controllable coherence,” Opt. Express 24, 21587–21596 (2016).

37. M. W. Hyde IV, S. Bose-Pillai, D. G. Voelz, and X. Xiao, “Generation of vector partially coherent optical sources using phase-only spatial light modulators,” Phys. Rev. Appl. 6, 064030 (2016).

38. M. W. Hyde IV, S. Bose-Pillai, X. Xiao, and D. G. Voelz, “A fast and efficient method for producing partially coherent sources,” J. Opt. 19, 025601 (2017).

39. Y. Cai, Y. Chen, J. Yu, X. Liu, and L. Liu, “Generation of partially coherent beams,” Prog. Opt. 62, 157–223 (2017).

40. C. Zeringue, I. Dajani, S. Naderi, G. T. Moore, and C. Robin, “A theoretical study of transient stimulated Brillouin scattering in optical fibers seeded with phase-modulated light,” Opt. Express 20, 21196–21213 (2012).

41. A. V. Harish and J. Nilsson, “Optimization of phase modulation with arbitrary waveform generators for optical spectral control and suppression of stimulated Brillouin scattering,” Opt. Express 23, 6988–6999 (2015).

42. B. Anderson, A. Flores, R. Holten, and I. Dajani, “Comparison of phase modulation schemes for coherently combined fiber amplifiers,” Opt. Express 23, 27046–27060 (2015).

43. N. R. Van Zandt, M. F. Spencer, M. J. Steinbock, B. M. Anderson, M. W. Hyde, and S. T. Fiorino, “Comparison of polychromatic wave-optics models,” Proc. SPIE 9982, 998209 (2016).

44. N. R. Van Zandt, J. E. McCrae, M. F. Spencer, M. J. Steinbock, M. W. Hyde IV, and S. T. Fiorino, “Polychromatic wave-optics models for image-plane speckle. 1. Well-resolved objects,” Appl. Opt. 57, 4090–4102 (2018).

45. V. P. Lukin and B. V. Fortes, Adaptive Beaming and Imaging in the Turbulent Atmosphere (SPIE, 2002).

46. G. P. Perram, S. J. Cusumano, R. L. Hengehold, and S. T. Fiorino, Introduction to Laser Weapon Systems (Directed Energy Professional Society, 2010).

47. M. A. Vorontsov, V. V. Kolosov, and A. Kohnle, “Adaptive laser beam projection on an extended target: phase- and field-conjugate precompensation,” J. Opt. Soc. Am. A 24, 1975–1993 (2007).

48. J. D. Barchers, D. L. Fried, and D. J. Link, “Evaluation of the performance of Hartmann sensors in strong scintillation,” Appl. Opt. 41, 1012–1021 (2002).

49. G. Artzner, “Microlens arrays for Shack-Hartmann wavefront sensors,” Opt. Eng. 31, 1311–1322 (1992).

50. A. Deninger and T. Renner, “12 orders of coherence control,” Toptica Appl-1010 (2010), http://www.toptica.com/fileadmin/Editors_English/12_literature/quantum_technologies/12_orders_of_coherence_control.pdf.

51. Y. Aoki, K. Tajima, and I. Mito, “Input power limits of single-mode optical fibers due to stimulated Brillouin scattering in optical communication systems,” J. Lightwave Technol. 6, 710–719 (1988).

52. E. Lichtman, R. G. Waarts, and A. A. Friesem, “Stimulated Brillouin scattering excited by a modulated pump wave in single-mode fibers,” J. Lightwave Technol. 7, 171–174 (1989).



Figures (6)

Fig. 1. Polychromatic illumination of a planar object (a.k.a. target) at 45° slope. Image-plane speckle is mitigated due to the multiple coherence regions within a single resolution cell on target.

Fig. 2. Validation experiment layout. A highly coherent master oscillator (MO) laser provides light for illumination. Optionally, an electro-optic modulator broadens the linewidth to 29.5 GHz. The light then scatters off of the rough target, which we image through a 0.5 mm aperture such that the beam on target is unresolved. We then measure the speckle statistics for comparison with results from several different wave-optics models.

Fig. 3. Measured illuminator spectra. Plot (a) shows the spectrum immediately after the EOM, while (b) shows that after the fiber amplifier. The similarity between (a) and (b) indicates that the spectrum hardly changes during amplification. Also, the two curves in (b) show the spectrum 2 min after turn on and 10 min after turn on, respectively. They are almost identical, indicating that the spectrum is stable over time.

Fig. 4. Irradiance patterns on the target. Image (a) is the pattern used for the larger target Fresnel number, while (b) is used for the smaller one. Note that (a) is close to Gaussian, but (b) is quite different.

Fig. 5. Comparison of experimental and simulation results for the larger beam size. Results from all three polychromatic wave-optics methods are shown. The marker size on those curves indicates the 95% confidence intervals (CIs) due to the finite number of simulation realizations. The bars about the experimental results also represent 95% CIs, which are now dominated by the number of independent speckle measurements. The Monte Carlo method appears to be inaccurate, while both spectral-slicing and depth-slicing results are accurate.

Tables (1)


Table 1. Comparison of a Nominal System with the Experiment

Equations (6)


$$N_F = \frac{D_T}{\lambda R},$$
$$l_c = c \int_{-\infty}^{\infty} \left|\gamma(\tau)\right|^2 \, d\tau.$$
$$\gamma(\tau) = \int_{-\infty}^{\infty} G(\nu) \exp(j 2\pi \nu \tau) \, d\nu,$$
$$C = \frac{\sigma_I}{\bar{I}},$$
$$N = \frac{1}{C^2},$$
$$N_{\mathrm{VibFree}} = \frac{N_{\mathrm{poly}}}{N_{\mathrm{coh}}},$$
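The coherence-length and contrast definitions above lend themselves to a quick numerical check. The sketch below (Python with NumPy) is our illustration, not the authors' code: the Gaussian line shape is an assumed stand-in for the measured EOM spectrum of Fig. 3, the 29.5 GHz FWHM is taken from the Fig. 2 caption, and all function and variable names are hypothetical. It evaluates $l_c = c\int|\gamma(\tau)|^2\,d\tau$ via Parseval's theorem (so $\gamma(\tau)$ never needs to be formed explicitly) and estimates $C = \sigma_I/\bar{I}$ and $N = 1/C^2$ from a simulated fully developed speckle pattern:

```python
import numpy as np

C_LIGHT = 2.998e8  # speed of light (m/s)

def coherence_length(nu, G):
    """l_c = c * integral |gamma(tau)|^2 dtau.  Because gamma(tau) is the
    Fourier transform of the normalized spectrum G(nu), Parseval's theorem
    gives integral |gamma|^2 dtau = integral G(nu)^2 dnu."""
    dnu = nu[1] - nu[0]                 # uniform frequency-grid spacing
    G = G / (G.sum() * dnu)             # normalize so integral G dnu = 1
    return C_LIGHT * np.sum(G**2) * dnu

def speckle_stats(I):
    """Contrast C = sigma_I / mean(I) and reduction factor N = 1 / C^2."""
    C = np.std(I) / np.mean(I)
    return C, 1.0 / C**2

# Assumed Gaussian line shape with the 29.5 GHz FWHM quoted in Fig. 2
fwhm = 29.5e9
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
nu = np.linspace(-10.0 * sigma, 10.0 * sigma, 20001)
G = np.exp(-nu**2 / (2.0 * sigma**2))

lc = coherence_length(nu, G)
# Closed form for a Gaussian: l_c = sqrt(2 ln 2 / pi) * c / FWHM (a few mm here)
lc_theory = np.sqrt(2.0 * np.log(2.0) / np.pi) * C_LIGHT / fwhm

# Fully developed monochromatic speckle: a circular-Gaussian field gives C -> 1
rng = np.random.default_rng(0)
E = rng.normal(size=10**6) + 1j * rng.normal(size=10**6)
C, N = speckle_stats(np.abs(E)**2)
```

For the actual experiment, one would substitute the measured spectrum of Fig. 3 for the Gaussian stand-in; the Parseval shortcut applies to any real, nonnegative power spectrum.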