Optica Publishing Group

Shack-Hartmann wavefront sensor optical dynamic range

Open Access

Abstract

The widely used lenslet-bound definition of the Shack-Hartmann wavefront sensor (SHWS) dynamic range is based on the permanent association between groups of pixels and individual lenslets. Here, we formalize an alternative definition that we term optical dynamic range, based on avoiding the overlap of lenslet images. The comparison of both definitions for Zernike polynomials up to the third order plus spherical aberration shows that the optical dynamic range is larger by a factor proportional to the number of lenslets across the SHWS pupil. Finally, a pre-centroiding algorithm to facilitate lenslet image location in the presence of defocus and astigmatism is proposed. This approach, based on the SHWS image periodicity, is demonstrated using optometric lenses that translate lenslet images outside the projected lenslet boundaries.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The Shack-Hartmann wavefront sensor (SHWS) [1,2] is widely used in science [3–10], industry [11–13] and medicine [14–17]. This device samples wavefronts by estimating the centroids of images formed by an array of lenslets onto a pixelated detector. When the pixelated sensor is at the geometrical back focal plane [18] of a uniformly-illuminated lenslet [19], the displacement of a lenslet image centroid ${\boldsymbol \rho }$ from a reference position is proportional to the gradient of the wavefront $W({\boldsymbol r} )$ averaged over the lenslet [20],

$${\boldsymbol \rho } = {f_l}\frac{{\smallint\!\!\!\smallint \nabla W({\boldsymbol r} ){\textrm{d}^\textrm{2}}{\boldsymbol r}}}{{\smallint\!\!\!\smallint {\textrm{d}^\textrm{2}}{\boldsymbol r}}}.$$

Here, ${f_l}$ is the lenslet focal length and the double integrals are evaluated over the lenslet area.

A critical design specification for SHWSs is the dynamic range for aberrations of interest. Despite its importance, there are different, and often misunderstood, SHWS dynamic range definitions. Here we first examine the commonly used lenslet-bound definition, which is based on the permanent association between non-overlapping groups of pixels and the corresponding lenslets. The resulting modest dynamic range has motivated the development of numerous SHWS variations [21–29], often based on the sequential sampling of pupil regions [24,30–37], and with some being substantially different and/or more complex instruments [38–45]. By removing the permanent association between pixels and lenslets, which can be thought of as a software [46–48] or hardware limitation, we formalize a previously proposed definition based on avoiding the overlap of adjacent lenslet images [49–57]. Because this definition does not include the pixelated sensor, we refer to it as the SHWS optical dynamic range. This definition is followed by a comparison of both the lenslet-bound and optical dynamic ranges, first for an arbitrary wavefront, and then, for each Zernike polynomial up to the third order plus spherical aberration. Then, we derive the relation between the lenslet image lattice vectors for periodic SHWS lenslet arrays and the Zernike polynomial coefficients for defocus and astigmatism. Finally, we use this relation to demonstrate a pre-centroiding algorithm that facilitates the assignment of lenslet images to groups of pixels on which to centroid in order to obtain a precise wavefront estimation.

2. SHWS dynamic range based on wavefront first derivative

The pixelated sensors of the first SHWSs consisted of arrays of pixel arrays with their output wired to readout and processing electronics dedicated to each lenslet [58]. This permanent association between pixels and lenslets led to a lenslet-bound definition of dynamic range in terms of the maximum wavefront slope that would shift a lenslet image towards the edge of its pixel group, as depicted in Fig. 1 [13,23,25,38,45,59–63]. In this way, for a group of pixels with diameter ${D_l}$, the maximum wavefront slope averaged over a lenslet $|{{\theta_{\textrm{max}}}} |= \left|{{\smallint\!\!\!\smallint \nabla W({\boldsymbol r} ){\textrm{d}^2}{\boldsymbol r}}/{\smallint\!\!\!\smallint {\textrm{d}^2}{\boldsymbol r}}} \right|$ before the lenslet image of width ${D_i}$ reaches the edge of the pixel group is,

$$|{{\theta_{\textrm{max}}}} |= \frac{1}{{2{f_l}}}({{D_l} - {D_i}} ),$$
where it is assumed that the image position for a flat wavefront is at the center of the pixel group. These groups are often chosen as consisting of all pixels within the projection of the lenslet boundary onto the pixelated sensor, in which case ${D_l}$ is the lenslet pitch, which coincides with its diameter for lenslets with 100% fill factor.
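As a concrete illustration, Eq. (2) can be evaluated for representative values; the focal length and pitch below match the lenslet array described in Section 6, while the image width ${D_i}$ is an assumed value for illustration only.

```python
# Lenslet-bound dynamic range, Eq. (2): maximum average wavefront slope
# before a lenslet image reaches the edge of its pixel group.
# f_l and D_l match the lenslet array of Section 6; D_i is an assumed
# illustrative image width, not a measured value.
f_l = 7.6e-3   # lenslet focal length (m)
D_l = 300e-6   # lenslet pitch / pixel-group diameter (m)
D_i = 50e-6    # assumed lenslet image width (m)

theta_max = (D_l - D_i) / (2 * f_l)  # maximum average slope (rad)
```

For these values the lenslet-bound limit is roughly 16 mrad of average wavefront slope per lenslet.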

 figure: Fig. 1.

Fig. 1. Shack-Hartmann wavefront sensor geometry used to define lenslet-bound dynamic range such that the lenslet image (red spot) does not reach the boundary of the lenslet projection onto the sensor, in which the pixels are depicted as gray squares.


The condition in Eq. (2) can be used to calculate the maximum measurable amplitude ${a_j}$ of a wavefront ${Z_j}({x,y} )$ over the SHWS pupil $\mathrm{\Omega }$, by solving

$$\frac{1}{{2{f_l}}}({{D_l} - {D_i}} )\textrm{ = }\frac{{2{a_j}}}{{{D_{p}}}}\; \textrm{max}\left\{ {\textrm{ma}{\textrm{x}_\mathrm{\Omega }}\left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial x}}} \right|,\textrm{ma}{\textrm{x}_\mathrm{\Omega }}\left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial y}}} \right|} \right\},$$
where it is assumed that the lenslets are squares with their sides along the $x$- and $y$-axis and ${D_{p}}$ is the SHWS pupil diameter. This condition ignores the averaging of the wavefront slope over each lenslet in the interest of simplicity, leading to a slight underestimation of the amplitude of the maximum measurable wavefronts. For reasons that will become apparent later, let us now rewrite this condition using the directional derivative of the wavefront along a line with slope $\tan \alpha $ relative to the $x$-axis,
$${\nabla _\alpha }W({x,y} )\; = \nabla W({x,y} )\cdot ({\cos \alpha ,\sin \alpha } ),$$
where ${\cdot} $ denotes the dot product between vectors on the $xy$ plane. Therefore, we can use this definition to rewrite Eq. (3) as
$$\frac{1}{{2{f_l}}}({{D_l} - {D_i}} )\textrm{ = }\frac{{2{a_j}}}{{{D_{p}}}}\textrm{ma}{\textrm{x}_{\mathrm{\Omega },\; \alpha = 0,\pi }}|{{\nabla_\alpha }{Z_j}({x,y} )} |.$$

For hexagonal lenslets, this condition must be modified to consider the lenslet image reaching any of the six lenslet sides along the lines defined by $\alpha ={-} \pi /3$, 0 and $\pi /3$. Since the wavefront slope variation is often negligible across individual lenslets, we can generalize this definition by maximizing the directional wavefront derivative over a circle inscribed in the lenslet, that is,

$$\frac{{({{D_l} - {D_i}} )}}{{2{f_l}}}\textrm{ = }\frac{{2{a_j}}}{{{D_{p}}}}\textrm{ma}{\textrm{x}_{\mathrm{\Omega },\; \alpha }}|{{\nabla_\alpha }{Z_j}({x,y} )} |,$$
with $\alpha \in [{0,\pi } ]$. The analytical calculation of the directional derivative maxima might not be trivial, as this is a problem of maximization with constraints, with the SHWS pupil being the constraint. Thus, numerical evaluation of this condition might be preferable.
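As suggested above, the maximization in Eq. (6) can be carried out on a grid of pupil points and directions. The sketch below is a minimal illustration of this numerical route, assuming a unit-radius circular pupil and using OSA-normalized defocus as the test polynomial; the grid and angle sampling densities are arbitrary choices.

```python
import numpy as np

def max_directional_derivative(dWdx, dWdy, n_grid=201, n_alpha=91):
    # Numerically maximize |grad_alpha Z_j| over a unit-radius circular
    # pupil and over directions alpha in [0, pi], per Eq. (6).
    x = np.linspace(-1.0, 1.0, n_grid)
    X, Y = np.meshgrid(x, x)
    inside = X**2 + Y**2 <= 1.0
    gx, gy = dWdx(X, Y)[inside], dWdy(X, Y)[inside]
    alphas = np.linspace(0.0, np.pi, n_alpha)
    # |grad_alpha W| = |gx*cos(alpha) + gy*sin(alpha)| at every pupil
    # sample and direction, per Eq. (4).
    vals = np.abs(np.outer(gx, np.cos(alphas)) + np.outer(gy, np.sin(alphas)))
    return vals.max()

# OSA defocus Z4 = sqrt(3)(2x^2 + 2y^2 - 1); its gradient is
# (4*sqrt(3)*x, 4*sqrt(3)*y), so the maximum is 4*sqrt(3) at the pupil edge.
m = max_directional_derivative(lambda x, y: 4*np.sqrt(3)*x,
                               lambda x, y: 4*np.sqrt(3)*y)
```

Because the grid contains the pupil-edge point $(1, 0)$ and $\alpha = 0$, the numerical maximum here reproduces the analytical one exactly.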

3. SHWS dynamic range based on wavefront second derivative

Most SHWSs capture all the lenslet images with a single two-dimensional array of pixels, and thus, are not limited by the permanent association between pixels and lenslets. This allows for a dynamic range definition based on avoiding the partial overlap of lenslet images, irrespective of their position on the pixelated sensor. Here, we formalize this idea, independently suggested by multiple authors [4957], to which we refer as the optical dynamic range.

Let us start by assuming that two identical adjacent lenslets, indexed $k$ and $k + 1$, have their centers $({{x_k},{y_k}} )$ along a line parallel to the x-axis (i.e., ${y_{k + 1}} = {y_k}$) with ${x_{k + 1}} > {x_k}$, which entails no loss of generality. As depicted in Fig. 2, for such images to not overlap the following condition between angular displacements ${\theta _{x,k}}$ and ${\theta _{x,k + 1}}$ must be met

$${d_l} - {D_i} + {f_l}\; ({{\theta_{x,k + 1}} - {\theta_{x,k}}} )\ge 0,$$
with ${d_l}$ being the distance between the lenslets, which coincides with the lenslet diameter ${D_l}$ if the lenslet fill factor is 100%. If the lenslets are small relative to the period of the maximum spatial frequency of a wavefront $W({x,y} )$ with x and y normalized by the SHWS pupil radius (${D_{p}}/2)$, we can approximate the angular difference in terms of the wavefront second derivative as follows (see Appendix A)
$${\theta _{x,k + 1}} - {\theta _{x,k}} \approx \left( {\frac{{4{d_l}}}{{{D_{p}}^2}}} \right)\frac{{{\partial ^2}W({x,y} )}}{{\partial {x^2}}}.$$
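Equation (8) is exact for quadratic wavefronts, which makes it easy to sanity-check numerically. In the sketch below, all numeric values (pupil diameter, lenslet pitch, defocus amplitude) are illustrative assumptions, not measured quantities from this work.

```python
import numpy as np

# Illustrative (assumed) geometry: pupil diameter, lenslet pitch, and a
# defocus amplitude.
D_p, d_l, b4 = 6e-3, 300e-6, 0.5e-6

# OSA defocus in pupil coordinates normalized by D_p/2.
W = lambda x, y: b4 * np.sqrt(3) * (2*x**2 + 2*y**2 - 1)

def slope_x(xp, yp, h=1e-9):
    # Wavefront slope along x in physical coordinates (central difference),
    # applying the chain rule of Eq. (28) through the normalization.
    xn = lambda t: 2*t / D_p  # physical -> normalized coordinate
    return (W(xn(xp + h), xn(yp)) - W(xn(xp - h), xn(yp))) / (2*h)

# Slope difference between two adjacent lenslet centers vs. Eq. (8);
# for a quadratic wavefront the approximation is exact.
lhs = slope_x(d_l, 0.0) - slope_x(0.0, 0.0)
rhs = (4*d_l / D_p**2) * (4*np.sqrt(3)*b4)  # d2W/dx2 = 4*sqrt(3)*b4
```

The two sides agree to numerical precision, confirming the second-derivative form used in the optical dynamic range conditions that follow.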

 figure: Fig. 2.

Fig. 2. Shack-Hartmann wavefront sensor condition used to define optical dynamic range as to avoid overlap between images of adjacent lenslets (red spots on the top view), with the pixels depicted as gray squares (compare with Fig. 1).


Therefore, the maximum positive measurable amplitude $b_j^ + $ of a wavefront described by function ${Z_j}({x,y} )$ that avoids lenslet image overlap along the $x$-direction can be calculated by solving

$${d_l} - {D_i} + {f_l}\left( {\frac{{4{d_l}}}{{{D_{p}}^2}}} \right)b_j^ + \; \textrm{mi}{\textrm{n}_\mathrm{\Omega }}\left[ {\frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {x^2}}}} \right] = 0.$$

When the lenslet image is smaller than the lenslet separation (i.e., ${d_l} > {D_i})$ and because $b_j^ + > 0$, this condition can only be met if the function second derivative is negative somewhere within the pupil. Otherwise, the positive limit of the optical dynamic range $b_j^ + \; $ is +$\infty $, as is the case for defocus. In practice, of course, no SHWS can measure an infinitely large amount of positive defocus because the lenslet images will eventually reach the edge of the pixelated sensor. The same reasoning that led to Eq. (9) applied to a negative amplitude $b_j^ - $ yields

$${d_l} - {D_i} + {f_l}\left( {\frac{{4{d_l}}}{{{D_{p}}^2}}} \right)b_j^ - \; \textrm{ma}{\textrm{x}_\mathrm{\Omega }}\left[ {\frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {x^2}}}} \right] = 0.$$

Again, when ${d_l} > {D_i}$, and because $b_j^ - < 0$, this condition has a solution if the wavefront second derivative is positive somewhere within the pupil. From these two conditions, it is important to note that it is possible to have different minimum and maximum measurable amplitudes for the same wavefront aberration, that is, the optical dynamic range can be asymmetric.

The conditions in Eqs. (9) and (10), however, only consider the $x$-axis direction. As Fig. 3 depicts for a square lenslet array, a lenslet image (green circle) could be shifted by an aberrated wavefront towards images of adjacent or even distant lenslets (red circles) along the directions shown by the (orange) line segments. Using the second directional derivative $\nabla _\alpha ^2W$ defined as ${\nabla _\alpha }({{\nabla_\alpha }W} )$, we can generalize the condition for avoiding lenslet image overlap along the direction defined by the angle $\alpha $ as

$${d_l}(\alpha )- {D_i} + {f_l}\left( {\frac{{4{d_l}(\alpha )}}{{{D_{p}}^2}}} \right)b_j^ + \; \textrm{mi}{\textrm{n}_\mathrm{\Omega }}[{\nabla_\alpha^2{Z_j}({x,y} )} ]= 0,$$
for positive amplitudes and
$${d_l}(\alpha )- {D_i} + {f_l}\left( {\frac{{4{d_l}(\alpha )}}{{{D_{p}}^2}}} \right)b_j^ - \; \textrm{ma}{\textrm{x}_\mathrm{\Omega }}[{\nabla_\alpha^2{Z_j}({x,y} )} ]= 0,$$
for negative amplitudes, where the dependence of ${d_l}$ on $\alpha $ captures the fact that lenslets along different directions can be separated by different distances. This means that these equations have to be solved for each angle separately. In the interest of simplicity, we propose to use ${D_l}$, the minimum value of ${d_l}(\alpha )$, instead of ${d_l}(\alpha )$, being aware that this will lead to a slight underestimation of the optical dynamic range. Therefore, our proposed formulae for determining the optical dynamic range of a SHWS, irrespective of the lenslet geometry, are
$$b_j^ + \; ={-} \frac{{({{D_l} - {D_i}} ){D_{p}}^2}}{{4{f_l}{D_l}\; \textrm{mi}{\textrm{n}_{\mathrm{\Omega },\mathrm{\alpha }}}[{\nabla_\alpha^2{Z_j}({x,y} )} ]}},$$
for positive amplitudes and
$$b_j^ - \; ={-} \frac{{({{D_l} - {D_i}} ){D_{p}}^2}}{{4{f_l}{D_l}\; \textrm{ma}{\textrm{x}_{\mathrm{\Omega },\mathrm{\alpha }}}[{\nabla_\alpha^2{Z_j}({x,y} )} ]}}$$
for negative amplitudes, with the minimization and maximization performed over the entire pupil and for $\alpha \in [{0,\pi } ]$.
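Equations (13) and (14) can be evaluated numerically in the same way as the first-derivative condition, replacing the gradient with the second directional derivative of Eq. (15). The sketch below does this on a unit-radius pupil grid; the sensor parameters (${D_l}$, ${D_i}$, ${D_p}$, ${f_l}$) are assumed illustrative values, and defocus is used to show the asymmetric (infinite positive, finite negative) range discussed above.

```python
import numpy as np

def optical_dynamic_range(d2x, d2xy, d2y, D_l, D_i, D_p, f_l, n=101):
    # Extreme second directional derivative over the unit pupil, Eq. (15),
    # then Eqs. (13)-(14) for the positive and negative amplitude limits.
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    inside = X**2 + Y**2 <= 1.0
    a = np.linspace(0.0, np.pi, 91)
    w = (np.outer(d2x(X, Y)[inside], np.cos(a)**2)
         + np.outer(d2xy(X, Y)[inside], np.sin(2*a))
         + np.outer(d2y(X, Y)[inside], np.sin(a)**2))
    k = (D_l - D_i) * D_p**2 / (4.0 * f_l * D_l)
    b_pos = -k / w.min() if w.min() < 0 else np.inf   # Eq. (13)
    b_neg = -k / w.max() if w.max() > 0 else -np.inf  # Eq. (14)
    return b_pos, b_neg

# OSA defocus Z4 = sqrt(3)(2x^2 + 2y^2 - 1): constant positive curvature
# 4*sqrt(3) in every direction, so the positive limit is infinite and the
# negative one finite, i.e., an asymmetric optical dynamic range.
s = lambda x, y: 4*np.sqrt(3)*np.ones_like(x)
z = lambda x, y: np.zeros_like(x)
b_pos, b_neg = optical_dynamic_range(s, z, s, D_l=300e-6, D_i=50e-6,
                                     D_p=6e-3, f_l=7.6e-3)
```

For aberrations whose curvature changes sign across the pupil, the same function returns two finite limits of generally different magnitude.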

 figure: Fig. 3.

Fig. 3. Depiction of lenslet image locations in a Shack-Hartmann wavefront sensor with square lenslets (black outlines) showing “adjacent” (red) images to the green image along various directions indicated by the orange line segments.


4. Dynamic range definition comparison for low order Zernike polynomials

Let us now compare the two SHWS dynamic range definitions discussed above for the Zernike polynomials up to the third order and spherical aberration [64] through their ratios, noting that

$$\nabla _\alpha ^2W({x,y} )= \frac{{{\partial ^2}W({x,y} )}}{{\partial {x^2}}}\textrm{co}{\textrm{s}^2}\alpha + \frac{{{\partial ^2}W({x,y} )}}{{\partial x\partial y}}\; \textrm{sin}2\alpha + \frac{{{\partial ^2}W({x,y} )}}{{\partial {y^2}}}\textrm{si}{\textrm{n}^2}\alpha .$$

For positive wavefront amplitudes, we have

$$\frac{{b_j^ + }}{{{a_j}}}\; ={-} \left( {\frac{{{D_p}}}{{{D_l}}}} \right)\frac{{\textrm{max}\left\{ {\textrm{ma}{\textrm{x}_\mathrm{\Omega }}\left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial x}}} \right|,\textrm{ma}{\textrm{x}_\mathrm{\Omega }}\left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial y}}} \right|} \right\}}}{{\; \; \textrm{mi}{\textrm{n}_{\mathrm{\Omega },\alpha \; }}\left[ {\frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {x^2}}}{{\cos }^2}\alpha + \frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial x\partial y}}\textrm{sin}2\alpha + \frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {y^2}}}{{\sin }^2}\alpha } \right]}},$$
where if the denominator is positive, this ratio should be replaced with +$\infty $. Similarly, for negative amplitudes we have
$$\frac{{b_j^ - }}{{{a_j}}}\; ={-} \left( {\frac{{{D_p}}}{{{D_l}}}} \right)\frac{{\textrm{max}\left\{ {\textrm{ma}{\textrm{x}_\mathrm{\Omega }}\left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial x}}} \right|,\textrm{ma}{\textrm{x}_\mathrm{\Omega }}\left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial y}}} \right|} \right\}}}{{\; \; \textrm{ma}{\textrm{x}_{\mathrm{\Omega },\alpha \; }}\left[ {\frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {x^2}}}{{\cos }^2}\alpha + \frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial x\partial y}}\textrm{sin}2\alpha + \frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {y^2}}}{{\sin }^2}\alpha } \right]}},$$
where if the denominator is negative, the ratio should be replaced with $- \infty $. Interestingly, these ratios do not depend on the image size ${D_i}$. More importantly, the ratios are proportional to the number of lenslets across the pupil ${D_p}/{D_l}$, which for most SHWSs is greater than 10.

Analytical evaluations of these ratios over a circular pupil are shown in Table 1 below, with the fourth column values calculated using Eq. (5), the fifth column using Eqs. (13) and (14), and the sixth column using Eqs. (16) and (17). As expected for tip and tilt, the second definition yields an infinite optical dynamic range because these aberrations shift all the lenslet images equally without changing their separation. Also, as mentioned earlier, defocus has an asymmetric optical dynamic range, infinite towards the positive amplitudes because these wavefronts separate the lenslet images, and finite towards negative amplitudes because the lenslet images are brought closer together. In practice, the infinite ends of the SHWS optical dynamic range are truncated by either the pixelated sensor finite size or the increased size of the lenslet images. Third order aberrations have symmetric finite optical dynamic ranges, while spherical aberration has an asymmetric finite optical dynamic range.


Table 1. SHWS dynamic range for Zernike polynomials based on preventing lenslet images from leaving the corresponding lenslet outline (lenslet-bound dynamic range, 4th column) and avoiding lenslet image overlap (optical dynamic range, 5th column).

5. SHWS lattice vectors in the presence of tip, tilt, defocus, and astigmatism

In order to take full advantage of the optical dynamic range, the SHWS image processing should include a pre-centroiding step, in which the coarse location of individual lenslet images is determined and assigned to the corresponding lenslets. This can be achieved by exploiting the fact that when the wavefronts are within the SHWS optical dynamic range, the lenslet images remain monotonically ordered along the x- and y-axis and, in most cases, form a two-dimensional square or hexagonal Bravais lattice. When this is the case, if a single lenslet image is found, then the other lenslet images can be coarsely located by moving across the pixelated sensor in integer combinations of the Bravais lattice vectors. Tip and tilt shift all lenslet images equally, thus preserving the lattice vectors. Defocus, vertical astigmatism, and oblique astigmatism, on the other hand, change the lattice vectors, as calculated next.

Let us start by defining the lenslet lattice vectors ${{\boldsymbol v}_1} = ({{v_{1,x}},{v_{1,y}}} )$ and ${{\boldsymbol v}_2} = ({{v_{2,x}},{v_{2,y}}} )$ that describe the SHWS lenslet image lattice resulting from a flat wavefront. The lattice vectors ${\boldsymbol v}_1^{\prime}$ and ${\boldsymbol v}_2^{\prime}$ for an aberrated wavefront in normalized pupil coordinates $W({2x/{D_p},2y/{D_p}} )$ can be approximated using the wavefront derivatives at the center of the ith lenslet $({{x_i},{y_i}} )$, instead of the average over each lenslet, as

$${\boldsymbol v}_{1}^{\prime}\approx {{{\boldsymbol v}}_{1}}+\frac{2{{f}_{l}}}{{{D}_{p}}}\left[ \frac{\partial W\left( \frac{{{x}_{i}}+{{v}_{1,x}}}{{{D}_{p}}/2},\frac{{{y}_{i}}+{{v}_{1,y}}}{{{D}_{p}}/2} \right)}{\partial \left( \frac{{{x}_{i}}}{{{D}_{p}}/2} \right)}-\frac{\partial W\left( \frac{{{x}_{i}}}{{{D}_{p}}/2},\frac{{{y}_{i}}}{{{D}_{p}}/2} \right)}{\partial \left( \frac{{{x}_{i}}}{{{D}_{p}}/2} \right)},~\frac{\partial W\left( \frac{{{x}_{i}}+{{v}_{1,x}}}{{{D}_{p}}/2},\frac{{{y}_{i}}+{{v}_{1,y}}}{{{D}_{p}}/2} \right)}{\partial \left( \frac{{{y}_{i}}}{{{D}_{p}}/2} \right)}-\frac{\partial W\left( \frac{{{x}_{i}}}{{{D}_{p}}/2},\frac{{{y}_{i}}}{{{D}_{p}}/2} \right)}{\partial \left( \frac{{{y}_{i}}}{{{D}_{p}}/2} \right)} \right],$$
$${\boldsymbol v}_{2}^{\prime}\approx {{{\boldsymbol v}}_{2}}+\frac{2{{f}_{l}}}{{{D}_{p}}}\left[ \frac{\partial W\left( \frac{{{x}_{i}}+{{v}_{2,x}}}{{{D}_{p}}/2},\frac{{{y}_{i}}+{{v}_{2,y}}}{{{D}_{p}}/2} \right)}{\partial \left( \frac{{{x}_{i}}}{{{D}_{p}}/2} \right)}-\frac{\partial W\left( \frac{{{x}_{i}}}{{{D}_{p}}/2},\frac{{{y}_{i}}}{{{D}_{p}}/2} \right)}{\partial \left( \frac{{{x}_{i}}}{{{D}_{p}}/2} \right)},\frac{\partial W\left( \frac{{{x}_{i}}+{{v}_{2,x}}}{{{D}_{p}}/2},\frac{{{y}_{i}}+{{v}_{2,y}}}{{{D}_{p}}/2} \right)}{\partial \left( \frac{{{y}_{i}}}{{{D}_{p}}/2} \right)}-\frac{\partial W\left( \frac{{{x}_{i}}}{{{D}_{p}}/2},\frac{{{y}_{i}}}{{{D}_{p}}/2} \right)}{\partial \left( \frac{{{y}_{i}}}{{{D}_{p}}/2} \right)} \right].$$

Let us now consider a wavefront, described by a polynomial of the form $A{x^2} + Bxy + C{y^2} + Dx + Ey + F$, as a linear combination of defocus, astigmatism, tip, tilt, and piston. The SHWS image lattice vectors that would result from such a wavefront can be calculated using Eqs. (18) and (19) as

$${\boldsymbol v}_1^{\prime} \approx {{\boldsymbol v}_1} + \left( {\frac{{4{f_l}}}{{D_p^2}}} \right)({2A{v_{1,x}} + B{v_{1,y}},B{v_{1,x}} + 2C{v_{1,y}}} ),$$
$${\boldsymbol v}_2^{\prime} \approx {{\boldsymbol v}_2} + \left( {\frac{{4{f_l}}}{{D_p^2}}} \right)({2A{v_{2,x}} + B{v_{2,y}},B{v_{2,x}} + 2C{v_{2,y}}} ).$$

These new vectors do not depend on the lenslet coordinates $({{x_i},{y_i}} )$, and thus, the 2D Bravais lenslet image lattice is preserved. Now, substituting the OSA definition of Zernike polynomials [64] in Eq. (20) and Eq. (21) for oblique astigmatism with amplitude ${b_3}$, defocus with amplitude ${b_4}$, and vertical astigmatism with amplitude ${b_5}$ for an initial square lattice along the x- and y-axis, the lattice vectors of the SH image lattice are

$${\boldsymbol v}_1^{\prime} \approx {D_l}({1,\; 0} )+ {D_l}\left( {\frac{{4{f_l}}}{{D_p^2}}} \right)\left( {4\sqrt 3 {b_4} + 2\sqrt 6 {b_5},2\sqrt 6 {b_3}} \right),$$
$${\boldsymbol v}_2^{\prime} \approx {D_l}({0,\; 1} )+ {D_l}\left( {\frac{{4{f_l}}}{{D_p^2}}} \right)\left( {2\sqrt 6 {b_3},\; 4\sqrt 3 {b_4} - 2\sqrt 6 {b_5}} \right).$$

If the lattice vectors are experimentally estimated (see Fig. 4), then the x- and y-components of these vector equations form a system of four linear equations with three unknown aberration amplitudes. These amplitudes can be calculated either analytically or numerically using linear algebra.
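A least-squares solution of this four-equation system can be sketched as follows; the function name and parameter values are illustrative, with the design matrix transcribed directly from the x- and y-components of Eqs. (22) and (23).

```python
import numpy as np

def fit_defocus_astigmatism(v1p, v2p, D_l, f_l, D_p):
    # Stack the x- and y-components of Eqs. (22)-(23) into a 4x3 linear
    # system in the unknown amplitudes (b3, b4, b5), solved by least squares.
    c = 4.0 * D_l * f_l / D_p**2
    A = c * np.array([[0.0,          4*np.sqrt(3),  2*np.sqrt(6)],   # v1'_x
                      [2*np.sqrt(6), 0.0,           0.0         ],   # v1'_y
                      [2*np.sqrt(6), 0.0,           0.0         ],   # v2'_x
                      [0.0,          4*np.sqrt(3), -2*np.sqrt(6)]])  # v2'_y
    rhs = np.array([v1p[0] - D_l, v1p[1], v2p[0], v2p[1] - D_l])
    b, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return b  # (b3, b4, b5)
```

Generating lattice vectors from known amplitudes with Eqs. (22) and (23) and feeding them back through this function recovers the amplitudes to numerical precision.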

 figure: Fig. 4.

Fig. 4. Images from a SHWS with a square lenslet array illuminated by an aberration-free wavefront (a), and a wavefront generated by a 32.7 D convex cylinder oriented at 45° (c), with lattice vectors ${v_1}$ and ${v_2}$ shown in green. Panels (b) and (d) show the corresponding spectra with the reciprocal lattice vectors ${u_1}$ and ${u_2}$, in dark blue. The axes of the SHWS images are in units of lenslet pitch ${D_l}$, and for the spectra in units of $1/{D_l}$.


The linearity of the lattice vector change with defocus and astigmatism, combined with the array theorem and the discrete Fourier transform (DFT) [65], can be used to estimate the amplitudes of defocus and astigmatism from a raw SHWS image as follows. First, we calculate the absolute value of the DFT of the SHWS image, and then, we locate the two maxima nearest to the zero-frequency term (DC) that are not axisymmetric. The vector positions of these maxima relative to the zero spatial frequency (${\boldsymbol u}_1^{\prime}$ and ${\boldsymbol u}_2^{\prime}$) are the reciprocal lattice vectors, which can be used to calculate the actual lattice vectors [66], as

$${\boldsymbol v}_1^{\prime} \approx \frac{1}{{{\boldsymbol u}_1^{\prime} \cdot {{\boldsymbol R}_{90^\circ }}{\boldsymbol u}_2^{\prime}}}{{\boldsymbol R}_{90^\circ }}{\boldsymbol u}_2^{\prime},$$
$${\boldsymbol v}_2^{\prime} \approx \frac{1}{{{\boldsymbol u}_2^{\prime} \cdot {{\boldsymbol R}_{90^\circ }}{\boldsymbol u}_1^{\prime}}}{{\boldsymbol R}_{90^\circ }}{\boldsymbol u}_1^{\prime},$$
where ${\cdot} $ denotes inner product and ${{\boldsymbol R}_{90^\circ }}$ is the rotation matrix
$${{\boldsymbol R}_{90^\circ }} = \left[ {\begin{array}{cc} 0&{ - 1}\\ 1&0 \end{array}} \right]. $$

The direct and reciprocal lattice vectors of SHWS images captured with square lenslet arrays with and without a convex cylinder optometric lens are shown in Fig. 4.
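Equations (24) through (26) amount to a two-dimensional reciprocal-to-direct lattice conversion, which can be sketched as below; the peak-finding step that produces ${\boldsymbol u}_1^{\prime}$ and ${\boldsymbol u}_2^{\prime}$ from the DFT magnitude is omitted here, and the function name is illustrative.

```python
import numpy as np

R90 = np.array([[0.0, -1.0],
                [1.0,  0.0]])  # 90-degree rotation matrix, Eq. (26)

def lattice_from_reciprocal(u1, u2):
    # Eqs. (24)-(25): direct lattice vectors from the reciprocal pair
    # located in the DFT magnitude of the SHWS image.
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    v1 = (R90 @ u2) / (u1 @ (R90 @ u2))
    v2 = (R90 @ u1) / (u2 @ (R90 @ u1))
    return v1, v2
```

By construction the recovered vectors satisfy the biorthogonality relations ${\boldsymbol v}_i^{\prime} \cdot {\boldsymbol u}_j^{\prime} = \delta_{ij}$, which is a convenient consistency check on the selected DFT peaks.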

The value of this algorithm is not in the estimation of defocus and astigmatism coefficients, which is coarse due to the finite SHWS image sampling, but in facilitating the location of the SHWS lenslet images so that a group of pixels can be assigned to each of them. This can be achieved simply by finding a single lenslet image (e.g., the brightest) and then moving along the image by integer combinations of the lattice vectors. The resulting locations are then used as the centers of the groups of pixels over which the lenslet image centroids are estimated, allowing precise estimation of the wavefront.
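The lattice walk just described can be sketched as follows; the function name, the seed choice, and the step bound are illustrative, and the returned points would serve as centers of the pixel groups used for centroiding.

```python
import numpy as np

def roi_centers(seed, v1, v2, shape, n_steps=50):
    # Coarse lenslet-image locations: one detected image (the seed) plus
    # integer combinations of the estimated lattice vectors, kept only
    # while they fall inside a sensor of the given (rows, cols) shape.
    seed, v1, v2 = map(np.asarray, (seed, v1, v2))
    centers = []
    for i in range(-n_steps, n_steps + 1):
        for j in range(-n_steps, n_steps + 1):
            p = seed + i * v1 + j * v2
            if 0 <= p[0] < shape[0] and 0 <= p[1] < shape[1]:
                centers.append(p)
    return np.array(centers)
```

For an aberrated wavefront, ${\boldsymbol v}_1^{\prime}$ and ${\boldsymbol v}_2^{\prime}$ from the DFT-based estimate would replace the flat-wavefront lattice vectors here.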

Here it is important to note that if the wavefront contains third or higher order polynomial components, the periodicity of the lenslet image lattice will degrade. When such distortion is small, that is, if the lenslet images remain within the cell associated with the previously estimated lattice vectors, then regions of interest on the pixelated sensor centered on the lattice cell centers will suffice to initiate the lenslet image centroid calculations.

6. Experiments

Wavefronts with defocus and astigmatism were measured with a custom SHWS depicted at the top of Fig. 5, consisting of an EXi Aqua camera (Teledyne Qimaging, Surrey, BC, Canada) and a lenslet array with 7.6 mm geometrical focal length and 300 µm pitch (Adaptive Optics Associates, now part of AOA Xinetics, Devens, MA, USA), focused to account for the lenslets' low Fresnel number [18]. Light from a 637 nm S1FC637 laser diode (Thorlabs, Newton, NJ, USA) delivered by a single-mode optical fiber was collimated with an achromatic doublet to illuminate the first of three pupil planes with an approximately plane wavefront. An iris diaphragm, optometric lenses, and the SHWS lenslet array were placed in the three pupil planes relayed by afocal telescopes formed by achromatic doublets, as shown in Fig. 5.

 figure: Fig. 5.

Fig. 5. Schematic diagram of the optical setup used to capture Shack-Hartmann wavefront sensor (SHWS) images (top) while placing convex and concave sphere lenses in the pupil plane P2, such as the examples shown above and below the plot. In these images, the red boxes show the boundaries of the lenslets with peak intensity 40% or higher than the maximum pixel value in the 0 D image. The plot shows defocus and astigmatism estimated using the proposed pre-centroiding algorithm based on the estimation of the SHWS image lattice vectors. The separation between the adjacent gray vertical lines corresponds to the lenslet-bound SHWS dynamic range definition, while the red and cyan vertical lines indicate the optical dynamic range and its limit when considering the camera’s region of interest, respectively.


SHWS images were captured for wavefronts generated through sphere lenses with optical power ${P_D}$ in the ±12 D range in 1 D steps, which, due to the $0.43$ magnification $M$ between the lens plane and the lenslet array plane, scaled to ±65.3 D (${P_D}^{\prime} = {P_D}/{M^2}$). The lattice vectors were calculated for each SHWS image and their components compared with the predictions of Eqs. (22) and (23), with the results plotted in Fig. 5. The spacing between vertical light gray lines in this plot corresponds to the lenslet-bound dynamic range, calculated using Eq. (5) and modeling the lenslet image width as 2× the full-width at half-maximum of a Gaussian beam (see Eq. (19) in Ref. [67]). The positive end of the optical dynamic range, denoted by the vertical orange line on the right, is approximately 13 times larger than the separation of the vertical gray lines (lenslet-bound dynamic range). The data shows that the dominant aberration, defocus (${b_4}$), is estimated with a maximum error of 11% over a range 8 times larger than the lenslet-bound definition.

We did not test larger positive defocus amplitudes because the metal housing in which the lenslet array was mounted vignetted the beam at the outer lenslets. A small artifactual astigmatism (mean 2.3%), likely due to trial lens centration inaccuracies and coarse DFT interpolation, was also estimated. Although the lower end of the SHWS optical defocus dynamic range is $- \infty $, in practice it is determined by a lenslet image reaching the edge of the pixelated sensor’s region of interest ${D_{\textrm{ROI}}}$, that is, when the spacing ${s_l}$ between adjacent lenslet images in the presence of defocus meets the condition,

$${s_l}\frac{{{D_p}}}{{{D_l}}} - {D_i} = {D_{\textrm{ROI}}}.$$

This limit is shown in the plot in Fig. 5 as a cyan vertical line.

The results of a similar experiment using convex cylinder optometric lenses oriented at 45° to generate oblique astigmatism and defocus are shown in Fig. 6. Accounting for pupil magnification, the optical powers at the SHWS lenslet array plane were between 5.4 and 32.7 D, which spans almost 6 times the lenslet-bound dynamic range.

 figure: Fig. 6.

Fig. 6. SHWS images captured with the optical setup depicted at the top of Fig. 5 using various optometric cylinder lenses. In these images, the red boxes show the boundaries of the lenslets with peak intensity 40% or higher than the maximum pixel value in the 0 D image. The plot shows the defocus estimated using the proposed pre-centroiding algorithm based on the estimation of the SHWS image lattice vectors for defocus and astigmatism.


7. Summary

A definition of the SHWS optical dynamic range in terms of avoiding lenslet image overlap was formalized and compared with the widely used definition based on restricting each lenslet image to a fixed group of pixels. The optical dynamic range is larger by a factor proportional to the number of lenslets across the pupil and provides a method for calculating the minimum number of lenslets across the pupil needed to measure a desired wavefront aberration amplitude. The proposed formulation of the optical dynamic range definition in terms of the extreme values of the directional wavefront curvature within circles inscribed in lenslets is applicable to lenslets of any shape and array geometry, even if non-periodic.

A pre-centroiding algorithm based on the estimation of the SHWS image lattice vectors was proposed to facilitate lenslet image location in the presence of large defocus and astigmatism amplitudes. This algorithm was demonstrated using optometric trial lenses that displaced the SHWS lenslet images well beyond the projection of the lenslet boundary onto the SHWS pixelated sensor.

Appendix A

Let $W({x^{\prime},y^{\prime}} )$ be a wavefront with $x^{\prime}$ and $y^{\prime}$ unnormalized SHWS pupil coordinates, and x and y be the coordinates normalized by the SHWS pupil radius, that is, $x = 2x^{\prime}/{D_p}$ and $y = 2y^{\prime}/{D_p}$. Now, the gradient of the wavefront along x-axis is given by,

$${\theta _x} = \frac{{\partial W({x^{\prime},y^{\prime}} )}}{{\partial x^{\prime}}} = \frac{{\partial W({x,y} )}}{{\partial x}}\left( {\frac{{\partial x}}{{\partial x^{\prime}}}} \right) = \frac{{\partial W({x,y} )}}{{\partial x}}\left( {\frac{2}{{{D_p}}}} \right).$$

Let us assume two identical adjacent lenslets with their centers along the x-axis and the second lenslet having the larger abscissa. Then, the difference between the lenslet image angular coordinates can be written as follows,

$${\theta _{x,k + 1}} - {\theta _{x,k}} = \left( {\frac{2}{{{D_p}}}} \right)\left[ {\frac{{\partial W\left( {x + \frac{{{d_l}}}{{{D_p}/2}},y} \right)}}{{\partial x}} - \frac{{\partial W({x,y} )}}{{\partial x}}} \right],$$
${d_l}$ being the distance between lenslets. Multiplying and dividing by $2{d_l}/{D_p}$ on the right-hand side of the above equation and approximating the resulting difference quotient by the second derivative, we get,
$${\theta _{x,k + 1}} - {\theta _{x,k}} \approx \left( {\frac{{4{d_l}}}{{{D_{p}}^2}}} \right)\frac{{{\partial ^2}W({x,y} )}}{{\partial {x^2}}}.$$

Funding

National Eye Institute (P30EY026877, R01EY025231, R01EY031360, R01EY027301); Research to Prevent Blindness (Challenge Grant).

Acknowledgments

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Disclosures

The authors declare no conflicts of interest.

References

1. J. Hartmann, “Bemerkungen über den bau und die justierung von spektrographen,” Z. Instrumentenkd 20, 47–58 (1900).

2. R. V. Shack and B. C. Platt, “Production and use of a lenticular Hartmann screen,” J. Opt. Soc. Am. 61(5), 656 (1971).

3. J. Liang, D. R. Williams, and D. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14(11), 2884–2892 (1997).

4. G. Y. Yoon and D. R. Williams, “Visual performance after correcting the monochromatic and chromatic aberrations of the eye,” J. Opt. Soc. Am. A 19(2), 266–275 (2002).

5. A. Roorda, F. Romero-Borja, W. Donnelly III, H. Queener, T. Hebert, and M. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10(9), 405–412 (2002).

6. P. Artal, L. Chen, E. J. Fernández, B. Singer, S. Manzanera, and D. R. Williams, “Neural compensation for the eye's optical aberrations,” J. Vis. 4(4), 4–287 (2004).

7. P. L. Wizinowich, D. Le Mignant, A. H. Bouchez, R. D. Campbell, J. C. Y. Chin, A. R. Contos, M. A. van Dam, S. K. Hartman, E. M. Johansson, R. E. Lafon, H. Lewis, P. J. Stomski, and D. M. Summers, “The W. M. Keck Observatory laser guide star adaptive optics system: Overview,” Publ. Astron. Soc. Pac. 118(840), 297–309 (2006).

8. O. Azucena, J. Crest, S. Kotadia, W. Sullivan, X. Tao, M. Reinig, D. Gavel, S. Olivier, and J. Kubby, “Adaptive optics wide-field microscopy using direct wavefront sensing,” Opt. Lett. 36(6), 825–827 (2011).

9. A. Dubra, Y. Sulai, J. L. Norris, R. F. Cooper, A. M. Dubis, D. R. Williams, and J. Carroll, “Noninvasive imaging of the human rod photoreceptor mosaic using a confocal adaptive optics scanning ophthalmoscope,” Biomed. Opt. Express 2(7), 1864–1876 (2011).

10. M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light: Sci. Appl. 3(4), e165 (2014).

11. B. M. Levine, E. A. Martinsen, A. Wirth, A. Jankevics, M. Toledo-Quinones, F. Landers, and T. L. Bruno, “Horizontal line-of-sight turbulence over near-ground paths and implications for adaptive optics corrections in laser communications,” Appl. Opt. 37(21), 4553–4560 (1998).

12. C. Forest, C. Canizares, D. Neal, M. McGuirk, and M. Schattenburg, “Metrology of thin transparent optics using Shack-Hartmann wavefront sensing,” Opt. Eng. 43(3), 742–753 (2004).

13. B. Dörband, H. Müller, and H. Gross, “Handbook of Optical Systems,” Volume 5: Metrology of Optical Components and Systems, 1st ed. (John Wiley & Sons, 2012).

14. M. Mrochen, M. Kaemmerer, and T. Seiler, “Wavefront-guided laser in situ keratomileusis: early results in three eyes,” J. Refract. Surg. 16(2), 116–121 (2000).

15. M. Mrochen, M. Kaemmerer, and T. Seiler, “Clinical results of wavefront-guided laser in situ keratomileusis 3 months after surgery,” J. Cataract Refract. Surg. 27(2), 201–207 (2001). [CrossRef]  

16. S. Schallhorn, M. Brown, J. Venter, D. Teenan, K. Hettinger, and H. Yamamoto, “Early clinical outcomes of wavefront-guided myopic LASIK treatments using a new-generation Hartmann-Shack aberrometer,” J. Refract. Surg. 30(1), 14–21 (2014). [CrossRef]  

17. M. Vinas, C. Benedi-Garcia, S. Aissati, D. Pascual, V. Akondi, C. Dorronsoro, and S. Marcos, “Visual simulators replicate vision with multifocal lenses,” Sci. Rep. 9(1), 1539 (2019). [CrossRef]  

18. V. Akondi and A. Dubra, “Accounting for focal shift in the Shack–Hartmann wavefront sensor,” Opt. Lett. 44(17), 4151–4154 (2019). [CrossRef]  

19. V. Akondi, S. Steven, and A. Dubra, “Centroid error due to non-uniform lenslet illumination in the Shack–Hartmann wavefront sensor,” Opt. Lett. 44(17), 4167–4170 (2019). [CrossRef]  

20. V. Akondi and A. Dubra, “Average gradient of Zernike polynomials over polygons,” Opt. Express 28(13), 18876–18886 (2020). [CrossRef]  

21. M. C. Roggemann and T. J. Schulz, “Algorithm to increase the largest aberration that can be reconstructed from Hartmann sensor measurements,” Appl. Opt. 37(20), 4321–4329 (1998). [CrossRef]  

22. S. Groening, B. Sick, K. Donner, J. Pfund, N. Lindlein, and J. Schwider, “Wave-front reconstruction with a Shack–Hartmann sensor with an iterative spline fitting method,” Appl. Opt. 39(4), 561–567 (2000). [CrossRef]  

23. V. Molebny, “Scanning Shack-Hartmann wavefront sensor,” Proc. SPIE 5412, 66–71 (2004). [CrossRef]  

24. L. Seifert, H. J. Tiziani, and W. Osten, “Wavefront reconstruction with the adaptive Shack–Hartmann sensor,” Opt. Commun. 245(1-6), 255–269 (2005). [CrossRef]  

25. H. Choo and R. S. Muller, “Addressable Microlens Array to Improve Dynamic Range of Shack–Hartmann Sensors,” J. Microelectromech. Syst. 15(6), 1555–1567 (2006). [CrossRef]  

26. Y. Hongbin, Z. Guangya, C. F. Siong, L. Feiwen, and W. Shouhua, “A tunable Shack–Hartmann wavefront sensor based on a liquid-filled microlens array,” J. Micromech. Microeng. 18(10), 105017 (2008). [CrossRef]  

27. M. Xia, C. Li, L. Hu, Z. Cao, Q. Mu, and X. Li, “Shack-Hartmann wavefront sensor with large dynamic range,” J. Biomed. Opt. 15(2), 1–10 (2010). [CrossRef]  

28. R. Martínez-Cuenca, V. Durán, V. Climent, E. Tajahuerce, S. Bará, J. Ares, J. Arines, M. Martínez-Corral, and J. Lancis, “Reconfigurable Shack–Hartmann sensor without moving elements,” Opt. Lett. 35(9), 1338–1340 (2010). [CrossRef]  

29. N. Kumar, A. Khare, and B. Boruah, “Enhanced dynamic range of the grating array based zonal wavefront sensor using a zone wise scanning method,” Proc. SPIE 11287, E1–E6 (2020). [CrossRef]  

30. A. Carmichael Martins and B. Vohnsen, “Measuring ocular aberrations sequentially using a digital micromirror device,” Micromachines 10(2), 117 (2019). [CrossRef]  

31. M. Aftab, H. Choi, R. Liang, and D. W. Kim, “Adaptive Shack-Hartmann wavefront sensor accommodating large wavefront variations,” Opt. Express 26(26), 34428–34441 (2018). [CrossRef]  

32. G.-Y. Yoon, S. Pantanelli, and L. J. Nagy, “Large-dynamic-range Shack-Hartmann wavefront sensor for highly aberrated eyes,” J. Biomed. Opt. 11(3), 1–3 (2006). [CrossRef]  

33. S. Pantanelli, S. MacRae, T. M. Jeong, and G. Yoon, “Characterizing the Wave Aberration in Eyes with Keratoconus or Penetrating Keratoplasty Using a High–Dynamic Range Wavefront Sensor,” Ophthalmology 114(11), 2013–2021 (2007). [CrossRef]  

34. G. Yoon, “Large dynamic range Shack-Hartmann wavefront sensor,” U.S. patent, 7,414,712 (19 Aug. 2008).

35. S. Olivier, V. Laude, and J.-P. Huignard, “Liquid-crystal Hartmann wave-front scanner,” Appl. Opt. 39(22), 3838–3846 (2000). [CrossRef]  

36. V. Laude, S. Olivier, C. Dirson, and J.-P. Huignard, “Hartmann wave-front scanner,” Opt. Lett. 24(24), 1796–1798 (1999). [CrossRef]  

37. R. Navarro and E. Moreno-Barriuso, “Laser ray-tracing method for optical testing,” Opt. Lett. 24(14), 951–953 (1999). [CrossRef]  

38. G. N. McKay, F. Mahmood, and N. J. Durr, “Large dynamic range autorefraction with a low-cost diffuser wavefront sensor,” Biomed. Opt. Express 10(4), 1718–1735 (2019). [CrossRef]  

39. H. Shinto, Y. Saita, and T. Nomura, “Shack-Hartmann wavefront sensor with large dynamic range by adaptive spot search method,” Appl. Opt. 55(20), 5413–5418 (2016). [CrossRef]  

40. X. J.-F. Levecq and S. H. Bucourt, “Method and device for analysing a highly dynamic wavefront,” U.S. patent, 6,750,957 (15 June 2004).

41. G. Altmann, “Method and apparatus for improving the dynamic range and accuracy of a Shack-Hartmann wavefront sensor,” U.S. patent application 10/013,565 (2003).

42. N. Lindlein and J. Pfund, “Experimental results for expanding the dynamic range of a Shack-Hartmann sensor by using astigmatic microlenses,” Opt. Eng. 41(2), 529–533 (2002). [CrossRef]  

43. X. Wei, T. Van Heugten, and L. Thibos, “Validation of a Hartmann-Moiré Wavefront Sensor with Large Dynamic Range,” Opt. Express 17(16), 14180–14185 (2009). [CrossRef]  

44. D. Podanchuk, V. Dan’ko, M. Kotov, J.-Y. Son, and Y.-J. Choi, “Extended-range Shack-Hartmann wavefront sensor with nonlinear holographic lenslet array,” Opt. Eng. 45(5), 053605 (2006). [CrossRef]  

45. J. Ko and C. C. Davis, “Comparison of the plenoptic sensor and the Shack-Hartmann sensor,” Appl. Opt. 56(13), 3689–3698 (2017). [CrossRef]  

46. Z. Gao, X. Li, and H. Ye, “Large dynamic range Shack–Hartmann wavefront measurement based on image segmentation and a neighbouring-region search algorithm,” Opt. Commun. 450, 190–201 (2019). [CrossRef]  

47. C. Leroux and C. Dainty, “A simple and robust method to extend the dynamic range of an aberrometer,” Opt. Express 17(21), 19055–19061 (2009). [CrossRef]  

48. L. Lundström and P. Unsbo, “Unwrapping Hartmann-Shack Images from Highly Aberrated Eyes Using an Iterative B-spline Based Extrapolation Method,” Optom. Vis. Sci. 81(5), 383–388 (2004). [CrossRef]  

49. D. G. Smith and J. E. Greivenkamp, “Generalized method for sorting Shack-Hartmann spot patterns using local similarity,” Appl. Opt. 47(25), 4548–4554 (2008). [CrossRef]  

50. D. G. Smith, “High dynamic range calibration for an infrared Shack-Hartmann wavefront sensor,” Ph.D. dissertation (University of Arizona, 2008).

51. S. Mauch and J. Reger, “Real-Time Spot Detection and Ordering for a Shack–Hartmann Wavefront Sensor With a Low-Cost FPGA,” IEEE Trans. Instrum. Meas. 63(10), 2379–2386 (2014). [CrossRef]  

52. M. Ares, S. Royo, and J. Caum, “Shack-Hartmann sensor based on a cylindrical microlens array,” Opt. Lett. 32(7), 769–771 (2007). [CrossRef]  

53. J. Lee, R. V. Shack, and M. R. Descour, “Sorting method to extend the dynamic range of the Shack–Hartmann wave-front sensor,” Appl. Opt. 44(23), 4838–4845 (2005). [CrossRef]  

54. W.-W. Lee, J. H. Lee, and C. K. Hwangbo, “Increase of dynamic range of a Shack-Hartmann sensor by shifting detector plane,” Proc. SPIE 5639, 70–77 (2004). [CrossRef]  

55. J. Pfund, N. Lindlein, and J. Schwider, “Dynamic range expansion of a Shack–Hartmann sensor by use of a modified unwrapping algorithm,” Opt. Lett. 23(13), 995–997 (1998). [CrossRef]  

56. M. Rocktäschel and H. J. Tiziani, “Limitations of the Shack–Hartmann sensor for testing optical aspherics,” Opt. Laser Technol. 34(8), 631–637 (2002). [CrossRef]  

57. C. E. Campbell, “The range of local wavefront curvatures measurable with Shack-Hartmann wavefront sensors,” Clin. Exp. Optom. 92(3), 187–193 (2009). [CrossRef]  

58. J. W. Hardy, Adaptive Optics for Astronomical Telescopes (Oxford University Press, 1998).

59. G. Yoon, “Wavefront sensing and diagnostic uses,” in Adaptive Optics for Vision Science (Wiley, 2006), pp. 63–81.

60. A. Nikitin, J. Sheldakova, A. Kudryashov, G. Borsoni, D. Denisov, V. Karasik, and A. Sakharov, “A device based on the Shack-Hartmann wave front sensor for testing wide aperture optics,” Proc. SPIE 9754, 97540K (2016). [CrossRef]  

61. Y. Saita, H. Shinto, and T. Nomura, “Holographic Shack-Hartmann wavefront sensor based on the correlation peak displacement detection method for wavefront sensing with large dynamic range,” Optica 2(5), 411–415 (2015). [CrossRef]  

62. C. Curatu, G. Curatu, and J. Rolland, “Fundamental and specific steps in Shack-Hartmann wavefront sensor design,” Proc. SPIE 6288, 1–9 (2006). [CrossRef]  

63. R. Rammage, D. Neal, and R. Copland, “Application of Shack-Hartmann wavefront sensing technology to transmissive optic metrology,” Proc. SPIE 4779, 161–172 (2002). [CrossRef]  

64. L. N. Thibos, R. A. Applegate, J. T. Schwiegerling, and R. Webb, “Standards for reporting the optical aberrations of eyes,” J. Refract. Surg. 18(5), S652–S660 (2002).

65. J. W. Goodman, Introduction to Fourier Optics, 4th ed. (W. H. Freeman and Company, 2017).

66. C. Kittel, Introduction to Solid State Physics, 5th ed. (Wiley, 1976).

67. V. Akondi and A. Dubra, “Multi-layer Shack-Hartmann wavefront sensing in the point source regime,” Biomed. Opt. Express 12(1), 409–432 (2021). [CrossRef]  



Figures (6)

Fig. 1. Shack-Hartmann wavefront sensor geometry used to define the lenslet-bound dynamic range, such that the lenslet image (red spot) does not reach the boundary of the lenslet projection onto the sensor, in which the pixels are depicted as gray squares.
Fig. 2. Shack-Hartmann wavefront sensor condition used to define the optical dynamic range, namely, avoiding overlap between the images of adjacent lenslets (red spots on the top view), with the pixels depicted as gray squares (compare with Fig. 1).
Fig. 3. Depiction of lenslet image locations in a Shack-Hartmann wavefront sensor with square lenslets (black outlines) showing “adjacent” (red) images to the green image along various directions indicated by the orange line segments.
Fig. 4. Images from a SHWS with a square lenslet array illuminated by an aberration-free wavefront (a), and a wavefront generated by a 32.7 D convex cylinder oriented at 45° (c), with lattice vectors ${v_1}$ and ${v_2}$ shown in green. Panels (b) and (d) show the corresponding spectra with the reciprocal lattice vectors ${u_1}$ and ${u_2}$, in dark blue. The axes of the SHWS images are in units of lenslet pitch ${D_l}$, and for the spectra in units of $1/{D_l}$.
Fig. 5. Schematic diagram of the optical setup used to capture Shack-Hartmann wavefront sensor (SHWS) images (top) while placing convex and concave sphere lenses in the pupil plane P2, such as the examples shown above and below the plot. In these images, the red boxes show the boundaries of the lenslets with peak intensity 40% or higher than the maximum pixel value in the 0 D image. The plot shows defocus and astigmatism estimated using the proposed pre-centroiding algorithm based on the estimation of the SHWS image lattice vectors. The separation between adjacent gray vertical lines corresponds to the lenslet-bound SHWS dynamic range definition, while the red and cyan vertical lines indicate the optical dynamic range and its limit when considering the camera’s region of interest, respectively.
Fig. 6. SHWS images captured with the optical setup depicted at the top of Fig. 5 using various optometric cylinder lenses. In these images, the red boxes show the boundaries of the lenslets with peak intensity 40% or higher than the maximum pixel value in the 0 D image. The plot shows the defocus estimated using the proposed pre-centroiding algorithm based on the estimation of the SHWS image lattice vectors for defocus and astigmatism.
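The lattice-vector estimation illustrated in Fig. 4 amounts to locating the fundamental peaks of the SHWS image spectrum and reading off a reciprocal lattice vector. A minimal sketch of that idea follows; the synthetic cosine-grating image, pitch, and grid size are illustrative assumptions, not the authors’ data or their exact algorithm.

```python
# Minimal sketch: estimate a reciprocal lattice vector of a periodic SHWS-like
# image from the strongest non-DC peak of its 2D spectrum (cf. Fig. 4).
# The synthetic image, period, and grid size are illustrative assumptions.
import numpy as np

N, pitch = 256, 16                      # image size and lenslet-image period (pixels)
yy, xx = np.mgrid[0:N, 0:N]
# Synthetic periodic "spot" image: product of two cosine gratings
img = (1 + np.cos(2 * np.pi * xx / pitch)) * (1 + np.cos(2 * np.pi * yy / pitch))

spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
spec[N // 2, N // 2] = 0                # suppress the DC term
ky, kx = np.unravel_index(np.argmax(spec), spec.shape)
u1 = np.array([kx - N // 2, ky - N // 2]) / N   # reciprocal vector (cycles/pixel)
assert abs(np.hypot(*u1) - 1 / pitch) < 1e-3    # magnitude matches 1/period
```

With an aberrated wavefront the peaks shift, and the displaced reciprocal vectors encode the defocus and astigmatism coefficients, which is what the pre-centroiding step exploits.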

Tables (1)

Table 1. SHWS dynamic range for Zernike polynomials based on preventing lenslet images from leaving the corresponding lenslet outline (lenslet-bound dynamic range, 4th column) and avoiding lenslet image overlap (optical dynamic range, 5th column).

Equations (30)


$${\boldsymbol \rho } = {f_l}\frac{{\iint \nabla W({\boldsymbol r} )\,{\textrm{d}^\textrm{2}}{\boldsymbol r}}}{{\iint {\textrm{d}^\textrm{2}}{\boldsymbol r}}}.$$

$$|{\theta _{\max }}| = \frac{1}{{2{f_l}}}({{D_l} - {D_i}} ),$$

$$\frac{1}{{2{f_l}}}({{D_l} - {D_i}} ) = \frac{{2{a_j}}}{{{D_p}}}\max \left\{ {\mathop {\max }\limits_\Omega \left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial x}}} \right|,\mathop {\max }\limits_\Omega \left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial y}}} \right|} \right\},$$

$${\partial _\alpha }W({x,y} )= \nabla W({x,y} )\cdot ({\cos \alpha ,\sin \alpha } ),$$

$$\frac{1}{{2{f_l}}}({{D_l} - {D_i}} ) = \frac{{2{a_j}}}{{{D_p}}}\mathop {\max }\limits_{\Omega ,\,\alpha = 0,\,\pi /2} |{{\partial_\alpha }{Z_j}({x,y} )} |.$$

$$\frac{{{D_l} - {D_i}}}{{2{f_l}}} = \frac{{2{a_j}}}{{{D_p}}}\mathop {\max }\limits_{\Omega ,\alpha } |{{\partial_\alpha }{Z_j}({x,y} )} |,$$

$${d_l} - {D_i} + {f_l}({{\theta_{x,k + 1}} - {\theta_{x,k}}} )\ge 0,$$

$${\theta _{x,k + 1}} - {\theta _{x,k}} \approx \left( {\frac{{4{d_l}}}{{{D_p}^2}}} \right)\frac{{{\partial ^2}W({x,y} )}}{{\partial {x^2}}}.$$

$${d_l} - {D_i} + {f_l}\left( {\frac{{4{d_l}}}{{{D_p}^2}}} \right)b_j^ + \mathop {\min }\limits_\Omega \left[ {\frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {x^2}}}} \right] = 0.$$

$${d_l} - {D_i} + {f_l}\left( {\frac{{4{d_l}}}{{{D_p}^2}}} \right)b_j^ - \mathop {\max }\limits_\Omega \left[ {\frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {x^2}}}} \right] = 0,$$

$${d_l}(\alpha )- {D_i} + {f_l}\left( {\frac{{4{d_l}(\alpha )}}{{{D_p}^2}}} \right)b_j^ + \mathop {\min }\limits_\Omega [{\partial_\alpha^2{Z_j}({x,y} )} ]= 0,$$

$${d_l}(\alpha )- {D_i} + {f_l}\left( {\frac{{4{d_l}(\alpha )}}{{{D_p}^2}}} \right)b_j^ - \mathop {\max }\limits_\Omega [{\partial_\alpha^2{Z_j}({x,y} )} ]= 0.$$

$$b_j^ + ={-} \frac{{({{D_l} - {D_i}} ){D_p}^2}}{{4{f_l}{D_l}\mathop {\min }\limits_{\Omega ,\alpha } [{\partial_\alpha^2{Z_j}({x,y} )} ]}},$$

$$b_j^ - ={-} \frac{{({{D_l} - {D_i}} ){D_p}^2}}{{4{f_l}{D_l}\mathop {\max }\limits_{\Omega ,\alpha } [{\partial_\alpha^2{Z_j}({x,y} )} ]}}.$$

$$\partial _\alpha ^2W({x,y} )= \frac{{{\partial ^2}W({x,y} )}}{{\partial {x^2}}}{\cos ^2}\alpha + \frac{{{\partial ^2}W({x,y} )}}{{\partial x\partial y}}\sin 2\alpha + \frac{{{\partial ^2}W({x,y} )}}{{\partial {y^2}}}{\sin ^2}\alpha .$$

$$\frac{{b_j^ + }}{{{a_j}}} ={-} \left( {\frac{{{D_p}}}{{{D_l}}}} \right)\frac{{\max \left\{ {\mathop {\max }\limits_\Omega \left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial x}}} \right|,\mathop {\max }\limits_\Omega \left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial y}}} \right|} \right\}}}{{\mathop {\min }\limits_{\Omega ,\alpha } \left[ {\frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {x^2}}}{{\cos }^2}\alpha + \frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial x\partial y}}\sin 2\alpha + \frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {y^2}}}{{\sin }^2}\alpha } \right]}},$$

$$\frac{{b_j^ - }}{{{a_j}}} ={-} \left( {\frac{{{D_p}}}{{{D_l}}}} \right)\frac{{\max \left\{ {\mathop {\max }\limits_\Omega \left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial x}}} \right|,\mathop {\max }\limits_\Omega \left|{\frac{{\partial {Z_j}({x,y} )}}{{\partial y}}} \right|} \right\}}}{{\mathop {\max }\limits_{\Omega ,\alpha } \left[ {\frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {x^2}}}{{\cos }^2}\alpha + \frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial x\partial y}}\sin 2\alpha + \frac{{{\partial^2}{Z_j}({x,y} )}}{{\partial {y^2}}}{{\sin }^2}\alpha } \right]}},$$

$${{\boldsymbol v}_1} \to {{\boldsymbol v}_1} + \frac{{2{f_l}}}{{{D_p}}}\left[ {\frac{{\partial W\left( {\frac{{{x_i} + {v_{1,x}}}}{{{D_p}/2}},\frac{{{y_i} + {v_{1,y}}}}{{{D_p}/2}}} \right)}}{{\partial \left( {\frac{{{x_i}}}{{{D_p}/2}}} \right)}} - \frac{{\partial W\left( {\frac{{{x_i}}}{{{D_p}/2}},\frac{{{y_i}}}{{{D_p}/2}}} \right)}}{{\partial \left( {\frac{{{x_i}}}{{{D_p}/2}}} \right)}},\ \frac{{\partial W\left( {\frac{{{x_i} + {v_{1,x}}}}{{{D_p}/2}},\frac{{{y_i} + {v_{1,y}}}}{{{D_p}/2}}} \right)}}{{\partial \left( {\frac{{{y_i}}}{{{D_p}/2}}} \right)}} - \frac{{\partial W\left( {\frac{{{x_i}}}{{{D_p}/2}},\frac{{{y_i}}}{{{D_p}/2}}} \right)}}{{\partial \left( {\frac{{{y_i}}}{{{D_p}/2}}} \right)}}} \right],$$

$${{\boldsymbol v}_2} \to {{\boldsymbol v}_2} + \frac{{2{f_l}}}{{{D_p}}}\left[ {\frac{{\partial W\left( {\frac{{{x_i} + {v_{2,x}}}}{{{D_p}/2}},\frac{{{y_i} + {v_{2,y}}}}{{{D_p}/2}}} \right)}}{{\partial \left( {\frac{{{x_i}}}{{{D_p}/2}}} \right)}} - \frac{{\partial W\left( {\frac{{{x_i}}}{{{D_p}/2}},\frac{{{y_i}}}{{{D_p}/2}}} \right)}}{{\partial \left( {\frac{{{x_i}}}{{{D_p}/2}}} \right)}},\ \frac{{\partial W\left( {\frac{{{x_i} + {v_{2,x}}}}{{{D_p}/2}},\frac{{{y_i} + {v_{2,y}}}}{{{D_p}/2}}} \right)}}{{\partial \left( {\frac{{{y_i}}}{{{D_p}/2}}} \right)}} - \frac{{\partial W\left( {\frac{{{x_i}}}{{{D_p}/2}},\frac{{{y_i}}}{{{D_p}/2}}} \right)}}{{\partial \left( {\frac{{{y_i}}}{{{D_p}/2}}} \right)}}} \right].$$

$${{\boldsymbol v}_1} \to {{\boldsymbol v}_1} + \left( {\frac{{4{f_l}}}{{{D_p}^2}}} \right)({2A{v_{1,x}} + B{v_{1,y}},\ B{v_{1,x}} + 2C{v_{1,y}}} ),$$

$${{\boldsymbol v}_2} \to {{\boldsymbol v}_2} + \left( {\frac{{4{f_l}}}{{{D_p}^2}}} \right)({2A{v_{2,x}} + B{v_{2,y}},\ B{v_{2,x}} + 2C{v_{2,y}}} ).$$

$${{\boldsymbol v}_1} \approx {D_l}({1,0} )+ {D_l}\left( {\frac{{4{f_l}}}{{{D_p}^2}}} \right)({4\sqrt 3 \,{b_4} + 2\sqrt 6 \,{b_5},\ 2\sqrt 6 \,{b_3}} ),$$

$${{\boldsymbol v}_2} \approx {D_l}({0,1} )+ {D_l}\left( {\frac{{4{f_l}}}{{{D_p}^2}}} \right)({2\sqrt 6 \,{b_3},\ 4\sqrt 3 \,{b_4} - 2\sqrt 6 \,{b_5}} ).$$

$${{\boldsymbol v}_1} = \frac{1}{{{{\boldsymbol u}_1}\cdot {{\boldsymbol R}_{90}}{{\boldsymbol u}_2}}}{{\boldsymbol R}_{90}}{{\boldsymbol u}_2},$$

$${{\boldsymbol v}_2} = \frac{1}{{{{\boldsymbol u}_2}\cdot {{\boldsymbol R}_{90}}{{\boldsymbol u}_1}}}{{\boldsymbol R}_{90}}{{\boldsymbol u}_1},$$

$${{\boldsymbol R}_{90}} = \left[ {\begin{array}{cc} 0&{ - 1}\\ 1&0 \end{array}} \right].$$

$${s_l}{D_p} + {D_l} - {D_i} = {D_{\textrm{ROI}}}.$$

$${\theta _x} = \frac{{\partial W({x^{\prime},y^{\prime}} )}}{{\partial x^{\prime}}} = \frac{{\partial W({x,y} )}}{{\partial x}}\left( {\frac{{\partial x}}{{\partial x^{\prime}}}} \right) = \frac{{\partial W({x,y} )}}{{\partial x}}\left( {\frac{2}{{{D_p}}}} \right).$$

$${\theta _{x,k + 1}} - {\theta _{x,k}} = \left( {\frac{2}{{{D_p}}}} \right)\left[ {\frac{{\partial W\left( {x + \frac{{{d_l}}}{{{D_p}/2}},y} \right)}}{{\partial x}} - \frac{{\partial W({x,y} )}}{{\partial x}}} \right],$$

$${\theta _{x,k + 1}} - {\theta _{x,k}} \approx \left( {\frac{{4{d_l}}}{{{D_p}^2}}} \right)\frac{{{\partial ^2}W({x,y} )}}{{\partial {x^2}}}.$$
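The reciprocal-to-direct lattice relations listed above can be exercised with a short numerical check; the example $u_1$, $u_2$ values below are arbitrary, chosen only to verify the biorthogonality $v_i \cdot u_j = \delta_{ij}$ of the recovered lattice vectors.

```python
# Sketch: recover the direct lattice vectors v1, v2 from measured reciprocal
# lattice vectors u1, u2 using a 90-degree rotation matrix R90.
# The u1, u2 values are arbitrary illustrative inputs, not measured data.
import numpy as np

R90 = np.array([[0.0, -1.0],
                [1.0,  0.0]])            # counterclockwise 90-degree rotation

u1 = np.array([0.062, 0.004])            # example reciprocal vectors (cycles/pixel)
u2 = np.array([-0.003, 0.059])

v1 = R90 @ u2 / (u1 @ (R90 @ u2))
v2 = R90 @ u1 / (u2 @ (R90 @ u1))

# Biorthogonality of the direct and reciprocal lattices: v_i . u_j = delta_ij
assert abs(v1 @ u1 - 1) < 1e-12 and abs(v1 @ u2) < 1e-12
assert abs(v2 @ u2 - 1) < 1e-12 and abs(v2 @ u1) < 1e-12
```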