
2D zonal integration with unordered data

Open Access

Abstract

Numerical integration of two-dimensional gradient data is an important step for many slope-measuring optical instruments. However, existing methods are limited by low accuracy or data location restrictions. The zonal integration algorithm in this paper is a generalized process that works with unordered data via Taylor series approximations of finite difference calculations. This method does not require iteration, and all significant steps rely on matrix calculations for a least-squares solution. Simultaneous integration and interpolation is achieved with high accuracy and arbitrary data locations.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Slope and gradient measuring devices are common in optical analysis. Popular tools include the Hartmann and Shack–Hartmann tests, deflectometry, and lateral shearing interferometry. In these systems, surface gradients are the measured outputs. When wavefront or surface shape data are desired, numerical integration must convert measured gradient values into shape data.

Factors complicating the numerical integration include aberrations and scanning tools, which may provide heavily distorted or unordered data. Measurement noise can also strongly affect integration accuracy. The ideal numerical integration routine must address these issues and be computationally fast enough to work with modern imaging data that may contain millions of gradient values.

Numerical integration is generally categorized as modal or zonal integration. Modal integration [1,2] fits basis functions to the entire measured dataset then integrates those functions to get the desired shape features. Although modal integration easily handles unordered data, it often performs poorly when localized features are present.

Popularized by Southwell [3], zonal integration excels at reconstructing localized features because the integration uses small patches of data in the vicinity of the feature. However, Southwell integration has several limitations. Recent papers improve on the bilinear assumption with iterative procedures [4], spline fitting [5,6], and Taylor series expansions [7,8]. These achieve higher accuracy but require a regular grid of points. Quadrilateral grids [9] and nonuniform, low-sampling cases [10] have been addressed, but these place restrictions on data sampling. Radial basis functions [11] have no sampling limitations but are generally too slow for datasets with millions of points.

In other fields, finite element analysis [12] has long been used to solve partial differential equations with irregular sampling and high accuracy. It addresses many issues with existing integration techniques, but it is built for a different class of problems: complicated simulations in which measurement noise is absent. Numerical integration must work with noisy, measured data.

This paper provides a framework based on finite difference calculations [13–15] for zonal integration. Within this framework, gradient data and integrated data can each be arbitrarily located to provide simultaneous integration and interpolation. Accuracy is adjustable through the choice of how many Taylor terms and how many adjacent points are used in the calculation. The solution uses noniterative matrix calculations and consolidates many previous efforts. An example system highlights important properties of the method.

2. GRADIENT INTEGRATION

The fundamental calculation to be solved is the matrix equation

$${\boldsymbol G} = {\boldsymbol D} \cdot {\boldsymbol Z},$$
where ${\boldsymbol G}$ is a vector of gradient values, ${\boldsymbol Z}$ is a vector describing surface or wavefront values, and ${\boldsymbol D}$ is the matrix transformation,
$${\boldsymbol G} = {\left[{\frac{{\partial {z_1}}}{{\partial {x_1}}},\frac{{\partial {z_2}}}{{\partial {x_2}}}, \cdots ,\frac{{\partial {z_K}}}{{\partial {x_K}}},\frac{{\partial {z_1}}}{{\partial {y_1}}}, \cdots ,\frac{{\partial {z_K}}}{{\partial {y_K}}}} \right]^{\intercal}},$$
$${\boldsymbol Z} = {\left[{{Z_1},{Z_2}, \cdots ,{Z_N}} \right]^{\intercal}}.$$

For this problem, gradient data (${\boldsymbol G}$) is assumed to be known, and ${\boldsymbol Z}$ is the desired shape data which can be computed by the Moore–Penrose pseudoinverse:

$${\boldsymbol Z} = {{\boldsymbol D}^ +} \cdot {\boldsymbol G},$$
$${{\boldsymbol D}^{+}}={{( {{\boldsymbol D}^{\intercal}}{\boldsymbol D} )}^{-1}}{{\boldsymbol D}^{\intercal}},$$
where ${()^{\intercal}}$ denotes a transpose and ${()^{- 1}}$ an inverse operation.

Coordinates for shape and gradient vectors do not need to be coincident. To maintain this distinction, capital letters $[{{X_n},{Y_n},{Z_n}}]$ denote the desired shape data, and lower-case letters $[{{x_k},{y_k},{z_k}}]$ are gradient coordinates. The total number of shape data points ($ N $) does not need to equal the total number of gradient evaluation points ($ K $).

Evaluation coordinates for shape data ${\boldsymbol Z}$ may be freely chosen based on needs of the problem, but they should be within the boundary of the gradient data to avoid extrapolation. Missing or invalid data should be excluded from the measured gradient vector data ${\boldsymbol G}$.

Transformation matrix ${\boldsymbol D}$ describes how shape data are related to gradient data. Each row contains the required weighting of all shape points to determine a single gradient value. To explain it another way, each row represents the best estimate of a single gradient point, and solving the matrix Eq. (4) is a way to make these independent estimates consistent in a least-squares sense. Details of matrix ${\boldsymbol D}$ are described next.
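To make the solve concrete, Eq. (4) can be carried out with standard sparse least-squares tools. The following Python sketch uses a hypothetical, hand-built ${\boldsymbol D}$ (two forward differences over three collinear shape points with unit spacing) rather than the full construction described below:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Hypothetical toy problem: 3 shape points on a line, 2 x-gradient samples.
# Each row of D holds the weights of shape points for one gradient value.
rows = [0, 0, 1, 1]
cols = [0, 1, 1, 2]
vals = [-1.0, 1.0, -1.0, 1.0]          # forward differences, spacing h = 1
D = csr_matrix((vals, (rows, cols)), shape=(2, 3))

G = np.array([2.0, 2.0])               # "measured" slope values
Z = lsqr(D, G)[0]                      # least-squares solution of G = D @ Z
```

Because the constant of integration is not yet constrained here, the solver returns the minimum-norm solution; the slopes between adjacent shape points are recovered, but the overall offset is arbitrary.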

A. Gradient at a Single Location

Consider the scenario shown in Fig. 1 (left).


Fig. 1. Gradient data locations (crosses) and desired shape locations (dots). [left] All points in a region. [right] A single gradient point (red) and the nearest shape points.


Crosses represent scattered locations $[{x_k},{y_k}]$ where measured gradient data exist, and dots represent locations $[{X_n},{Y_n}]$ where wavefront or surface shape information is to be computed. Desired shape information could be co-located with the gradient data or follow some other irregular pattern, but a regular grid of shape data is often desired, so that case is depicted.

Estimating a single gradient value begins by considering all shape data in the region near the gradient value. The number of points to consider ($P$) is a free parameter. In Fig. 1 (right), the $P = 7$ shape points nearest the desired gradient location are selected.

Assume the $ x $ component of the gradient can be described by a well-behaved function $f(x,y)$ evaluated at shape coordinates $[{{X_p},{Y_p}}]$:

$$\begin{split}\frac{{\partial {z_k}}}{{\partial {x_k}}}&= {c_1}f({x_k} + \Delta {X_1},{y_k} + \Delta {Y_1})\\&\quad + {c_2}f({x_k} + \Delta {X_2},{y_k} + \Delta {Y_2})\\&\quad + \cdots\\&\quad + {c_{\!P}}f({x_k} + \Delta {X_P},{y_k} + \Delta {Y_P}),\end{split}$$
where $\Delta {X_p} \equiv {X_p} - {x_k}$ and $\Delta {Y_p} \equiv {Y_p} - {y_k}$ represent displacements of the shape point from the gradient point. The goal is to find fit coefficients ${c_{\!p}}$ because they describe the weighting of each neighboring shape point that best estimates the gradient value.

Solving for fit coefficients ${c_{\!p}}$ is accomplished via a 2D Taylor series approximation of the function $f(x,y)$ about the gradient location $[{x_k},{y_k}]$

$$\begin{split}&f({x_k} + \Delta {X_p},{y_k} + \Delta {Y_p})\\&\quad = f({x_k},{y_k})\\&\qquad + \Delta {X_p}\frac{{\partial {z_k}}}{{\partial {x_k}}} + \Delta {Y_p}\frac{{\partial {z_k}}}{{\partial {y_k}}}\\&\qquad + \frac{{\Delta X_p^2}}{{2!}}\frac{{{\partial ^2}{z_k}}}{{\partial x_k^2}} + \Delta {X_p}\Delta {Y_p}\frac{{{\partial ^2}{z_k}}}{{\partial {x_k}\partial {y_k}}} + \frac{{\Delta Y_p^2}}{{2!}}\frac{{{\partial ^2}{z_k}}}{{\partial y_k^2}}\\&\qquad + \cdots,\end{split}$$
where the derivatives are all evaluated at the gradient location $[{x_k},{y_k}]$.

Algorithm 1. Gradient integration with unordered data

Applying the series expansion [Eq. (7)] for each term in Eq. (6) provides a set of constraints on the coefficients ${c_{\!p}}$:

$$\left({{c_1} + {c_2} + \cdots + {c_{\!P}}} \right)f({x_k},{y_k}) = 0,$$
$$\left({{c_1}\Delta {X_1} + {c_2}\Delta {X_2} + \cdots + {c_{\!P}}\Delta {X_P}} \right)\frac{{\partial {z_k}}}{{\partial {x_k}}} = \frac{{\partial {z_k}}}{{\partial {x_k}}},$$
$$\left({{c_1}\Delta {Y_1} + {c_2}\Delta {Y_2} + \cdots + {c_{\!P}}\Delta {Y_P}} \right)\frac{{\partial {z_k}}}{{\partial {y_k}}} = 0.$$
All higher-order terms in the Taylor expansion must also equal zero.

Because these relationships must hold for any nontrivial function $f(x,y)$, the following matrix equation can be written for the constraints:

$$\left({\begin{array}{c}0\\1\\0\\0\\0\\0\\ \vdots \end{array}} \right) = \left({\begin{array}{cccc}1&1& \cdots &1\\{\Delta {X_1}}&{\Delta {X_2}}& \cdots &{\Delta {X_P}}\\{\Delta {Y_1}}&{\Delta {Y_2}}& \cdots &{\Delta {Y_P}}\\{\frac{1}{2}\Delta X_1^2}&{\frac{1}{2}\Delta X_2^2}& \cdots &{\frac{1}{2}\Delta X_P^2}\\{\Delta {X_1}\Delta {Y_1}}&{\Delta {X_2}\Delta {Y_2}}& \cdots &{\Delta {X_P}\Delta {Y_P}}\\{\frac{1}{2}\Delta Y_1^2}&{\frac{1}{2}\Delta Y_2^2}& \cdots &{\frac{1}{2}\Delta Y_P^2}\\ \vdots & \vdots && \vdots \end{array}} \right) \cdot \left({\begin{array}{c}{{c_1}}\\{{c_2}}\\{{c_3}}\\ \vdots \\{c_{\!P}}\end{array}} \right)\!.$$

In vector notation, this can be expressed as

$${\boldsymbol V} = {\boldsymbol M} \cdot {\boldsymbol C},$$
where vectors ${\boldsymbol V}=[0,1,0,0,0,0,\cdots]^{\intercal }$ and ${\boldsymbol C}=[{{c}_{1}},{{c}_{2}},{{c}_{3}},\cdots ,{{c}_{P}}]^{\intercal }$. Matrix ${\boldsymbol M}$ contains the Taylor expansion terms.

To solve this equation, the Taylor expansion must be truncated to a finite number of terms. The zeroth-order and first-order terms must be retained, but the inclusion of additional terms is a freely adjustable parameter that affects accuracy according to Taylor's theorem [7]. In principle, the Taylor series truncation and the number of nearby points ($ P $) can vary for each gradient point, but this paper assumes they are the same for all gradient points. A valid solution only requires that the number of Taylor terms be less than or equal to the number of nearby points.

After truncation of the Taylor series expansion, the matrix equation can be solved for the coefficients ${\boldsymbol C}$ via the Moore–Penrose pseudoinverse:

$${\boldsymbol C} = {{\boldsymbol M}^ +} \cdot {\boldsymbol V},$$
$${{\boldsymbol M}^ +} = {\left({{{\boldsymbol M}^{\intercal}}{\boldsymbol M}} \right)^{- 1}}{{\boldsymbol M}^{\intercal}}.$$

The coefficients ${\boldsymbol C}$ from Eq. (13) are the finite difference coefficients for the best-fit gradient at a single point. They are the elements that form matrix ${\boldsymbol D}$ in Eq. (1).

A similar process holds for the $ y $ component of the gradient, where $\partial {z_k}/\partial {x_k}$ is replaced by $\partial {z_k}/\partial {y_k}$ in Eq. (6). The only practical change to Eq. (12) is that vector ${\boldsymbol V}=[0,1,0,0,0,0,\ldots]^{\intercal}$ changes to ${\boldsymbol V}=[0,0,1,0,0,0,\ldots]^{\intercal}$ for the $ y $ component of the gradient. Matrix ${\boldsymbol M}$ remains the same.

For a gradient point located at $[{x_k},{y_k}]$, the finite difference coefficients ${c_{\!p}}$ are placed in the $ k $th row of matrix ${\boldsymbol D}$ for the $ x $ component of the gradient or the $(K + k)$th row of matrix ${\boldsymbol D}$ for the $ y $ component of the gradient. The nonzero columns of ${\boldsymbol D}$ correspond to the shape points selected for evaluation.
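The single-point solve of Eqs. (12)–(14) can be sketched as follows; the function name `fd_coeffs` and the quadratic test surface are illustrative assumptions, not part of the paper:

```python
import numpy as np
from math import factorial

def fd_coeffs(dX, dY, order=2, component="x"):
    """Finite-difference weights c_p for the x (or y) first derivative at a
    gradient point, from P scattered neighbor offsets, per Eqs. (12)-(14)."""
    dX, dY = np.asarray(dX, float), np.asarray(dY, float)
    M_rows, V = [], []
    target = (1, 0) if component == "x" else (0, 1)
    for deg in range(order + 1):          # Taylor rows in the order of Eq. (12)
        for i in range(deg, -1, -1):
            j = deg - i
            M_rows.append(dX**i * dY**j / (factorial(i) * factorial(j)))
            V.append(1.0 if (i, j) == target else 0.0)
    M = np.vstack(M_rows)
    return np.linalg.pinv(M) @ np.array(V)    # C = M+ . V

# Check: for a quadratic surface, the order-2 weights are exact.
dX = np.array([1.0, 0.0, -1.0, 0.0, 1.0, -1.0, 2.0])
dY = np.array([0.0, 1.0, 0.0, -1.0, 1.0, 1.0, 0.0])
c = fd_coeffs(dX, dY, order=2, component="x")
f = 5.0 + 2.0*dX + dX*dY + dY**2          # f at the neighbors; df/dx(0,0) = 2
slope_estimate = float(c @ f)
```

Summing the weighted neighbor values reproduces the derivative at the gradient location, which is exactly how each row of ${\boldsymbol D}$ acts on the shape vector.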

B. Constant of Integration

At this stage, enough information is available to solve for the desired shape height using Eq. (4). However, the constant of integration has not been constrained, which means the shape offset can be arbitrarily large. In principle, the offset could be removed by subtracting it after the integration is complete. However, the numerical precision of computer calculations can lead to catastrophic cancellation and loss of accuracy.

To solve this, an extra row is appended to matrix ${\boldsymbol D}$. All elements of the row are given a value of 0, except an arbitrary $ q $th column, which receives a value of 1. This selects a single shape value ${Z_q}$. By also appending a value of 0 to vector ${\boldsymbol G}$, the selected shape value is constrained to 0 $({Z_q} = 0)$. The constant of integration is thus defined.

In many scenarios, it is desired to set the constant of integration based on the mean shape offset instead of the shape value ${Z_q}$ at a single location. Although the mean value could be constrained by setting all elements of the appended row in matrix ${\boldsymbol D}$ to a value of 1, it is not recommended. Solving Eq. (4) with sparse matrix methods is generally very fast, but it becomes significantly slower when many elements are nonzero. It is more efficient to define the constant of integration based on a single ${Z_q}$ value and make small adjustments to the shape offset after the integration is complete.
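As a sketch of this constraint, again with a hypothetical toy ${\boldsymbol D}$ of simple forward differences (the index $q$ and all values are illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix, vstack
from scipy.sparse.linalg import lsqr

# Hypothetical toy system: forward differences over 3 collinear shape points.
D = csr_matrix(([-1.0, 1.0, -1.0, 1.0], ([0, 0, 1, 1], [0, 1, 1, 2])),
               shape=(2, 3))
G = np.array([2.0, 2.0])

q = 0                                      # arbitrary shape index to pin
pin = csr_matrix(([1.0], ([0], [q])), shape=(1, 3))
Dc = vstack([D, pin])                      # append single-entry row to D
Gc = np.append(G, 0.0)                     # append 0 to G: enforces Z_q = 0
Z = lsqr(Dc, Gc)[0]                        # constant of integration now defined

Z -= Z.mean()                              # cheap post-shift to a zero mean
```

The appended row keeps ${\boldsymbol D}$ sparse, and the mean offset is removed after the solve, matching the recommendation above.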

C. Gradient Integration Algorithm

The procedure for 2D zonal integration of unordered data is summarized in Algorithm 1. The essence of this method is a point-by-point finite difference gradient calculation, and it is a generalization of many previous works.

For example, the method presented by Southwell [3] is a limiting case of this algorithm when a regular grid of points with equal spacing is assumed and a two-point forward finite difference is used with a maximum of first-order Taylor series terms. Because Southwell's algorithm assumes a regular grid of points, matrix ${\boldsymbol M}$ is identical for every gradient point. This means Eq. (13) can be computed symbolically to find ${\boldsymbol C} = [{- 1/h,1/h}]$. Using this result to build matrix ${\boldsymbol D}$, Eq. (1) in this paper becomes identical to Eq. (11) in Southwell's paper [3].
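This limiting case is easy to reproduce numerically. The sketch below assumes a single gradient point with $P = 2$ shape neighbors at offsets $\Delta X = 0$ and $\Delta X = h$, keeping only the zeroth- and first-order Taylor rows:

```python
import numpy as np

h = 0.5                                  # grid spacing (arbitrary)
dX = np.array([0.0, h])                  # shape points at x_k and x_k + h
M = np.vstack([np.ones_like(dX), dX])    # zeroth- and first-order Taylor rows
V = np.array([0.0, 1.0])                 # select the d/dx term
C = np.linalg.pinv(M) @ V                # familiar forward-difference weights
```

The recovered weights are $[-1/h, 1/h]$, the classic forward difference underlying the Southwell geometry.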

In general, accuracy of the integrated shape features is primarily dependent on properties of matrix ${\boldsymbol D}$ in Eq. (1). Matrix analysis tools [16] can be applied, but they will depend on the free parameters of Algorithm 1, which describe the selected shape locations, the number of nearest points to evaluate, and the number of Taylor series terms to include. Quantitative analysis can only be performed on a case-by-case basis.

3. DEMONSTRATION

To demonstrate the utility of this algorithm and highlight many features, a numerical simulation is performed. Cartesian coordinates $[{x_k},{y_k}]$ for gradient data are independently chosen from a uniform random distribution ${x_k},{y_k} \in [- 1,1]$. At each location, an ideal gradient value is computed by Eqs. (15) and (16):

$$\frac{{\partial {z_k}}}{{\partial {x_k}}} = - \frac{{4{x_k}}}{w}\exp \left({- \frac{{\rho _k^2}}{w}} \right) + \frac{1}{t}\sin \left({\frac{{{x_k} - {y_k}}}{t}} \right)\!,$$
$$\frac{{\partial {z_k}}}{{\partial {y_k}}} = - \frac{{4{y_k}}}{w}\exp \left({- \frac{{\rho _k^2}}{w}} \right) - \frac{1}{t}\sin \left({\frac{{{x_k} - {y_k}}}{t}} \right)\!,$$
where ${\rho _k} = \sqrt {x_k^2 + y_k^2}$ is the radial location of a gradient point, $w$ describes the width of a Gaussian, and $t$ describes the period of a sinusoidal function. For this simulation, $w = 0.02$ and $t = 0.15$. These data are analogous to the output from a slope measuring instrument that randomly samples a surface.
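The sampling step can be sketched as follows; the point count and random seed are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
w, t = 0.02, 0.15                             # Gaussian width, sinusoid period
n = 100_000                                   # the paper uses 10^6 valid points
x = rng.uniform(-1.0, 1.0, n)
y = rng.uniform(-1.0, 1.0, n)
rho2 = x**2 + y**2

# Ideal gradients from Eqs. (15) and (16)
gx = -4.0*x/w * np.exp(-rho2/w) + np.sin((x - y)/t)/t
gy = -4.0*y/w * np.exp(-rho2/w) - np.sin((x - y)/t)/t

gx[rho2 > 1.0] = np.nan                       # outside the circular aperture
gy[rho2 > 1.0] = np.nan
```

These gradients are the exact partial derivatives of the ideal surface in Eq. (17), so any reconstruction error comes from the integration itself rather than the data.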

Randomly located gradient data are an extreme example of a common measurement challenge. By showing this algorithm can work with random positions, this simulation serves as an example of more common cases with quasiregular data caused by distorted images or scanning systems that follow programmed routes. This algorithm has no constraints on data locations or ordering.

It is common to have circular optical elements, so gradient values with ${\rho _k} \gt 1$ represent missing data. The number of random points is adjusted until a total of 1,000,000 valid points is defined within the circular region $\rho \le 1$. An additional 273,241 points outside the aperture were set to a value of $\textit{NaN}$.

Surface coordinates $[{X_n},{Y_n}]$ are chosen to form a regular grid of $1{,}000 \times 1{,}000$ points over the range ${X_n},{Y_n} \in [- 1,1]$. Ideal surface values are given by Eq. (17) and shown in Fig. 2:

$$\textit{ideal}\!{Z_n} = 2\exp \left({- \frac{{\rho _n^2}}{w}} \right) - \cos \left({\frac{{{X_n} - {Y_n}}}{t}} \right)\!,$$
where ${\rho _n} = \sqrt {X_n^2 + Y_n^2}$ is the radial coordinate of a surface point.

Fig. 2. Ideal surface height $Z$ used for simulation demonstration. The surface is made from a summation of Gaussian and sinusoidal terms.


Perfect reconstruction of the theoretical surface is not possible in this simulation because the surface and associated gradient data are not polynomial functions. The finite number of elements in the Taylor expansion can only approximate the surface features [7].

Figures 3 and 4 provide simulation results showing how computation time and integration accuracy vary with Taylor expansion order and the quantity of nearby points to evaluate.


Fig. 3. Simulation results of calculation time with each curve representing a different order of truncation for the Taylor series approximation.


Fig. 4. Simulation results of calculation error with each curve representing a different order of truncation for the Taylor series approximation.


As seen in Fig. 3, computation time is strongly dependent on the number of neighbor points chosen. The absolute time for the calculation depends on the implementation details and computer processor, but the trends are expected to remain the same. For reference, this simulation was performed in MATLAB with an Intel Xeon E3-1275 v6 processor.

To understand the compute time, consider the case with the longest calculation time (number of neighbor ${\rm points} = 36$ and fifth-order Taylor expansion). When a profiler is used, 60.5% of computation time is spent building up matrix ${\boldsymbol D}$ by solving Eq. (13). For this simulation, a single-threaded implementation determined each row of the matrix sequentially. A multi-threaded or GPU implementation could reduce this time. Next, 38.5% of computation time was used to solve matrix Eq. (4). This time is limited by the size of matrix ${\boldsymbol D}$ and the number of nonzero elements it contains. The remaining 1% of time is almost entirely spent finding the nearest surface points for each gradient point.

It should be noted that Algorithm 1 does not directly specify the order of truncation for the Taylor expansion. Instead, the algorithm contains a free parameter for the number of Taylor series terms to include. For zeroth-order (constant) features, only one term is needed. To completely describe features up to first order (linear), three terms are needed. Six terms are needed for second-order (quadratic) accuracy, 10 terms for third-order accuracy, and so on. The data in Figs. 3 and 4 are plotted against the order of Taylor series truncation, not the number of terms included in the calculation.
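The term count for truncation at a given total order is simply the number of 2D monomials up to that degree; a small illustrative helper (the function name is an assumption, not from the paper):

```python
def taylor_terms(order: int) -> int:
    """Number of 2D Taylor terms for truncation at a given total order:
    (order + 1)(order + 2)/2 monomials x^i y^j with i + j <= order."""
    return (order + 1) * (order + 2) // 2

# order: 0 -> 1, 1 -> 3, 2 -> 6, 3 -> 10, 4 -> 15, 5 -> 21
```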

Figure 4 shows relative error of the simulation results defined in terms of standard deviations of simulated and ideal surface height values: ${\rm error} \equiv {\rm std}(\textit{simulationZ} - \textit{idealZ})/{\rm std}(\textit{idealZ})$.
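This metric is insensitive to the unconstrained integration offset, because the standard deviation removes the mean; as a hypothetical helper:

```python
import numpy as np

def relative_error(sim_z: np.ndarray, ideal_z: np.ndarray) -> float:
    """error = std(simulationZ - idealZ) / std(idealZ), per Section 3."""
    return float(np.std(sim_z - ideal_z) / np.std(ideal_z))
```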

When the number of nearby points to evaluate is small, very large errors are observed. This can be understood by examining how each surface point is connected to gradient points. Algorithm 1 associates each gradient point with a fixed number ($ P $) of nearby shape points, but that does not guarantee every shape point is used in matrix ${\boldsymbol D}$. Some shape points may not be associated with any gradient values, in which case solving Eq. (4) yields an arbitrary shape value at those locations. Similarly, if a shape point is associated with only a single gradient point, noise and oscillations in the Taylor polynomials can lead to poorly estimated shape values.

For this simulation, Fig. 5 shows how many gradient points are connected to each surface point.


Fig. 5. Number of connections each surface point has to gradient points in matrix ${\boldsymbol D}$.


When the number of neighbor points $P = 3$, Fig. 5 shows there are over 10,000 surface points not used in matrix ${\boldsymbol D}$. The corresponding error in Fig. 4 exceeds ${10^{10}}$ because those unused surface points are assigned arbitrary values, which can be ${10^{24}}$ or larger. The large errors are eliminated when there are no unused surface points and no surface points with only one connection to a gradient point.
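Connection counts of this kind can be tallied directly from the nearest-neighbor search. A sketch with hypothetical random layouts, using scipy's k-d tree:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
shape_pts = rng.uniform(-1, 1, (200, 2))    # hypothetical shape locations
grad_pts = rng.uniform(-1, 1, (1000, 2))    # hypothetical gradient locations
P = 7                                       # neighbors per gradient point

# For each gradient point, find the indices of its P nearest shape points
_, idx = cKDTree(shape_pts).query(grad_pts, k=P)
connections = np.bincount(idx.ravel(), minlength=len(shape_pts))

# Shape points with 0 or 1 connections signal an ill-posed reconstruction
n_weak = int((connections <= 1).sum())
```

Checking `connections` before building ${\boldsymbol D}$ is a cheap way to detect the pathological cases discussed here.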

Other pathological cases are possible, such as when all selected points are coincident or collinear. These cases also yield poor shape estimates but can frequently be remedied by increasing the number of nearby points to select ($ P $). The downside of increasing the number of nearby points is that localized shape features are smoothed out.

For this simulation, choosing the number of neighbor points $P \gt 13$ guarantees every surface point is connected to two or more gradient points. Once that condition is satisfied, the error is dominated by how well the Taylor expansion describes local surface features. For first-order and second-order Taylor series expansions, Fig. 4 shows the relative error in this simulation is approximately ${10^{- 4}}$. When third- or fourth-order expansions are used, the errors decrease to approximately ${10^{- 8}}$. Higher-order Taylor expansions decrease the relative error to approximately ${10^{- 12}}$, which approaches the limits of double-precision arithmetic.

Figure 6 shows errors in the estimated surface for the case with number of neighbor ${\rm points} = 36$ and fifth-order Taylor expansion. The mean value of the error (0.9844…) is determined by the constant of integration, and it has been subtracted from the plot to highlight the spatial variations.


Fig. 6. Error in simulated surface with number of neighbor ${\rm points} = 36$ and fifth-order Taylor expansion.


Error peaks are observed in the region where the Gaussian term is significant. This is the region with the highest spatial frequency content, and it has a maximum relative error of $7.3 \times 10^{- 11}$.

4. CONCLUSION

This paper provides a general method for numerical integration of measured gradient data with unordered and irregular point locations. Surface or wavefront shape locations can also be arbitrarily decided, allowing for simultaneous interpolation and integration.

Using data localized in the region of each gradient point, a Taylor series approximation relates shape data to a measured gradient value in that region. Truncation of the Taylor series and number of nearby points to choose are free parameters that can be selected based on point locations and data quality.

With the relation established between shape locations and gradient values, the finite difference coefficients are placed in a larger matrix for typical zonal integration methods. All significant steps in this process rely on matrix calculations for a noniterative, least-squares solution that is readily implemented on computers.

A demonstration with simulated data highlights the ability of this method to work with ${10^6}$ randomly located gradient points and missing data. This is an extreme example of a common problem with distorted images or scanning tools that produce irregular data locations. The arbitrary nature of the gradient locations places some constraints on the free parameters of Algorithm 1. However, a sufficient number of Taylor series terms and nearby points achieves accuracy near the limits of machine precision, despite the fact that the simulated functions cannot be exactly represented by a finite Taylor series expansion.

The method presented here can greatly improve numerical integration accuracy for any optical slope-measuring application. Data are not constrained to be in any pattern or in any order.

Acknowledgment

Many thanks to Dae Wook Kim, Chang Jin Oh, and the team at the University of Arizona Optical Engineering and Fabrication Facility for suggestions and support during manuscript preparation.

Disclosures

The author declares no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the author upon reasonable request.

REFERENCES

1. R. Cubalchini, “Modal wave-front estimation from phase derivative measurements,” J. Opt. Soc. Am. 69, 972–977 (1979). [CrossRef]  

2. J. Ye, Z. Gao, S. Wang, J. Cheng, W. Wang, and W. Sun, “Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture,” J. Opt. Soc. Am. A 31, 2304–2311 (2014). [CrossRef]  

3. W. Southwell, “Wave-front estimation from wave-front slope measurements,” J. Opt. Soc. Am. 70, 998–1006 (1980). [CrossRef]  

4. L. Huang and A. Asundi, “Improvement of least-squares integration method with iterative compensations in fringe reflectometry,” Appl. Opt. 51, 7459–7465 (2012). [CrossRef]  

5. L. Huang, J. Xue, B. Gao, C. Zuo, and M. Idir, “Spline based least squares integration for two-dimensional shape or wavefront reconstruction,” Opt. Laser Eng. 91, 221–226 (2017). [CrossRef]  

6. K. K. Pant, D. R. Burada, M. Bichra, A. Ghosh, G. S. Khan, S. Sinzinger, and C. Shakher, “Weighted spline based integration for reconstruction of freeform wavefront,” Appl. Opt. 57, 1100–1109 (2018). [CrossRef]  

7. G. Li, Y. Li, K. Liu, X. Ma, and H. Wang, “Improving wavefront reconstruction accuracy by using integration equations with higher-order truncation errors in the Southwell geometry,” J. Opt. Soc. Am. A 30, 1448–1459 (2013). [CrossRef]  

8. H. Ren, F. Gao, and X. Jiang, “Improvement of high-order least-squares integration method for stereo deflectometry,” Appl. Opt. 54, 10249–10255 (2015). [CrossRef]  

9. L. Huang, J. Xue, B. Gao, C. Zuo, and M. Idir, “Zonal wavefront reconstruction in quadrilateral geometry for phase measuring deflectometry,” Appl. Opt. 56, 5139–5144 (2017). [CrossRef]  

10. C. Chen, T. Ma, and F. Wang, “Enhancing reconstruction precision of zonal methods under low sampling density in non-uniform meshes,” Appl. Opt. 58, 4823–4827 (2019). [CrossRef]  

11. S. Ettl, J. Kaminski, M. C. Knauer, and G. Häusler, “Shape reconstruction from gradient data,” Appl. Opt. 47, 2091–2097 (2008). [CrossRef]  

12. J. Peiró and S. Sherwin, “Finite difference, finite element and finite volume methods for partial differential equations,” in Handbook of Materials Modeling (Springer, 2005), pp. 2415–2446.

13. N. Perrone and R. Kao, “A general finite difference method for arbitrary meshes,” Comput. Struct. 5, 45–57 (1975). [CrossRef]  

14. B. Fornberg, “Generation of finite difference formulas on arbitrarily spaced grids,” Math. Comput. 51, 699–706 (1988). [CrossRef]  

15. “Finite difference coefficients calculator,” https://web.media.mit.edu/crtaylor/calculator.html. Accessed: 2020-12-22.

16. W. Zou and J. P. Rolland, “Quantifications of error propagation in slope-based wavefront estimations,” J. Opt. Soc. Am. A 23, 2629–2638 (2006). [CrossRef]  

