
Vision ray metrology for freeform optics

Open Access

Abstract

Vision ray techniques are known in the optical community to provide low-uncertainty image formation models. In this work, we extend this approach and propose a vision ray metrology system that estimates the geometric wavefront of a measurement sample using the sample-induced deflection in the vision rays. We show the feasibility of this approach using simulations and measurements of spherical and freeform optics. In contrast to the competing technique of deflectometry, this approach relies on differential measurements and, hence, requires no elaborate calibration procedure that uses sophisticated optimization algorithms to estimate geometric constraints. Applications of this work are the metrology and alignment of freeform optics.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The precise, contact-free, and full-field measurement of freeform optics is a challenge in modern metrology [1]. A popular solution is full-field interferometry, which enables measurements with a repeatability of a few nanometers for both the form and mid-spatial frequency (MSF) regime. For freeform optics, the complex freeform surface of the sample may have high surface slopes that necessitate the use of computer-generated holograms (CGHs) to provide a stable reference wavefront that serves as an optical null [2]. However, the cost of CGHs is relatively high, and the measurement uncertainty is highly dependent on calibration errors and misalignments. These costs may limit the practical applicability of interferometry to high-volume measurements or high-performance applications where the budget is justified.

Coordinate measuring machines (CMMs) provide point-wise measurements over large measurement volumes and handle steep surface slopes. For contact probes, the measurement is independent of the sample alignment and provides true measurements of power, coma, and astigmatism (see discussion in [3–6]). CMMs are well-established in the metrology community and have a widely accepted terminology for errors [7,8]. However, CMM-based techniques entail significant measurement times (e.g., centimeter-class aspheres requiring 15 minutes) and various systematic error sources. Probes must measure at a normal incidence condition, requiring additional tilt axes or adapted configurations that work in cylindrical or spherical coordinates. These limitations diminish their value in industrial practice.

Ray tracing is used for metrology in cases where the camera can be well calibrated [9]. An example is Phase Measuring Deflectometry (PMD) [10–16]. PMD is a null-free, full-field metrology solution with high resolution and short measurement times that shows its strength for large and complex freeform optics [16]. Measuring the refractive power through absolute phase measurements using incoherent sources was introduced in [17] and extended to PMD in transmission in [18–22]. PMD systems were further advanced in [23,24] through complex calibration procedures based on ray tracing. Nowadays, this idea has been broadened to the metrology of multisurface freeform optics as in [25–27], where the camera sensor model uses a non-linear extension of a pinhole model. Nonetheless, this model has been proven to be insufficient for some sensors, as shown in [9]. To the credit of camera manufacturers, there are some camera lenses, such as the Nikon D60 (see Ref. [28]), that can be described well with the pinhole model.

PMD enables form and MSF measurements. In particular, for the MSF regime, PMD has been reported to “measure MSF errors on freeform parts orders of magnitude faster than traditional tactile metrology tools” [29]. PMD systems are highly sensitive slope measuring systems with high repeatability, even for samples with high surface slopes. Reference [30] reported an environmental instability and noise of 0.6 nm RMS and an overall slope measurement uncertainty of ∼100 nrad. Deflectometry is a robust and low-cost alternative for low- and medium-volume applications with unique advantages for measuring MSF structures.

A significant drawback of PMD systems is systematic errors that produce slowly varying form errors in the measurement; in this regard, many calibration efforts aim to minimize these systematic terms [31–34]. The systematic error can be attributed to simple error sources such as the flatness of the cover glass of the display or the drift of the baseline (relative position between camera and display) after the system calibration [14,35–43]. A further critical aspect of PMD systems is the numerical reconstruction algorithm. Depending on the algorithm's sophistication, many of those solvers rely on mathematical optimization routines that minimize a cost function. Many of these solvers are either based on simpler models (convex optimization problems [44–46]) that have unique mathematical solutions (global minima) but do not model the physical problem well or use more sophisticated models (nonconvex optimization problems [47]) that have no straightforward solution because the solver can get trapped in a local minimum. Both types of optimization solvers contribute to systematic error. An exciting aspect of PMD is that it is extremely sensitive to out-of-plane deformations, and many systematic errors cancel out in comparative measurements. Using a reference artifact that is accurately measurable with a slower metrology instrument (e.g., a CMM), it is possible to calibrate systematic errors down to the 20 nm level in the Zernike coefficients [48].

Deflectometry has also been extended to transmission measurements. Fischer [31] reported a transmission PMD system for the measurement of aspheric optics. Other transmission deflectometry systems have been reported by Petz [33,34]. PMD has been an innovative area of research: Seßner [49] proposed the use of telecentric imaging systems to overcome the slope-height ambiguity, and Komander [32] proposed a system in which the display is mounted onto a motorized linear stage and can thus be moved to various positions during the measurement process.

Experimental Ray Tracing (ERT) is another competing technique that was introduced by Häusler et al. in 1988 [50]. In ERT, a ray with known angle and position is deflected by the sample, and the direction of the deflected ray is measured using two parallel planes that are orthogonal to z [51]. ERT has also been used to find the rays that propagate near the focus of test pieces, which allows point-wise measurements of the deflected rays [50], and can be used for the characterization of gradient-index optical elements as described in [51].

Motivated by that, this work proposes a Vision Ray Metrology system that consists of an active target and a camera with well-characterized vision rays. The sample under test is placed between the camera and the screen, resulting in a deflection of the vision rays that can be accurately measured by analyzing the patterns projected onto the active target. The concept of vision rays has been borrowed from the camera calibration techniques of the vision community, where the bundle of rays incident onto a camera pixel is represented by a single chief ray. This vision ray camera model was introduced by Grossberg and Nayar [52] and was later improved in [9,28,53–57].

The proposed Vision Ray metrology system measures the sample-induced deflection in the vision rays; in contrast, PMD requires accurate knowledge of the locations of the camera and the active target to trace the rays through the sample and the measurement system. The reconstruction algorithm of the proposed metrology system requires a simple fitting procedure (fitting a line through 3D points), whereas PMD requires elaborate optimization routines that may be sensitive to system drifts and suffer from convergence problems.

This manuscript is structured as follows: Section 2 introduces the concept of Vision Ray Metrology, and Section 3 describes one possible experimental implementation of the vision ray metrology (VRM) system. Section 4 shows measurement results for the measured vision rays, and Section 5 shows the results for the reconstructed wavefront. Finally, discussions and conclusions are presented in Sections 6 and 7.

2. Vision rays as a metrology tool

Vision rays are widely used in camera calibrations for metrology systems, especially for cases where conventional techniques fail. The vision ray camera model [9,28,56] is a geometric model that assigns to every sensor pixel with the pixel coordinates $({u,v})$ a so-called 3D vision ray $\{{\vec{o}_c},{\vec{r}_c}\}$ (pixel line of sight) that originates at the coordinate vector ${\vec{o}_c}$ and has the direction vector ${\vec{r}_c}$. Any point on the ray $\{{\vec{o}_c},{\vec{r}_c}\}$ projects back to $({u,v})$ as shown in Fig. 1 (i.e., pierces the sensor plane at the location of the pixel of origin). A common convention is to define the vision rays so that the third component of ${\vec{o}_c}$ equals 0 and the third component of ${\vec{r}_c}$ equals 1. The other two components of ${\vec{r}_c}$ are the slopes ${V_x} = \tan {\alpha_x}$ and ${V_y} = \tan {\alpha_y}$, where the angles ${\alpha_x}$ and ${\alpha_y}$ describe the ray direction in the x- and y-direction [9]. For visualization, the direction vector (gradient) amplitude $S$ is calculated as $S = \sqrt{V_x^2 + V_y^2}$ [9].
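To make the parameterization concrete, the following minimal sketch (our own illustration; the function and variable names are assumptions, not from the paper) stores a vision ray as an origin-direction pair and evaluates the amplitude $S$:

```python
import numpy as np

# Minimal sketch: a vision ray is stored as an origin o_c = (x0, y0, 0)
# and a direction r_c = (Vx, Vy, 1), with Vx = tan(alpha_x), Vy = tan(alpha_y).

def vision_ray(x0, y0, alpha_x, alpha_y):
    """Return (origin, direction) using the convention o_c[2] = 0, r_c[2] = 1."""
    o_c = np.array([x0, y0, 0.0])
    r_c = np.array([np.tan(alpha_x), np.tan(alpha_y), 1.0])
    return o_c, r_c

def point_on_ray(o_c, r_c, z):
    """Evaluate the ray at height z (the third component parameterizes z)."""
    return o_c + r_c * z

# direction vector (gradient) amplitude S for visualization
o_c, r_c = vision_ray(1.2, -0.7, np.deg2rad(0.05), np.deg2rad(-0.02))
S = np.hypot(r_c[0], r_c[1])
```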

Fig. 1. Vision ray image formation model. (a) Each pixel collects light from a closely arranged ray bundle represented by a principal (chief) ray. (b) A single vision ray that passes through all control points. Parameters defining the vision ray: offset (${x_0},{y_0}$) and slope (${V_x},{V_y}$).

A quantitative assessment of freeform surfaces is possible if we quantify the change in the direction of the rays (deflection) caused by the sample under test. In essence, Snell's law in 3D vector form [58] can be used to write a minimization problem that looks for the surface normal that generates the corresponding change in direction. For such an assessment, it is useful to use a telecentric lens. The telecentricity of the imaging system allows for sample placement within the constant field of view of the sensor without additional alignment concerns, e.g., a position-dependent magnification; see the comparison in Fig. 2.
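As an illustration of this idea, the sketch below (our own; it covers a single refracting surface only, whereas a lens with two surfaces leads to the minimization problem mentioned above) recovers the surface normal from the unit ray directions before and after refraction:

```python
import numpy as np

# The vector form of Snell's law, n1 (d_in x N) = n2 (d_out x N), implies
# that n1*d_in - n2*d_out is parallel to the surface normal N, so for one
# surface the normal follows directly from the measured ray directions.

def surface_normal(d_in, d_out, n1, n2):
    """Unit surface normal from unit ray directions before/after refraction."""
    v = n1 * np.asarray(d_in, float) - n2 * np.asarray(d_out, float)
    return v / np.linalg.norm(v)

# example: a ray along +z refracted at a glass surface (n2 = 1.5)
d_in = np.array([0.0, 0.0, 1.0])
d_out = np.array([np.sin(0.01), 0.0, np.cos(0.01)])   # deflected by ~10 mrad
N = surface_normal(d_in, d_out, n1=1.0, n2=1.5)
```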

Fig. 2. Comparison between using a non-telecentric and a telecentric imaging system during sample assessment based on the vision ray model. a) For the non-telecentric imaging system, the number of vision rays on the sample depends on the field of view. In contrast, b) for the telecentric imaging system, the number of vision rays incident on the sample surface only marginally depends on the sample placement along the z-axis.

Although the vision ray model pictures the chief ray from image space, in reality, the rays are produced in object space, as shown in Fig. 3. The vision rays of the bare telecentric imaging system have the slopes ${T_x} = \tan \alpha_x^T$ and ${T_y} = \tan \alpha_y^T$, where $\alpha_x^T$ and $\alpha_y^T$ are related to the telecentricity. When placing a sample, the directions of the vision rays change. These changes can be measured and are defined in this manuscript as $\Delta{\alpha_x}$ and $\Delta{\alpha_y}$ with

$$\Delta {\alpha_x} = {\alpha_x} - \alpha_x^T$$
$$\Delta {\alpha_y} = {\alpha_y} - \alpha_y^T$$
where ${\alpha_x}$ and ${\alpha_y}$ are the angles of the vision ray in the $x$- and $y$-direction in the presence of the measurement sample.
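In code, the deflection follows directly from the fitted slopes, since $V = \tan\alpha$; the following is a minimal sketch with hypothetical slope maps:

```python
import numpy as np

# Eqs. (1) and (2) as code: the deflection is the difference between the
# per-pixel ray angles with the sample in place and those of the bare system.
Vx_ref = np.zeros((4, 4))                # ideal telecentric rays, alpha_x^T = 0
Vx_smp = np.full((4, 4), np.tan(0.01))   # sample bends each ray by ~10 mrad

delta_alpha_x = np.arctan(Vx_smp) - np.arctan(Vx_ref)  # Eq. (1); Eq. (2) is analogous
```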

Fig. 3. Schematic of measurement principle. From all incoming rays, the imaging detector captures only those rays that match the vision rays of the imaging system.

Although the vision ray model is nowadays progressively becoming a standard in camera calibration for reducing the uncertainty in incoherent metrology setups [9,59,60], to our knowledge, it has not been used directly as a metrology tool.

3. Method

The vision rays are measured using the setup proposed in Fig. 4. The sample is mounted in front of a telecentric lens, causing a deflection of the vision rays of the imaging system. These vision rays (and thereby the ray deflection) are estimated as follows. The system steps both the camera and the measurement sample along the z-axis to a series of z-planes, starting from $z = {z_0}$ and ending at the plane $z = {z_0} + \Delta z$. This translation causes the vision rays to pierce the target plane at different xy-locations. Each xy-location is then measured. This process is repeated, measuring at least 30 different 3D piercing points for each vision ray with low uncertainty. A line in 3D is then fitted through these points, as shown in Fig. 1(b), and the resultant set of 3D coordinates per pixel is transformed into the ray parameters $\{{\vec{o}_c},{\vec{r}_c}\}$. The origin of the measurement coordinate system was chosen to be the center of one control point on the flat target so that the screen itself was in the plane located at $z = {z_i}$, with ${z_i}$ a position along the z-axis of the calibration coordinate system. The design of the active target is a critical step that determines the accuracy of the system.

Fig. 4. Measurement setup: An active target comprises a projector and a well-defined diffuse passive calibration board. Fringes are projected onto a diffuse reference target, and only the camera and the sample are stepped along the z-axis. At all times, both the distance between the sample and telecentric lens ${d_s}$ and the distance between projector and calibration target ${d_{pt}}$ remain constant.

This work proposes an active target consisting of a passive diffusive calibration target and a fringe projector, as shown in Fig. 4, to avoid the cover glass uncertainty [35–37]. The passive calibration target has a matt finish on top of aluminum/LDPE composite sheets, which offer high flatness and stiffness (∼500 µm, from the vendor Calib.io). The surface has been treated using an ultraviolet inkjet printing process (from the vendor Calib.io) to generate the reference markers. To obtain spatial information at every point of the passive target, we employ a projector to generate and project a series of horizontal and vertical fringes onto the target. Combined with phase-shifting techniques, it is possible to estimate the absolute phase in the x- and y-direction and obtain the corresponding spatial coordinates for every camera pixel. This process constitutes the projector calibration, which differs from the classical approach of treating a projector as an inverse camera.

The principle is shown in Fig. 5. The xy-location of each reference marker is used to generate a 2D xy-map for each point on the board via interpolation; see Fig. 5(b). Similarly, the projected fringe patterns are used to obtain 2D phase maps in the x- and y-direction (see Fig. 5(a)). Having a 2D xy-map for each point on the target and two 2D phase maps provides all necessary data to create a function that maps the phase in the x- and y-direction into spatial coordinates. The data to generate this mesh does not need to originate from a single plane; in fact, numerous planes can be used for this purpose to reduce the error, even data from auxiliary cameras that capture the fringe patterns, e.g., at a different angle. We employ Delaunay triangulation [24] with natural neighbor interpolation using Matlab built-in routines to generate the mapping function. Once the calibration mapping function is available, the spatial position on the target can be estimated for each ray using solely the phase data

$$[{x,y}] = \mathrm{Phase\_to\_XYMapping}({\phi^x},\; {\phi^y})$$
even for the case where the control points are not visible.
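A minimal sketch of this mapping step is given below, with synthetic calibration data; note that scipy's Delaunay-based LinearNDInterpolator is an assumed stand-in for the Matlab natural-neighbor routine used in the paper:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Sketch of the phase-to-coordinate mapping with synthetic calibration data.
rng = np.random.default_rng(0)
phi_x = rng.uniform(0, 100, 200)        # absolute phases at the markers
phi_y = rng.uniform(0, 100, 200)
x_mark = 2.0 * phi_x + 0.1 * phi_y      # marker xy-positions: the mapping
y_mark = 2.0 * phi_y - 0.05 * phi_x     # is almost linear, as in the paper

pts = np.column_stack([phi_x, phi_y])
phase_to_x = LinearNDInterpolator(pts, x_mark)   # builds the Delaunay mesh
phase_to_y = LinearNDInterpolator(pts, y_mark)

# query with the dense per-pixel absolute phase maps (Phase_to_XYMapping above)
x, y = phase_to_x(50.0, 48.0), phase_to_y(50.0, 48.0)
```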

Fig. 5. High-resolution extraction of the spatial information using the proposed active target. a) At every z-position, vertical and horizontal fringes are projected onto the target to generate absolute phase measurements in the x- and y-direction. b) The spatial xy-location of each reference marker is used to obtain a high-resolution spatial map for both the x- and y-coordinate. This is a two-step procedure: first, a Delaunay triangulation mesh is created using the spatial information at the sparse features on the target and the absolute phase values at those locations (if data is available from different cameras or z-planes, this can further reduce the error). Afterward, the absolute phase maps obtained from a) serve as query points to estimate each pixel's high-resolution x- and y-coordinates (i.e., vision ray).

The active calibration target proposed here combines the advantages of currently known passive and active targets: there is no cover-glass problem, and a high spatial resolution is maintained. The absolute phase measurements enable sub-1/100-fringe uncertainties with robustness against defocus errors while maintaining the ability to work at high tilt angles. In most cases, the relation between absolute phase and spatial coordinates is described by simple low-order polynomials because both follow an almost linear trend. The latter aspect makes noise filtering very simple.

4. Experiments

4.1 Measurement setup:

To demonstrate the feasibility of this method, we measured the vision rays of five different samples:

  • a plano-concave lens with a 50 mm focal length (Ø25.4 mm, N-BK7)
  • a plano-convex lens with a 100 mm focal length (Ø25.4 mm, N-BK7)
  • a pair of commercially available spectacle lenses with adjustable focus
  • an array consisting of cubic phase plates (manufactured at UNC Charlotte)
  • an Alvarez micro-lens array consisting of two cubic phase plate arrays (manufactured at UNC Charlotte)

The experimental setup follows the structure of Fig. 4 and consists of an Edmund Optics TitanTL telecentric lens (0.136×, f/11-f/22, telecentricity <0.1°), a FLIR camera (model BFS-U3-200S6M-C, 5472 × 3648 pixels, 20 MP, pixel pitch of 2.4 µm), and an Optoma Technology EH200ST projector (1920 × 1080, 3,000 lumens, contrast ratio 20,000:1). The telecentric lens is stepped along the z-axis using a Physik Instrumente (PI) M-404-6DG precision linear stage (resolution 0.1 µm, yaw 75 µrad, pitch 75 µrad).

Each sample has been measured using the data from 30 equidistant planes with a plane separation of 1 mm. A series of phase-shifted fringe patterns is projected at each plane with the periods [10, 40, 160, 640, 2560] (samples 1-4) and [42, 126, 378, 1134, 3402] (sample 5) in projector pixels. The projector gamma nonlinearity is compensated using a two-stage compensation with

  • (i) firstly, a passive gamma calibration using the method by Zhang is applied, where non-sinusoidal fringes are sent to the projector to produce sinusoidal fringes [61], and
  • (ii) secondly, the 10-step Bruning temporal phase-shifting algorithm is employed [62] to suppress the remaining harmonics 0, −1, ±2, ±3, ±4, ±5, ±6, ±7, ±8, +9 [63].

A repeatability test across the calibration volume showed that a total of 10 phase steps for each period was sufficient to obtain a sufficiently low phase noise level (<2$\pi $/400). An exception was the Alvarez micro-lens array, which had significantly lower fringe visibility due to its high surface roughness. To overcome the resulting low SNR, a 14-step Bruning algorithm has been employed [63].
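For reference, a sketch of a standard N-step least-squares (Bruning-type) phase estimator is given below; the paper's specific implementation [62,63] may differ in detail:

```python
import numpy as np

def n_step_phase(frames):
    """Wrapped phase from N equally shifted fringe images (N-step estimator;
    a sketch assuming phase shifts of 2*pi*k/N, k = 0..N-1).

    frames: array of shape (N, H, W) with I_k = A + B*cos(phi + 2*pi*k/N).
    """
    N = frames.shape[0]
    k = np.arange(N).reshape(-1, 1, 1)
    s = np.sum(frames * np.sin(2 * np.pi * k / N), axis=0)
    c = np.sum(frames * np.cos(2 * np.pi * k / N), axis=0)
    return np.arctan2(-s, c)   # wrapped phase in (-pi, pi]

# The fringe visibility shown in, e.g., Fig. 6(a) follows from the same sums:
# B = (2 / N) * np.hypot(s, c);  A = frames.mean(axis=0);  visibility = B / A.
```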

The obtained wrapped phases are processed using the multi-wavelength phase unwrapping technique GOMF [64] to obtain the absolute phase map [65].
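A generic coarse-to-fine temporal unwrapping step is sketched below for illustration; the GOMF method [64] used in the paper additionally optimizes the choice of fringe periods:

```python
import numpy as np

def unwrap_pair(phi_fine_wrapped, phi_coarse_abs, period_ratio):
    """Use the absolute phase of a coarser fringe period to fix the integer
    fringe order of the next finer wrapped phase.

    period_ratio = coarse_period / fine_period (e.g., 4 in the 10/40/... series).
    """
    order = np.round((phi_coarse_abs * period_ratio - phi_fine_wrapped) / (2 * np.pi))
    return phi_fine_wrapped + 2 * np.pi * order

# Starting from the coarsest period (one fringe spans the field, so its
# wrapped phase is already absolute), chain down to the finest period.
```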

The absolute phase is then converted into spatial xy-coordinates using the mapping procedure described in Section 3. These measurements provide xyz-coordinates for each vision ray as both the camera and the sample are stepped along the z-axis. The vision ray at each pixel is estimated by fitting a line through the corresponding data points using robust regression techniques [66],

$$\vec{x} = {\vec{o}_c} + {\vec{r}_c}\, n\, \mathrm{\Delta}z = \left( {\begin{array}{c} {{x_0}}\\ {{y_0}}\\ 0 \end{array}} \right) + \left( {\begin{array}{c} {{V_x}}\\ {{V_y}}\\ 1 \end{array}} \right)n\, \mathrm{\Delta}z$$
where n is the plane number of the 30 measurement planes with n = 0, 1, 2, …, 29.
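A minimal sketch of this per-pixel fit with synthetic data is shown below; plain least squares via np.polyfit stands in for the robust (iteratively reweighted) regression [66] used in the paper:

```python
import numpy as np

# Per-pixel line fit of Eq. (3) for a single pixel (hypothetical values).
dz = 1.0                       # plane separation [mm]
z = np.arange(30) * dz         # n * dz for n = 0..29

rng = np.random.default_rng(1)
x_meas = 0.5 + 0.002 * z + rng.normal(0, 1e-4, z.size)   # piercing points, x
y_meas = -0.3 - 0.001 * z + rng.normal(0, 1e-4, z.size)  # piercing points, y

Vx, x0 = np.polyfit(z, x_meas, 1)    # slope and offset in x
Vy, y0 = np.polyfit(z, y_meas, 1)
o_c = np.array([x0, y0, 0.0])        # fitted ray origin
r_c = np.array([Vx, Vy, 1.0])        # fitted ray direction
```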

It is convenient to define the direction vector (gradient) amplitude [9] as

$$S = \sqrt {V_x^2 + V_y^2}$$
for visualization purposes because it provides valuable information on the structure of the sample.

4.2 Measurement results for the spherical lenses:

The complete vision ray data of the two spherical lenses have been measured. The results for the 50 mm focal length plano-concave lens are shown in Fig. 6. The sample is placed near the center of the FOV of the telecentric lens. Figure 6(a) shows the fringe visibility for each pixel. The reference markers of this calibration target are printed in black and, therefore, provide low fringe visibility. This limitation may be overcome in future designs by printing the reference dots of the target in a lighter color (e.g., grey). Figure 6(b) shows the direction vector amplitude $S = \sqrt {V_x^2 + V_y^2}$ [9] for the measurement system without the sample. This data is, in essence, related to the direction of the vision rays of the telecentric lens. The deflection can be calculated using Eqs. (1) and (2). Figure 6(c) shows the raw data of the corresponding direction vector amplitude of the deflection induced by the measurement sample.

Fig. 6. Measurement sample 1 (Ø25.4 mm Plano-Concave Lens, 50 mm EFL): direction vector amplitude. a) Sensor view of the sample under test; the missing data correspond to obscured features in the passive calibration board. b) Direction vector amplitude for the telecentric system only (before placing the sample). c) Sample direction vector amplitude. Noll-ordered Zernike decomposition for sample evaluation across spatial frequencies: d) Zernike terms 1:37, e) Zernike terms 38:150. f) Residual after removing 150 Zernike terms.

Figures 6(d) and 6(e) show the case where the Zernike polynomials 1-37 and 38-150 have been fitted to the raw data of Fig. 6(c), respectively. The residual in Fig. 6(f) contains the data that is not described by the Zernike fitting process of Figs. 6(d) and 6(e); i.e., adding the data of Figs. 6(d), 6(e), and 6(f) together, one obtains the data in Fig. 6(c).

A comparison of the measurement results with the theoretical values is shown in Fig. 7. Figures 7(a) and 7(c) show the results of the direction vector amplitude $S = \sqrt {V_x^2 + V_y^2}$ for the (fitted) low-order measurement data and the theoretically expected value for an EFL of 50 mm, respectively. For additional comparison, we also plot the direction vector angle $\alpha$, calculated as $\alpha = \mathrm{atan2}({{V_y},{V_x}})$, to distinguish between concave and convex surfaces. Furthermore, $\alpha$ also encodes information on the tangential component of the aberration of the sample; thus, it can be used as another useful metric during test piece assessment.

Fig. 7. Measurement sample 1 (Ø25.4 mm Plano-Concave Lens, 50 mm EFL): measured (a) and simulated (c) direction vector amplitude $S = \sqrt {V_x^2 + V_y^2} $. To distinguish between concave and convex wavefronts, the direction vector angle $\alpha = atan2({{V_y},{V_x}} )$ is shown in (b) and (d) for the measurement and simulated data, respectively.

The corresponding measurement results for a Ø25.4 mm plano-convex lens with 100 mm EFL are shown in Figs. 8 and 9. As expected, measurement sample 1 (50 mm EFL) has a direction vector amplitude that is twice as large as that of measurement sample 2 (100 mm EFL), i.e., the measurement results indicate that the first lens is faster by a factor of two.

Fig. 8. Measurement sample 2 (Ø25.4 mm Plano-Convex Lens, 100 mm EFL): direction vector amplitude. a) Sensor view of the sample under test; the missing data correspond to obscured features in the passive calibration board. b) Direction vector amplitude for the telecentric system only (before placing the sample). c) Sample direction vector amplitude. Noll-ordered Zernike decomposition for sample evaluation across spatial frequencies: d) Zernike terms 1:37, e) Zernike terms 38:150. f) Residual after removing 150 Zernike terms.

Fig. 9. Measurement sample 2 (Ø25.4 mm Plano-Convex Lens, 100 mm EFL): measured (a) and simulated (c) direction vector amplitude $S = \sqrt {V_x^2 + V_y^2} $. To distinguish between concave and convex wavefronts, the direction vector angle $\alpha = atan2({{V_y},{V_x}} )$ is shown in (b) and (d) for the measurement and simulated data, respectively.

4.3 Measurement results for commercially available spectacle lenses:

The first freeform optic measured in this work is a pair of adjustable spectacle lenses based on an Alvarez lens design [67], as shown in Fig. 10. Each lens consists of two cubic phase plates that can be sheared laterally relative to one another to adjust the power. Figure 10 shows the measurement results for three different lens shears for one adjustable lens. The results in Fig. 10 clearly show a change in the sample direction vector. The maximum amplitudes for the three different configurations are 0.06, 0.11, and 0.26, respectively. A decomposition using the Noll-ordered Zernike polynomials is shown in Fig. 11. The decomposition allows separating the direction vector amplitude into its low-frequency terms and its mid-spatial-frequency (MSF) components, which can be used to assess the errors introduced during the manufacturing process when the design prescription is available.

Fig. 10. Measurement sample 3 (off-the-shelf spectacle lenses with adjustable power): direction vector amplitude for three different configurations of the adjustable glasses. The missing data correspond to obscured features in the passive calibration board shown in the second row.

Fig. 11. Noll-ordered Zernike decomposition for sample 3. a), f), and k) (first column) show the fringe visibility of the sample for the different relative positions of the two surfaces that compose the variable EFL lens. The raw data of the measured sample direction vectors are shown in b), g), and l). The third column, c), h), and m), shows the fitted Noll-ordered Zernike polynomials that dominate the direction vector amplitude. The fitted higher-order Noll-ordered Zernike terms 38:150 are shown in d), i), and n), which provide valuable insights into the mid-spatial frequency components. Finally, the residuals for all datasets are shown in the last column. The obscured features from the calibration target are responsible for the missing data.

4.4 Measurement results of a micro-optic array consisting of cubic-phase plates:

The second freeform optic measured in this work is a micro-optic array consisting of cubic phase plates. For this sample, we selected a single unit without the markers to avoid data dropout. The measurement results in Fig. 12 show that a single array cell produces a direction vector amplitude between 0.062 and 0.097.

Fig. 12. Measurement sample 4 (micro-optic array consisting of cubic phase plates): a) Sensor view of the sample under test. b) Direction vector amplitude for the telecentric system only (before placing the sample). c) Direction vector amplitude of the sample. The Noll-ordered Zernike decomposition for sample evaluation across spatial frequency ranges is shown in d) Zernike terms 1:37 and e) Zernike terms 38:150, with the residual shown in f).

Fig. 13. Measurement sample 5 (Alvarez lens array, single unit): direction vector amplitude. a) Sensor view of the sample under test; the missing data correspond to obscured features in the passive calibration board. b) Direction vector amplitude for the telecentric system only (before placing the sample). c) Sample direction vector amplitude. Noll-ordered Zernike decomposition for sample evaluation across spatial frequencies: d) Zernike terms 1:37. e) Zernike terms 38:150. f) Residual after removing 150 Zernike terms.

4.5 Measurement results for a micro-lens array consisting of Alvarez lens cells:

Similar to the case of the spectacle lenses, we measure the vision rays of a micro-optic Alvarez lens array that consists of two micro-optic array elements with cubic phase profiles (i.e., two arrays of the type of sample in Section 4.4). The results in Fig. 13 show the direction vectors of the cell where the two cubic phase elements overlap. As in the previous cases, the regions with missing data are related to low fringe visibility caused by the reference markers of the calibration board.

4.6 Reprojection error within the Vision ray calibration:

To evaluate the vision ray calibration, we have calculated the reprojection error in the calibration coordinate system as the Euclidean distance between the recorded coordinates $\vec{x}_w^m$ and the vision ray reprojection $\vec{x}_m$ given by Eq. (3), for every pixel, as

$${\delta _m} = \|\vec{x}_w^m - {\vec{x}_m}\|_2,$$
the resulting error distributions in Fig. 14 show that for the spherical lenses (samples 1 and 2), the RMS reprojection error is smaller than 5 µm, while for sample 3 it is ∼10 µm. For sample 4, the micro-optic array with cubic phase plates, the RMSE is 55 µm. The micro-lens array consisting of Alvarez lens cells, sample 5, has an RMSE of 90 µm.
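For illustration, the sketch below (a synthetic single-pixel example; all values hypothetical) evaluates this reprojection error and its RMS:

```python
import numpy as np

# Reprojection error for one pixel: Euclidean distance between each recorded
# piercing point and the point predicted by the fitted vision ray.
z = np.arange(30) * 1.0
rng = np.random.default_rng(2)
x_meas = 0.5 + 0.002 * z + rng.normal(0, 5e-3, z.size)
y_meas = -0.3 - 0.001 * z + rng.normal(0, 5e-3, z.size)

Vx, x0 = np.polyfit(z, x_meas, 1)
Vy, y0 = np.polyfit(z, y_meas, 1)

delta = np.hypot(x_meas - (x0 + Vx * z), y_meas - (y0 + Vy * z))
rmse = np.sqrt(np.mean(delta**2))    # per-sample histograms of delta -> Fig. 14
```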

Fig. 14. Reprojection error (Euclidean distance) for various samples.

4.7 Three-dimensional visualization of the vision rays of all samples:

For visualization purposes, the measured trajectories of the vision rays for all samples are plotted in 3D in Fig. 15. Figure 15(b) shows the vision rays that pass through the center of the sample. For this dataset, the minimum bundle diameter for configurations 1, 2, and 3 is found near ∼50 mm, ∼5 mm, and ∼1 mm, respectively. The wavefront of the Alvarez cell in Fig. 15(d) has more mid-spatial-frequency components and, therefore, a less regular vision ray profile compared to the other cases; see in particular the results in Fig. 15(a).

Fig. 15. (all measurement samples): Vision rays for a) the spherical lenses with 50 mm and 100 mm EFL, b) the commercial spectacle lenses with adjustable power, c) a single cubic phase element of the micro-optic array, and d) a single cell of the Alvarez micro-lens array. Each ray has been color-coded as a function of the direction vector amplitudes. For (b) and (d), only the vision rays of the relevant center region of the lenses are plotted.

Fig. 16. Difference in the focus point between sample 1 and sample 2. a) The focus point for the 50 mm EFL lens is located at z = −275.83 mm. b) The focus point for the 100 mm EFL lens is located at z = −126.33 mm. The distance between the two focus points is 149.5 mm.

For samples 1 and 2, the vision rays in Fig. 16 are plotted in their individual coordinate systems such that, for visualization purposes, z = 0 is the plane where the bundle diameter is smallest. Nevertheless, the focus points of the spherical samples in the calibration coordinate system clearly show the difference in the focal lengths of the two samples.

5. Wavefront reconstruction

The directions of the measured refracted rays can be interpreted as Poynting vectors, which are normal to the propagating wavefront [9,17]. Thus, the wavefront can be reconstructed by integration of the local wavefront slopes. Several deterministic wavefront reconstruction algorithms are described in the literature. Generally, they can be classified into two categories: zonal and modal integration techniques. In zonal algorithms, the wavefront is recovered from a set of linear equations that describe the local (zonal) relation between the wavefront and its derivatives in the x- and y-direction [44,68,69]. The intrinsic zonal property of this method translates into highly accurate local wavefront estimation; however, the same property makes this technique susceptible to noise. On the other hand, modal integration reconstructs the wavefront as a superposition of linearly independent (orthogonal) analytical polynomials that form a basis. The basis needs to be differentiable; this allows a fit in the slope domain that returns the corresponding weighting coefficient for every mode in the basis. As a result of this “global” fitting, the susceptibility to noise and random error is reduced. Another advantage of modal fitting is the direct relation between the basis and physical parameters in the measurements. Common bases for wavefront reconstruction include Zernike [70], Legendre [71], and Chebyshev [72] polynomials, radial basis functions [73], B-splines [74], complex exponentials [75], and Q-Forbes polynomials [76].

Here we apply the modal integration developed in [70], where the wavefront is calculated as

$$W({x,y}) = \mathop \sum \limits_{i = 1}^N {a_i}{f_i}({x,y})$$
with ${f_i}({x,y})$ the Noll-ordered Zernike basis. The estimated vision rays are the gradient of the geometric wavefront [77], i.e., for this case $({{V_x},{V_y}}) = \vec{\nabla}W$, and
$${V_k} = \mathop \sum \limits_{i = 1}^N {a_i}\frac{{\partial {f_i}({x,y})}}{{\partial k}},\;\; k = x,y.$$
Since this method creates a numerically orthogonal transformation based on analytical polynomial sets, the technique can be applied to arbitrarily shaped apertures.
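A minimal sketch of this modal slope fit is given below (our own illustration with only five hard-coded low-order Noll Zernike terms; the paper fits 150 terms using the numerically orthogonalized basis of [70]):

```python
import numpy as np

# Fit stacked Zernike gradients to the measured slopes (Vx, Vy) in a
# least-squares sense, then evaluate the wavefront from the coefficients.
def zernike_and_grads(x, y):
    s3, s6 = np.sqrt(3.0), np.sqrt(6.0)
    one = np.ones_like(x)
    Z  = np.stack([2*x, 2*y, s3*(2*x**2 + 2*y**2 - 1), 2*s6*x*y, s6*(x**2 - y**2)], -1)
    Zx = np.stack([2*one, 0*one, 4*s3*x, 2*s6*y,  2*s6*x], -1)
    Zy = np.stack([0*one, 2*one, 4*s3*y, 2*s6*x, -2*s6*y], -1)
    return Z, Zx, Zy

# synthetic test: W = 0.3 * defocus + 0.1 * oblique astigmatism
u = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(u, u)
m = (X**2 + Y**2) <= 1            # circular aperture mask
x, y = X[m], Y[m]
Z, Zx, Zy = zernike_and_grads(x, y)
a_true = np.array([0.0, 0.0, 0.3, 0.1, 0.0])
Vx, Vy = Zx @ a_true, Zy @ a_true             # "measured" slopes

A = np.vstack([Zx, Zy])                       # stacked gradient design matrix
a_fit, *_ = np.linalg.lstsq(A, np.concatenate([Vx, Vy]), rcond=None)
W = Z @ a_fit                                 # reconstructed wavefront samples
```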

The reconstructed geometric wavefronts are shown in Figs. 17–19.

Fig. 17. (measurement results, samples 1 and 2): Geometric wavefront reconstruction using the vision ray measurements for the samples of Section 4.2.

Fig. 18. (measurement results, sample 3): Geometric wavefront reconstruction using the vision ray measurements for the sample of Section 4.3.

Fig. 19. (measurement results, samples 4 and 5): Geometric wavefront reconstruction using the vision ray measurements for the samples of Sections 4.4 and 4.5.

6. Discussions

This work proposes a vision ray metrology system that consists of a cover-glass-free active target, a translation stage, and a telecentric camera. The translation stage steps the camera, the telecentric lens, and the measurement sample together along the z-axis to record the xyz-coordinates for each vision ray (i.e., camera pixel) without changing their relative positions. Each vision ray is then estimated using a simple line fit in 3D. This method relies on the information in the vision ray direction and is inherently robust against long-term drifts that typically occur in deflectometry (e.g., drift of the camera-display baseline). Deflectometry techniques rely on mathematical optimization to estimate the positions of the camera and the display and may need recalibration with a flat reference sample.

The differential nature of the measurement has another advantage compared to deflectometry. In vision ray metrology, the deflection of the vision rays is measured, but the distance between the telecentric lens and the sample remains unchanged throughout the measurement. Hence, all vision rays reaching the camera have the same entry point on the surface of the sample, so the line fitting in 3D can be applied directly. In contrast, deflectometry often requires moving the sample within the measurement volume to estimate the geometric constraints using optimization solvers. The geometric constraints are needed to trace the rays in reverse from the camera pixel to the display pixel. However, as the sample is placed at different locations, the measured rays at a given camera pixel are not incident on the same surface point. As a result, sub-pixel interpolation or other methods must be considered to obtain the sample surface shape. A comparison between vision ray metrology and deflectometry is shown in Table 1.


Table 1. Comparison of Vision Ray Metrology and Deflectometry

The measurement results in Section 4 show that the vision ray data can be obtained accurately for various freeform optical samples while having less stringent requirements on the sample alignment. Figures 7 and 9 show an excellent agreement with the simulation. An interesting detail is that the slope information ${V_x}$, ${V_y}$, or $S = \sqrt {V_x^2 + V_y^2}$ can be used to highlight certain surface properties. Figures 6, 8, 11, 12, and 13 show that further data processing permits the separation of the slowly varying terms from the mid-spatial-frequency components in the direction vector amplitude. This information could be further used for assessment during the fabrication process of freeform surfaces.

Section 5 described how the vision ray metrology approach allows recovering the geometric wavefront of a sample under test using solely ${V_x}$ and ${V_y}$ (the wavefront slopes). Using these vector components, it is possible to recover the vector normal to the surface as described in [78]. Modal integration of the wavefront slopes can also be used to interpolate the missing data points, which for the current dataset are mainly caused by the dark reference points on the calibration target. Notably, when using a different reference target with a lighter grey level, it is possible to obtain a fringe signal on the reference points. The use of a different target would overcome the missing data points and is part of future work. The resulting high-resolution dataset could then be processed using a combination of zonal and modal integration methods [79], which makes the technique applicable to highly irregular surfaces.

Supplement 1 contains several sections on error budget considerations. Section 2 of Supplement 1 discusses the random component of the vision ray error and the resulting wavefront error. Monte Carlo simulations suggest that the random error in the measured vision ray could reach 500 nrad when using 50 planes and ∼150 nrad when using 150 planes for $\mathrm{\Delta}z = 10$ mm. Nevertheless, the dominant errors are systematic and include system drifts. Section 3 of Supplement 1 discusses systematic errors due to system drifts. That section concludes that the vision ray metrology system is insensitive to lateral drifts but sensitive to axial drifts and changes in the projector orientation angle. These systematic errors could be problematic, especially if some time has passed since the last calibration. Section 4 of Supplement 1 discusses the resulting wavefront error when differential measurements are applied, i.e., when vision rays are measured back-to-back for the telecentric lens with no sample and the telecentric lens with the sample. The results show that for this specific configuration, the systematic errors cancel and a wavefront error on the order of 123 nm PV (15.9 nm RMS) can be achieved.

The results presented in this manuscript are technically not fully differential because the image acquisition is not optimized, resulting in very long measurement times, where drifts are to be expected even for back-to-back measurements. Other systematic errors that are present but uncompensated originate from various sources:

  • The flatness of the target (or flatness of the display, if a display is used)
  • The ability to measure the reference markers of the target accurately
  • Thermal drifts, in particular if they occur within back-to-back measurements. This includes thermal expansion of the sample or the calibration board, changes in the aberrations of the telecentric lens or the system projector, and drifts of the projector orientation and location.
  • Linear stage errors (positioning errors, as well as yaw, roll, and pitch errors)
  • Sample drifts (displacement or tilt) relative to the telecentric system during measurements
  • Other temporal changes (vibrations, etc.)

One of the primary sources of error in our current configuration is the long measurement time (∼7 hours per sample, including post-processing) due to the slow implementation of the image acquisition system. However, we estimate that an optimized system (with hardware-triggered projectors) operating at 5 fps could measure the given sample (including post-processing) within 17 min. A sophisticated acquisition system could further reduce the measurement time to below 5 min. We want to highlight that although geometric wavefront and surface height profiles are nowadays the most commonly used metrics in optical metrology, the reported vision ray amplitude may serve as a decisive metric for assessing the geometrical properties of the surface under test. Optical shops and lens manufacturers could benefit from this information in their manufacturing processes.

7. Conclusions

Vision ray techniques are known in the vision community to provide image formation models even when conventional techniques fail.

This work extends this approach and proposes a Vision Ray Metrology system that estimates the geometric wavefront of a measurement sample using the sample-induced deflection in the vision rays [52]. A critical aspect is the use of a telecentric imaging system for the sensor, which allows sample placement within the constant field of view of the sensor without additional alignment concerns. In contrast to PMD, this work relies on differential measurements, and hence, the absolute position and orientation between target and camera do not need to be known. This optical configuration significantly reduces the complexity of the reconstruction algorithms; unlike deflectometry, the proposed vision ray metrology system does not require mathematical optimization algorithms for calibration and reconstruction, as the vision rays are obtained using a simple 3D line fit. Furthermore, the cover glass of the display [35–37] is a significant error source in deflectometry. In this work, we propose an active target consisting of a passive diffusive calibration target and a fringe projector (see Fig. 4) to avoid any cover-glass-related problems.

We have demonstrated the feasibility of this approach via simulation and experiments for both spherical and freeform surfaces. For all samples, the estimated vision ray slopes ${V_x}$ and ${V_y}$ have been used to estimate the geometric wavefront [77] using modal integration techniques [70], which can be translated into a height map if the material properties are known.

The accurate phase measurements produce a notably small random error of ∼500 nrad in the estimated vision rays. However, significant systematic errors as well as drifts limit the actual measurement uncertainty. Characterizing and compensating these error sources is part of future research.

This work may be extended in the future to multi-freeform surfaces if multiple measurements are recovered for different sample positions and orientations. Thus, the proposed testing method provides a simple, low-cost, and optical shop floor-friendly way to measure the wavefronts of optical samples.

This work has numerous applications, but in particular, the metrology and alignment of freeform optics.

Funding

Industry members of the Center for Freeform Optics (https://centerfreeformoptics.org); National Science Foundation (IIP-1338877, IIP-1338898, IIP-1822026, IIP-1822049).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. J. Rolland, M. Davies, T. Suleski, C. Evans, A. Bauer, J. Lambropoulos, and K. Falaggis, “Freeform optics for imaging,” Opt. Soc. Am. (2021), accepted for publication.

2. S. M. Arnold, “How To Test An Asphere With A Computer Generated Hologram,” Hologr. Opt. Opt. Comput. Gener. 1052, 191 (1989). [CrossRef]  

3. C. B. Kreischer, “Retrace error: interferometry’s dark little secret,” Proc. SPIE 8884, 88840X (2013). [CrossRef]  

4. C. J. Evans and J. B. Bryan, “Compensation for Errors Introduced by Nonzero Fringe Densities in Phase-Measuring Interferometers,” CIRP Ann. 42(1), 577–580 (1993). [CrossRef]  

5. H. Shahinian, C. D. Hovis, and C. J. Evans, “Effect of retrace error on stitching coherent scanning interferometry measurements of freeform optics,” Opt. Express 29(18), 28562 (2021). [CrossRef]  

6. T. Blalock, B. W. Myer, I. Ferralli, M. J. Brunelle, and T. Lynch, “Metrology for the manufacturing of freeform optics,” Proc. SPIE 10448, 1044817 (2017). [CrossRef]  

7. “Geometrical product specifications (GPS)—acceptance and reverification tests for coordinate measuring machines (CMM),” ISO 10360 (2020).

8. “Geometrical product specifications (GPS)—coordinate measuring machines (CMM): technique for determining the uncertainty of measurement,” ISO 15530 (2011).

9. T. Bothe, W. Li, M. Schulte, C. von Kopylow, R. B. Bergmann, and W. P. O. Jüptner, “Vision ray calibration for the quantitative geometric description of general imaging and projection optics in metrology,” Appl. Opt. 49(30), 5851 (2010). [CrossRef]  

10. C. Faber, E. Olesch, R. Krobot, and G. Häusler, “Deflectometry challenges interferometry: the competition gets tougher!” Interferom. XVI Tech. Anal. 8493, 84930R (2012). [CrossRef]  

11. G. Häusler, C. Faber, E. Olesch, and S. Ettl, “Deflectometry vs . Interferometry,” Proc. SPIE 8788, 87881C (2013). [CrossRef]  

12. C. von Kopylow and R. B. Bergmann, “Optical Metrology -Micro Metal Forming,” in F. Vollertsen, ed. (Springer Berlin Heidelberg, 2013), pp. 392–404.

13. R. B. Bergmann, J. Burke, and C. Falldorf, “Precision optical metrology without lasers,” Int. Conf. Opt. Photonic Eng. (icOPEN 2015) 9524, 952403 (2015).

14. M. C. Knauer, J. Kaminski, and G. Hausler, “Phase measuring deflectometry: a new approach to measure specular free-form surfaces,” Proc. SPIE 5457, 366 (2004). [CrossRef]  

15. M. Fischer, M. Petz, and R. Tutsch, “Evaluation of LCD monitors for deflectometric measurement systems,” Opt. Sens. Detect. 7726, 77260V (2010). [CrossRef]  

16. L. Huang, M. Idir, C. Zuo, and A. Asundi, “Review of phase measuring deflectometry,” Opt. Lasers Eng. 107, 247–257 (2018). [CrossRef]  

17. H. Canabal, “Automatic wavefront measurement technique using a computer display and a charge-coupled device camera,” Opt. Eng. 41(4), 822 (2002). [CrossRef]  

18. M. C. Knauer, C. Richter, P. Vogt, and G. Häusler, “Measuring the refractive power with deflectometry in transmission,” DGaO Proceedings 2008, pp. 7–8 (2008).

19. J. Vargas, J. A. Gómez-Pedrero, J. Alonso, and J. A. Quiroga, “Deflectometric method for the measurement of user power for ophthalmic lenses,” Appl. Opt. 49(27), 5125–5132 (2010). [CrossRef]  

20. J. L. Flores, B. Bravo-Medina, and J. A. Ferrari, “One-frame two-dimensional deflectometry for phase retrieval by addition of orthogonal fringe patterns,” Appl. Opt. 52(26), 6537–6542 (2013). [CrossRef]  

21. T. Liu, C. Zhou, Y. Liu, S. Si, and Z. Lei, “Deflectometry for phase retrieval using a composite fringe,” Opt. Appl. 44(3), 451–461 (2014). [CrossRef]  

22. J. L. Flores, R. Legarda-Saenz, and G. Garcia-Torales, “Color deflectometry for phase retrieval using phase-shifting methods,” Opt. Commun. 334, 298–302 (2015). [CrossRef]  

23. L. Jiang, X. Zhang, F. Fang, X. Liu, and L. Zhu, “Wavefront aberration metrology based on transmitted fringe deflectometry,” Appl. Opt. 56(26), 7396 (2017). [CrossRef]  

24. D. Wang, P. Xu, Z. Gong, Z. Xie, R. Liang, X. Xu, M. Kong, and J. Zhao, “Transmitted wavefront testing with large dynamic range based on computer-aided deflectometry,” J. Opt. 20(6), 065705 (2018). [CrossRef]  

25. D. Wang, P. Xu, Z. Wu, X. Fu, R. Wu, M. Kong, J. Liang, B. Zhang, and R. Liang, “Simultaneous multisurface measurement of freeform refractive optics based on computer-aided deflectometry,” Optica 7(9), 1056 (2020). [CrossRef]  

26. O. Huerta-Carranza, M. Avendaño-Alejo, and R. Díaz-Uribe, “Null screens to evaluate the shape of freeform surfaces: progressive addition lenses,” Opt. Express 29(17), 27921 (2021). [CrossRef]  

27. D. Wang, Y. Yin, J. Dou, M. Kong, X. Xu, L. Lei, and R. Liang, “Calibration of geometrical aberration in transmitted wavefront testing of refractive optics with deflectometry,” Appl. Opt. 60(7), 1973 (2021). [CrossRef]  

28. A. Pak, “The concept and implementation of smooth generic camera calibration,” Interferom. XVIII 9960, 99600I (2016). [CrossRef]  

29. T. F. Blalock, B. D. Cox, and B. Myer, “Measurement of mid-spatial frequency errors on freeform optics using deflectometry,” Proc. SPIE 11056, 110561H (2019). [CrossRef]  

30. P. Su, Y. Wang, J. H. Burge, K. Kaznatcheev, and M. Idir, “Non-null full field X-ray mirror metrology using SCOTS: a reflection deflectometry approach,” Opt. Express 20(11), 12393 (2012). [CrossRef]  

31. M. Fischer, Deflektometrie in Transmission - Ein neues Verfahren zur Erfassung der Geometrie asphärischer refraktiver Optiken (Shaker, 2016).

32. B. Komander, D. Lorenz, M. Fischer, M. Petz, and R. Tutsch, “Data fusion of surface normals and point coordinates for deflectometric measurements,” J. Sensors Sens. Syst. 3(2), 281–290 (2014). [CrossRef]  

33. M. Petz, M. Fischer, and R. Tutsch, “Three-dimensional shape measurement of aspheric refractive optics by pattern transmission photogrammetry,” Proc. SPIE 7239, 723906 (2009). [CrossRef]  

34. M. Petz and R. Tutsch, “Reflection grating photogrammetry: a technique for absolute shape measurement of specular free-form surfaces,” Opt. Manuf. Test. VI 5869, 58691D (2005). [CrossRef]  

35. M. Petz, H. Dierke, and R. Tutsch, “Photogrammetric determination of the refractive properties of liquid crystal displays,” Tech. Mess. 86(6), 319–324 (2019). [CrossRef]  

36. T. Reh, W. Li, J. Burke, and R. B. Bergmann, “Improving the Generic Camera Calibration technique by an extended model of calibration display,” J. Eur. Opt. Soc. Rapid Publ. 9, 14044 (2014). [CrossRef]  

37. D. Maestro-Watson, A. Izaguirre, and N. Arana-Arexolaleiba, “LCD screen calibration for deflectometric systems considering a single layer refraction model,” in 2017 IEEE International Workshop of Electronics, Control, Measurement, Signals and Their Application to Mechatronics (ECMSM) (IEEE, 2017), (1), pp. 1–6.

38. M. Petz, M. Fischer, and R. Tutsch, “Systematic errors in deflectometry induced by use of liquid crystal displays as reference structure,” in 21st IMEKO TC2 Symposium on Photonics in Measurement (2013).

39. T. Zhou, K. Chen, H. Wei, and Y. Li, “Improved system calibration for specular surface measurement by using reflections from a plane mirror,” Appl. Opt. 55(25), 7018 (2016). [CrossRef]  

40. J. Bartsch, M. Kalms, and R. B. Bergmann, “Improving the calibration of phase measuring deflectometry by a polynomial representation of the display shape,” J. Eur. Opt. Soc. Rapid Publ. 15(1), 20 (2019). [CrossRef]  

41. Z. Zhang, Y. Liu, S. Huang, Z. Niu, J. Guo, N. Gao, F. Gao, and X. Jiang, “Full-field 3D shape measurement of specular surfaces by direct phase to depth relationship,” Opt. Metrol. Insp. Ind. Appl. IV 10023, 100230X (2016). [CrossRef]  

42. S. Allgeier, U. Gengenbach, B. Köhler, K.-M. Reichert, and V. Hagenmeyer, “Reproducibility of two calibration procedures for phase-measuring deflectometry,” Proc. SPIE 11490, 114900G (2020). [CrossRef]  

43. A. P. Fard, “Low Uncertainty Surface Area Measurement Using Deflectometry,” disseration (The University of North Carolina at Charlotte, 2018).

44. L. Huang, J. Xue, B. Gao, C. Zuo, and M. Idir, “Zonal wavefront reconstruction in quadrilateral geometry for phase measuring deflectometry,” Appl. Opt. 56(18), 5139 (2017). [CrossRef]  

45. L. R. Graves, H. Choi, W. Zhao, C. J. Oh, P. Su, T. Su, and D. W. Kim, “Model-free deflectometry for freeform optics measurement using an iterative reconstruction technique,” Opt. Lett. 43(9), 2110 (2018). [CrossRef]  

46. M. Aftab, J. H. Burge, G. A. Smith, L. Graves, C. jin Oh, and D. W. Kim, “Modal Data Processing for High Resolution Deflectometry,” Int. J. Precis. Eng. Manuf. - Green Technol. 6(2), 255–270 (2019). [CrossRef]  

47. L. Huang, J. Xue, B. Gao, C. McPherson, J. Beverage, and M. Idir, “Modal phase measuring deflectometry,” Opt. Express 24(21), 24649 (2016). [CrossRef]  

48. W. Li, P. Huke, J. Burke, C. von Kopylow, and R. B. Bergmann, “Measuring deformations with deflectometry,” Interferom. XVII Tech. Anal. 9203, 92030F (2014). [CrossRef]  

49. R. Seßner, Richtungscodierte Deflektometrie durch Telezentrie (Erlangen, 2009).

50. G. Häusler and G. Schneider, “Testing optics by experimental ray tracing with a lateral effect photodiode,” Appl. Opt. 27(24), 5160 (1988). [CrossRef]  

51. T. Binkele, R. Dylla-Spears, M. A. Johnson, D. Hilbig, M. Essameldin, T. Henning, and F. Fleischmann, “Characterization of gradient index optical components using experimental ray tracing,” in Photonic Instrumentation Engineering VI, Y. G. Soskind, ed. (SPIE, 2019), p. 13.

52. M. D. Grossberg and S. K. Nayar, “General Imaging Model and a Method for Finding its Parameters,” in Eighth International Conference on Computer Vision, 108–115 (2001).

53. P. Sturm and S. Ramalingam, A Generic Calibration Concept : Theory and Algorithms (INRIA, 2003).

54. S. Ramalingam, P. Sturm, and S. K. Lodha, Theory and Experiments towards Complete Generic Calibration (INRIA, 2006), p. 22.

55. W. Li, M. Schulte, T. Bothe, C. Kopylow, N. Kopp, and W. Juptner, “Beam based calibration for optical imaging device,” in 2007 3DTV Conference (IEEE, 2007), 1, pp. 1–4.

56. A. Pak, “Towards smooth generic camera calibration,” Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory 2014 (2014).

57. D. Uhlig and M. Heizmann, “A Calibration Method for the Generalized Imaging Model with Uncertain Calibration Target Coordinates,” in Proceedings of the Asian Conference on Computer Vision (ACCV) (2020).

58. O. N. Stavroudis, The Mathematics of Geometrical and Physical Optics (John Wiley & Sons., 2006).

59. T. Reh, W. Li, A. Gesierich, and R. B. Bergmann, “Vision Ray Camera Calibration for Small Field of View,” in Proceedings of the Deutsche Gesellschaft für Angewandte Optik (DGAO) (Brunswick, Germany, 21–25 May 2013), pp. A019-9. Available online at http://www.dgao-proceedings.de (accessed 29 October 2021).

60. J. Bartsch, Y. Sperling, and R. B. Bergmann, “Efficient vision ray calibration of multi-camera systems,” Opt. Express 29(11), 17125 (2021). [CrossRef]  

61. S. Zhang, “Active versus passive projector nonlinear gamma compensation method for high-quality fringe pattern generation,” Proc. SPIE 9110, 911002 (2014). [CrossRef]  

62. J. H. Bruning, D. R. Herriott, J. E. Gallagher, D. P. Rosenfeld, A. D. White, and D. J. Brangaccio, “Digital wavefront measuring interferometer for testing optical surfaces and lenses,” Appl. Opt. 13(11), 2693–2703 (1974). [CrossRef]  

63. M. Servin, J. A. Quiroga, and M. Padilla, Fringe Pattern Analysis for Optical Metrology: Theory, Algorithms, and Applications (Wiley, 2014).

64. C. E. Towers, D. P. Towers, and J. D. C. Jones, “Generalized frequency selection in multifrequency interferometry,” Opt. Lett. 29(12), 1348 (2004). [CrossRef]  

65. K. Falaggis, A. H. Ramirez Andrade, R. Porras-Aguilar, D. P. Towers, and C. E. Towers, “Multi-wavelength phase unwrapping: a versatile tool for extending the measurement range, breaking the Nyquist limit, and encrypting optical communications,” in Interferometry XIX, M. B. North Morris, K. Creath, J. Burke, and A. D. Davies, eds. (SPIE, 2018), (August), p. 39.

66. P. W. Holland and R. E. Welsch, “Robust regression using iteratively reweighted least-squares,” Commun. Stat. - Theory Methods 6(9), 813–827 (1977). [CrossRef]  

67. L. W. Alvarez, “Two-Element variable-power spherical lens,” US Patent 3,305,294 (21 February 1967).

68. W. H. Southwell, “Wave-front estimation from wave-front slope measurements,” J. Opt. Soc. Am. 70(8), 998 (1980).

69. G. A. Smith, “2D zonal integration with unordered data,” Appl. Opt. 60(16), 4662 (2021).

70. J. Ye, W. Wang, Z. Gao, Z. Liu, S. Wang, P. Benítez, J. C. Miñano, and Q. Yuan, “Modal wavefront estimation from its slopes by numerical orthogonal transformation method over general shaped aperture,” Opt. Express 23(20), 26208 (2015).

71. Z. Xia, X. Li, Q. Lu, C. Wei, J. Shao, and Z. Wu, “Wavefront reconstruction in square region based on improved two-dimension Legendre polynomials,” Proc. SPIE 10839, 1083915 (2019).

72. M. Aftab, J. H. Burge, G. A. Smith, L. R. Graves, C. J. Oh, and D. W. Kim, “Chebyshev gradient polynomials for high resolution surface and wavefront reconstruction,” Proc. SPIE 10742, 1074211 (2018).

73. L. Huang, M. Idir, C. Zuo, K. Kaznatcheev, L. Zhou, and A. Asundi, “Shape reconstruction from gradient data in an arbitrarily-shaped aperture by iterative discrete cosine transforms in Southwell configuration,” Opt. Lasers Eng. 67, 176–181 (2015).

74. S. Ettl, E. Olesch, J. Kaminski, and G. Häusler, “Fast and robust 3D shape reconstruction from gradient data,” DGaO Proceedings 108, 26 (2007).

75. K. R. Freischlad and C. L. Koliopoulos, “Modal estimation of a wave front from difference measurements using the discrete Fourier transform,” J. Opt. Soc. Am. A 3(11), 1852 (1986).

76. A. Ramirez Andrade, R. Porras-Aguilar, and K. Falaggis, “Numerical integration of slope data with application to deflectometry,” in Interferometry XX, M. B. North Morris, K. Creath, and R. Porras-Aguilar, eds. (SPIE, 2020), p. 7.

77. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th expanded ed. (Cambridge University Press, 2000).

78. J. DelOlmo-Márquez, G. Castillo-Santiago, M. Avendaño-Alejo, I. Moreno, E. Román-Hernández, and M. C. López-Bautista, “Ronchi-Hartmann type null screens for testing a plano-freeform surface with a detection plane inside a caustic surface,” Opt. Express 29(15), 23300 (2021).

79. J. Espinosa, D. Mas, J. Pérez, and C. Illueca, “Optical surface reconstruction technique through combination of zonal and modal fitting,” J. Biomed. Opt. 15(2), 026022 (2010).

Supplementary Material (1)

Supplement 1

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (19)

Fig. 1. Vision ray image formation model. Each pixel collects light from a closely arranged ray bundle represented by a principal (chief) ray. A single vision ray passes through all control points. Parameters defining the vision ray: offset $(x_0, y_0)$ and slope $(V_x, V_y)$.
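The line fit behind Fig. 1 is easy to sketch in code. The following minimal Python example (an illustration with hypothetical variable names, not the authors' implementation) fits one pixel's control points, measured at several target z-positions, with independent least-squares lines in x and y to obtain the offset $(x_0, y_0)$ and slope $(V_x, V_y)$:

    import numpy as np

    def fit_vision_ray(z, x, y):
        """Least-squares line fit through one pixel's control points."""
        Vx, x0 = np.polyfit(z, x, deg=1)  # slope and offset in x
        Vy, y0 = np.polyfit(z, y, deg=1)  # slope and offset in y
        return x0, y0, Vx, Vy

    # Example: control points of one pixel at three target z-positions (mm)
    z = np.array([0.0, 50.0, 100.0])
    x = np.array([1.20, 1.45, 1.70])
    y = np.array([-0.30, -0.28, -0.26])
    x0, y0, Vx, Vy = fit_vision_ray(z, x, y)  # -> Vx = 0.005, Vy = 0.0004

With more than two z-planes the fit is overdetermined, which is what makes a reprojection-error diagnostic (cf. Fig. 14 and Eq. (6) below) meaningful.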
Fig. 2. Comparison between a non-telecentric and a telecentric imaging system during sample assessment based on the vision ray model. a) For the non-telecentric imaging system, the number of vision rays on the sample depends on the field of view. In contrast, b) for the telecentric imaging system, the number of vision rays incident on the sample surface depends only marginally on the sample placement along the z-axis.
Fig. 3. Schematic of measurement principle. From all incoming rays, the imaging detector captures only those rays that match the vision rays of the imaging system.
Fig. 4. Measurement setup: An active target comprises a projector and a well-defined diffuse passive calibration board. Fringes are projected onto a diffuse reference target, and only the camera and the sample are stepped along the z-axis. At all times, both the distance between the sample and telecentric lens $d_s$ and the distance between projector and calibration target $d_{pt}$ remain constant.
Fig. 5. High-resolution extraction of the spatial information using the proposed active target. a) At every z-position, vertical and horizontal fringes are projected onto the target to generate absolute phase measurements in the x and y directions. b) The spatial XY-location of each reference marker is used to obtain a high-resolution spatial map for both the x- and y-coordinates. This is a two-step procedure: first, a Delaunay triangulation mesh is created from the spatial information at the sparse features on the target and the absolute phase values at those locations; if data are available from several cameras or z-planes, including them can reduce the error. Afterward, the absolute phase maps obtained from a) serve as query points to estimate each pixel's high-resolution x- and y-coordinates (i.e., its vision ray).
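The interpolation step of Fig. 5b can be prototyped with SciPy's Delaunay-based LinearNDInterpolator. The sketch below is a minimal illustration under assumed synthetic marker data (the linear phase-to-position relation is a placeholder, not a property of the real target):

    import numpy as np
    from scipy.interpolate import LinearNDInterpolator  # Delaunay-based

    # Synthetic stand-ins: detected markers with known XY (mm) and their
    # measured absolute phases (in practice these come from Fig. 5a).
    rng = np.random.default_rng(0)
    x_m = rng.uniform(0.0, 100.0, 200)
    y_m = rng.uniform(0.0, 100.0, 200)
    phi_x_m, phi_y_m = 0.5 * x_m, 0.5 * y_m

    # Step 1: Delaunay triangulation of the sparse markers in phase space.
    pts = np.column_stack([phi_x_m, phi_y_m])
    to_x = LinearNDInterpolator(pts, x_m)
    to_y = LinearNDInterpolator(pts, y_m)

    # Step 2: dense per-pixel phase maps serve as query points, yielding a
    # high-resolution XY map (NaN outside the triangulated region).
    phi_x, phi_y = np.meshgrid(np.linspace(5, 45, 480), np.linspace(5, 45, 640))
    X, Y = to_x(phi_x, phi_y), to_y(phi_x, phi_y)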
Fig. 6. Measurement sample 1 (Ø25.4 mm Plano-Concave Lens, 50 mm EFL): Direction vector amplitude. a) Sensor view of the sample under test; the missing data correspond to obscured features in the passive calibration board. b) Direction vector amplitude for the telecentric system only (before placing the sample). c) Sample direction vector. Noll-ordered Zernike decomposition for sample evaluation across spatial frequencies: d) Zernike terms 1:37, e) Zernike terms 38:150, f) residual after removing 150 Zernike terms.
Fig. 7. Measurement sample 1 (Ø25.4 mm Plano-Concave Lens, 50 mm EFL): measured (a) and simulated (c) direction vector amplitude $S = \sqrt{V_x^2 + V_y^2}$. To distinguish between concave and convex wavefronts, the direction vector angle $\alpha = \mathrm{atan2}(V_y, V_x)$ is shown in (b) and (d) for the measured and simulated data, respectively.
Fig. 8. Measurement sample 2 (Ø25.4 mm Plano-Convex Lens, 100 mm EFL): Direction vector amplitude. a) Sensor view of the sample under test; the missing data correspond to obscured features in the passive calibration board. b) Direction vector amplitude for the telecentric system only (before placing the sample). c) Sample direction vector. Noll-ordered Zernike decomposition for sample evaluation across spatial frequencies: d) Zernike terms 1:37, e) Zernike terms 38:150, f) residual after removing 150 Zernike terms.
Fig. 9. Measurement sample 2 (Ø25.4 mm Plano-Convex Lens, 100 mm EFL): measured (a) and simulated (c) direction vector amplitude $S = \sqrt{V_x^2 + V_y^2}$. To distinguish between concave and convex wavefronts, the direction vector angle $\alpha = \mathrm{atan2}(V_y, V_x)$ is shown in (b) and (d) for the measured and simulated data, respectively.
Fig. 10. Measurement sample 3 (off-the-shelf spectacle lenses with adjustable power): direction vector amplitude for three different configurations of the adjustable glasses. The missing data correspond to obscured features in the passive calibration board shown in the second row.
Fig. 11. Noll-ordered Zernike decomposition for sample 3. a), f), and k) (first column) show the fringe visibility of the sample for the different relative positions of the two surfaces that compose the variable-EFL lens. The raw data of the measured sample direction vectors are shown in b), g), and l). The third column, c), h), and m), shows the fitted Noll-ordered Zernike polynomials that dominate the direction vector amplitude. The fitted higher-order Noll-ordered Zernike terms 38:150 are shown in d), i), and n); they provide valuable insights into the mid-spatial frequency components. Finally, the residuals for all datasets are shown in the last column. The obscured features from the calibration target are responsible for the missing data.
Fig. 12. Measurement sample 4 (micro-optic array consisting of cubic phase plates): a) Sensor view of the sample under test. b) Direction vector amplitude for the telecentric system only (before placing the sample). c) Direction vector amplitude of the sample. The Noll-ordered Zernike decomposition for sample evaluation across spatial frequency ranges is found in d) Zernike terms 1:37 and e) Zernike terms 38:150, while the residual is shown in f).
Fig. 13. Measurement sample 5 (Alvarez lens array, single unit): Direction vector amplitude. a) Sensor view of the sample under test; the missing data correspond to obscured features in the passive calibration board. b) Direction vector amplitude for the telecentric system only (before placing the sample). c) Sample direction vector. Noll-ordered Zernike decomposition for sample evaluation across spatial frequencies: d) Zernike terms 1:37, e) Zernike terms 38:150, f) residual after removing 150 Zernike terms.
Fig. 14. Reprojection error (Euclidean distance) for various samples.
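The reprojection error follows Eq. (6) in the equations list below: each fitted vision ray is propagated to a control plane and compared against the measured control point there. A minimal Python sketch with hypothetical names:

    import numpy as np

    def reprojection_error(x0, y0, Vx, Vy, z, x_meas, y_meas):
        """Euclidean distance between the ray's prediction and measurement."""
        return np.hypot(x0 + Vx * z - x_meas, y0 + Vy * z - y_meas)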
Fig. 15. (all measurement samples): Vision rays for a) the spherical lenses with 50 mm and 100 mm EFL, b) the commercial spectacle lenses with adjustable power, c) a single cubic phase element of the micro-optic array, and d) a single cell of the Alvarez micro-lens array. Each ray has been color-coded as a function of the direction vector amplitudes. For (b) and (d), only the vision rays of the relevant center region of the lenses are plotted.
Fig. 16. Difference in the focus point between sample 1 and sample 2. a) The focus point for the 50 mm EFL lens is located at z = −275.83 mm. b) The focus point for the 100 mm EFL lens is located at z = −126.33 mm. The distance between the two focus points is 149.5 mm.
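For orientation, focus positions like those in Fig. 16 can be estimated from the measured vision rays. One simple criterion (an assumption for illustration, not necessarily the paper's exact procedure) is the z that minimizes the lateral spread of the ray bundle $x_i(z) = x_{0,i} + V_{x,i} z$, which has a closed-form minimizer:

    import numpy as np

    def focus_z(x0, y0, Vx, Vy):
        """z minimizing var(x0 + Vx*z) + var(y0 + Vy*z), a quadratic in z."""
        num = np.cov(x0, Vx)[0, 1] + np.cov(y0, Vy)[0, 1]
        den = np.var(Vx, ddof=1) + np.var(Vy, ddof=1)
        return -num / den

    # Example: a bundle of rays converging toward z = 100 mm
    theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
    x0, y0 = 10 * np.cos(theta), 10 * np.sin(theta)
    Vx, Vy = -x0 / 100.0, -y0 / 100.0
    print(focus_z(x0, y0, Vx, Vy))  # ~ 100.0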
Fig. 17. (measurement results, samples 1 and 2): Geometric-wavefront reconstruction using the vision ray measurements for the samples of Sections 4.2 and 4.3.
Fig. 18. (measurement results, sample 3): Geometric-wavefront reconstruction using the vision ray measurements for the sample of Section 4.4.
Fig. 19. (measurement results, samples 4 and 5): Geometric-wavefront reconstruction using the vision ray measurements for the samples of Sections 4.5 and 4.6.

Tables (1)

Table 1. Comparison of Vision Ray Metrology and Deflectometry

Equations (8)


$$\Delta \alpha_x = a_x - a_x^T$$
$$\Delta \alpha_y = a_y - a_y^T$$
$$[x, y] = \mathrm{Phase\_to\_XYMapping}(\phi_x, \phi_y)$$
$$\mathbf{x} = \mathbf{o}_c + \mathbf{r}_c \, \Delta z = \mathbf{o}_c + \begin{pmatrix} V_x \\ V_y \\ 1 \end{pmatrix}_n \Delta z = \begin{pmatrix} x_0 \\ y_0 \\ 0 \end{pmatrix} + \begin{pmatrix} V_x \\ V_y \\ 1 \end{pmatrix}_n \Delta z$$
$$S = \sqrt{V_x^2 + V_y^2}.$$
$$\delta_m = \lVert \mathbf{x}_w^m - \mathbf{x}^m \rVert_2,$$
$$W(x,y) = \sum_{i=1}^{N} a_i f_i(x,y)$$
$$V_k = \sum_{i=1}^{N} \frac{d}{dk}\, a_i f_i(x,y), \qquad k = x, y,$$
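Equations (7) and (8) define a standard modal reconstruction: the measured slopes $V_x, V_y$ are fitted by the analytic gradients of a basis set, and the recovered coefficients $a_i$ then give the geometric wavefront $W$. The following minimal Python sketch (not the authors' implementation; it uses a low-order monomial basis as a stand-in for the Noll-ordered Zernike set used in the paper) illustrates the least-squares step:

    import numpy as np

    def reconstruct_wavefront(x, y, Vx, Vy, order=4):
        """Modal fit of Eqs. (7)-(8): solve slopes = basis gradients @ a."""
        terms = [(p, q) for p in range(order + 1) for q in range(order + 1 - p)]
        # Analytic partial derivatives of f_i(x, y) = x^p * y^q
        Dx = np.column_stack([p * x**max(p - 1, 0) * y**q for p, q in terms])
        Dy = np.column_stack([q * x**p * y**max(q - 1, 0) for p, q in terms])
        A = np.vstack([Dx, Dy])               # stacked slope design matrix
        b = np.concatenate([Vx, Vy])          # measured vision ray slopes
        # Piston (p = q = 0) has zero gradient and stays unobservable.
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        W = np.column_stack([x**p * y**q for p, q in terms]) @ a
        return W, a

    # Example: recover a defocus-like wavefront W = x^2 + y^2 from its slopes
    rng = np.random.default_rng(0)
    x, y = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
    W, a = reconstruct_wavefront(x, y, 2 * x, 2 * y)

Stacking the x- and y-slope systems into one design matrix couples both slope components to a single coefficient vector, which is the essence of the modal approach cited in Refs. 68-76.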