Fast 3D form measurement using a tunable lens profiler based on imaging with LED illumination

Open Access

Abstract

We present a fast shape measurement technique for micro-parts based on depth discrimination in imaging with LED illumination. It relies on a 4f-setup with an electrically tunable lens at the common Fourier plane. With this configuration, we investigate the possibility of implementing a fast depth scan by means of the tunable lens, without mechanically moving parts, together with depth discrimination based on the limited spatial coherence of the LED illumination. The spatial coherence of the illumination can easily be adapted to the test object by selecting the geometrical parameters of the system accordingly. We demonstrate the approach by measuring the 3D form of a tilted optically rough surface and of a cold-formed micro-cup. The approach is robust, fast, since the required images are captured in less than a second, and eye-safe, and it offers an extended depth of focus in the range of a few millimetres. Using a step height standard, we determine a height error of ±1.75 μm (1σ). This value may be further decreased by reducing the spatial coherence length of the illumination or by increasing the numerical aperture of the imaging system.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The precise geometry measurement of technical parts plays a significant role in developing and optimizing manufacturing processes [1]. Therefore, a 3D shape profiler is a fundamental requirement for quality assurance during production [2]. For mass production in the microscopic domain, large quantities of standardized micro-parts with a size of less than $1$ mm in all three dimensions are fabricated. Since micro-parts form the basis of larger assemblies, e.g. in the automobile industry, a $100\%$ inspection ensuring failure-free parts is mandatory [3].

Optical techniques such as confocal microscopy approaches [4], computational shear interferometry [5,6], phase retrieval [7,8], digital holographic interferometry (DHI) [9,10] and white light interferometry (WLI) [11] achieve measurement uncertainties down to a fraction of the mean wavelength of the illumination and can be categorized into three groups according to their working principle:

  • (i) Methods based on the evaluation of the lateral phase distribution of monochromatic light (within the object plane): Examples are digital holography [12] or phase-shifting interferometry [13]. The optical path difference is identified from the reconstructed phase and thus the shape of the test object is obtained. These methods have high demands regarding both temporal and spatial coherence of the illumination. Moreover, the result of the measurement is prone to environmental disturbances such as vibrations or thermal fluctuations since the two wave fields travel along different paths.
  • (ii) Methods based on identifying the optical path difference, and thus the form, from the temporal coherence function of partially coherent light: Examples are WLI as well as time domain (TD) and spectral (or Fourier) domain (SD) optical coherence tomography (OCT) [14,15]. In WLI and TD-OCT, measuring the whole object requires a large number of measurements involving a mechanical axial scan. Therefore, WLI and TD-OCT are currently too slow for inline quality control in fast production lines [16]. SD-OCT, on the other hand, has the advantage that no mechanical adjustments are required to obtain axial scans. However, a 3D form measurement still requires a few seconds [17]. In addition, these techniques are sensitive to vibration since they are based on two-beam interferometry.
  • (iii) Methods using light with limited spatial coherence: These methods identify the optical path difference from the spatial coherence function by correlating the spatial coherence and the geometrical structure of the utilized extended light source [18,19]. Based on this concept, the so-called spatial coherence profilometry (SCP) has been developed for shape measurements [20]. It is realized by means of a Michelson interferometer, with the test object serving as one mirror. Both the test object and the reference mirror are fixed, while the depth scanning is achieved by changing the spatial coherence of the illumination by means of a sophisticated combination of several Fresnel zone plates (FZP) and a rotating ground glass. Accordingly, high contrast interference is observed during the depth scanning process without the need for moving mechanical parts. This approach, however, requires temporal coherence over the extent of the depth of the test object. In contrast to the methods of the second category, where temporally incoherent and spatially coherent illumination is used, the SCP technique utilizes temporally coherent and spatially incoherent illumination. However, one big disadvantage of the approach is that the FZPs are static elements and have to be fabricated to match the test object. In addition, the technique is sensitive to vibration since it is based on a standard interferometer.
Table 1 compiles the properties offered by the methods presented in the three categories above. A technique is needed that is robust against vibrations, has an extended depth of focus, is eye-safe and simultaneously allows for high-speed and precise measurement in the field of optical quality assurance during industrial production of micro-components. Unfortunately, such a technique is not available in the current state of the art.

Table 1. Properties of phase evaluation, white light interferometry (WLI), optical coherence tomography (OCT) and spatial coherence profilometry (SCP) relevant for application in industrial metrology. Table entries indicate whether a method is suitable or unsuitable.

However, encoding the optical path difference in the spatial coherence function within a common-path configuration can provide a new measuring system. In contrast to other interferometric configurations, common-path configurations are robust against mechanical vibrations. This was recently investigated by developing a new approach for phase retrieval (PR) utilizing spatially partially coherent illumination in a $4f$-imaging system. The requirements under which light with limited coherence can be used for PR are defined in [21], and in [22] shape measurement was presented as an application. However, spatially coherent light is still required there, since PR recovers the lateral phase information of a coherent wave field. To overcome this dilemma and reduce the requirements towards the spatial coherence of the illumination, we developed another approach for fast shape measurements based on imaging with spatially partially coherent illumination [23]. It makes use of the spatial coherence properties of the illumination to allow depth discrimination, with a fast depth scan of the object implemented using a digital micro-mirror device (DMD). The spatially partially coherent illumination is generated from a He-Ne laser beam incident on a rotating diffuser. However, if partially coherent light emitted from a broadband LED is used instead, the DMD's micro-mirror array acts like a diffraction grating. Thus, the object wave is diffracted into several diffraction orders whose diffraction angles depend on the wavelength (angular dispersion), resulting in a blurred image.

The present paper experimentally demonstrates, for the first time, an approach to measure the 3D form of micro-parts that enables the use of LED illumination. This is achieved by realizing the depth scan by means of an electrically tunable lens (ETL) [22], in analogy to spatial light modulator (SLM) based PR [7,24,25]. Regarding the state of the art, ETLs have also been used for axial scanning in confocal microscopy [26,27] and enable a large field of view as well [28]. In contrast to [23], the ETL is a continuous element and thus has high diffraction efficiency. In addition, it does not produce angular dispersion and thus the imaging quality is high. We give a brief introduction to the proposed profiler, including the setup and the concept of depth discrimination. In addition, we show that the best focus measure for the reconstruction is based on the Michelson contrast. This is verified by comparing the Michelson contrast with two additional focus measures. Finally, the measurement accuracy and uncertainty of the approach are determined.

2. Experimental setup of the proposed profiler

The aim of the proposed profiler is the determination of the shape of a test object using a common-path configuration. Such a common-path configuration should implement a fast depth scan and depth discrimination without the use of mechanical parts. Figure 1 shows an illustrative scheme of the profiler.


Fig. 1. Scheme of the proposed profiler: An extended light source with a diameter of $r_s$ illuminates a test object with a height profile of $h(\vec {y})$ located at a distance of $z_{ill}$ from the source. The numerical aperture of the illumination is $\theta _{ill}=r_s/2z_{ill}$. If $z_{ill} \gg r_s$ the emitted waves can be described as plane waves with wave vectors $\vec {k}_n$ and directions $\vec {e}_{I,n}$, with two examples shown in red and green. Each plane wave generates a field $U_{O}(\vec {y})$ in front of the object which is manipulated using a $4f$-system having two identical lenses with focal length of $f$ and a light modulator located at the common Fourier plane. This generates a shifted propagated field $U_{I}(\vec {x}+\Delta \vec {x},z)$ at the image plane, where the lateral shift $\Delta \vec {x}$ depends on the corresponding illumination direction. A high speed camera captures the intensity $I(\vec {r})$ of the generated fields $U_{I,n}(\vec {r})$ at the image plane.


In contrast to other imaging systems based on a single lens, the $4f$-configuration with two lenses performs two consecutive Fourier transforms and offers the opportunity to manipulate light fields in the Fourier domain by means of the complex transmittance of the modulator. The modulator located at the Fourier plane of the system enables fast depth scanning in analogy to the $4f$-based phase retrieval [7]. Considering the shift-invariant properties of the $4f$-system, the wave field $U_{I}(\vec {x},t)$ generated across the image plane $\vec {x}$ of the system takes the form [21]

$$U_{I}(\vec{x},t) = \mathcal{F}\left\{ \widehat{U}_{o}(\vec{\xi},t) \cdot H(\vec{\xi}) \right\}\,.$$
Here, $H$ refers to the transfer function (TF) of the imaging system, $\vec {\xi }$ is a vector in the frequency domain, $\widehat {U}_{o}$ is the Fourier transform of $U_{o}$ and $\mathcal {F}$ denotes the Fourier transformation.

As fast dynamic modulators, a DMD or an ETL can be used, achieving depth scanning rates of up to $4$ kHz [23] or $0.5$ kHz [22], respectively. Accordingly, the depth scan can be realized without any moving mechanical parts. Since it offers high light efficiency and produces no diffraction orders disturbing the measurement at the camera plane, an ETL is selected as the fast modulator to realize the depth scanning process. In this case,

$$H(\vec{\xi})= M_{LTF}(\vec{\xi}) \cdot\mathcal{W}(\vec{\xi})$$
with the ETL modulation function in the frequency domain given by [23]
$$M_{LTF}(\vec{\xi}) = \exp\left[-\mathord{\mathrm{i}} k \frac{f^{2}}{F_L}\lambda^{2} |\vec{\xi}|^{2}\right]\,$$
and the aperture function of the ETL in the frequency domain given by [29]
$$\mathcal{W}(\vec{\xi}) = \textrm{circ}(|\vec{\xi}|/\xi_B) \,.$$
Here, $k=2\pi /\lambda$ represents the wave number, where $\lambda$ is the mean wavelength of the illumination, $F_L$ is the tuned focal length of the ETL and $\xi _B$ is the diameter in frequency of the limiting aperture $\mathcal {W}$. It is noted here that $M_{LTF}(\vec {\xi })$ is obtained by substituting the standard lens modulation function from the space domain $\vec {v}$ to the frequency domain $\vec {\xi }$ using the substitution $\vec {v}=\vec {\xi }(\lambda f)$. The modulation with $M_{LTF}$ introduces de-focusing or propagation. In accordance with our previous work [22], the relationship between the tuned focal length $F_L$ and the propagation distance $z$ is given by $z=f^{2}/F_L$. Consequently, a fast depth scan across the object plane can be achieved.
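For illustration, the following minimal sketch (our own, not part of the original work) translates between a desired propagation distance and the ETL focal length using $z=f^{2}/F_L$; the numerical values are those of the setup described later in Sec. 4.1, and the example reproduces the object-plane value quoted for Fig. 3(a).

```python
# Hypothetical helper (not the authors' code): conversion between the tuned ETL
# focal length F_L and the propagation distance z it introduces, using z = f**2 / F_L.
f = 105e-3  # focal length of the two 4f lenses [m] (Sec. 4.1)
M = 10      # magnification of the microscope objective (Sec. 4.1)

def focal_length_for_propagation(z_image):
    """ETL focal length F_L [m] producing a propagation distance z_image [m] at the image plane."""
    return f**2 / z_image

def propagation_at_object_plane(F_L):
    """Propagation distance at the object plane [m] for a tuned focal length F_L [m]."""
    z_image = f**2 / F_L      # propagation distance at the image plane
    return z_image / M**2     # object-plane distance, since z = M**2 * delta_z (Sec. 4.1)

# Example: F_L = 630 mm gives z = 17.5 mm at the image plane, i.e. 175 um at the
# object plane, in agreement with the value quoted for Fig. 3(a).
print(propagation_at_object_plane(0.630))  # -> 1.75e-04
```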

We assume that an extended light source with a circular shape having a diameter of $r_s$ emits temporally and spatially incoherent light. Thus, the light source can be regarded as multiple mutually independent sources. Let an object with an optically rough surface be placed at a distance of $z_{ill}$ from the light source. For $z_{ill}\gg r_s$, these elementary sources can be considered to emit plane waves with wave vectors $\vec {k}_n$. Each plane wave illuminating the object creates a fully developed speckle field at the image plane of the system. The speckle size is determined by the radius $\delta x$ of the point spread function (PSF) of the system, which is defined by the Fourier transform of the limiting aperture of the TF, i.e. $P=\mathcal {F}\{\mathcal {W}\}$. For a circular aperture of diameter $D$, the radius of the PSF is [29]

$$\delta x=1.22\lambda f/D .$$
By including the modulation introduced by the imaging system and the ETL considering Eq (1) and Eq (3), the intensity of the speckle fields $(U_{I,n})$ at the image plane $\vec {x}$ is given by [21]
$$I(\vec{r}) =\left \langle \left|\sum_{n=1}^{N} U_{I,n}(\vec{r},t) \right|^{2}\right \rangle_T=\sum_{n=1}^{N} I_{n}(\vec{x}+\Delta\vec{x}_{n},z)\,.$$
Here $\langle \cdots \rangle _T$ refers to the time $(T)$ average, $\vec {r}=(\vec {x},z)$ is a vector at the image plane, $N$ is the number of elementary light sources and the lateral shift $\Delta \vec {x}_n$ of each individual field is given by [23]
$$\Delta \vec{x}_n = z\cdot\vec{e}_{I,n}\,.$$
Equation (7) indicates that the shift $\Delta \vec {x}_n$ depends on the propagation distance $z$ and the illumination direction $\vec {e}_{I,n}$ of the plane wave.

3. Contrast of the intensity and depth discrimination

We now show, in analogy to our recently published paper [21], which was developed for multiple-intensity phase retrieval, that partially coherent light with limited spatial coherence can be used to achieve depth discrimination. As discussed in [21], the contrast $K$ of the fully developed speckle field generated at the camera plane by an individual plane wave illumination, i.e. a coherent mode, is given by the Michelson contrast

$$K = \frac{I_{max}-I_{min}}{I_{max}+I_{min}}\,.$$
Here, $I_{min}$ and $I_{max}$ are the minimum and the maximum intensity, respectively. A main feature of fully developed speckle fields is that $I_{min}=0$ and thus the Michelson contrast is at its maximum, $K=1$. However, there is an intensity averaging at the camera plane associated with the shifted speckle fields generated by the $N$ elementary sources, as given by Eq (6). The radius of the maximum averaging area, i.e. the maximum shift,
$$\Delta x_{max}=z\cdot \theta_{ill} = z{r_s}/{2z_{ill}}$$
is defined by the propagation distance $z$ and the numerical aperture (angular diameter) $\theta _{ill}$ of the illumination from the extended light source, see Fig. 1, in accordance with Eq (7). The intensity averaging reduces the speckle contrast [23]. To understand the effect of the intensity averaging on the contrast, it is essential to compare the speckle radius $\delta x$ given by Eq (5) with the radius $\Delta x$ of the averaging area and its maximum $\Delta x_{max}$ defined by Eq (9).

In analogy to the heuristic contrast model which we developed in [21] for another application, the behaviour of the Michelson contrast under intensity averaging can be described as follows. For normalized intensity images

$$I_{max}=1-(\Delta x/\delta x)^{2}\overline{I} \textrm{ and } I_{min}=(\Delta x/\delta x)^{2}\overline{I}\,.$$
Substituting these quantities into Eq (8) yields the contrast at the image plane:
$$K(z)= \begin{cases} 1-\left( \frac{\Delta x}{\delta x} \right)^{2}, & \textrm{for} \,\Delta x\leq\delta x\\ 0, & \textrm{otherwise} \end{cases}$$
Accordingly, one can discriminate between in-focus and out-of-focus object points as follows:
  • (i) For object points located at the input plane of the imaging system, $z=0$ and $\Delta x = 0$, the intensity is therefore averaged over the radius $\delta x$ and the contrast is $K=1$.
  • (ii) For object points located at $z>0$ or $z<0$, i.e. out-of-focus planes, $\Delta x$ increases and consequently the contrast decreases until reaching $K=0$ for $\Delta x\geq \delta x$.
With the design parameters of the setup and the spatial coherence properties of the illumination adjusted so that $\Delta x_{max}\geq \delta x$, there are values of $z$ where $K > 0$ and values of $z$ where $K=0$. The bandwidth of the contrast curve obtained from Eq. (5) of [23] is
$$z_{w} \approx \frac{r_{coh}}{\textrm{NA}_{img}},$$
where $r_{coh}= 1.22\lambda z_{ill}/r_s$ is the spatial coherence length of a circular light source according to the Van Cittert-Zernike theorem [30] and $\textrm {NA}_{img}=D/2f$ is the numerical aperture of the $4f$-imaging system. The narrower the bandwidth of the contrast curve, the better the resolution of the depth scan. The resolution can be adjusted by either reducing the spatial coherence and/or increasing the numerical aperture of the system. The contrast model given by Eq. (11) will be compared to measurements of the Michelson contrast in Sec. 4.
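As a numerical illustration, the following sketch (ours, not the authors' code) evaluates the contrast model of Eq. (11) together with the speckle radius of Eq. (5), the averaging radius of Eq. (9) and the bandwidth of Eq. (12), using the design parameters quoted later in Sec. 4.1.

```python
import numpy as np

# Sketch of the heuristic contrast model, Eqs. (5), (9), (11) and (12); a minimal
# illustration with the design parameters of Sec. 4.1, not the authors' code.
lam   = 630e-9    # mean wavelength [m]
f     = 105e-3    # focal length of the 4f lenses [m]
D     = 10e-3     # diameter of the ETL aperture [m]
r_s   = 400e-6    # diameter of the fiber end acting as the extended source [m]
z_ill = 6.17e-3   # effective source distance (collimator focal length f_c) [m]

delta_x   = 1.22 * lam * f / D        # speckle (PSF) radius, Eq. (5): ~8 um
theta_ill = r_s / (2 * z_ill)         # illumination NA: ~0.032
r_coh     = 1.22 * lam * z_ill / r_s  # spatial coherence length: ~11.85 um
NA_img    = D / (2 * f)               # imaging NA of the 4f system: 0.048
z_w       = r_coh / NA_img            # bandwidth of the contrast curve, Eq. (12)

def contrast(z):
    """Michelson contrast K(z), Eq. (11), for propagation distance z [m]."""
    dx = np.abs(z) * theta_ill        # radius of the averaging area, Eq. (9)
    return np.where(dx <= delta_x, 1.0 - (dx / delta_x) ** 2, 0.0)

z = np.linspace(-1.5 * z_w, 1.5 * z_w, 501)
K = contrast(z)                       # model curve, cf. the red line in Fig. 5
```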

This principle operates in analogy to confocal microscopy, where contributions of light from out-of-focus planes are blocked at the image plane [31]. Here, contributions of light from out-of-focus planes are not blocked but are used to discriminate between in-focus and out-of-focus object points. This feature will be used to retrieve the 3D shape from intensity measurements with limited depth of focus, as discussed in Sec. 4.2.

4. Experimental results and discussions

4.1 Experimental setup

Based on the discussion presented in Sec. 3 and following the sketch introduced in Fig. 1, the experimental setup shown in Fig. 2 is constructed. The test object is placed at the working distance (WD) of a microscope objective having a magnification $(M)$ of $10\times$ and a numerical aperture of $0.21$. Thus, the field of view and the lateral resolution are limited to $1\times 1$ mm$^{2}$ and $1.83\,\mu$m, respectively. The propagation distance at the object plane $\Delta z$ and at the image plane $z$ are related by $z =M^{2} \Delta z$. The illumination has a mean wavelength of $\lambda = 630$ nm and a temporal coherence length of $l_{coh}\approx 10\,\mu$m. The fiber has a diameter of $r_s=400\, \mu$m $\pm \, 8\,\mu$m and is terminated by a collimator with a fixed focal length of $f_c = 6.17$ mm, producing a beam diameter of approximately two millimetres. Thus, the angular diameter of the source is $\theta _{ill}= 0.032$. Based on these parameters, the spatial coherence length is found to be $r_{coh}= 11.85\,\mu$m$\,\pm 0.23\,\mu$m. The $4f$ processor consists of a $4f$-imaging system with two identical lenses having the same focal length of $f=105$ mm and an ETL with a band-limiting aperture of diameter $D = 10$ mm integrated in the common Fourier plane. Accordingly, the imaging numerical aperture is $\textrm {NA}_{img}=0.048$. Based on these design parameters, the full width at half maximum (FWHM) of the $K(z)$ curve is approx. $109\,\mu$m. The ETL is an EL-10-30-C-VIS-LD-MV supplied by Optotune, Switzerland, which has a switching time of $<2.5$ ms; the propagation is implemented by electrically adjusting its focal length $F_L$ [22]. With the microscope objective and the ETL we used, the depth scan covers a range of $8$ mm. A high speed camera (VKT Fastcam Mini UX $100$) with a resolution of $1024\times 1024$ pixels and a pixel pitch of $10\,\mu$m is located at the image plane. The camera is able to capture up to $4000$ frames/s at full resolution. However, the multiple intensity images generated at the camera plane within the depth scan can only be captured at approximately $500$ frames/s since the ETL has a comparatively long switching time. Please note that the scan depth is only limited by the ETL and the focal length of the $4f$-system.


Fig. 2. Experimental setup of the 3D profiler based on the sketch presented in Fig. 1. A plate with an optically rough surface is used as a test object and is illuminated by collimated light emitted from a fiber coupled LED. The object is tilted by an angle of $62^{\circ }$ with respect to the input plane of the imaging system and is placed at the object plane of a microscope objective.


4.2 Measurements and 3D form reconstruction

In this subsection, we discuss in detail the measurement and the form reconstruction of a 3D test object. For this purpose, a metallic plate having a surface roughness of $4\,\mu$m is placed on a rotation stage and then tilted to imitate a 3D object. The plate is tilted by an angle of $62^{\circ }\,\pm 0.0008^{\circ }$ (the rotation stage has a precision of $\approx 15\,\mu$rad) with respect to the optical axis. It is illuminated with the spatially partially coherent light emitted from the LED as designed above. Based on the camera resolution, the pixel size and the magnification of the microscope objective, the projected part of the plate imaged onto the camera has a width of $1024\,\mu$m. From the geometry of the plate, i.e. the tilt angle and the projected width, the height of the object to be scanned is $h = 544.5\,\mu$m $\pm \,0.02\,\mu$m. This height value will be used in the evaluation as the known height of the object. Utilizing the design parameters of the system according to Eq (9), we obtain $\Delta x_{max} = h r_s/(2f_c) = 17.65\,\mu$m $\pm \, 0.35\,\mu$m and $\delta x = 8\,\mu$m. Since $\Delta x_{max}\geq \delta x$, there are areas with high contrast and others with vanishing contrast across the tilted plate. Within the depth scan, the contrast envelope moves and its maximum defines the in-focus part of the plate and the corresponding height. For demonstration purposes, the object is inserted so that its left side, with respect to the camera field of view, is located at the WD of the objective. $F_L$ is then electrically tuned to scan the height of the test object. For form determination, a set of $84$ intensity images with an axial step of $6.5\,\mu$m is captured. The time required to capture each frame is $2.5$ ms and thus the whole set of images is captured in $210$ ms.
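As a plausibility check of the quantities above, the following sketch (ours, hypothetical, not the authors' code) reproduces the geometric model of the tilted plate from the numbers quoted in the text.

```python
import numpy as np

# Geometric model of the tilted plate (our own check), based only on quantities
# quoted in Secs. 4.1 and 4.2.
M, pixel_pitch, n_pixels = 10, 10e-6, 1024   # magnification, pixel pitch [m], pixels
tilt = np.deg2rad(62.0)                      # tilt w.r.t. the optical axis

width_obj = n_pixels * pixel_pitch / M       # projected width at the object plane: 1024 um
height    = width_obj / np.tan(tilt)         # scanned height h: ~544.5 um

r_s, f_c  = 400e-6, 6.17e-3                  # source diameter and collimator focal length [m]
dx_max    = height * r_s / (2 * f_c)         # Eq. (9): ~17.65 um >= delta_x ~ 8 um

scan_range = 84 * 6.5e-6                     # 84 frames with 6.5 um step cover ~546 um
scan_time  = 84 * 2.5e-3                     # 2.5 ms per frame: 210 ms in total
print(height * 1e6, dx_max * 1e6, scan_range * 1e6, scan_time * 1e3)
```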

Figure 3 shows two examples of the captured images generated at the image plane for two different focal lengths of $F_L = 630$ mm and $245$ mm, corresponding to propagation distances of $175 \,\mu$m and $450 \,\mu$m at the object plane, respectively. The images have a limited depth of focus: intensity distributions with high contrast appear only for object points located within the input plane of the imaging system. For out-of-focus object points the contrast decreases and eventually drops to the minimum, as predicted. During the depth scan, the position of the high contrast areas moves, as shown in Visualization 1. The next step is to reconstruct the 3D form from the captured images with limited depth of focus.


Fig. 3. Two examples of intensity images captured by the high speed camera for a) $F_L = 630$ mm and b) $F_L = 245$ mm. The depth scanning process used to scan the whole object by tuning $F_L$ is shown in Visualization 1.


4.2.1 Contrast based reconstruction

For this purpose, we use the Michelson contrast as a focus measure to perform depth discrimination and thus reconstruct the 3D form. Figure 4 illustrates the process of finding the maximum contrast, which is implemented in the following steps (a minimal code sketch follows the list):

  • (i) The reconstruction process starts with the axial scan to find where the contrast is maximum. This is achieved by calculating the contrast within a window of $16\times 16$ pixels for all images captured within the depth scan. For visualization, this window is represented by the colored boxes in Fig. 4.
  • (ii) Subsequently, the Michelson contrast given by Eq (8) is plotted versus the scanned height $z$. The maximum contrast is then determined by fitting the central part of the curve with the theoretical contrast model given by Eq (11). For illustration, this is represented by the red box in Fig. 4. As a result, the height and the intensity within this window are determined.
  • (iii) Two container images, one for the height map and one for the extended focus image of the object, are filled using the height and intensity found within the considered window.
  • (iv) Now the sliding process starts to scan the images transversally. This is achieved by laterally shifting the window by one pixel. Then steps ii) and iii) are repeated to determine the height and the intensity across the new window. Using this information, the two container images are updated.
  • (v) Step iv) is repeated until the sliding process is finished. It is noted that, within the determination of the contrast maximum, a total minimum variation principle is applied, comparing the obtained height value with the previously found surrounding values. This overcomes problems arising for contrast curves with multiple maxima.
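The following sketch (ours, not the authors' implementation) illustrates the sliding-window procedure of steps i)–v); the array names `stack` and `z_axis` are hypothetical, the model fit of step ii) is replaced by a simple argmax, and the regularization of step v) is omitted.

```python
import numpy as np

# Minimal sketch of the contrast-based reconstruction, steps i)-v) above.
# `stack` holds the captured images with shape (n_z, H, W), `z_axis` the
# corresponding propagation distances; both names are hypothetical.

def michelson_contrast(window):
    """Michelson contrast of one intensity window, Eq. (8)."""
    i_max, i_min = window.max(), window.min()
    return (i_max - i_min) / (i_max + i_min + 1e-12)

def reconstruct(stack, z_axis, win=16):
    n_z, H, W = stack.shape
    height_map = np.zeros((H - win + 1, W - win + 1))   # container: height map
    focus_img  = np.zeros((H - win + 1, W - win + 1))   # container: extended focus image
    for i in range(H - win + 1):                        # lateral sliding, one pixel at a time
        for j in range(W - win + 1):
            # axial scan: contrast of the same window at every depth
            K = np.array([michelson_contrast(stack[k, i:i + win, j:j + win])
                          for k in range(n_z)])
            k_best = int(np.argmax(K))                  # depth of maximum contrast
            height_map[i, j] = z_axis[k_best]
            focus_img[i, j]  = stack[k_best, i:i + win, j:j + win].mean()
    return height_map, focus_img
```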


Fig. 4. Scheme to illustrate the processes of the axial and the lateral scan to find the height map and the extended focus image of the test object. We calculate within the red and yellow boxes, which have the same size and lateral position, where the contrast is maximum. This is visualized by means of the black curve which shows the variation of $K$ as a function of $z$. Here, the maximum contrast is found to be within the red box. The intensity and the depth information for the red window are used to reconstruct the extended focus image and the height map of the object, respectively. Subsequently, the boxes are laterally shifted to determine where the contrast is maximum for another position, and thus the extended focus image and the height map are reconstructed pixel by pixel.


4.2.2 3D form of the tilted plate

Figure 5 shows the central result of this work. We use the captured images of the plate and apply the reconstruction procedure to retrieve the form of the tilted plate. The figure contains two contrast curves calculated for a single window during the axial scan. The black points show the experimentally measured variation of the Michelson contrast $K$ with the scanned depth of the object, while the black curve gives a Gauss fit to these points. The red curve shows the behaviour of the Michelson contrast during the depth scan based on the contrast model given by Eq (11), which uses Eq (5) and Eq (9) and considers all design parameters of the system. These parameters are $\lambda ,\, \theta _{ill},\, f,\, D$ and $z$, which varies from $0$ to $h$; their values are given in Sec. 4.1 and Sec. 4.2. The shift of the maximum of the theoretical curve away from $z=0$ is determined as follows. We start with an image like the one shown in Fig. 3(a). Then, a sub-image with a resolution of $16 \times 1024$ pixels is generated. This sub-image is divided into $64$ windows, each with a resolution of $16 \times 16$ pixels. Finally, the Michelson contrast of each window is calculated based on Eq (8) and the lateral position of the maximum contrast is determined. This lateral position serves as the width in the geometrical model of the object to calculate the corresponding object height, which defines the offset by which the theoretical curve is shifted from $z = 0$.


Fig. 5. Variation of Michelson contrast versus depth within the scan process. The calculated contrast from the measurements (black points) fitted by a Gauss fit (black curve) agrees well with the contrast model given by Eq (11) which is represented by the red line.


It is noted that the depth $z$ which corresponds to the maximum of Eq (11) is determined from a single image by finding the position where the Michelson contrast is maximum and then projecting this position onto the tilted plate. The theoretical and experimental curves agree very well. Based on the Gauss fit of the experimental data points, the residual standard deviation $M_{fit}$ between the experimental points and the fit is determined. $M_{fit}$ is defined as $M_{fit}= \sqrt {\sum (z-z_{fit})^{2}/({n-2})}$, where $z$ and $z_{fit}$ refer to the measured depth and the depth estimated from the fit, and $n$ refers to the number of data points used to plot the curves. Accordingly, $M_{fit}=\pm 1.55\,\mu$m, calculated within the envelope of the curves.
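A direct transcription of this definition (ours; `z_meas` and `z_fit` are hypothetical arrays of measured and fitted depths):

```python
import numpy as np

def residual_std(z_meas, z_fit):
    """Residual standard deviation M_fit between measured and fitted depths."""
    z_meas, z_fit = np.asarray(z_meas, float), np.asarray(z_fit, float)
    n = z_meas.size
    return np.sqrt(np.sum((z_meas - z_fit) ** 2) / (n - 2))
```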

For comparison, two additional focus measures are calculated using alternative procedures. The first is the focus measure $F_z$ based on the first-order Gaussian derivative (GDER), where the energy content within the window after applying a Gaussian derivative filter for smoothing is calculated by [32]

$$F_z = \frac{1}{NM}\sum \left[I_{w,z}(\vec{x})\otimes G_{x_i}(\vec{x}, \gamma)\right]^{2}+\left[I_{w,z}(\vec{x})\otimes G_{x_j}(\vec{x}, \gamma)\right]^{2},$$
where $I_{w,z}$ is the intensity distribution of the selected window of the image captured at a certain depth $z$. $G_{x_i}(\vec {x}, \gamma )$ and $G_{x_j}(\vec {x}, \gamma )$ are the first order Gaussian derivatives in the $x _i$ and $x_j$ directions at scale $\gamma$, $\otimes$ refers to a convolution and $N,M$ are the total numbers of pixels of the window.

The second focus measure is based on the focus variation (FV) method, where the variance within the window is determined using [33]

$$F_z = \frac{1}{NM}\sum |I_{w,z}(\vec{x})-\overline{I}_{w,z}(\vec{x})|^{2}\,.$$
Here, $\overline {I}_{w,z}(\vec {x})$ is the averaged intensity within the window. Table 2 compares the results of the different methods, including the measurement uncertainty based on $M_{fit}$. The results indicate that using the Michelson contrast or the Gaussian derivative leads to an accurate depth sampling, while the focus variation method seems to be less precise.
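The two additional focus measures can be computed for a single window $I_{w,z}$ roughly as in the following sketch (ours, using SciPy's Gaussian-derivative filter; not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gder_focus(window, gamma=1.0):
    """First-order Gaussian derivative (GDER) focus measure, Eq. (13)."""
    gx = gaussian_filter(window, gamma, order=(0, 1))  # derivative along one lateral axis
    gy = gaussian_filter(window, gamma, order=(1, 0))  # derivative along the other axis
    return np.mean(gx ** 2 + gy ** 2)

def fv_focus(window):
    """Focus variation (FV) measure, Eq. (14): variance within the window."""
    return np.mean(np.abs(window - window.mean()) ** 2)
```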


Table 2. Full width half maximum (FWHM) obtained from Michelson contrast theory (based on Eq. (11)) and different measurement evaluation methods based on GDER given by Eq. (13) and FV based on Eq. (14): MC $=$ Michelson contrast, GDER $=$ Gaussian derivative, FV $=$ focus variation, all as described above.

Figure 6 shows the 3D form and an extended depth of focus image of the tilted plate, reconstructed following the procedure described above. Please note that the focused image, Fig. 6(a), is given as evidence to verify that the recovered ramp of the tilted plate is correct. The sliding process and the reconstruction of the extended focus image and the 3D height map are shown in Visualization 2 for illustration. From the result, the recovered height of the object is $539\pm 1.55\,\mu$m. In order to validate this result, we compare the recovered height with the value obtained from the geometrical model of the tilted plate, which yields a height of $544.5\,\mu$m. We therefore estimate the maximum height deviation to be around $5.5\pm 1.55\,\mu$m. This deviation is in the range of the step width of the depth scan. In order to evaluate whether decreasing the step width of the depth scan decreases the height deviation, the reconstruction process is evaluated in the following subsection.


Fig. 6. Results from speckle contrast variation: a) An extended depth of focus image calculated from the set of all captured images, and b) the reconstructed height map. The sliding process and the reconstruction of the extended focus image and the 3D height map are illustrated in Visualization 2.


4.3 Evaluation of the reconstruction

Figure 7 shows the step height standard used to evaluate the approach presented here. The step object has a diameter of $25$ mm and contains circular tracks, each with a track width of $0.5$ mm. It was realized by diamond turning at the Labor für Mikrozerspanung in Bremen. Based on the step height, the object comprises $4$ groups, each consisting of $4$ plateaus with the same step difference. From the centre outwards, the $4$ groups have step height differences of $5$, $10$, $20$ and $50\,\mu$m, respectively.


Fig. 7. Step height standard object having a diameter of $25$ mm and a track width of $0.5$ mm, fabricated by Labor für Mikrozerspanung (LFM) using diamond turning: left) an image of the object and right) an illustration. The measured field is $1.0\times 1.0$ mm$^{2}$.


The object is placed so that the third track of the first group, at $10\,\mu$m step height, is located at the WD of the microscope objective. It is then illuminated with the spatially partially coherent illumination used in the previous subsection. Figure 8(a) shows an image obtained for a propagation distance of $5\,\mu$m, which brings the middle step plateau into focus. In that image, one can see $3$ step plateaus of the first group with $5\,\mu$m step height difference. On each step plateau there are several thin bright lines which represent traces of the diamond cutting tool. The bright lines on the dark background behave like fringes of an interference pattern. This enables the use of the Michelson contrast as a focus measure for the reconstruction. However, there are two thick bright lines between the three steps showing the effect of the sharp edges of the step heights. Around these lines, the contrast focus measure cannot be used. Therefore, we use the mask shown in Fig. 8(b) as a weighting to omit the unsuitable areas.


Fig. 8. (a) Image captured by introducing a propagation of $5\,\mu$m by means of the ETL. Within the three yellow boxes, from top to bottom, the Michelson contrast is $0.80$, $0.92$ and $0.81$, respectively. The complete axial scan process is illustrated in Visualization 3. (b) The mask used as a weighting to omit the unwanted areas around the sharp edges. (c) and (d) show the reconstructed extended focus image and the 3D height map of the step standard object, respectively.


A set of $50$ images is captured in $120$ ms by electrically tuning the focal length of the ETL to realize propagation between $-10\,\mu$m and $10\,\mu$m with a propagation step interval of $0.25\,\mu$m. Within the three yellow boxes shown in Fig. 8(a), from top to bottom, the Michelson contrast values are $0.80$, $0.92$ and $0.81$, respectively. This shows that the approach can discriminate between the three steps. The axial scan process is visualized in Visualization 3 for illustration. The reconstruction procedure presented in Sec. 4.2.1 is used to recover the extended focus image and the 3D form of the step standard object. The obtained results are shown in Figs. 8(c) and (d). It is worth mentioning here that, using a Keyence laser scanning confocal microscope (VK9700) with a $100\times$ microscope objective, the surface roughness across the plateaus and the depth of the diamond cutting tool traces are found to be $35$ nm and $200$ nm, respectively. These values are too small to be detected with the current setup, which explains why the surface profile across the plateaus appears smooth.

In order to determine the measurement uncertainty, the measurement as well as the reconstruction procedure is repeated $10$ times. Each time, the step standard object is removed from the setup and then readjusted, and the captured images are used to reconstruct the 3D step height. Subsequently, we calculate the average (mean) height from the $10$ reconstructed height maps and the standard deviation $\sigma$. On each of the three plateaus, a step height difference of $5\,\mu \textrm {m} \pm 1.75\,\mu$m ($1\sigma$) is determined. Finally, the measurement uncertainty $M_r$ according to the Guide to the Expression of Uncertainty in Measurement (GUM), given by $M_r= \sigma /\sqrt {10}$, amounts to $M_r= 0.55\,\mu$m $(1\sigma )$. This corresponds, for the case described above, to a relative measurement error of $5.5\%$.
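The repeatability evaluation can be summarized by the following sketch (ours; `step_heights` is a hypothetical array of the ten recovered step heights):

```python
import numpy as np

def repeatability(step_heights):
    """1-sigma repeatability and GUM standard uncertainty of the mean, M_r = sigma/sqrt(n)."""
    h = np.asarray(step_heights, dtype=float)
    sigma = h.std(ddof=1)            # reported here: 1.75 um (1 sigma)
    m_r = sigma / np.sqrt(h.size)    # reported here: 0.55 um for n = 10
    return sigma, m_r
```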

Compared to the focus variation technique [34], where the measurement uncertainty is in the nanometer range, the measurement uncertainty presented here is comparatively large. One way to decrease the measurement uncertainty is to reduce the spatial coherence length at the object plane or to increase the numerical aperture of the imaging system as given by Eq (12). The potential for further improvement will be considered in our future work.

4.4 Form of technical micro-parts

We demonstrate the potential of the proposed approach by reconstructing the 3D height map of a technical micro-part. The technical test object is a micro-cup machined within the Collaborative Research Center 747 "Micro Cold Forming - Processes, Characterization, Optimization". It is fabricated by combining blanking and cold forming in micro deep drawing. Such a machining process leads to shape failures which must be measured in order to optimize the machining process [2]. To demonstrate the applicability in industrial processes, the shape of a thin cold-formed micro-cup is determined. For this purpose, we use the same setup and the same illumination configuration as described previously.

Figure 9 shows two images captured at two different focal lengths of $F_L = \infty$ (i.e. no propagation) and $F_L=635$ mm, corresponding to propagation distances of $0 \,\mu$m and $174 \,\mu$m at the object plane, respectively. The geometry of the micro cup consists of a combination of a cylinder, a torus and a plane at the top. Due to the tilt of the object, all three parts of the geometry can be seen to some extent. The contrast is high only for object areas located at the input plane of the setup, visualizing the depth discrimination within the depth scan. However, as can be seen from the images, one illumination direction is not enough to capture an image with a homogeneous intensity distribution at the camera plane for such a complex object. This is because the captured images are low dynamic range (LDR) images: in the brighter areas features appear almost white, while camera pixels in the darker areas are almost black. Thus, object features outside this dynamic range are not visible. Increasing the camera exposure time leads to saturation of the intensity across the torus (brighter areas) of the micro cup.


Fig. 9. Measured intensity distributions: a) intensity image at the input plane of the 4f imaging system for the test object ($F_L = \infty$, i.e. no propagation) and b) the intensity after modulating the wave field with $F_L = 635$ mm. The dynamic range of the images is scaled between $0$ and $1$.


For form determination, a set of $330$ intensity images with an axial step of $1\,\mu$m is captured. The axial sampling step is selected based on the validation analysis presented in the previous subsection. The time required to capture each frame is $3$ ms and thus the whole set of images is captured in $990$ ms. We use the Michelson contrast, as discussed in Sec. 3, as a focus measure to perform depth discrimination and thus reconstruct the 3D form. Accordingly, the 3D form and the extended depth of focus image of the micro cup are reconstructed and shown in Fig. 10. The best reconstruction is located within the area highlighted by the yellow line. This area has a good intensity dynamic range compared with areas located outside the yellow border. One can clearly see the three basic geometrical elements of the micro-cup. However, the results also indicate the importance of multiple illumination directions for enhancing the dynamic range of the entire image in order to capture images with a high dynamic range (HDR). This is a topic of future work. It is noted that the reconstruction of the set of $330$ captured images takes $3$ minutes using a standard computer with an i$7$ processor and $32$ GB of RAM. This could be drastically reduced if the reconstruction process were parallelized and a graphics processing unit (GPU) used.


Fig. 10. Results of (a) calculated extended depth of focus image. Here the area framed by the yellow line indicates the area of the best reconstruction. (b) shows the reconstructed 3D height map.


5. Conclusion

We presented a new method for the form measurement of technical objects based on depth discrimination within a depth scan of the object under broadband LED illumination. Within this method, a fast depth scan is realized using an electrically tunable lens located at the common Fourier plane of a telecentric $4f$-imaging system. Since no moving mechanical parts are used during the depth scan, the setup is robust against mechanical vibrations. The depth discrimination is implemented utilizing light with limited spatial coherence emitted from an LED, which also ensures eye-safety of the system. We correlate the spatial coherence properties of the illumination and the design parameters of the system to define the full width at half maximum (FWHM) of the focus measure curve. Accordingly, the depth scan resolution can be adjusted. We propose the use of the Michelson contrast of the captured speckle images as a focus measure to reconstruct the 3D height map of the test objects. The potential of the approach with regard to industrial application was demonstrated by measuring the 3D height map of a cold-formed micro-cup. The approach offers an extended depth of focus which depends on the utilized focal range of the tunable lens and the focal length of the $4f$-imaging system. Currently, shape measurement with a height error of $\pm 1.75\,\mu$m $(1 \sigma )$ is achieved. This value may be improved by further decreasing the spatial coherence length of the illumination at the object plane or by increasing the numerical aperture of the imaging system.

Funding

Deutsche Forschungsgemeinschaft (Project no. 284158589, SPICE).

Acknowledgments

We thank Reiner Klattenhoff for his support in the construction of the experimental setup.

Disclosures

The authors declare no conflicts of interest.

References

1. C. v. Kopylow and R. B. Bergmann, “Optical metrology,” in Micro Metal Forming (Springer, 2013), pp. 392–404.

2. P. Maaß, I. Piotrowska-Kurczewski, M. Agour, A. v. Freyberg, B. Staar, C. Falldorf, A. Fischer, M. Lütjen, M. Freitag, G. Goch, R. B. Bergmann, A. Simic, M. Mikulewitsch, B. Köhler, B. Clausen, and H.-W. Zoch, “Quality control and characterization,” in Cold Micro Metal Forming (Springer, 2020), pp. 253–310.

3. A. Simic, H. Freiheit, M. Agour, C. Falldorf, and R. B. Bergmann, “In-line quality control of micro parts using digital holography,” Proc. SPIE 10233, 1023311 (2017). [CrossRef]  

4. G. Dussler, B. Broecher, and T. Pfeifer, “Inspection of microsystems with a laser scanning microscope,” Proc. SPIE 3825, 144–150 (1999). [CrossRef]  

5. C. Falldorf, C. v. Kopylow, and R. B. Bergmann, “Wave field sensing by means of computational shear interferometry,” J. Opt. Soc. Am. A 30(10), 1905–1912 (2013). [CrossRef]  

6. C. Falldorf, M. Agour, and R. B. Bergmann, “Digital holography and quantitative phase contrast imaging using computational shear interferometry,” Opt. Eng. 54(2), 024110 (2015). [CrossRef]  

7. C. Falldorf, M. Agour, C. v. Kopylow, and R. B. Bergmann, “Phase retrieval by means of a spatial light modulator in the fourier domain of an imaging system,” Appl. Opt. 49(10), 1826–1830 (2010). [CrossRef]  

8. M. Agour, C. Falldorf, and R. B. Bergmann, “Investigation of composite materials using SLM-based phase retrieval,” Opt. Lett. 38(13), 2203–2205 (2013). [CrossRef]  

9. T. Kreis, Handbook of holographic interferometry: optical and digital methods (John Wiley & Sons, 2006).

10. U. Schnars, C. Falldorf, J. Watson, and W. Jüptner, Digital holography and wavefront sensing (Springer, 2016).

11. J. C. Wyant, “White light interferometry,” Proc. SPIE 4737, 98–107 (2002). [CrossRef]  

12. U. Schnars and W. Jüptner, “Direct recording of holograms by a ccd target and numerical reconstruction,” Appl. Opt. 33(2), 179–181 (1994). [CrossRef]  

13. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22(16), 1268–1270 (1997). [CrossRef]  

14. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]  

15. M. Szkulmowski, A. Szkulmowska, T. Bajraszewski, A. Kowalczyk, and M. Wojtkowski, “Flow velocity estimation using joint spectral and time domain optical coherence tomography,” Opt. Express 16(9), 6008–6025 (2008). [CrossRef]  

16. M. Hering, S. Herrmann, M. Banyay, K. Körner, and B. Jähne, “One-shot line-profiling white light interferometer with spatial phase shift for measuring rough surfaces,” Proc. SPIE 6188, 61880E (2006). [CrossRef]  

17. T. Klein, W. Wieser, L. Reznicek, A. Neubauer, A. Kampik, and R. Huber, “Multi-mhz retinal oct,” Biomed. Opt. Express 4(10), 1890–1908 (2013). [CrossRef]  

18. C. McCutchen, “Generalized source and the van cittert–zernike theorem: a study of the spatial coherence required for interferometry,” J. Opt. Soc. Am. A 56(6), 727–733 (1966). [CrossRef]  

19. V. Ryabukho, D. Lyakin, and M. Lobachev, “Longitudinal pure spatial coherence of a light field with wide frequency and angular spectra,” Opt. Lett. 30(3), 224–226 (2005). [CrossRef]  

20. P. Pavliček, M. Halouzka, Z. Duan, and M. Takeda, “Spatial coherence profilometry on tilted surfaces,” Appl. Opt. 48(34), H40–H47 (2009). [CrossRef]  

21. C. Falldorf, M. Agour, F. Thiemicke, and R. B. Bergmann, “Lens-based phase retrieval under spatially partially coherent illumination - part I: Theory,” Opt. Lasers Eng. 139, 106507 (2021). [CrossRef]  

22. M. Agour, C. Falldorf, F. Thiemicke, and R. B. Bergmann, “Lens-based phase retrieval under spatially partially coherent illumination - part II: Shape measurements,” Opt. Lasers Eng. 139, 106407 (2021). [CrossRef]  

23. M. Agour, C. Falldorf, and R. B. Bergmann, “Fast form measurements using digital micro-mirror device in imaging with partially coherent illumination,” Opt. Lett. 45(22), 6154–6157 (2020). [CrossRef]  

24. M. Agour, P. Huke, C. v. Kopylow, and C. Falldorf, “Shape measurement by means of phase retrieval using a spatial light modulator,” AIP Conf. Proc. 1236, 265–270 (2010). [CrossRef]  

25. C. Falldorf, C. V. Kopylow, R. B. Bergmann, and M. Agour, “Measurement of thermally induced deformations by means of phase retrieval,” in Proceedings of IEEE International Symposium on Optomechatronic Technologies, (IEEE, 2010), pp. 1–5.

26. A. Doblas, E. Sánchez-Ortiga, G. Saavedra, J. Sola-Pikabea, M. Martínez-Corral, P.-Y. Hsieh, and Y.-P. Huang, “Three-dimensional microscopy through liquid-lens axial scanning,” Proc. SPIE 9495, 949503 (2015). [CrossRef]  

27. C.-S. Kim, W. Kim, K. Lee, and H. Yoo, “High-speed color three-dimensional measurement based on parallel confocal detection with a focus tunable lens,” Opt. Express 27(20), 28466–28479 (2019). [CrossRef]  

28. H. jun Jeong, H. Yoo, and D. Gweon, “High-speed 3-D measurement with a large field of view based on direct-view confocal microscope with an electrically tunable lens,” Opt. Express 24(4), 3806–3816 (2016). [CrossRef]  

29. J. W. Goodman, Introduction to Fourier optics (Roberts and Company Publishers, 2005).

30. B. E. Saleh and M. C. Teich, Fundamentals of photonics (John Wiley & Sons, 2019).

31. J. Pawley, Handbook of biological confocal microscopy, vol. 236 (Springer Science & Business Media, 2006).

32. J.-M. Geusebroek, F. Cornelissen, A. W. Smeulders, and H. Geerts, “Robust autofocusing in microscopy,” Cytometry 39(1), 1–9 (2000). [CrossRef]  

33. W. Kapłonek, K. Nadolny, and G. M. Królczyk, “The use of focus-variation microscopy for the assessment of active surfaces of a new generation of coated abrasive tools,” Meas. Sci. Rev. 16(2), 42–53 (2016). [CrossRef]  

34. R. Danzl, F. Helmli, and S. Scherer, “Focus variation–a robust technology for high resolution optical 3d surface metrology,” Strojniski Vestnik / J. Mech. Eng. 57, 245–256 (2011). [CrossRef]  

Supplementary Material (3)

Visualization 1: The media shows the contrast variation within the depth scan process of a tilted plate.
Visualization 2: This video illustrates how the reconstruction process works.
Visualization 3: The video shows the contrast variation between the three step plateaus of a step standard object.
