Optica Publishing Group

Surface defect measurement of ICF capsules under a limited depth of field

Open Access

Abstract

A surface defect detection device based on null interferometric microscopy (NIM) enables the measurement of surface defects in inertial confinement fusion (ICF) capsules. However, the large numerical aperture of the microscope objective in NIM makes the depth of field (DOF) of the system shallow, limiting the field of view (FOV) of the measurement. To expand the measurement FOV, a reconstruction method for defocused surface defects in the FOV is presented: the angular spectrum diffraction model from the surface to the tilted plane is established, and a phase recovery method for the defocused surface defects is developed from the theory of angular spectrum diffraction. Both simulated and experimental results show that the proposed method can recover the phase of surface defects in the defocused state and expand the measurement FOV, which improves the measurement accuracy and efficiency for the surface defects of ICF capsules.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Laser inertial confinement fusion (ICF) experiments are important for current research on efficient clean energy, astrophysics, and thermonuclear explosion simulations. The ICF capsules are the core elements, and any surface defects on them may result in the failure of the experiment. Therefore, high-precision measurement of the surface defects of microspheres, such as ICF capsules, is of great significance [1–4]. At Nanjing University of Science and Technology, a surface defect detection system based on a null interferometric microscope (NIM) was developed, which achieved stable measurement of ICF capsules of different diameters with high lateral and axial resolution [5,6]. To achieve high lateral resolution, microscope objectives with high numerical apertures are used in the interferometric system, resulting in a shallow depth of field (DOF) and a small effective measurement field of view (FOV). Therefore, in this study, methods for DOF expansion are investigated to expand the measurement FOV.

The methods for DOF expansion fall into two categories: non-image-synthesis methods that change the structure of the system, and image-synthesis methods that process multiple defocused images. In this study, we focus on the diffraction reconstruction method in the second category. Diffraction reconstruction technology in the field of digital holography is primarily based on holographic imaging. Because the recorded hologram contains the amplitude and phase information of the light field, processing the hologram with a numerical diffraction calculation method can recover three-dimensional information of the object over a wide FOV [7]. DOF extension methods have been used in the micro-holograms of single-celled organisms [8] and in digital holographic microscopy systems [9]. Several methods have been developed to achieve such reconstruction, including a convolutional neural network (CNN) approach [10] and an image fusion algorithm based on the complex discrete wavelet transform [11]. Meanwhile, the application of micro-holography has been extended to continuous surfaces with tilted angles, which has led to several studies on diffracted light field reconstruction and DOF extension for tilted surfaces. The Fresnel approximation diffraction algorithm was first investigated to calculate the diffraction field of a tilted plane in 1988 [12]. Diffraction reconstruction and DOF expansion of a tilted object plane were later achieved by applying the fast Fourier transform algorithm twice, based on the plane-wave angular spectrum theory [13].

In this study, aiming at the problem of limited DOF, an extension method based on the ICF capsule measuring model with NIM is presented. The curved surface was approximately replaced with a set of tilted planar segments, and a numerical calculation model of diffraction from plane to curved surface was established, allowing the effective measurement FOV to be expanded and the measurement efficiency and accuracy of the ICF capsules to be improved.

2. Algorithm principle

A schematic of the Linnik-type NIM is shown in Fig. 1, where the green and red parts are the imaging and interference light paths, respectively [5]. $W$ is the surface of the microsphere to be measured, which is spherical, and $W^{\prime}$ is the corresponding imaging surface. According to the imaging magnification relationship of the microscope objective, the axial magnification is the square of the lateral magnification; therefore, the image surface $W^{\prime}$ is no longer a sphere but a paraboloid. Because a microscope objective with a high numerical aperture has a shallow DOF, only the part within the DOF can be imaged clearly, while the defects outside it are defocused, as shown in Fig. 2(a). For a microsphere with a diameter of 1 mm, the effective measurement FOV is less than half of the FOV of the system. The acquired interferogram is shown in Fig. 2(b), and the comparison between Fig. 2(c) and 2(d) shows that the diffraction fringes outside the DOF have obvious defocus characteristics.

Fig. 1. Schematic of the Linnik-type null interferometric microscope (NIM).

Fig. 2. The FOV of the microscope (a) Relationship between clear imaging area and FOV; (b) Interferogram in the imaging FOV; (c) Defect image of defocus; and (d) Defect image in focus.

In this study, the angular spectrum diffraction model from the surface to the tilted plane is first established. Then, the acquisition method for the complex amplitude of the diffraction-surface test light and the optimization method for the diffraction distance are presented. Finally, the phase recovery of the defocused surface defects is achieved using the theory of angular spectrum diffraction.

2.1 Principle of angular spectrum diffraction between tilted planes

The classical diffraction formula of scalar diffraction theory can be expressed as follows:

$$U(x,y,d) = {F^{ - 1}}\{{F\{{{U_0}({x_0},{y_0},0)} \}H({f_x},{f_y})} \}$$
where ${U_0}({x_0},{y_0},0)$ and $U(x,y,d)$ represent the complex amplitude distributions on the diffraction and observation planes, respectively; $d$ represents the diffraction distance; ${f_x}$ and ${f_y}$ represent the frequency domain coordinates; and $H({f_x},{f_y})$ represents the diffraction transfer function. The diffraction process can be regarded as the transmission of a light wave field through a linear space-invariant system. To obtain the complex amplitude on the observation plane, the Fourier transform of the complex amplitude on the diffraction plane is multiplied by the diffraction transfer function, and an inverse Fourier transform is then performed. The diffraction process is also reversible: by changing the form of the transfer function, the complex amplitude distribution of the diffraction surface ${U_0}({x_0},{y_0},0)$ can be calculated from the observed surface $U(x,y,d)$ by the inverse calculation of diffraction. Applying the Fourier transform and its inverse to Eq. (1) yields:
$${U_0}({x_0},{y_0},0) = {F^{ - 1}}\left\{ {F\{{U(x,y,d)} \}\times \frac{1}{{H({f_x},{f_y})}}} \right\}.$$
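Equations (1) and (2) can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation; the grid pitch, wavelength, and function names are assumptions chosen for the example.

```python
import numpy as np

def asm_transfer(shape, pitch, wavelength, d):
    """Angular-spectrum transfer function H(fx, fy) for distance d."""
    ny, nx = shape
    fx = np.fft.fftfreq(nx, pitch)
    fy = np.fft.fftfreq(ny, pitch)
    FX, FY = np.meshgrid(fx, fy)
    fz2 = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.zeros(shape, dtype=complex)
    prop = fz2 > 0                      # keep propagating waves only
    H[prop] = np.exp(2j * np.pi * d * np.sqrt(fz2[prop]))
    return H

def asm_propagate(u0, pitch, wavelength, d):
    """Eq. (1): U = F^{-1}{ F{U0} . H }."""
    H = asm_transfer(u0.shape, pitch, wavelength, d)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

def asm_inverse(u, pitch, wavelength, d):
    """Eq. (2): inverse diffraction; 1/H equals H evaluated at -d."""
    return asm_propagate(u, pitch, wavelength, -d)
```

Propagating a band-limited field forward and then inverting with the same distance returns the original field, which is the reversibility used throughout this section.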

The classical diffraction formulas of scalar diffraction theory include the angular spectrum diffraction formula, the Kirchhoff formula, the Rayleigh–Sommerfeld formula, and the paraxial approximation of these three formulas, that is, the Fresnel diffraction integral. The first three formulas provide more accurate solutions to diffraction problems than the Fresnel diffraction integral. The transfer function of the angular spectrum diffraction formula and that of its inverse have analytical solutions, whereas the transfer functions of the inverse calculations of the Kirchhoff and Rayleigh–Sommerfeld formulas can only be expressed in the form of a Fourier transform without specific expressions, and the calculations are relatively complicated. Therefore, this study adopts the angular spectrum diffraction formula for diffraction calculations.

Angular spectrum diffraction theory is primarily used for diffraction calculations between parallel planes. When the diffraction plane is not parallel to the observation plane, the diffraction calculation can be achieved after a frequency-domain coordinate transformation of the parallel-plane case [14,15]. As shown in Fig. 3, three coordinate systems are defined: the principal coordinate system $(\hat{x},\hat{y},\hat{z})$, the diffraction plane coordinate system $({x_0},{y_0},{z_0})$, and the observation plane coordinate system $(x,y,z)$, where the optical axis is the $\hat{z}$ axis of the principal coordinate system. The diffraction plane coordinate system has the same origin as the principal coordinate system, and the complex amplitude ${g_0}({x_0},{y_0})$ of the diffraction plane is located in the plane $({x_0},{y_0},0)$. The origin of the observation plane coordinate system is located at $\hat{z} = d$, and the complex amplitude $g(x,y)$ of the observation plane is located in the plane $(x,y,0)$. The diffraction calculation from the complex amplitude ${g_0}({x_0},{y_0})$ on the tilted diffraction plane to the complex amplitude $g(x,y)$ on the tilted observation plane can be roughly divided into the following steps:

  • (1) A Fourier transform is performed on the complex amplitude ${g_0}({x_0},{y_0})$ of the diffraction plane to obtain its spectrum ${G_0}({f_{x0}},{f_{y0}})$ in the source plane $({x_0},{y_0},0)$.
  • (2) By transforming the coordinates in the frequency domain, the spectrum ${F_0}({\hat{f}_x},{\hat{f}_y})$ of the plane $(\hat{x},\hat{y},0)$ in the principal coordinate system is obtained from ${G_0}({f_{x0}},{f_{y0}})$.
  • (3) The spectrum ${F_d}({\hat{f}_x},{\hat{f}_y})$ of the plane $(\hat{x},\hat{y},d)$ in the principal coordinate system is calculated by using the spectral diffraction transfer function.
  • (4) The spectrum $G({f_x},{f_y})$ of the observation plane $(x,y,0)$ is acquired by the coordinate transformation in the frequency domain.
  • (5) The inverse Fourier transform is performed to obtain the complex amplitude $g(x,y)$ of the optical field in the observation plane $(x,y,0)$.
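The five steps above can be sketched numerically for the simplest configuration of a tilted source plane and an observation plane perpendicular to the optical axis, so that steps (4)–(5) reduce to a plain inverse transform. This is a minimal illustration, not the authors' code: the rotation matrix `R` plays the role of the transformation in Eq. (3), the spectrum is resampled by linear interpolation, and the spectral Jacobian factor that a rigorous tilted-plane transform requires is deliberately omitted for brevity.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def tilted_asm(g0, pitch, wavelength, d, R):
    """Propagate field g0 on a tilted source plane to a parallel
    observation plane at distance d along the optical axis.
    R maps principal-frame frequency vectors into the source frame;
    the spectral Jacobian factor is omitted for brevity."""
    n = g0.shape[0]
    f = np.fft.fftshift(np.fft.fftfreq(n, pitch))
    # step (1): spectrum of the diffraction (source) plane
    G0 = np.fft.fftshift(np.fft.fft2(g0))
    interp_re = RegularGridInterpolator((f, f), G0.real,
                                        bounds_error=False, fill_value=0.0)
    interp_im = RegularGridInterpolator((f, f), G0.imag,
                                        bounds_error=False, fill_value=0.0)
    FX, FY = np.meshgrid(f, f)
    fz2 = 1.0 / wavelength**2 - FX**2 - FY**2
    prop = fz2 > 0
    FZ = np.sqrt(np.where(prop, fz2, 0.0))
    # step (2): map principal-frame frequencies into the source frame
    fvec = np.stack([FX, FY, FZ])              # shape (3, n, n)
    f0 = np.einsum('ij,jkl->ikl', R, fvec)     # f0 = R f_hat
    pts = np.stack([f0[1].ravel(), f0[0].ravel()], axis=-1)  # (fy0, fx0)
    F0 = (interp_re(pts) + 1j * interp_im(pts)).reshape(n, n)
    # step (3): propagate the principal-frame spectrum over distance d
    Fd = F0 * np.exp(2j * np.pi * d * FZ) * prop
    # steps (4)-(5): observation plane is untilted here, so just invert
    return np.fft.ifft2(np.fft.ifftshift(Fd))
```

With `R` set to the identity, the routine reduces exactly to ordinary parallel-plane angular spectrum propagation, which provides a convenient correctness check.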

Fig. 3. Coordinate relationship between the tilted diffraction plane and the tilted observation plane.

2.2 Equivalent substitution model of tilted planar segments to curved surface segments

When the diffraction surface is curved, an equivalent substitution model using a set of tilted planar segments is established so that the surface can be handled by the method of diffraction between tilted planes. Because the Fourier transform of a triangular aperture has an analytical solution under plane-wave illumination, and a plane is determined by the three vertices of a triangle, triangular planar segments are used as the tilted planar segments to approximate the curved surface. As shown in Fig. 4(b), the measured surface area of the microsphere is calculated from the parameters of the microsphere and the imaging system, and the area outside the DOF is divided equally into N layers along the axial direction; that is, the axial depth D is equally divided into N parts. Then, the microsphere surface is divided into M regions by rotating about the Z-axis clockwise in steps of angle $\beta$. Each subregion where an axial layer intersects a rotational sector is split into two triangular planar segments, thus dividing the microsphere surface into $N \times 2M$ triangular planar segments ${\delta _{nm}}$ (where $n = 0,1,2 \ldots ,N;$ $m = 0,1,2 \ldots ,2M$), as shown in Fig. 4(a). As shown in Fig. 4(c), each triangular tilted planar segment ${\delta ^{\prime}_{nm}}$ and its corresponding planar segment distribution ${\delta ^{\prime\prime}_{nm}}$ in the detector plane are determined by ray tracing, and its position and the spatial coordinate transformation relationship with the detector plane are calculated using the Gaussian formula of paraxial imaging to roughly determine the defocused distance. Finally, the light field information of the entire image plane is obtained by combining the segments.
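The division into $N \times 2M$ triangles can be illustrated with a short routine that triangulates a spherical zone into N axial layers and M rotational sectors, two triangles per cell. The function name and parameterization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def triangulate_cap(radius, z_min, z_max, N, M):
    """Divide the spherical zone between heights z_min and z_max into
    N axial layers and M rotational sectors; each quadrilateral cell
    is split into two triangles, giving N*2M triangular segments.
    Returns an array of triangles, each a (3, 3) array of xyz vertices."""
    z = np.linspace(z_min, z_max, N + 1)          # axial layering
    phi = np.linspace(0.0, 2 * np.pi, M + 1)      # rotational division
    def pt(zi, pj):
        r = np.sqrt(max(radius**2 - zi**2, 0.0))  # ring radius at height zi
        return np.array([r * np.cos(pj), r * np.sin(pj), zi])
    tris = []
    for i in range(N):
        for j in range(M):
            p00, p01 = pt(z[i], phi[j]), pt(z[i], phi[j + 1])
            p10, p11 = pt(z[i + 1], phi[j]), pt(z[i + 1], phi[j + 1])
            tris.append(np.stack([p00, p01, p10]))   # lower triangle
            tris.append(np.stack([p01, p11, p10]))   # upper triangle
    return np.array(tris)
```

For example, seven layers and ten sectors (the values used in Section 4) yield $7 \times 2 \times 10 = 140$ triangular segments, with every vertex lying on the sphere.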

Fig. 4. Schematic of the division of the tilted plane model of the object space and the image space (a) segmentation of the direction of rotation of the object; (b) axial layering diagram of the object; (c) a triangular tilted planar segment in the image space.

The coordinate transformation of a tilted triangular segment with respect to the detector plane is shown in Fig. 5(a). The principal coordinate system $(x,y,z)$ takes the Z-axis as the optical axis, and the diffraction coordinate system $({x_{0k}},{y_{0k}},{z_{0k}})$ for the kth tilted planar segment is established, which takes the foot ${O_k}$ of the perpendicular from ${P^{\prime}_3}$ to the bottom edge ${P^{\prime}_1}{P^{\prime}_2}$ as the origin, ${O_k}{P^{\prime}_3}$ as the X-axis, and ${P^{\prime}_1}{P^{\prime}_2}$ as the Y-axis. To simplify the coordinate transformation, the triangular planar segment is translated such that the origin ${O_k}$ of the diffraction coordinate system is located at $(0,0,{z_k})$ of the principal coordinate system, as shown in Fig. 5(b).

Fig. 5. Spatial coordinate transformation relationship between diffraction plane coordinate system and principal coordinate system. (a) initial relative position of the two coordinate systems; (b) translation of the origin of the diffraction plane coordinate system to the principal coordinate system at $(0,0,{z_k})$.

The transformation relationship between the diffraction coordinate system and the principal coordinate system is as follows:

$$\left[ {\begin{array}{{c}} {{x_{0k}}}\\ {{y_{0k}}}\\ {{z_{0k}}} \end{array}} \right] = T\left[ {\begin{array}{{c}} x\\ y\\ z \end{array}} \right]$$
where T represents the transformation matrix; x, y, and z represent the axes of the principal coordinate system; and ${x_{0k}},{y_{0k}},{z_{0k}}$ represent the axes of the diffraction coordinate system. Assuming that the angles between x and ${x_{0k}},{y_{0k}},{z_{0k}}$ are ${\alpha _{xk}},{\alpha _{yk}},{\alpha _{zk}},$ the angles between y and ${x_{0k}},{y_{0k}},{z_{0k}}$ are ${\beta _{xk}},{\beta _{yk}},{\beta _{zk}},$ and the angles between z and ${x_{0k}},{y_{0k}},{z_{0k}}$ are ${\gamma _{xk}},{\gamma _{yk}},{\gamma _{zk}},$ then the transformation matrix T consists of the corresponding direction cosines:
$$T = \left[ {\begin{array}{{ccc}} {\cos {\alpha_{xk}}}&{\cos {\beta_{xk}}}&{\cos {\gamma_{xk}}}\\ {\cos {\alpha_{yk}}}&{\cos {\beta_{yk}}}&{\cos {\gamma_{yk}}}\\ {\cos {\alpha_{zk}}}&{\cos {\beta_{zk}}}&{\cos {\gamma_{zk}}} \end{array}} \right].$$

For each triangular planar segment, three vertex coordinates can be determined, and the vertex coordinates of the corresponding image triangular planar segment can be calculated by ray tracing. Therefore, the tilted plane where the planar segment is located can be determined by the coordinates of the three vertices, and the tilt angles in the matrix T are obtained by calculating the angles between the tilted plane and the X, Y and Z axes respectively.
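The construction of the diffraction frame from the three image-space vertices (Fig. 5) can be sketched as follows: the rows of the returned matrix are the direction cosines of the segment axes expressed in the principal frame. This is an illustrative sketch under the stated construction, not the authors' code, and the function name is an assumption.

```python
import numpy as np

def segment_frame(p1, p2, p3):
    """Build the rotation matrix T whose rows are the diffraction-frame
    axes (x0k, y0k, z0k) expressed in the principal frame, following the
    construction of Fig. 5: the foot O_k of the perpendicular from p3 to
    edge p1p2 is the origin, O_k->p3 is the X-axis, p1->p2 the Y-axis."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    e = p2 - p1
    y0 = e / np.linalg.norm(e)                 # Y-axis along the bottom edge
    foot = p1 + np.dot(p3 - p1, y0) * y0       # foot of the perpendicular O_k
    x0 = (p3 - foot) / np.linalg.norm(p3 - foot)   # X-axis toward p3
    z0 = np.cross(x0, y0)                      # completes the orthonormal triad
    return np.stack([x0, y0, z0]), foot        # T maps principal -> diffraction
```

By construction T is orthonormal, and applying it to the vector from $O_k$ to ${P^{\prime}_3}$ yields a vector along the segment's own X-axis.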

2.3 Acquisition of complex amplitudes of diffraction surface testing light

A polarization camera is used in the surface defect detection system to collect the phase-shifting interferograms, which record only intensity information. However, the numerical calculation of diffraction operates on the complex amplitude distribution of the light field; therefore, it is necessary to first calculate the complex amplitude distribution of the test light from the phase-shifting interferograms, that is, the complex amplitude distribution of the light field on the diffraction plane.

According to the interference theory, the intensity distribution of two coherent beams superimposed on each other can be expressed as follows:

$$I(x,y) = {I_d}(x,y) + {I_a}(x,y)\cos ({\phi _\Delta }(x,y) + \delta (t))$$
where ${I_d}(x,y)$ represents the background intensity, ${I_a}(x,y)$ represents the intensity modulation, ${\phi _\Delta }(x,y)$ represents the phase difference between the two coherent beams, and $\delta (t)$ denotes the phase shift. The four-step phase-shifting algorithm is applied to obtain ${I_a}$ and ${\phi _\Delta }$ of the test light. The amplitude and phase of the test light are then calculated and combined to obtain its complex amplitude distribution, as follows:
  • (1) Test light amplitude component
A hollow microsphere has an inner and outer double-layer structure, and its material has a high transmittance. Therefore, when a spherical wave is incident on the microsphere, in addition to the light reflected from the front outer surface as the test light, the light reflected from the other surfaces is also superimposed with the reference light on the detection plane. As shown in Fig. 6, the incident spherical wave converges at the center of the microsphere, and the light reflected from the front inner surface, rear inner surface, and rear outer surface coincides on the detector with the light reflected from the front outer surface, which serves as the test light. It is assumed that the amplitude of the test light is ${A_t}$, the amplitudes of the reflected light from the other surfaces are ${A_{o1}},{A_{i1}}$ and ${A_{i2}}$, and the amplitude of the reference light is ${A_r}$. Because the coherence length of the light source is less than the thickness of the microsphere, when the system achieves the null interferometric condition, the reflected light from the remaining surfaces and the reference light are superimposed incoherently. Therefore, the interference intensity can be expressed as follows:
$$\begin{aligned} I &= A_{o1}^2 + A_{i1}^2 + A_{i2}^2 + A_t^2 + A_r^2 + 2{A_t}{A_r}\cos {\phi _\Delta }\\ &= {I_{other}} + {I_t} + {I_r} + 2{A_t}{A_r}\cos {\phi _\Delta } \end{aligned}.$$

Comparing Eq. (6) with Eq. (5), it can be observed that ${I_d} = {I_{other}} + {I_t} + {I_r}$ and ${I_a} = 2{A_t}{A_r}$. To obtain the test light amplitude distribution ${A_t}$, the test light is blocked to obtain the intensity distribution ${I_r}$ of the reference light, and then ${A_t}$ can be calculated according to ${I_a}$ as follows:

$${A_t} = \frac{{{I_a}}}{{2\sqrt {{I_r}} }}.$$

Fig. 6. Schematic diagram of reflection on each surface of microspheres.

The complex amplitude distribution of the test light is:

$${\tilde{U}_t} = {A_t}\textrm{exp} (i{\phi _t}) = \frac{{{I_a}}}{{2\sqrt {{I_r}} }}\textrm{exp} (i{\phi _t}).$$
  • (2) Test light phase component

The phase component ${\phi _t}$ of the test light can be expressed as follows:

$${\phi _t} = {\phi _\Delta } + {\phi _r}$$
where the phase ${\phi _\Delta }$ is obtained by the four-step phase-shifting algorithm, and the reference light can be regarded as a standard spherical wave emitted from the beam convergence point $Q^{\prime}$ in Fig. 1. Assuming that the distance from $Q^{\prime}$ to the detector is ${R_s}$, the spherical wave phase factor corresponding to ${\phi _r}$ can be expressed as follows:
$${\phi _r} = \textrm{exp} \left[ { - \frac{{ik}}{{2{R_s}}}({x^2} + {y^2})} \right].$$

The complex amplitude transmittance of a positive lens with focal length $f^{\prime}$ is expressed as follows:

$$t(x,y) = \textrm{exp} \left[ { - \frac{{ik}}{{2f^{\prime}}}({x^2} + {y^2})} \right].$$

From Eqs. (10) and (11), the spherical phase ${\phi _r}$ can be equated to a positive lens with a focal length ${R_s}$ on the detector plane.

As shown in Fig. 7, any point $P^{\prime}$ on the image surface $W^{\prime}$ can be regarded as the image point of a virtual object point $Q^{\prime}$ after being imaged by the equivalent lens, according to the object-image conjugate transformation method. Assuming that the defocused distance of point $P^{\prime}$ is ${d_k}$, the distance from point $Q^{\prime}$ to the equivalent lens, that is, the detector, is calculated by the Gaussian formula for lens imaging:

$${d^{\prime}_k} = \frac{{{d_k}f^{\prime}}}{{({d_k} - f^{\prime})}}.$$

Thus, ${\phi _r}$ and ${\tilde{U}_t}$ are obtained. Then, the complex amplitude ${\tilde{U^{\prime}}_t}(x,y)$ of the virtual object point $Q^{\prime}$ is calculated by the inverse operation of diffraction, and the complex amplitude ${\tilde{U}_d}(x,y)$ of the image point $P^{\prime}$ is obtained by the object-image conjugate relationship of the equivalent lens as follows:

$${\tilde{U}_d}(x,y) = \frac{1}{M}\textrm{exp} \left[ {i\frac{\pi }{{\lambda ({d_k} - f^{\prime})}}({x^2} + {y^2})} \right]{\tilde{U}^{\prime}_t}(\frac{x}{M},\frac{y}{M})$$
where $M ={-} {{{d_k}} / {{{d^{\prime}}_k}}}$ denotes the imaging magnification of the equivalent lens.

Fig. 7. Schematic of object-image conjugate of the equivalent lens.

Equation (13) shows that the process of lens imaging primarily consists of enlarging the coordinate scale of the object space by M, then multiplying by a spherical wave phase factor with a radius of $({d_k} - f^{\prime})$, and changing the amplitude to ${1 / M}$ times the original.

2.4 Optimization of the diffraction distance

It is necessary to optimize the diffraction distance to obtain an accurate focusing result using diffraction inversion. According to the auto-focusing theory in digital holography [16–18], the light field of the plane located at an accurate focusing position must have a maximum or minimum value in the local energy distribution or other parameters, such as image sharpness. Therefore, a series of plane light field distributions at different diffraction distances can be calculated by the inverse operation of diffraction, and then the accurate diffraction distance can be determined by setting an evaluation function.

As shown in Fig. 8, the auto-focusing process in digital holography is as follows: first, an approximate diffraction distance interval $[{{L_1},{L_2}} ]$ is determined from the initial results, specific diffraction distances ${d_1},{d_2}, \ldots ,{d_n}$ are generated at intervals of $\Delta d$, and n inverse diffraction operations are performed to obtain n planar light field distributions; then, the region to be evaluated is selected as the characteristic region of interest (ROI), the evaluation function is calculated over it, and the extreme value (Fmax) determines the accurate diffraction distance.

Fig. 8. Schematic diagram of the auto-focusing process in digital holography.

In the proposed measurement system, because diffraction causes changes in the amplitude of the plane at different diffraction distances, the amplitude integral is calculated after extracting the corresponding characteristic region, that is, the triangular planar segment, and the distance corresponding to the maximum value is the accurate focusing distance.
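The sweep described above can be sketched as follows, using a plain parallel-plane angular spectrum propagator and the amplitude-integral metric over an ROI. This is a minimal illustration, not the system software; the parameters are assumptions, and the amplitude-integral metric is only discriminating when the ROI is comparable in size to the focused feature.

```python
import numpy as np

def propagate(u, pitch, wl, d):
    """Plain angular-spectrum propagation between parallel planes."""
    n = u.shape[0]
    f = np.fft.fftfreq(n, pitch)
    FX, FY = np.meshgrid(f, f)
    fz2 = 1.0 / wl**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * d * np.sqrt(np.maximum(fz2, 0))) * (fz2 > 0)
    return np.fft.ifft2(np.fft.fft2(u) * H)

def autofocus(u_det, pitch, wl, d1, d2, steps, roi):
    """Sweep back-propagation distances in [d1, d2] and return the one
    maximizing the amplitude integral over the region of interest."""
    ds = np.linspace(d1, d2, steps)
    r0, r1, c0, c1 = roi
    metric = [np.abs(propagate(u_det, pitch, wl, -d))[r0:r1, c0:c1].sum()
              for d in ds]
    return ds[int(np.argmax(metric))]
```

In a quick synthetic check, a small Gaussian spot propagated 2 mm to a simulated detector and then swept over candidate distances is refocused at the correct distance by the amplitude-integral criterion.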

2.5 Summary of the proposed algorithm

Based on the above theory, the flow chart of the proposed diffraction calculation algorithm for microsphere surface defect detection is shown in Fig. 9.

  • (1) Collect four phase-shifting interferograms ${I_1},{I_2},{I_3},{I_4}$, block the test light to obtain the reference light intensity ${I_r}$, and reconstruct the initial test light complex amplitude ${\tilde{U}_t}$.
  • (2) The microsphere surface is divided into $N \times 2M$ triangular planar segments, and for any planar segment ${\delta _k}$, the approximate defocused distance ${d_k}$, the coordinate transformation matrix ${T_k}$, and the corresponding distribution ${\delta ^{\prime\prime}_k}$ can be calculated using the imaging relationship.
  • (3) Extract the spherical wave phase of the initial diffracted complex amplitude of the kth triangular planar segment, obtain the complex amplitude distribution ${\tilde{U}_d}$, and calculate the new diffraction distance ${d^{\prime}_k}$ using the object-image conjugate relationship.
  • (4) Obtain the exact diffraction distance d based on ${d^{\prime}_k}$ and the auto-focusing algorithm.
  • (5) Obtain the complex amplitude distribution ${\tilde{U}_k}$ of the kth planar segment.
  • (6) Extract the phase distribution ${\varphi _k}$ of ${\tilde{U}_k}$.

Fig. 9. Algorithm flow chart.

3. Simulation

The diffraction calculation of a tilted planar segment was simulated based on the above algorithm. First, a test wavefront and defect phase are constructed to simulate the distribution of defects on the surface of a microsphere. Then, according to the actual system parameters, the diffraction process of the wavefront from the image surface to the detector plane is simulated, and the defocusing state of the defect after imaging by the system is calculated to simulate the morphological changes and other characteristics of the defect after defocusing.

A study of the morphology of many isolated defects under clear imaging shows that their morphology conforms to a Gaussian distribution; therefore, a two-dimensional Gaussian function is used to simulate an isolated defect. The two-dimensional Gaussian function can be expressed as follows:

$$G(x,y) = A\textrm{exp} \left[ { - (\frac{{{x^2}}}{{2\sigma_x^2}} + \frac{{{y^2}}}{{2\sigma_y^2}})} \right]$$
where A represents the peak of the Gaussian function, which corresponds to the height of the defect model, and ${\sigma _x}$ and ${\sigma _y}$ represent the standard deviations in the X- and Y-directions, which describe the widths in the respective directions. The relationship between $\sigma$ (${\sigma _x}$ or ${\sigma _y}$) and the full width at half maximum (FWHM) H of the defect is
$$H = 2.355 \times \sigma.$$
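The defect model of Eqs. (14) and (15) can be generated directly. In the sketch below, the 500 nm height and 10 µm FWHM match the example used in the text, while the grid size, pitch, and function name are illustrative assumptions.

```python
import numpy as np

def gaussian_defect(shape, pitch, height, fwhm_x, fwhm_y):
    """Two-dimensional Gaussian defect model, Eqs. (14)-(15):
    sigma = FWHM / 2.355."""
    sx, sy = fwhm_x / 2.355, fwhm_y / 2.355
    n_y, n_x = shape
    x = (np.arange(n_x) - n_x // 2) * pitch
    y = (np.arange(n_y) - n_y // 2) * pitch
    X, Y = np.meshgrid(x, y)
    return height * np.exp(-(X**2 / (2 * sx**2) + Y**2 / (2 * sy**2)))

# Example matching the text: 500 nm high, 10 um FWHM, on a 0.5 um grid.
defect = gaussian_defect((256, 256), 0.5e-6, 500e-9, 10e-6, 10e-6)
```

Measuring the width of the central profile at half the peak height recovers the specified FWHM to within one grid pitch.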

Because the statistical results of the measured defects show that the height of most defects is between 100 and 500 nm and the FWHM is between 5 and 20 µm, a defect with a height of 500 nm and an FWHM of 10 µm is considered as an example to simulate the effect of defocusing on the defect morphology, with the defocus distance set to 20 mm. The three-dimensional phase distribution of the defect is shown in Fig. 10. In Fig. 10(b), two profiles in the X- and Y-directions are selected and compared with the simulated in-focus and defocused profiles, and the deviations between the reconstructed and in-focus profiles and between the defocused and in-focus profiles are also calculated, as shown in Fig. 11. It can be observed from the figures that the reconstructed profiles are consistent with the simulated in-focus ones, with a maximum absolute error of 3.2 nm. Therefore, the proposed method can reconstruct a defocused morphology with high accuracy.

Fig. 10. Defocused defect (a) Three-dimensional image of the defect; (b) Two-dimensional image of the defect.

Fig. 11. Reconstruction results and errors in the X- and Y-directions (a) Reconstruction result in X-direction; (b) Reconstruction result in Y-direction; (c) Reconstruction error in X-direction; (d) Reconstruction error in Y-direction.

4. Experiment and result analysis

In this study, a Linnik-type NIM was used to measure the microspheres. First, the interference intensity distribution $I(x,y)$ was recorded by the polarization camera. Then, the test light was blocked to obtain the corresponding reference light intensity ${I_r}(x,y)$. Meanwhile, the radius of curvature ${R_s}$ of the spherical wave phase in the test light can be roughly measured from the convergence point of the beam in front of the polarization camera. The measured interferogram and reference light intensity image are shown in Fig. 12.

Fig. 12. Images acquired by the polarization camera (a) interferogram; (b) reference light intensity image.

4.1 Calculation process

The calculation proceeds in the following three steps.

  • (1) Obtaining the complex amplitude distribution of the initial surface

    Four phase-shifting interferograms, ${I_1},{I_2},{I_3},$ and ${I_4}$ were obtained by separating Fig. 12(a). Figure 12(b) was also separated into four images, and the first image at 0° phase shift was used as the reference light intensity ${I_r}$. Based on the above, the test light complex amplitude ${\tilde{U}_t}$ of the detection surface of the polarization camera was obtained, as shown in Fig. 13.

  • (2) Establishment of the tilted planar segments model

    A microsphere with a diameter of 0.8 mm was measured in the experiment. Based on the microsphere diameter and system parameters, the measured area of the microsphere surface was 320 µm, and the corresponding axial distance range was 33 µm. Based on the numerical aperture of the microscope objective, the magnification, and the resolution of the polarization camera, the DOF was 3.4 µm, and the corresponding clear imaging area was approximately 100 µm. The axial distance was divided into seven layers at equal intervals, and the rotation direction was divided into 10 areas clockwise with $\beta = 36^\circ$, dividing the image surface into 140 triangular planar segments. The red triangular segment with an isolated defect in Fig. 13 was tilted $34^\circ$ relative to the X-axis and $0^\circ$ relative to the Y-axis, and the approximate defocused distance was 3.1 mm. According to Eq. (4), the coordinate transformation matrix T is as follows:

    $$T = \left[ {\begin{array}{ccc} {\cos 34^\circ }&0&{ - \sin 34^\circ }\\ 0&1&0\\ {\sin 34^\circ }&0&{\cos 34^\circ } \end{array}} \right].$$

  • (3) Determination of diffraction distance and inverse operation of diffraction

Fig. 13. Distribution of test light complex amplitude in the plane of polarization camera.

According to the algorithm flow, the spherical wave phase factor in ${\tilde{U}_t}$ was extracted, which is equivalent to a lens with focal length $f^{\prime} = {R_s}$. For the 0.8 mm microsphere, this focal length was approximately 117 mm. Then, the complex amplitude ${\tilde{U}_d}$ after extracting the spherical wave phase factor was obtained.

Because the equivalent lens changes the object-image relationship, the diffraction distance of the above defect was recalculated according to the Gaussian formula, and the diffraction distance interval was roughly determined as [−5.5, −1.5] mm, which was divided into 20 equidistant layers with an interval of 0.2 mm between adjacent diffraction distances.

For the tilted plane model, the amplitude-integral evaluation function is computed via diffraction between tilted planes. The Fourier transform was performed on ${\tilde{U}_t}(x,y)$ and multiplied by the transfer function ${H_n}({f_x},{f_y})$ to obtain the spectral distribution ${G_n}({f_x},{f_y})$ in the observation plane as follows:

$${G_n}({f_x},{f_y}) = F\{{{{\tilde{U}}_t}(x,y)} \}{H_n}({f_x},{f_y}).$$

The matrix T of Eq. (16) was then used to transform the frequency-domain coordinates of the observation plane into those of the tilted plane:

$$\left[ {\begin{array}{{c}} {{{\hat{f}}_x}}\\ {{{\hat{f}}_y}}\\ {{{\hat{f}}_z}} \end{array}} \right] = T\left[ \begin{array}{l} {f_x}\\ {f_y}\\ {f_z} \end{array} \right].$$

The reverse Fourier transform was performed on ${\hat{G}_n}({\hat{f}_x},{\hat{f}_y})$ to obtain the complex amplitude distribution of the tilted plane as follows:

$${\hat{U}_n}(x,y) = {F^{ - 1}}\{{{{\hat{G}}_n}({{\hat{f}}_x},{{\hat{f}}_y})} \}.$$

Thus, the amplitude-integral evaluation function of the tilted plane is expressed as follows:

$${M_d} = {\sum _x}{\sum _y}|{{{\hat{U}}_n}(x,y)} |.$$

Using the auto-focusing algorithm on ${M_d}$, the accurate diffraction distance of the tilted plane was obtained as −2.552 mm, and then the accurate complex amplitude was obtained, and its phase distribution was extracted.

4.2 Experimental results

For the defect in the triangular region of Fig. 13, the results obtained directly with the four-step phase-shifting algorithm in the defocused state are shown in Fig. 14. The adjustment mechanism was then rotated so that the defect reached the center of the FOV, where it is in accurate focus, as shown in Fig. 15. Comparing Figs. 14 and 15, it can be observed that the phase of the defocused defect is inaccurate. The proposed algorithm was then applied to the same defect; the results are shown in Fig. 16. Figure 16(a) shows the calculated results of the triangular area, and Figs. 16(b) and 16(c) show the phase distribution of the defect.

Fig. 14. Calculation results of phase-shifting interferogram (a) Full FOV; (b) Three-dimensional image of defocused defects; and (c) Two-dimensional image of defocused defects.

Fig. 15. Calculated results after FOV rotation (a) Full FOV; (b) Three-dimensional image of focus defect; and (c) Two-dimensional image of focus defect.

Fig. 16. Defect results of tilted plane model reconstruction (a) Triangular area test results; (b) Three-dimensional reconstruction results; and (c) Two-dimensional reconstruction results.

To further verify the recovery effect and accuracy of the model, we extracted the phase profiles of the defect in the X- and Y-directions from Figs. 14–16, as shown in Figs. 17(a) and 17(b), respectively. The deviations of the reconstructed and the defocused profiles from the in-focus profile were then calculated, as shown in Figs. 17(c) and 17(d), with maximum absolute deviations of 77.5 nm and 179.8 nm, respectively.

Fig. 17. Phase profiles in different directions (a) X-direction; (b) Y-direction; (c) reconstruction error in X-direction; and (d) reconstruction error in Y-direction.

It can be observed from Figs. 14 to 17 that the defect morphology at the top changed because of defocusing, and that the diffraction calculation algorithm based on the tilted-planar-segment model can reconstruct the defocused defect and recover its Gaussian morphological characteristics. To verify the reconstruction effect of the tilted-plane diffraction model, the one-dimensional form of Eq. (14) was fitted to the X-direction phase profile, and Eq. (15) was used to obtain the half-height width of the defect. The fitting results are shown in Table 1. Taking the fitting results of the in-focus defect as the true values, the relative errors of the other two results are also listed in Table 1.


Table 1. Results of Gaussian function fitting in X-direction

In the X-direction, the relative errors of the defect height were 4.2% for the defocused result and −1.2% for the reconstructed result, and the relative errors of the half-height width were 23.8% and 0.9%, respectively. The small errors of the reconstruction indicate that the angular spectrum diffraction model from the surface to the tilted plane can reconstruct defocused defects with high accuracy.
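The Gaussian fitting step can be sketched as follows, using the one-dimensional form of Eq. (14) and the half-height width $H = 2.355\sigma$ of Eq. (15). The profile values here are synthetic, chosen only to exercise the fit; they are not the measured defect data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, A, x0, sigma):
    """One-dimensional form of Eq. (14)."""
    return A * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))

def fit_defect_profile(x, phase_profile):
    """Fit a phase profile and return the defect height and FWHM (Eq. (15))."""
    p0 = [phase_profile.max(), x[np.argmax(phase_profile)], (x[-1] - x[0]) / 10]
    (A, x0, sigma), _ = curve_fit(gaussian, x, phase_profile, p0=p0)
    return A, 2.355 * abs(sigma)

# Synthetic check: a 200 nm high defect with sigma = 2 um (illustrative numbers)
x = np.linspace(-10e-6, 10e-6, 201)
profile = gaussian(x, 200e-9, 0.0, 2e-6)
height, fwhm = fit_defect_profile(x, profile)

# Relative error against the known truth, as tabulated in Table 1
rel_err_height = (height - 200e-9) / 200e-9
```

The same fit applied to the in-focus, defocused, and reconstructed profiles yields the height and half-height-width comparisons reported in Table 1.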

5. Summary

This study was based on a NIM surface defect detection device. Because the measurement FOV is limited by the DOF, a numerical calculation method for angular spectrum diffraction based on tilted planar segments was studied. The simulation results show that the reconstructed morphology is consistent with the simulated in-focus morphology, with a maximum absolute error of 3.2 nm. The experimental results also show that the method recovers the defocused morphology with high precision: the maximum absolute error of the reconstructed profiles is 77.5 nm, the relative error of the defect height is −1.2%, and the relative error of the half-height width is 0.9%. The measurement FOV is increased more than threefold, from 100 to 320 µm, improving measurement efficiency while maintaining high accuracy.

Funding

National Natural Science Foundation of China (U2031131, 61975079); Key Laboratory Foundation of Equipment Advanced Research (6142604200511).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. R. Betti and O. A. Hurricane, “Inertial-confinement fusion with lasers,” Nat. Phys. 12(5), 435–448 (2016). [CrossRef]  

2. B. D. Blackwell, J. F. Caneses, C. M. Samuell, J. Wach, J. Howard, and C. Corr, “Design and characterization of the magnetized plasma interaction experiment (MAGPIE): a new source for plasma-material interaction studies,” Plasma Sources Sci. Technol. 21(5), 055033 (2012). [CrossRef]  

3. R. Chen, J. Qi, X. Guo, D. Ye, X. Wang, Q. Shi, D. Wu, Z. Liao, and T. Lu, “Surface morphology and microstructure evolution of B4C ceramic hollow microspheres prepared by wet coating method on a pyrolysis substrate,” Ceram. Int. 45(6), 7916–7922 (2019). [CrossRef]  

4. K. Du, M. Liu, T. Wang, X. He, Z. Wang, and J. Zhang, “Recent progress in ICF target fabrication at RCLF,” Matter Radiat. at Extremes 3(3), 135–144 (2018). [CrossRef]  

5. C. Wei, J. Ma, L. Chen, J. Li, F. Chen, R. Zhu, R. Guo, C. Yuan, J. Meng, Z. Wang, and D. Gao, “Null interferometric microscope for ICF-capsule surfacer-defect detection,” Opt. Lett. 43(21), 5174–5177 (2018). [CrossRef]  

6. C. Wei, J. Li, J. Ma, M. Duan, Y. Zong, X. Miao, R. Zhu, C. Yuan, D. Gao, and Z. Wang, “High-efficiency full-surface defects detection for an ICF capsule based on a null interferometric microscope,” Appl. Opt. 60(4), A62–A72 (2021). [CrossRef]  

7. M. Marcella, M. Paturzo, and P. Ferraro, “Extended focus imaging in digital holographic microscopy: a review,” Opt. Eng. 53(11), 112317 (2014). [CrossRef]  

8. I. Bergoënd, T. Colomb, N. Pavillon, Y. Emery, and C. Depeursinge, “Depth-of-field extension and 3D reconstruction in digital holographic microscopy,” Proc. SPIE 7390, 73901C (2009). [CrossRef]  

9. T. Colomb, N. Pavillon, J. Kuhn, E. Cuche, C. Depeursinge, and Y. Emeryo, “Extended depth-of-focus by digital holographic microscopy,” Opt. Lett. 35(11), 1840–1842 (2010). [CrossRef]  

10. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Gunaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018). [CrossRef]  

11. X. Huang, H. Yan, I. Robinson, and Y. Chu, “Extending the depth of field for ptychography using complex-valued wavelets: publisher's note,” Opt. Lett. 44(3), 662 (2019). [CrossRef]  

12. D. Leseberg and C. Frère, “Computer-generated holograms of 3D objects composed of tilted planar segments,” Appl. Opt. 27(14), 3020–3024 (1988). [CrossRef]  

13. S. De Nicola, A. Finizio, G. Pierattini, P. Ferraro, and D. Alfieri, “Angular spectrum method with correction of anamorphism for numerical reconstruction of digital holograms on tilted planes,” Opt. Express 13(24), 9935–9940 (2005). [CrossRef]  

14. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20(9), 1755–1762 (2003). [CrossRef]  

15. M. Juan, C. Jimenez, O. Cesar, and M. Torres, “Optical Image Encryption System Using Several Tilted Planes,” Photonics 6(4), 116 (2019). [CrossRef]  

16. P. Langehanenberg, G. von Bally, and B. Kemper, “Autofocus in digital holographic microscopy,” 3D Res 2(1), 4 (2011). [CrossRef]  

17. H. Wang, D. Zhao, S. Wang, and Y. Lou, “Comparison of the refocus criteria for the phase, amplitude, and mixed objects in digital holography,” Opt. Eng. 57(05), 1 (2018). [CrossRef]  

18. F. Dubois, A. El Mallahi, J. Dohet-Eraly, and C. Yourassowsky, “Refocus criterion for both phase and amplitude objects in digital holographic microscopy,” Opt. Lett. 39(15), 4286–4289 (2014). [CrossRef]  

