
Non-contact automated defect detection using a deep learning approach in diffraction phase microscopy

Open Access

Abstract

Precision measurement of defects from optical fringe patterns is a problem of significant practical relevance in non-destructive metrology. In this paper, we propose a robust deep learning approach based on an atrous convolution neural network model for defect detection from noisy fringe patterns obtained in diffraction phase microscopy. The model utilizes the wrapped phase obtained from the fringe pattern as an input and generates a binary image depicting the defect and non-defect regions as output. The effectiveness of the proposed approach is validated through numerical simulations of various defects under different noise levels. Furthermore, the practical application of the proposed technique for identifying defects in diffraction phase microscopy experiments is also confirmed.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In the fields of experimental mechanics, non-destructive testing, and precision metrology, non-contact identification of surface defects of a test sample is a problem with significant practical relevance. For defect detection, optical interferometric techniques based on digital holography [1–4], electronic speckle pattern interferometry [5–7], shearography [8–10] and diffraction phase microscopy [11–13] have become prominent, mainly due to attractive features such as non-invasive operation, high throughput, full field measurement and good resolution. In these techniques, the information about the defect is encoded in a fringe pattern and the defect is marked by regions of high fringe density [14]. In other words, a defect is characterized as a region where the phase of the fringe exhibits a rapid alteration, and consequently, a distinct shift in the density of the fringe pattern becomes evident at the precise location of the defect. As a result, many strategies for finding defects from fringe patterns have been presented in the literature. Methods based on fringe tracking and contouring [15–17] have been suggested for identifying defects, but they involve time-consuming contour scanning operations and are highly susceptible to noise. The phase shifting approach [18–20] is another well-known method for defect detection, but it requires the recording of multiple images with accurate phase shifts; thus, this approach has low throughput and is unsuitable for dynamic operations. Further, single frame methods based on numerical differentiation [21] and Fourier transform [22] have been proposed to increase the throughput; however, these methods are not robust against high noise levels. Over the years, another class of techniques based on analyzing local spatial fringe frequency variations has also been proposed for defect detection from fringe patterns. This class includes methods such as the windowed Fourier transform [23–25], wavelet transform [26,27], Wigner-Ville distribution [28,29] and phase gradient retrieval [30,31]. The main limitation of these methods is the calculation of a threshold parameter to classify defect regions, which requires careful operator control and monitoring. In addition, many of these methods carry high computational complexity due to intensive space-frequency spectrum calculations, phase gradient or fringe density computations, thresholding and iterative operations. Accordingly, there is a strong need for defect identification methods with high noise robustness, good computational efficiency and automated operation without a threshold requirement. Our goal in this paper is to address this challenge by introducing a neural network based deep learning approach which allows for fast, automated, threshold-free and noise tolerant detection of fringe pattern defects. Deep learning [32–36] offers promising application potential in the field of optical metrology, and our proposed approach is a step towards realizing its prospective utility for defect detection.

In this work, we have mainly focused on the application of the proposed deep learning method in diffraction phase microscopy (DPM) for defect detection. Diffraction phase microscopy [11] is a quantitative phase imaging technique which offers the significant advantages of single shot measurement, compact setup, and robust design due to an off-axis common-path interferometric configuration. As a result, diffraction phase microscopy has found wide applicability [37–41] in recent years for optical imaging and metrology. Next, we summarize the primary contributions of the proposed work below:

  • (1) Development of a deep learning based method for fast and automated defect detection from optical fringe patterns
  • (2) Demonstration of the capability of the proposed method to identify different types of defects even at severe noise levels
  • (3) Validation of its practical utility for experimental fringe patterns obtained in diffraction phase microscopy

The paper is organized as follows. In section 2, we provide the theory of the proposed method for defect detection. In section 3, we present the numerical simulation and experimental results, where we show the performance of the proposed method for identifying several types of fringe defects under varying noise levels. Finally, the conclusions are presented.

2. Theory

In this section, we initially discuss the mathematical principles of fringe processing in diffraction phase microscopy. Subsequently, we outline the deep learning method for defect detection.

2.1 Fringe processing in DPM

In diffraction phase microscopy, the fringe pattern can be mathematically represented as [11]

$$F(x,y)= a_b(x,y) + m(x,y) \cos(2\pi\omega^c_x x + 2\pi\omega^c_y y + \phi(x,y))$$
where $a_b(x,y)$ represents the background illumination, the fringe modulation is depicted by $m(x,y)$, and $\phi (x, y)$ represents the phase. In addition, $\omega ^c_x$ and $\omega ^c_y$ represent the spatial carrier frequencies along the horizontal and vertical dimensions. The presence of the spatial carrier is attributed to the off-axis design of the diffraction phase microscope. Further, we also recorded a reference fringe pattern corresponding to the background with no test defect. We processed the object and reference fringe patterns using the Hilbert transform [42] and performed reference conjugation [43] to minimize the effect of aberrations. Accordingly, we obtained a complex fringe signal of the form
$$Z(x,y) = a_c(x,y)e^{j\phi(x,y)}$$
where $a_c$ is the amplitude term. From the above equation, the wrapped phase is obtained using the arctangent operation as,
$$\phi(x,y) = \arctan \left[\frac{\mathrm{Im}\{Z(x,y)\}}{\mathrm{Re}\{Z(x,y)\}}\right]$$
where "$\mathrm {Re}$" and "$\mathrm {Im}$" represent the real and imaginary parts, respectively.

2.2 Deep learning method

The proposed deep learning network is based on supervised learning and broadly relies on a two stage operation for defect detection. The first stage is training, where the neural network model learns to map the wrapped phase map (input) to a binary image (output) depicting the defect and defect-free regions. To this end, we generate a binary image containing class labels for each fringe pattern, assigning the labels "1" and "0" to defective and non-defective pixels, respectively. This binary image is the ground truth label for training the deep learning model.
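As a simple illustration, the following sketch converts a known defect mask into the one-hot, two-channel label format that matches the two-element softmax output described later in this section; the helper name is ours, for illustration only.

```python
import numpy as np

def make_labels(defect_mask):
    """Build one-hot ground-truth labels for the two-class scheme.

    defect_mask: 2D 0/1 (or boolean) array, 1 = defective pixel.
    Returns an (H, W, 2) float array whose channel 0 marks the
    non-defect class and channel 1 the defect class, matching the
    two-element probability vector produced by the softmax layer.
    """
    mask = np.asarray(defect_mask, dtype=np.uint8)
    return np.stack([1 - mask, mask], axis=-1).astype(np.float32)
```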

In the proposed deep learning approach, we applied a convolutional neural network (CNN) model based on the atrous spatial pyramid pooling (ASPP) architecture [44–46] for identifying defects from noisy fringe patterns. In this model, atrous convolution offers an efficient method to vary the field of view using different dilation rates, effectively capturing multi-scale information to improve feature detection [46]. The schematic of the deep learning model for defect detection is shown in Fig. 1.

The architecture consists of 16 network blocks (NB), 3 add blocks (AB) and 3 MaxPooling layers (MP). Each NB contains a two-dimensional convolutional layer with a certain number of filters of varying kernel sizes, followed by batch normalization (BN), dropout, and rectified linear unit (ReLU) or softmax activation. BN distributes data uniformly across the network to aid network convergence. In dropout, some nodes are randomly dropped during each training iteration while others are preserved with a predetermined probability, which helps prevent overfitting. The activation layer introduces the non-linearity required to learn complex relations and produce more accurate results.

NB1 and NB2 contain $32$ filters with kernel sizes of $11\times 11$ and $6\times 6$, respectively. After the convolution layers NB1 and NB2, the data passes into the MaxPooling layer (MP1). The primary function of the pooling layer is to reduce dimensionality, thereby reducing the number of parameters and the computational complexity. NB3 and NB4 contain $64$ filters with kernel sizes of $3\times 3$ each. After NB4, the data passes into the MaxPooling layer (MP2). NB5 consists of a 2D convolution layer with $64$ filters, a kernel size of $1\times 1$ and stride $2\times 2$. The outputs of MP2 and NB5 are added in add block AB1. Further, NB6 and NB7 contain 2D convolution layers with 128 filters of kernel size $3\times 3$, followed by BN, dropout, and ReLU activation. The output is downsampled using MaxPooling layer MP3. NB8 consists of a 2D convolution layer with 128 filters, a kernel size of $1\times 1$ and stride $2\times 2$. The outputs of NB7 and NB8 are added in add block AB2. NB9a–e have 2D convolution layers comprising $128$ filters with a kernel size of $3\times 3$ and different dilation rates ($1,3,6,9,12$), forming a pyramid on top of the existing feature map. The outputs of these filters are concatenated, followed by BN, and fed into NB10, which has $512$ filters with a $3\times 3$ kernel size. NB11 contains 256 filters with kernel size $3\times 3$. NB12 consists of a 2D transpose convolutional layer with $128$ filters, kernel size $3\times 3$ and stride $2\times 2$. The 2D convolutional layer NB13, containing 64 filters of kernel size $3\times 3$, is added with the output of AB1 using a skip connection. The NB14 layer consists of a 2D transpose convolution layer with 32 filters of kernel size $3 \times 3$, followed by NB15, a 2D convolution layer of 16 filters. The final layer is a single 2D convolutional layer having $2$ filters of size $3\times 3$, followed by a softmax activation function.

The predicted binary image is formed by choosing the label with the highest probability for each pixel in the image. Essentially, we utilized a binary classification scheme which labels each pixel as belonging to either the non-defect or defect class by generating a two-element vector of probabilities that sum to unity. These probabilities effectively indicate the likelihood of a pixel's association with a given class. The vector is organized such that its first element corresponds to the non-defect class and its second element to the defect class. By computing the index of the maximum probability value in the vector, the pixel is labeled as defect-free or defective, and we obtain an overall binary image for defect classification.
For modeling the loss function, we used the cross entropy function [47] in our approach. Further, the RMSprop network optimizer [48] was used in the proposed model to minimize the loss function, owing to its fast convergence and small number of hyperparameters. The architecture of the deep learning model is summarized in Table 1; a minimal code sketch of the central atrous pyramid stage is given below.
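The following Keras sketch makes the atrous spatial pyramid pooling stage (blocks NB9a–e and NB10) concrete. The filter counts, dilation rates, softmax output and RMSprop/cross-entropy compilation follow the text; the input shape, dropout rate and the reduced surrounding layers are illustrative assumptions, not the full 16-block model of Table 1.

```python
import tensorflow as tf
from tensorflow.keras import layers

def aspp_block(x, filters=128, rates=(1, 3, 6, 9, 12), drop=0.1):
    """Atrous spatial pyramid pooling stage (NB9a-e + NB10 in the text).

    Parallel 3x3 convolutions with different dilation rates sample the
    feature map at multiple fields of view; their outputs are
    concatenated and fused by a 3x3 convolution with 512 filters.
    Dropout rate and layer ordering are illustrative assumptions."""
    branches = []
    for r in rates:
        b = layers.Conv2D(filters, 3, padding="same", dilation_rate=r)(x)
        b = layers.BatchNormalization()(b)
        b = layers.Dropout(drop)(b)
        b = layers.Activation("relu")(b)
        branches.append(b)
    y = layers.Concatenate()(branches)
    y = layers.BatchNormalization()(y)
    y = layers.Conv2D(512, 3, padding="same", activation="relu")(y)  # NB10
    return y

# Illustrative use: wrapped phase in, per-pixel two-class softmax out.
inp = tf.keras.Input(shape=(512, 512, 1))
feat = layers.Conv2D(128, 3, padding="same", activation="relu")(inp)
feat = aspp_block(feat)
out = layers.Conv2D(2, 3, padding="same", activation="softmax")(feat)
model = tf.keras.Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss="categorical_crossentropy", metrics=["accuracy"])
```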

Fig. 1. Deep learning architecture

Table 1. Deep learning model summary

For training and validation of the proposed model, we simulated 2000 fringe patterns, each of size $512\times 512$ pixels, containing diverse defect patterns. We added white Gaussian noise to the fringe patterns with a signal to noise ratio (SNR) varying from -10 to 20 dB. Among the simulated images, 1800 fringe patterns were designated for model training and the remaining 200 for model validation. Next, the complex fringe signal was obtained using Eq. (2). The wrapped phase corresponding to the interferograms was calculated using Eq. (3) and acts as the input to the deep learning model. For the defect detection output, we used binary images in which defective and non-defective pixels were assigned values of '1' and '0', respectively. For data fitting, we used a maximum of 20 epochs and a batch size of 4. The deep learning model and associated parameters can be summarized as follows:

  • Model Architecture:

    In the proposed deep learning approach, we have employed a convolutional neural network model based on the atrous spatial pyramid pooling architecture to detect defects within noisy fringe patterns. The architecture comprises 16 network blocks, 3 add blocks, and 3 max pooling layers. Within each network block, there is a two-dimensional convolutional layer equipped with a specific number of filters featuring diverse kernel sizes. The convolutional layer is followed by batch normalization, dropout, and rectified linear unit (ReLU) activation, or softmax activation for the output layer.

  • Training Dataset & Parameters:
    • 1. Number of fringe patterns: 2000
    • 2. Fringe pattern size: 512 $\times$ 512 pixels
    • 3. Batch Size: $4$
    • 4. Number of Epochs: $20$
    • 5. Validation Split: $10\%$ $(0.1)$
  • Model Compilation:
    • 1. Optimizer: RMSprop.
    • 2. Loss Function: Cross entropy loss. The cross-entropy loss function for binary classification can be represented as:
      $$\mathrm{Loss}= -[y \log(p) + (1 - y) \log(1 - p)]$$
      where $"y"$ is the true label (0 or 1) for the binary class (0 and 1 represent non-defect and defect region respectively). $"p"$ is the predicted probability that the input belongs to the positive class (class 1). In the above equation, we effectively measure the dissimilarity between the predicted probability "p" and the true label "y" for each example in the dataset. The loss is minimized when the predicted probability is close to the true label.
For better visualization, the loss function versus the number of epochs is shown in Fig. 2. The computational time required for the training and validation step was approximately 34 minutes in our model. The model was built using the Keras and Tensorflow [49] libraries in a Python environment on a workstation with 64 GB of RAM and an NVIDIA RTX5000 GPU card.

Fig. 2. Loss function (log scale) versus epoch for the deep learning model

3. Results

3.1 Simulation results

To investigate the effectiveness of the proposed method, we analyzed several noisy defect-containing fringe patterns of size $512\times 512$ pixels using numerical simulations. For our analysis, we used groove, bend, compression, and displacement type defects [24] in simulations. Next, metrics such as accuracy [50], structural similarity index measure (SSIM) [51] and F-measure or $F_1$ score [50] were evaluated to quantify the proposed method's performance with varying SNR values. The accuracy gives the percentage of correctly classified defects in the interferogram, and is mathematically represented as [50]

$$Accuracy=\frac{(TP+TN)}{(TP+FP+TN+FN)}$$

Here, TP represents true positives or the number of pixels correctly identified as having a defect. TN represents true negatives or the number of pixels correctly identified as having no defect. Likewise, FP represents false positives or the number of pixels incorrectly identified as defects, and FN represents the number of pixels incorrectly identified as non-defective.

In addition, we also evaluated the SSIM value to find the pixel-wise similarity between the ground truth and the predicted output as [51]

$$SSIM = \dfrac{\left(2m_xm_y + c_1\right) \left(2s_{xy} + c_2\right)}{\left(m_x^2 + m_y^2 + c_1\right) \left(s_x^2 + s_y^2 + c_2\right)}$$
where $m_x$ and $m_y$ denote the pixel sample means, $s_x$ and $s_y$ are the standard deviations, $s_{xy}$ indicates the covariance of $x$ and $y$, and the constants $c_1$ and $c_2$ indicate the stability factors. Next, we also evaluated the $F_1$ score to further quantify the performance of the proposed method for defect classification. The $F_1$ score is defined as [50]
$$F_1 = \frac{TP}{TP+\frac{1}{2}(FP+FN)}$$
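The following sketch evaluates all three metrics for a predicted binary defect map against the ground truth, per Eqs. (5)-(7); the scikit-image SSIM routine is used here as a stand-in for the SSIM computation of Ref. [51], which is an assumption about tooling rather than the paper's exact implementation.

```python
import numpy as np
from skimage.metrics import structural_similarity

def detection_metrics(pred, truth):
    """Accuracy, SSIM and F1 score between predicted and true binary
    defect maps (2D arrays of 0/1), following Eqs. (5)-(7)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # defect pixels correctly detected
    tn = np.sum(~pred & ~truth)   # non-defect pixels correctly detected
    fp = np.sum(pred & ~truth)    # false alarms
    fn = np.sum(~pred & truth)    # missed defect pixels
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = tp / (tp + 0.5 * (fp + fn))
    ssim = structural_similarity(truth.astype(float), pred.astype(float),
                                 data_range=1.0)
    return accuracy, ssim, f1
```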

For comparative assessment, we also used the state of the art wavelet transform (WT) method [26] for defect detection from the noisy fringe patterns. For this analysis, we used the Mexican hat as the mother wavelet with a scale value of 2, and normalized the wavelet spectrum so that the peak spectral value is unity. The main principle of the wavelet transform method for defect detection is the concentration of higher spectrum values, due to high fringe density, in the vicinity of a defect. Accordingly, a threshold parameter was applied to the normalized wavelet spectrum to classify the defect regions: a binary image was generated by assigning spectrum values above the predetermined threshold as unity (labeled as defects) and values below the threshold as zero (labeled as non-defect). Finally, the performance metrics for threshold values of $T= 50\%$ and $T=70\%$ were computed for comparison; a minimal sketch of this baseline is given below.
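The sketch below implements this baseline under our own assumptions of an unnormalized 2D Mexican hat kernel and a finite kernel support; it is an illustrative stand-in for the method of Ref. [26], not its exact implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def wavelet_defect_map(fringe, scale=2.0, threshold=0.7):
    """Sketch of the wavelet-transform baseline: convolve the fringe
    pattern with a 2D Mexican hat wavelet at the given scale,
    normalize the spectrum magnitude to a unit peak, and threshold."""
    half = int(8 * scale)                       # assumed kernel support
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = (x ** 2 + y ** 2) / scale ** 2
    kernel = (2 - r2) * np.exp(-r2 / 2)         # 2D Mexican hat (unnormalized)
    spectrum = np.abs(fftconvolve(fringe, kernel, mode="same"))
    spectrum /= spectrum.max()                  # peak spectral value -> 1
    return (spectrum > threshold).astype(np.uint8)  # 1 = defect, 0 = non-defect
```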

Similarly, we also used the windowed Fourier transform (WFT) method [23] for defect detection. Accordingly, we applied the windowed Fourier ridges algorithm using parameters $\sigma_x=\sigma_y=10$, $wxl=wyl=-0.5$, $wxi=wyi=0.1$, and $wxh=wyh=0.5$ to compute the phase derivatives, and subsequently obtained a measure of fringe density from the phase gradient [23]. Next, a binary image highlighting defects was generated by applying a threshold value (T=70%) to the WFT phase gradient map, as sketched below.
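For illustration, a direct (unoptimized) sketch of the windowed-Fourier-ridges idea with these parameters follows: it scans local frequencies on a grid, correlates Gaussian-windowed plane waves with the complex fringe signal, and records the per-pixel magnitude of the winning frequency as a fringe-density measure. This is a simplified stand-in for the algorithm of Ref. [23], written for clarity rather than efficiency.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wft_fringe_density(phase, sigma=10, wl=-0.5, wi=0.1, wh=0.5):
    """Coarse windowed-Fourier-ridges sketch: for each local frequency
    pair (wx, wy) on the grid [wl, wh] with step wi, correlate a
    Gaussian-windowed plane wave with exp(j*phase) and keep, per pixel,
    the frequency magnitude of the maximum response."""
    Zc = np.exp(1j * phase)
    ny, nx = phase.shape
    Y, X = np.mgrid[:ny, :nx]
    best = np.zeros((ny, nx))
    density = np.zeros((ny, nx))
    for wx in np.arange(wl, wh + wi / 2, wi):
        for wy in np.arange(wl, wh + wi / 2, wi):
            shifted = Zc * np.exp(-1j * (wx * X + wy * Y))
            # Gaussian windowing via smoothing of real and imaginary parts.
            resp = gaussian_filter(shifted.real, sigma) \
                 + 1j * gaussian_filter(shifted.imag, sigma)
            mag = np.abs(resp)
            upd = mag > best
            best[upd] = mag[upd]
            density[upd] = np.hypot(wx, wy)
    return density  # threshold (e.g. at 70% of its maximum) to mark defects
```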

Figure 3 displays the simulation outcomes for detecting a compression type defect from a noisy fringe pattern simulated at an SNR of 0 dB. In Fig. 3(a), we depict the simulated noisy fringe pattern. The binary image depicting the original or true defect region is shown in Fig. 3(b). The binary image depicting the defect region generated using the proposed method is shown in Fig. 3(c). The binary images obtained from the wavelet transform method for preset threshold parameter values of T=$50\%$ and T=$70\%$ are shown in parts (d-e) of Fig. 3. The binary image depicting the defect region generated using the WFT method is shown in Fig. 3(f). Figures 3(g-i) show the corresponding accuracy, SSIM and $F_1$ score values with varying noise levels. Note that higher values of the performance metrics, closer to unity, indicate better performance of a method for defect detection.

Fig. 3. (a) Noisy fringe pattern with "compression" type defects. (b) Binary image corresponding to true defect patterns. (c) Defects identified using the proposed method. Defects identified by the wavelet transform method with threshold values (d) T=$50\%$ and (e) T=$70\%$. (f) Defects identified by the WFT method with threshold value of T=$70\%$. (g) Accuracy versus SNR. (h) SSIM versus SNR. (i) $F_1$ score versus SNR.

Similarly, the simulation results obtained for analyzing a noisy (SNR=0 dB) fringe pattern with a displacement type defect are presented in Fig. 4. In Figs. 4(a,b), we depict the simulated noisy fringe pattern and the corresponding true defect patterns. The binary image depicting the defect region generated using the proposed method is shown in Fig. 4(c). The binary images obtained from the wavelet transform method with different thresholds and the WFT method are shown in parts (d-f) of Fig. 4. Figures 4(g-i) show the corresponding accuracy, SSIM and $F_1$ score values with varying noise levels. Similar results for analyzing the noisy fringe pattern (SNR=0 dB) with a bend type defect with the different methods are shown in Fig. 5. In addition, corresponding results for analyzing the noisy fringe pattern (SNR=0 dB) with a groove type defect are shown in Fig. 6.

Fig. 4. (a) Noisy fringe pattern with "displacement" type defects. (b) Binary image corresponding to true defect patterns. (c) Defects identified using the proposed method. Defects identified by the wavelet transform method with threshold values (d) T=$50\%$ and (e) T=$70\%$. (f) Defects identified by the WFT method with threshold value of T=$70\%$. (g) Accuracy versus SNR. (h) SSIM versus SNR. (i) $F_1$ score versus SNR.

Fig. 5. (a) Noisy fringe pattern with "bend" type defects. (b) Binary image corresponding to true defect patterns. (c) Defects identified using the proposed method. Defects identified by the wavelet transform method with threshold values (d) T=$50\%$ and (e) T=$70\%$. (f) Defects identified by the WFT method with threshold value of T=$70\%$. (g) Accuracy versus SNR. (h) SSIM versus SNR. (i) $F_1$ score versus SNR.

Fig. 6. (a) Noisy fringe pattern with "groove" type defects. (b) Binary image corresponding to true defect patterns. (c) Defects identified using the proposed method. Defects identified by the wavelet transform method with threshold values (d) T=$50\%$ and (e) T=$70\%$. (f) Defects identified by the WFT method with threshold value of T=$70\%$. (g) Accuracy versus SNR. (h) SSIM versus SNR. (i) $F_1$ score versus SNR.

From the simulation results, we observe that the proposed method offers superior performance for defect detection at varying levels of noise. Even for severe noise levels, marked by SNR values in the range of -10 dB to 0 dB, the accuracy, SSIM and $F_1$ score metrics for the proposed method are consistently close to unity for diverse types of fringe pattern defects. This indicates strong robustness of the proposed method for defect detection. Further, the performance of the wavelet transform method varies significantly with the selection of the threshold parameter. This necessitates careful monitoring of the threshold parameter, which poses an important challenge for defect detection.

Next, to show the versatility of the proposed method for identifying different defects under severe noise conditions, we simulated noisy defect-containing fringe patterns at an SNR of -10 dB. These noisy fringe patterns, corresponding to compression, displacement, bend and groove type defects, are shown in parts (a,f,k,p) of Fig. 7. The corresponding true defect patterns are shown in parts (b,g,l,q) of Fig. 7. The defects identified using the proposed method are shown in parts (c,h,m,r) of Fig. 7. Similarly, the defects identified using the wavelet transform method with a preset threshold value of $T=70\%$ are shown in parts (d,i,n,s) of Fig. 7. Finally, the defects identified using the WFT method with a preset threshold value of $T=70\%$ are shown in Figs. 7(e,j,o,t). The performance metrics for defect detection using the different methods are shown in Table 2. The computational times taken by the different methods for processing a fringe pattern of size $512 \times 512$ pixels were about 0.857 seconds for the proposed method, 0.056 seconds for the wavelet transform method and 7.142 seconds for the windowed Fourier transform method. These results show that the proposed method offers superior defect detection capabilities for diverse defects even under severe noise corruption of the fringe patterns.

Fig. 7. (a,f,k,p) Fringe patterns with severe noise (SNR of -10 dB) containing different types of defects. (b,g,l,q) Corresponding true defect patterns. (c,h,m,r) Corresponding defects identified using proposed method. (d,i,n,s) Corresponding defects identified using wavelet transform method. (e,j,o,t) Corresponding defects identified using WFT method.

Table 2. Accuracy (Acc), SSIM, and $F_1$ score for identifying various types of defects from fringe patterns with SNR of -10 dB using different methods.

3.2 Experimental results

Next, we applied the proposed approach to interferograms derived from an experimental diffraction phase microscopy setup to demonstrate its practical utility. Our experimental system, shown in Fig. 8, employs a laser source (LS) as the primary illumination source for the microscope. The microscope is constructed using an infinity-corrected objective lens (OL) in combination with a tube lens (TL). A beam expander (BE) consisting of two biconvex lenses is employed for expanding the beam. The lens C1 concentrates the expanded beam at the front focal plane of the OL, achieving an epi-illumination configuration. As a result, a collimated beam impinges on the test sample (TS), and the OL captures the scattered object beam. The objective lens has $40\times$ magnification and a numerical aperture (NA) of $0.65$. A magnified image is formed at the back focal plane of the tube lens (TL) and redirected to the DPM module using a beam splitter (BS). The DPM module encompasses a blazed diffraction grating (G), a $4f$ lens system comprising two biconvex lenses (L1 and L2) with an overall magnification of 4, and a pinhole (P). The diffraction grating is positioned at the back focal plane of the tube lens, generating multiple diffraction orders. All orders, except the zeroth order and the first order beams, are prevented from entering the $4f$ lens system via optical blocking. The $10\,\mu m$ diameter pinhole is located at the back focal plane of L1, effectively acting as a low-pass filter. A camera (C) (QImaging Retiga R3), with a resolution of $1920 \times 1460$ pixels and a pixel size of 4.54 microns $\times$ 4.54 microns, is positioned at the back focal plane of lens L2. It captures the interferogram formed by the superposition of the filtered first order (reference wave) and unfiltered zeroth order (object wave) beams. The experimental design and methodology of diffraction phase microscopy are described in greater detail in our prior work [13].

For the test sample, an aluminum thin film was coated on a glass substrate using thermal evaporation. Thermal evaporation is a common technique for coating thin films, and improper deposition during the process generally leads to abnormalities in the form of micro-crack defects. In Fig. 9(a), we show an experimental interferogram containing defects induced by thermal evaporation. For defect analysis, we selected a region of interest, marked by red color in Fig. 9(a), and cropped the corresponding region. This region of interest is shown in Fig. 9(b). Further, to show the interference fringes occurring in diffraction phase microscopy, we selected a small region, marked by blue color in Fig. 9(b). The fringes corresponding to this marked region are shown in Fig. 9(c). Next, the binary image depicting the defect and non-defect regions of the cropped interferogram, obtained using the proposed method, is shown in Fig. 9(d). For comparison, the binary defect images obtained using the wavelet transform method and the windowed Fourier transform method with a pre-set threshold value of 70% are shown in parts (e) and (f) of Fig. 9.

Fig. 8. Experimental Setup

Fig. 9. Defect detection for first experimental case. (a) Experimental interferogram containing defect. (b) Region of interest, marked by red color in (a), of the interferogram used for defect analysis. (c) Fringes corresponding to marked region (blue color) in (b). Defects identified using the (d) proposed method, (e) wavelet transform method and (f) WFT method.

In Fig. 10(a), we show another experimental interferogram containing defects induced by thermal evaporation. As before, the cropped region of interest and the interference fringes are shown in parts (b-c) of Fig. 10. Next, the binary image depicting the defect and non-defect regions of the cropped interferogram, obtained using the proposed method, is shown in Fig. 10(d). For comparison, the binary defect images obtained using the wavelet transform method and the windowed Fourier transform method with a pre-set threshold value of 70% are shown in parts (e) and (f) of Fig. 10.

Fig. 10. Defect detection for second experimental case. (a) Experimental interferogram containing defect. (b) Region of interest, marked by red color in (a), of the interferogram used for defect analysis. (c) Fringes corresponding to marked region (blue color) in (b). Defects identified using the (d) proposed method, (e) wavelet transform method and (f) WFT method.

From the experimental results, it is evident that the proposed method has better practical applicability for defect detection in experimental settings.

4. Conclusion

The proposed method provides an automated, single shot and robust approach for defect detection from fringe patterns. By eliminating the requirement of a threshold operation, the proposed method paves the way for an automated defect identification methodology devoid of manual or operator intervention. The simulation results show the potential of the proposed method to identify different types of defects as well as its resistance against severe noise. The experimental results demonstrate the practical validity of the proposed method for defect testing using diffraction phase microscopy. One limitation of the proposed method is the computational cost required for the training stage of the deep learning model. However, this cost is incurred only once, during the training stage, and can be further reduced using better graphics processing unit hardware. In addition, once the model is trained, testing the proposed method involves a computational time of under a second for processing a fringe pattern of size $512\times 512$ pixels, and thus leads to rapid defect detection. Overall, we believe that the proposed method has strong potential in the areas of non-destructive testing, precision metrology, material inspection and reliability studies.

Funding

Department of Science and Technology, Ministry of Science and Technology, India (DST/NM/NT/2018/2).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. U. Schnars and W. P. Jüptner, “Digital recording and numerical reconstruction of holograms,” Meas. Sci. Technol. 13(9), R85–R101 (2002). [CrossRef]  

2. V. Trivedi, M. Joglekar, S. Mahajan, et al., “Digital holographic imaging of refractive index distributions for defect detection,” Opt. Laser Technol. 111, 439–446 (2019). [CrossRef]  

3. M. Paturzo, V. Pagliarulo, V. Bianco, et al., “Digital holography, a metrological tool for quantitative analysis: Trends and future applications,” Opt. Lasers Eng. 104, 32–47 (2018). [CrossRef]  

4. T. Kreis, “Application of digital holography for nondestructive testing and metrology: A review,” IEEE Trans. Ind. Inform. 12(1), 240–247 (2016). [CrossRef]  

5. T. Yuan, Y. Ma, X. Dai, et al., “Internal defect detection method based on dual-channel speckle interferometry,” Opt. Laser Technol. 161, 109157 (2023). [CrossRef]  

6. X. Zhu, C. Tang, H. Ren, et al., “Image decomposition model BL-Hilbert-L2 for dynamic thermal measurements of the printed circuit board with a chip by ESPI,” Opt. Laser Technol. 63, 125–131 (2014). [CrossRef]  

7. G. Gu, Y. Pan, C. Qiu, et al., “Improved depth characterization of internal defect using the fusion of shearography and speckle interferometry,” Opt. Laser Technol. 135, 106701 (2021). [CrossRef]  

8. Y. Hung and H. Ho, “Shearography: An optical measurement technique and applications,” Mater. Sci. Eng.: R: Rep. 49(3), 61–87 (2005). [CrossRef]  

9. P. Yan, Y. Wang, F. Sun, et al., “Shearography for non-destructive testing of specular reflecting objects using scattered light illumination,” Opt. Laser Technol. 112, 452–457 (2019). [CrossRef]  

10. F. Sun, X. Dan, P. Yan, et al., “A spatial-phase-shift-based defect detection shearography system with independent adjustment of shear amount and spatial carrier frequency,” Opt. Laser Technol. 124, 105956 (2020). [CrossRef]  

11. B. Bhaduri, C. Edwards, H. Pham, et al., “Diffraction phase microscopy: principles and applications in materials and life sciences,” Adv. Opt. Photonics 6(1), 57–119 (2014). [CrossRef]  

12. G. Rajshekhar, B. Bhaduri, C. Edwards, et al., “Nanoscale topography and spatial light modulator characterization using wide-field quantitative phase imaging,” Opt. Express 22(3), 3432–3438 (2014). [CrossRef]  

13. S. Ajithaprasad and R. Gannavarpu, “Non-invasive precision metrology using diffraction phase microscopy and space-frequency method,” Opt. Lasers Eng. 109, 17–22 (2018). [CrossRef]  

14. W. Osten, W. P. Jueptner, and U. Mieth, “Knowledge-assisted evaluation of fringe patterns for automatic fault detection,” in Interferometry VI: applications, vol. 2004 (SPIE, 1994), pp. 256–268.

15. G. Reid, “Automatic fringe pattern analysis: a review,” Opt. Lasers Eng. 7(1), 37–68 (1986). [CrossRef]  

16. M. El-Morsy, “Improved accuracy and defect detection in contour line determination of multiple-beam Fizeau fringes using Fourier fringe analysis technique,” Opt. Quantum Electron. 52(3), 146 (2020). [CrossRef]  

17. V. Tornari, E. Tsiranidou, and E. Bernikola, “Interference fringe-patterns association to defect-types in artwork conservation: an experiment and research validation review,” Appl. Phys. A 106(2), 397–410 (2012). [CrossRef]  

18. F. C. I. Catalan, A. M. S. Maallo, and P. F. Almoro, “Fringe analysis and enhanced characterization of sub-surface defects using fringe-shifted shearograms,” Opt. Commun. 285(21-22), 4223–4226 (2012). [CrossRef]  

19. K.-S. Kim, K.-S. Kang, Y.-J. Kang, et al., “Analysis of an internal crack of pressure pipeline using ESPI and shearography,” Opt. Laser Technol. 35(8), 639–643 (2003). [CrossRef]  

20. Z. Li, M. O. Tokhi, R. Marks, et al., “Dynamic wind turbine blade inspection using micro-polarisation spatial phase shift digital shearography,” Appl. Sci. 11(22), 10700 (2021). [CrossRef]  

21. R. Zhou, C. Edwards, A. Arbabi, et al., “Detecting 20 nm wide defects in large area nanopatterns using optical interferometric microscopy,” Nano Lett. 13(8), 3716–3721 (2013). [CrossRef]  

22. J. Dhanotia, R. Disawal, V. Bhatia, et al., “Improved accuracy in slope measurement and defect detection using Fourier fringe analysis,” Optik 140, 921–930 (2017). [CrossRef]  

23. K. Qian, “Two-dimensional windowed Fourier transform for fringe pattern analysis: principles, applications and implementations,” Opt. Lasers Eng. 45(2), 304–317 (2007). [CrossRef]  

24. K. Qian, H. S. Seah, and A. Asundi, “Fault detection by interferometric fringe pattern analysis using windowed Fourier transform,” Meas. Sci. Technol. 16(8), 1582–1587 (2005). [CrossRef]  

25. S. Ajithaprasad, R. Velpula, and R. Gannavarpu, “Defect detection using windowed fourier spectrum analysis in diffraction phase microscopy,” J. Phys. Commun. 3(2), 025006 (2019). [CrossRef]  

26. X. Li, “Wavelet transform for detection of partial fringe patterns induced by defects in nondestructive testing of holographic interferometry and electronic speckle pattern interferometry,” Opt. Eng. 39(10), 2821–2827 (2000). [CrossRef]  

27. K. Lu, Z. Wang, H. Chun, et al., “Curved-edge diffractive fringe pattern analysis for wafer edge metrology and inspection,” in Metrology, Inspection, and Process Control XXXVII, vol. 12496 (SPIE, 2023), pp. 105–106.

28. G. Rajshekhar, S. S. Gorthi, and P. Rastogi, “Detection of defects from fringe patterns using a pseudo Wigner–Ville distribution based method,” Opt. Lasers Eng. 50(8), 1059–1062 (2012). [CrossRef]  

29. A. Vishnoi, A. Madipadaga, S. Ajithaprasad, et al., “Automated defect identification from carrier fringe patterns using Wigner–Ville distribution and a machine learning-based method,” Appl. Opt. 60(15), 4391–4397 (2021). [CrossRef]  

30. D. Pandey, J. Ramaiah, S. Ajithaprasad, et al., “Subspace analysis based machine learning method for automated defect detection from fringe patterns,” Optik 270, 170026 (2022). [CrossRef]  

31. S. K. Narayan, A. V. S. Vithin, and R. Gannavarpu, “Deep learning assisted non-contact defect identification method using diffraction phase microscopy,” Appl. Opt. 62(20), 5433–5442 (2023). [CrossRef]  

32. C. Zuo, J. Qian, S. Feng, et al., “Deep learning in optical metrology: a review,” Light: Sci. Appl. 11(1), 39 (2022). [CrossRef]  

33. S. Feng, C. Zuo, L. Zhang, et al., “Generalized framework for non-sinusoidal fringe analysis using deep learning,” Photonics Res. 9(6), 1084–1098 (2021). [CrossRef]  

34. K. Yan, Y. Yu, C. Huang, et al., “Fringe pattern denoising based on deep learning,” Opt. Commun. 437, 148–152 (2019). [CrossRef]  

35. G. Zhang, T. Guan, Z. Shen, et al., “Fast phase retrieval in off-axis digital holographic microscopy through deep learning,” Opt. Express 26(15), 19388–19405 (2018). [CrossRef]  

36. A. V. S. Vithin, J. Ramaiah, and R. Gannavarpu, “Deep learning based single shot multiple phase derivative retrieval method in multi-wave digital holographic interferometry,” Opt. Lasers Eng. 162, 107442 (2023). [CrossRef]  

37. G. Popescu, T. Ikeda, R. R. Dasari, et al., “Diffraction phase microscopy for quantifying cell structure and dynamics,” Opt. Lett. 31(6), 775–777 (2006). [CrossRef]  

38. B. Bhaduri, H. Pham, M. Mir, et al., “Diffraction phase microscopy with white light,” Opt. Lett. 37(6), 1094–1096 (2012). [CrossRef]  

39. M. Shan, M. E. Kandel, H. Majeed, et al., “White-light diffraction phase microscopy at doubled space-bandwidth product,” Opt. Express 24(25), 29033–29039 (2016). [CrossRef]  

40. A. V. S. Vithin, S. Ajithaprasad, and G. Rajshekhar, “Step phase reconstruction using an anisotropic total variation regularization method in a diffraction phase microscopy,” Appl. Opt. 58(26), 7189–7194 (2019). [CrossRef]  

41. J. Ramaiah, S. Ajithaprasad, and G. Rajshekhar, “Graphics processing unit assisted diffraction phase microscopy for fast non-destructive metrology,” Meas. Sci. Technol. 30(12), 125202 (2019). [CrossRef]  

42. T. Ikeda, G. Popescu, R. R. Dasari, et al., “Hilbert phase microscopy for investigating fast dynamics in transparent systems,” Opt. Lett. 30(10), 1165–1167 (2005). [CrossRef]  

43. T. Colomb, J. Kühn, F. Charrière, et al., “Total aberrations compensation in digital holographic microscopy with a reference conjugated hologram,” Opt. Express 14(10), 4300–4306 (2006). [CrossRef]  

44. L.-C. Chen, G. Papandreou, F. Schroff, et al., “Rethinking atrous convolution for semantic image segmentation,” arXiv, arXiv:1706.05587 (2017). [CrossRef]  

45. L.-C. Chen, Y. Zhu, G. Papandreou, et al., “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in Proceedings of the European conference on computer vision (ECCV), (2018), pp. 801–818.

46. L.-C. Chen, G. Papandreou, I. Kokkinos, et al., “DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Trans. Pattern Anal. Mach. Intell. 40(4), 834–848 (2018). [CrossRef]  

47. C. Ferri, J. Hernández-Orallo, and R. Modroiu, “An experimental comparison of performance measures for classification,” Pattern Recognit. Lett. 30(1), 27–38 (2009). [CrossRef]  

48. S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv, arXiv:1609.04747 (2016). [CrossRef]  

49. A. Géron, Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow (O’Reilly Media, 2022).

50. T. Fawcett, “An introduction to ROC analysis,” Pattern Recognit. Lett. 27(8), 861–874 (2006). [CrossRef]  

51. Z. Wang, A. C. Bovik, H. R. Sheikh, et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]  
