Optica Publishing Group

Deformation reconstruction by means of surface optimization. Part II: time-resolved electronic speckle pattern interferometry

Open Access

Abstract

An analysis of time-resolved electronic speckle pattern interferograms using an optimization algorithm is shown to provide full-field measurements of transient surface deformation. The arrangement uses a continuous-wave laser and high-speed camera to capture speckle images, with the recovery of the time-resolved deformation achieved by spatiotemporal processing using an optimization algorithm. It is shown that the process allows imaging of high-speed non-monotonic out-of-plane displacements with sub-micrometer amplitude. Time-resolved amplitude and phase recovery is demonstrated by analyzing the out-of-plane deformation of harmonic and transient events in a friction membranophone.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

Quantification of non-harmonic deformations by optical experimental methods has been an attractive research topic for decades. One process that can accomplish this is electronic speckle pattern interferometry (ESPI), which is often used for full-field dynamic deformation measurements [1]. ESPI captures whole deforming surfaces in a single recording and provides vibrational analysis with high spatial resolution, allowing real-time measurements [2–4]. Furthermore, ESPI can be applied to visualize transient events in the deforming surfaces [5].

Early time-resolved ESPI (TRESPI) arrangements involved stroboscopic and pulsed laser illumination, which required the repetition of the observed event. Using a high-speed camera for the ESPI measurements is more suitable for non-repeatable events [5,6], and initially the process was based on phase-shifting interferometry, which introduces temporal phase modulation into the experimental arrangement [7]. This method requires a series of phase-shifted images and may not be robust enough to analyze dynamic phenomena [8]. Another approach, spatial phase shifting [9,10], is a promising tool that introduces a spatial shift between the two interfering light beams, but it does not yet address high-speed events.

Recent advances in TRESPI involve advanced mathematical tools such as the Hilbert transform [11,12], the Fourier transform [13], and empirical mode decomposition (EMD) [14]. The Hilbert transform and EMD can be combined for phase extraction in dynamic measurements [13,15], but this method does not perform well when the deformation is non-monotonic [8,16]. EMD is a powerful but not fully mature technique with several drawbacks, such as non-uniqueness of the decomposition and boundary issues. A priori knowledge is required to resolve the sign ambiguity and to process the images spatially and temporally at the same time [17].

Most of the existing phase recovery algorithms used with ESPI interferograms are aimed at direct processing of the pixel intensity to solve the inverse problem, i.e., using the experimental results to reconstruct the parameters that describe the system under investigation. Pixel processing may be temporal, spatial or spatiotemporal [18]. Another method is to apply a surface optimization algorithm. Surface optimization has been successfully applied by the authors for the case of time-averaged ESPI (TAESPI) in the first part of the paper [19]. It was shown that the intensities can be calculated from the predicted surface deformation shape. The goal of the optimization is to iteratively change the calculated deformation profile to obtain the minimum of an objective function, which characterizes the mismatch between the calculated and the recorded intensity. This is the second part of the two-part investigation of deformation reconstruction by means of surface optimization.

In the past 10 years, ESPI methods have been widely used in the area of music acoustics [20,21]. One application is quality control for manufacturers; another is the comparison of musical instruments with identical geometry but made from different materials. The scope of the study goes beyond music acoustics and targets research areas in which the reproducibility of experiments involving a transient response can be complex, such as automotive design.

In what follows we describe the TRESPI methodology for out-of-plane deformation recovery. The aim of this work is to recover the temporal out-of-plane deformation of the surface of a Brazilian cuíca drum under various types of excitation from recorded high-speed interferometric images. The method neither applies phase unwrapping to a single differential frame nor processes the whole frame sequence at once. It is an iterative method with a spatial processing algorithm intended for phase recovery with minimal cumulative error. The computation is performed using MATLAB (MATLAB and Image Processing Toolbox R2014a, The MathWorks, Inc., Natick, Massachusetts, United States).

Section 2 describes the features of the experimental arrangement for TRESPI. Section 3 presents necessary mathematical derivations. Section 4 discusses the optimization algorithm for the temporal deformation recovery in the case of a harmonically excited object. Section 5 shows how the total deformation can be recovered. Section 6 presents the results of the deformation reconstruction for several excitation conditions, including excitation by hand. Section 7 discusses possible limitations of the system. Section 8 discusses the performance of the methodology, possible extensions and applications of the developed method.

2. EXPERIMENTAL ARRANGEMENT

For the experiments, a Coherent Verdi V5 diode-pumped solid-state laser with an emission wavelength λ=532 nm and a maximum output power of 5 W was utilized. To achieve a high signal with a short exposure time, the laser power was set to 1.5 W. Recording was performed with a Phantom v12.1 CMOS high-speed camera in 12-bit mode. The pixel size of the camera sensor is 20 μm.

Figure 1 shows the experimental arrangement for the TRESPI measurement. A polarizing beam splitter with a 50/50 split ratio divides a continuous-wave laser beam into two beams. One of the beams is expanded by a microscopic objective and illuminates the object. The second beam is expanded onto a piece of ground glass and interferes with the light coming from the object after reflection from a beam splitter placed in front of the camera. A high-speed camera records the interferometric images at a frame rate of 21,000 frames per second at a resolution of 512 by 512 pixels with an exposure time of 47 μs. The small angle between the illumination and the observation direction was achieved by placing the microscopic objective as close to the camera viewing axis as possible. In general, the maximum frame rate depends on the resolution of the camera. In TAESPI, decorrelation of the reference beam is necessary to reduce the random speckle and to decrease the influence of low-frequency environmental noise [19]. This can be achieved by modulating the reference beam using a piezoelectrically driven mirror. During the high-speed measurements, the interference pattern is resolved in time, so any decorrelation of consecutive frames would add to the relative deformation of the surface. Thus, the only differences between the TAESPI and TRESPI experimental arrangements are that the reference beam is not decorrelated during the recording and the image is captured on a time scale that is short compared to the characteristic time scale of the motion of the object.

Fig. 1. Experimental arrangement for a TRESPI measurement. BS, beam splitter; PBS, polarizing beam splitter; M, mirror; L, lens (microscopic objective); λ/2, half-wave plate; G, ground glass.

The object chosen for the experiments described here is a Brazilian cuíca drum from Meinl Percussion, shown in Fig. 2. The membrane is 15 cm in diameter and is made of goat skin. A wooden stick is attached to the center of the clamped membrane. The excitation is accomplished using an electrodynamic shaker (Brüel & Kjær 4810) or by rubbing the wooden stick by hand. The distance between the object and the camera was approximately 60 cm.

Fig. 2. Brazilian cuíca.

3. THEORY

This section describes the theoretical aspects of the phase deformation recovery. Some of the details have been covered in the first part of the paper [19].

A. Intensity Normalization

The interference of the light reflected from the object and the reference beam from the same source results in the intensity at a time t being given by

I(t)=A+B·cos(ϕ(t)),
where A is the background intensity, B is the modulation, and the phase ϕ is referred to as a global phase, which involves the relative phase difference between the object and the reference beam, the object deformation, and the random speckle phase [22]. The images are captured by a high-speed camera with 12-bit gray levels, and the intensity is recorded for every pixel with coordinates (x,y), which are omitted in the following equations. When the speckle size is smaller than the pixel size, a single pixel spatially integrates intensity over several speckles. Given the pixel size of 20 by 20 μm, a single pixel accumulates 10–100 speckles, depending on the F-number of the camera. Assuming that the deformation during the exposure time is small enough, no integration of Eq. (1) is required and the intensity can be treated as discrete in time. An example of the recorded intensity captured by a single pixel in the case of harmonic and non-harmonic excitation is shown in Fig. 3. Figure 3(a) shows the recorded intensity of a single pixel in the central area of the drum excited at 466 Hz by an electrodynamic shaker. Approximately harmonic behavior can be observed, but the peaks of the intensity are affected by camera noise and the vibration period cannot be clearly defined from the information of a single pixel. Figure 3(b) shows the result of excitation by hand, in which no periodic pattern is observed in a time period of 0.015 s. Note the difference between the periodic and non-periodic behavior of the intensity recorded by a single pixel: a much larger amplitude variation can be caused by speckle drift due to large surface deformation.

Fig. 3. Recorded intensity of a single pixel during the deformation of the surface with sample rate of 21,000 frames per second. Intensity is presented in analog-to-digital units (ADU).

In TAESPI the exposure time is much longer than the period of vibration of the object, and the random phase is eliminated by temporal, spatial, and frequency filtering [4]. The recording of TRESPI images is a time-resolved process, hence the intensity is affected by noise. The noise consists of both external environmental noise and internal camera noise. Unfortunately, any filtering to remove the noise affects the information used to calculate the displacement and therefore is not applied.

The first step of the image processing is the normalization of the intensity to remove coefficients A and B for every pixel. In the case of harmonic excitation (under the assumption of a linear system), the coefficients A and B can be considered constant in time during the image acquisition and are given by

A=(max(I)+min(I))/2,
B=(max(I)−min(I))/2.
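Assuming a harmonic excitation so that A and B are constant over the acquisition, the normalization of Eqs. (2) and (3) can be sketched as follows. This is an illustrative Python version (the authors' processing was done in MATLAB), and the function name and the synthetic trace are our own:

```python
import math

def normalize_pixel(intensity):
    """Normalize one pixel's temporal trace I(t) = A + B*cos(phi(t)) to
    cos(phi(t)), with A and B taken from the temporal extrema (Eqs. 2-3)."""
    a = (max(intensity) + min(intensity)) / 2  # background, Eq. (2)
    b = (max(intensity) - min(intensity)) / 2  # modulation, Eq. (3)
    if b == 0:
        raise ValueError("zero modulation: no fringe signal at this pixel")
    return [(v - a) / b for v in intensity], a, b

# Synthetic pixel trace with A = 800 ADU and B = 300 ADU
trace = [800 + 300 * math.cos(0.3 * k) for k in range(100)]
norm, a, b = normalize_pixel(trace)
```

The normalized trace is then bounded by ±1 and can be compared directly with the calculated cosine intensities of Section 4.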

However, if the total deformation during the entire oscillation is less than half the wavelength of the illuminating light, λ/2, which corresponds to a phase shift of 2π, the normalization and detection of the coefficients A and B cannot be performed with Eqs. (2) and (3). In this case the periodicity in the intensity cannot be tracked, and the upper and lower values of the temporal intensity do not represent the pixel modulation. Therefore, measurements from regions of such low deformation are not considered valid and the modulation value is assigned to the noise level. During the image processing, the deformation values at those invalid pixels are interpolated. One approach to this issue is to introduce a temporal carrier in the first milliseconds of the recording time by inducing a phase shift in the reference beam that is greater than 2π [18]. Alternatively, EMD could be applied [13–15], but this would change the phase recovery procedure and therefore it is not used in the current method.

If the frame rate is high enough that the deformation can be considered linear in a certain time interval, the intensity at a single pixel is a cosine oscillation in time with varying modulation, which comes from the speckle drift. Figure 3(a) shows that for harmonic oscillation the background intensity and modulation can be regarded as constant for a small period of time (here around 5 ms, around two periods at the given frequency). When the phase of the deformation exceeds 2π during the recording time, the coefficients A and B can be calculated using Eqs. (2) and (3). However, due to the presence of camera noise it is more convenient to use average peak extrema instead of global extrema. This is achieved by calculating a histogram of the intensity values for a given time interval and assigning the lowest and highest occupied histogram bins to the minimum and maximum intensity [23]. Under these assumptions, the analysis can be applied to transients as well. Applying surface optimization reduces the influence of camera noise, and when proper boundary conditions are set, the temporal deformation can be recovered for the whole surface.

Figure 4 shows recorded pixel intensity in time for three neighboring pixels under harmonic excitation at 627 Hz. Average intensity and modulation vary from pixel to pixel, but the temporal profile shows common repetitive behavior. Intensity peaks are shifted with respect to each other due to random initial phase. However, the effect of camera noise is smaller in the middle graph due to higher modulation, and the information from this particular pixel would be more valuable than from the others.

Fig. 4. Pixel intensity from three neighboring pixels chosen in the middle part of the drum harmonically excited at 627 Hz. Intensity is measured in ADU.

B. Pixel Quality

The quality of the signal from every pixel is defined by the matrix Q given by

Q=B/N,
where N is the camera noise level. The relationship between the modulation B and the noise N indicates how much the modulation exceeds the noise and operates as a signal-to-noise ratio to detect low-modulated and low-intensity pixels.

The camera noise is assumed to be the standard deviation of the amplitude of the background in the image sample over time. If Q is less than 1 for a given pixel, the pixel is considered invalid and is discarded. The camera noise was calculated over a 20-by-20 pixel region in the background near the drum area. An example of the matrix Q is shown in Fig. 5. To increase visibility during the measurements, the object was painted matte white.
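As a sketch (Python for illustration; the function name and the numbers are hypothetical), the quality test of Eq. (4) reduces to:

```python
import statistics

def pixel_quality(modulation, background_trace):
    """Q = B / N (Eq. 4): the noise level N is taken as the temporal standard
    deviation of a background region away from the object."""
    noise = statistics.pstdev(background_trace)
    return modulation / noise

# Hypothetical background pixels fluctuating by a few ADU around 100
background = [100, 102, 98, 101, 99, 100, 103, 97]
q = pixel_quality(modulation=250, background_trace=background)
is_valid = q >= 1.0  # pixels with Q < 1 are discarded
```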

Fig. 5. Pixel quality matrix Q.

C. Calculation of the Phase Difference

The normalized intensity Ij of a single image recorded at a time tj for every pixel is

Ij=cos(ϕi+Δϕi,j)=cos(ϕj),
where ϕi indicates the phase from the ith frame in a sequence, i<j, and Δϕi,j is the phase difference between the ith and the jth frame. A phase value with one index describes the absolute phase at that time, and two subscripts represent a relative phase difference between two points in time. In the case of a small angle between the illumination and observation directions, the deformation Di,j is linearly related to the phase difference Δϕi,j and the wavelength of light λ by
Di,j=(λ/4π)·Δϕi,j.
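Numerically, Eq. (6) maps phase to displacement as in this small sketch (Python for illustration; the function name is ours):

```python
import math

WAVELENGTH = 532e-9  # m, the Verdi V5 illumination

def phase_to_deformation(delta_phi):
    """Eq. (6): D = (lambda / (4*pi)) * delta_phi, valid for a small angle
    between the illumination and observation directions."""
    return WAVELENGTH / (4 * math.pi) * delta_phi

# a full 2*pi fringe corresponds to lambda/2 = 266 nm of out-of-plane motion
d = phase_to_deformation(2 * math.pi)
```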

The noisy speckled structure comes from different values of the random speckle phase at every pixel. The intensity is additionally affected by camera noise, but pixel-by-pixel temporal filtering (averaging, Savitzky–Golay filtering, etc.) reduces peak heights and affects the normalizing procedure [19], so no temporal filtering is applied.

The first frame of the recorded sequence is the starting point of the calculation of the deformation. The speckle phase is defined as ϕ0 and the corresponding intensity is I0. The phase recovery can start at any frame and move forward or backward in time. The modulus of the speckle phase can be evaluated as the inverse cosine of the first frame,

|ϕ0|=arccos(I0).

The modulus of the random phase is bounded by 0 and π, which limits the recovered phase to only positive values. At first glance, if the deformation direction during the time between the first two frames is constant and the phase difference is less than 2π, the random speckle phase can be estimated as

ϕ0=sign(Δϕ0,1)·sign(I1−I0)·arccos(I1).

However, if the absolute deformation at a particular pixel exceeds twice the initial speckle phase, Eq. (8) is not valid. The equation holds only when Δϕ and ϕ0 have the same sign and their absolute values are lower than π/2. The possible outcomes are presented in Table 1. A more detailed analysis can be found in [23].

Table 1. Conditions on Deformation Phase and Relation Between the Sign of the Initial Speckle Phase and the Intensity Difference

The qualitative assessment of the deformation shape at a certain moment in time is accomplished by subtracting the image of interest from the first image. The intensity difference between the first and the jth image is

Ij−I0=cos(ϕ0+Δϕ0,j)−cos(ϕ0).

In the subtracted image black fringes are fringes of equal displacement and represent a deformation of λ/2, corresponding to a 2π phase difference [Eq. (6)]. Figure 6 shows the result of such a subtraction of images of the cuíca drum excited harmonically at 627 Hz by a shaker. The black lines can be regarded as isolines, i.e., lines of equal displacement. This image is used to verify the recovered phase—the black lines in the experimentally obtained image should coincide with those in the reconstructed image.
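Per pixel, the subtraction of Eq. (9) behaves as in this small sketch (illustrative Python; the function name is ours): any multiple of 2π in the deformation phase returns the pixel to its initial intensity, producing a dark fringe.

```python
import math

def subtraction_intensity(phi0, dphi):
    """Eq. (9): I_j - I_0 = cos(phi_0 + dphi_0j) - cos(phi_0). Values near
    zero mark fringes of equal displacement (multiples of lambda/2)."""
    return math.cos(phi0 + dphi) - math.cos(phi0)
```

For instance, subtraction_intensity(0.7, 2*math.pi) vanishes to within rounding whatever the speckle phase ϕ0 is, while a π phase difference gives a bright fringe.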

Fig. 6. Normalized ESPI image of the cuíca during harmonic excitation at 627 Hz. Subtraction intensity corresponds to a time difference of 1.2 ms (25 frames).

It is not always possible to predict the phase difference from only two frames, because the sign of ϕ0 is unknown. The 2π phase difference can be mistaken for a nodal line and vice versa, and the black lines in the images are noisy. In this case direct recovery of the phase will always result in a non-zero amplitude. Assessment of a series of frames helps to resolve the 2π ambiguity. The algorithm for deformation recovery between two subsequent frames is described in the following section, followed by a discussion of the total phase recovery.

4. SINGLE-PHASE DIFFERENCE RECOVERY

This section describes the algorithm used for the recovery of a single phase difference. The problem is ill-posed, since the direction of the deformation cannot be obtained from the interferometric image. However, it may be determined by considering the mechanical properties of the object. The speckle noise and internal camera noise create additional problems.

A. Surface Representation

The surface under study is represented using an adapted grid based on the concept of quadtree division [24]. The implementation of this approach is discussed in detail in the first part of this work [19]. The surface deformation is represented by a grid that is adapted during the optimization procedure (a feature that was not required in the work reported in [19]) so that it becomes denser in the areas where more precision is required. The grid point values are the optimization parameters, and the surface map is produced by bilinear interpolation of the deformation values at the grid points.
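The grid-to-surface step can be sketched as plain bilinear interpolation inside one grid cell (Python for illustration; the actual implementation is the MATLAB quadtree code of [19]):

```python
def bilinear(z00, z10, z01, z11, fx, fy):
    """Deformation at a fractional position (fx, fy) in [0,1]^2 inside a grid
    cell with corner node values z00, z10, z01, z11; the node values are the
    optimization parameters, and the full-resolution map is filled this way."""
    top = z00 * (1 - fx) + z10 * fx
    bottom = z01 * (1 - fx) + z11 * fx
    return top * (1 - fy) + bottom * fy
```

When a cell is split by the quadtree, new nodes are added at the cell midpoints and the interpolation is repeated on the smaller cells.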

B. Adaptive Time Step

During the motion, some parts of the surface may deform much more slowly than the rest of the object, so the intensity difference in these regions between adjacent images may not exceed the camera noise and the recovery procedure will fail. If one part of the object deforms noticeably after one recorded frame while another part moves insignificantly, the intensity variation of that part of the image is indistinguishable from noise and the deformation there cannot be determined. This means that more images are required to analyze certain regions of the object. In this case, the surface can be divided into regions of similar “deformation speed”: for every pixel, the frame at which the intensity difference from the first frame first exceeds the camera noise (or a chosen signal-to-noise ratio) is determined. The resulting matrix of frame indices is denoised, spatially averaged, and clustered. An example of such an output matrix is shown in Fig. 7. The objective function Fobj is found for every region separately and is summed afterwards; therefore, different frames participate in the formation of the mismatch error and the objective function.
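Per pixel, the adaptive time-step search can be sketched as follows (illustrative Python; the function name and the noise threshold are our choices):

```python
def first_significant_frame(pixel_trace, noise, snr=1.0):
    """Index of the first frame whose intensity difference from frame 0
    exceeds the camera noise (scaled by a chosen signal-to-noise ratio).
    The per-pixel index map is then denoised, spatially averaged, and
    clustered into regions of similar deformation speed."""
    i0 = pixel_trace[0]
    for j, v in enumerate(pixel_trace[1:], start=1):
        if abs(v - i0) > snr * noise:
            return j
    return None  # no measurable motion at this pixel in the sequence

# A slowly starting pixel: motion becomes measurable only at frame 4
j = first_significant_frame([100, 101, 100, 102, 120, 150], noise=3.0)
```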

Fig. 7. (a) Matrix with frame numbers at which the intensity difference exceeds the camera noise, and (b) matrix of the segmented regions with indicated frame numbers. Dark blue is the background which is cut off by masking.

In the example shown in Fig. 7(b) the lowest value is 2, which means that the deformation between frames 1 and 3 is recovered over the entire object. The deformation is considered linear between frames 1 and 3 and the next optimization procedure starts at the frame 3.

C. Optimization

To understand the optimization procedure it is useful to consider three consecutive frames I0,I1,I2. Assuming a linear out-of-plane displacement of the surface during the time required to capture the three images, the intensities are described by

I0=cos(ϕ0),
I1=cos(ϕ0+Δϕ),
I2=cos(ϕ0+2Δϕ),
where Δϕ is the out-of-plane spatial phase deformation between the first and the second frame, which is assumed to be smooth and continuous. The goal of the optimization algorithm is to predict the value of Δϕ such that the difference between the experimentally obtained images and the calculated intensities I1 and I2 is minimal. The optimization is performed using the Rosenbrock optimization algorithm, which is a zeroth-order search algorithm that has proved to be more efficient than the general gradient search algorithm [25].

The Rosenbrock optimization compares the simulated intensity patterns with the measured intensities I1 and I2. The objective function contains two error functions E1,E2, the pixel quality matrix Q, and boundary conditions C and is given by

Fobj=(E1+E2)·(1+|E1−E2|²)·Q·C,
E1=(I1−cos(ϕ0+Δϕ))²,
E2=(I2−cos(ϕ0+2Δϕ))².

The matrix C is calculated for every pixel and contains boundary conditions that assume the phase difference is bounded from above and below by given values U and L, such that

C=1+10³·(Θ(Δϕ−U)+Θ(L−Δϕ)),
where Θ is the Heaviside step function and the values of the upper and lower boundaries U and L are input parameters, so that C equals 1 inside the bounds and applies a large penalty outside them. In general, the phase is bounded between 0 and 2π.

The total objective function is a sum over all the pixel values normalized to the total number of pixels:

FobjΣ=(1/(2kl))·Σ_{x=1}^{k}Σ_{y=1}^{l}Fobj(x,y).
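Per pixel, the objective of Eqs. (13)–(17) can be sketched as follows (illustrative Python; reading the boundary matrix C as a multiplicative out-of-bounds penalty is our interpretation):

```python
import math

def objective(i1_meas, i2_meas, phi0, dphi, q, lower=0.0, upper=2 * math.pi):
    """Per-pixel objective: squared mismatch of the two measured normalized
    frames against the predicted cosines, weighted by pixel quality Q and
    penalized heavily when dphi leaves the [L, U] bounds."""
    e1 = (i1_meas - math.cos(phi0 + dphi)) ** 2
    e2 = (i2_meas - math.cos(phi0 + 2 * dphi)) ** 2
    c = 1.0 if lower <= dphi <= upper else 1e3
    return (e1 + e2) * (1 + abs(e1 - e2) ** 2) * q * c

def total_objective(fobj_map):
    """Sum over the k-by-l pixel map, normalized by 2*k*l."""
    k, l = len(fobj_map), len(fobj_map[0])
    return sum(sum(row) for row in fobj_map) / (2 * k * l)
```

A perfect phase guess drives both error terms, and hence the objective, to zero.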

The initial guess for the optimization is a flat phase map with four nodes on the edges. Thus, the starting directions of the Rosenbrock search are described by the 4-by-4 unit matrix (one direction per node) and all the nodes have the same weight in the optimization algorithm. The values at the nodes are changed according to the Rosenbrock algorithm until every point has at least one successful and one unsuccessful optimization step. There is no basis rotation as in the original Rosenbrock method, in order to facilitate the computation. Testing has shown that this simplification has no effect on the resulting calculation of the surface deformation. The grid is rearranged and divided as a quadtree when better spatial resolution is necessary, i.e., when the local objective function is higher than FobjΣ/e (e≈2.71).
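A minimal version of such a simplified Rosenbrock search (no basis rotation) might look like this sketch; the step factors, tolerance, and toy mismatch function are our illustrative choices, not the authors' MATLAB settings:

```python
def rosenbrock_search(f, x0, step=0.5, tol=1e-8, max_iter=5000):
    """Zeroth-order Rosenbrock-style search without basis rotation: probe
    each parameter (grid-node value) along its own axis; expand a successful
    step by 3, reverse and halve a failed one. Stops on a step-size plateau
    or the iteration cap."""
    x = list(x0)
    steps = [step] * len(x)
    best = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            x[i] += steps[i]
            val = f(x)
            if val < best:
                best, improved = val, True
                steps[i] *= 3.0        # success: larger step, same direction
            else:
                x[i] -= steps[i]       # failure: undo the move,
                steps[i] *= -0.5       # reverse, and shrink the step
        if not improved and max(abs(s) for s in steps) < tol:
            break
    return x, best

# Toy two-node mismatch with minimum at (1, -2)
x, fx = rosenbrock_search(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
```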

After the quadtree division, the number of parameters of the optimization changes. The algorithm can be generalized and applied to any number of frames with the assumption of a linear deformation in time. In the case where multiple frames are considered, the standard deviation is used as the error function instead of the difference between E1 and E2. Taking a series of frames helps to recover nodal lines and speeds up the process. The result of the single optimization (Δϕ1,2) is shown in Fig. 8.

Fig. 8. Optimized phase difference between two frames of the recorded sequence. The optimization procedure recovers only positive values of the phase.

D. Phase Sign Detection

The optimization algorithm produces only positive deformation values; therefore, a sign detection procedure is necessary. The proposed method for sign detection is based on the assumption that regions separated by a nodal line have phases of different signs. The sign detection procedure is fully presented in the first part of this work [19]. The nodal lines are detected by thresholding and skeletonization, the neighboring regions are separated, and all possible combinations of positive and negative phase values are considered. The criterion for a mechanically stable solution is that the sum of the pixel values near the nodal lines should be minimal, since a nodal line should separate regions that move out of phase with respect to each other. Naturally, two complementary solutions will appear, so the total resulting phase may be inverted. This ambiguity can be resolved by placing a laser Doppler vibrometer behind the object to determine the correct absolute phase at a single point.

After the sign of the phase is corrected for the first recovered phase difference, ϕ0 is found using Eq. (8). The result of the deformation recovery between two frames is shown in Fig. 9 together with the resulting experimental and calculated intensities. In Fig. 9(c) the lines of zero displacement are clearly defined and do not contain speckle noise. This method allows the deformation recovery between consecutive frames with the correct location of the nodal lines from a series of interferometric images with an assumption of linear displacement between frames.

Fig. 9. (a) Result of the phase sign correction, (b) experimental subtraction intensity, and (c) calculated intensity.

5. TOTAL PHASE RECOVERY

During the calculation of the phase difference between two moments in time, the grid is adapted spatially. The temporal step is calculated before the spatial optimization procedure and can be updated after each optimization step.

When all of the phase differences are found, it is possible to determine the total phase between the first and the jth frame by simple summation of the phases:

ϕ1,j=Σ_{i=0}^{j−1}ϕi,i+1.

However, direct summation has several drawbacks. First, it may result in a cumulative error, especially at the boundaries where small values can add up to violate the fixed boundary condition. Second, the sign detection algorithm may fail in the regions where the deformation changes the sign. Finally, the addition of any incorrect phase difference will result in an incorrect value of the final phase.

An alternative method to construct the total phase is to use the initial phase value that is found during the first phase difference recovery with a proper sign. Instead of using Eq. (8) to find ϕ1 from the intensities, ϕ1 is composed of the previously defined values:

ϕ1=ϕ0+ϕ0,1.

Thus, each recovered phase difference is added to the previous phase, which becomes the starting phase for the next series of images. This procedure guarantees the sign of the phase as long as the phase of the first frame is correctly determined; the following frames then need no additional sign detection. After the phase differences are obtained for every frame, they are summed to determine the absolute value of the phase. This method assumes that the coefficients A and B remain constant over a particular time interval. The coefficients can be recalculated at any given time step for higher precision; the normalized intensity and the phase sign must then be updated, and a new ϕ1 must be established.
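The carry-forward of Eq. (19) can be sketched as follows (Python for illustration; the function name and numbers are ours):

```python
def accumulate_phase(phi0_signed, phase_diffs):
    """Carry the signed phase forward: each recovered phase difference is
    added to the previous absolute phase and becomes the starting phase of
    the next interval (Eq. 19), so no per-frame sign detection is needed
    once the sign of the first frame is fixed."""
    phases = [phi0_signed]
    for dphi in phase_diffs:
        phases.append(phases[-1] + dphi)
    return phases

# phi_0 = 0.4 rad followed by three recovered differences
phases = accumulate_phase(0.4, [0.3, 0.25, -0.1])
# approximately [0.4, 0.7, 0.95, 0.85] (to within float rounding)
```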

6. RESULTS

In this section, the results of experiments performed under two types of excitation are presented. The drum was either excited by a shaker or played by hand. During harmonic excitation, the electrodynamic shaker was attached to the wooden stick behind the drum membrane. The temporal resolution of the high-speed camera was 50 μs. Figures 10 and 11 show the results of the temporal displacement recovery at two frequencies. The time between the selected frames is approximately 0.2 ms. For analysis of the surface deformation under playing conditions, the drum was excited by slightly rubbing the attached wooden stick by hand. The total relative deformation is shown in Fig. 12 with a 5 ms interval between images.

Fig. 10. Results of temporal reconstruction during the harmonic excitation of the cuíca drum at 627 Hz with interval of 0.2 ms.

Fig. 11. Results of temporal reconstruction during the harmonic excitation of the cuíca drum at 787 Hz with interval of 0.2 ms.

Fig. 12. Results of temporal reconstruction during the excitation of the cuíca drum by hand with interval of 5 ms.

In Figs. 10–12 the fringe patterns (c) are directly calculated from the deformations (a). The fringe pattern Icalc is calculated at each pixel as

Icalc(t)=|cos(ϕ0+ϕ(t))−cos(ϕ0)|·B,
where ϕ(t) is the total phase at time t, and ϕ0 is the speckle phase calculated from the starting frame at time t=0.
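Per pixel, Eq. (20) can be evaluated as in this sketch (illustrative Python; the function name is ours):

```python
import math

def calculated_fringe(phi0, phi_t, modulation):
    """Eq. (20): I_calc(t) = |cos(phi_0 + phi(t)) - cos(phi_0)| * B, the
    fringe pattern predicted from the recovered total phase phi(t), compared
    against the experimental subtraction image."""
    return abs(math.cos(phi0 + phi_t) - math.cos(phi0)) * modulation

# zero recovered phase reproduces a dark (zero-difference) pixel
i_dark = calculated_fringe(1.2, 0.0, 300.0)
```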

When the optimization method is applied to noisy interferograms there is a large error in the reconstructed image. The error appears to be larger in the areas of low surface velocity, where the camera noise exceeds the intensity difference caused by motion. However, the areas of equal displacement are correctly defined and the calculated intensity patterns show good agreement with experimentally obtained interferograms.

In the current experiments the phase could be retrieved without recalculating the initial speckle phase for 20 frames, which corresponds to about 1 ms, and for up to 20 ms in the case of a very small deformation during slight rubbing of the wooden stick of the drum.

Figure 13 shows a block diagram with the main steps of the image processing routine for the total phase recovery. The step “intensity normalization” is described in Section 3, the steps preceding the iterative phase recovery process refer to Section 4, and the “total phase” calculation follows from Section 5.

Fig. 13. Block diagram describing the main steps of the processing for the temporal displacement retrieval.

Optimization stops when the objective function reaches a plateau or when the surface grid cannot be divided below a minimum cell size (in this case 32 or 64 pixels). The number of iterations (i.e., evaluations of the objective function followed by a parameter update) is limited to 5000.

Single phase-difference processing takes 10–15 min for 512 by 512 pixel images on a standard laptop computer (Intel Core i7-7500U with 8 GB of RAM). The processed datasets consist of 20–60 images. When the deformation is very low, frames can be taken with a larger temporal interval between them (in the case of hand excitation, every fifth frame was taken).

7. DISCUSSION

As in every experimental method, certain limitations can be identified. The main limiting factor for the proposed deformation retrieval method is the surface velocity relative to the camera frame rate. At large displacements the speckles can drift in space, so that the initial phase value is no longer valid and the objective function increases. Another limitation is the uncertainty in the definition of the phase sign, which is resolved using assumptions about the mechanical properties of the object under deformation. The deformation between frames is considered linear in the sense that its direction does not change between two exposures. The phase differences are recovered at discrete times, so the temporal deformation of a single pixel is represented by a function that is not inherently smooth. This can be corrected by a polynomial fitting of the deformation in time, a process that may also improve the performance of the optimization algorithm. Furthermore, it may be necessary to paint the object to remove specular reflections and provide an even reflection of the laser light (this can be performed with dissolving “snow frost”).
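The polynomial smoothing of a single pixel's phase history mentioned above could be sketched as follows (NumPy; the function name and the default degree are illustrative assumptions):

```python
import numpy as np

def smooth_phase_history(times, phi, deg=3):
    """Fit a low-order polynomial to one pixel's recovered phase values
    at discrete times and return the smoothed samples."""
    coeffs = np.polyfit(times, phi, deg)
    return np.polyval(coeffs, times)
```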

The limitation of linearity across more than two consecutive frames may be relaxed by using a camera with a higher frame rate. For the high-speed camera used in the experiments, the frame rate was tied to the spatial resolution: the limit of 21,000 frames per second follows from keeping the image resolution at 512 by 512 pixels. Nevertheless, the proposed TRESPI method appears to be a useful extension of time-averaged ESPI with opportunities for further enhancement.

8. CONCLUSIONS

The results reported here show that the optimization method can be applied in TRESPI for out-of-plane deformation analysis with both harmonic and non-harmonic excitation. The phase recovery procedure uses a flexible time step to increase the precision of the optimization and a quadtree spatial grid division to reduce the number of optimization parameters. Future improvements include full automation of the deformation recovery procedure, application to high-amplitude cases (more than 2π phase difference between frames), reduction of the cumulative error, and an extended error analysis.

Funding

European Commission (EC), ITN Marie Curie Action project BATWOMAN, Seventh Framework Programme (FP7) (605601); Austrian Science Fund (FWF) (P28655-N32).

REFERENCES

1. L. Yang, X. Xie, L. Zhu, S. Wu, and Y. Wang, “Review of electronic speckle pattern interferometry (ESPI) for three dimensional displacement measurement,” Chin. J. Mech. Eng. 27, 1–13 (2014).

2. C. Buckberry, M. Reeves, A. J. Moore, D. P. Hand, J. S. Barton, and J. D. C. Jones, “The application of high-speed TV-holography to time-resolved vibration measurements,” Opt. Lasers Eng. 32, 387–394 (2000).

3. H. Van der Auweraer, H. Steinbichler, S. Vanlanduit, C. Haberstok, R. Freymann, D. Storer, and V. Linet, “Application of stroboscopic and pulsed-laser electronic speckle pattern interferometry (ESPI) to modal analysis problems,” Meas. Sci. Technol. 13, 451–463 (2002).

4. T. Statsenko, V. Chatziioannou, T. Moore, and W. Kausel, “Methods of phase reconstruction for time-averaging electronic speckle pattern interferometry,” Appl. Opt. 55, 1913–1918 (2016).

5. A. J. Moore, D. P. Hand, J. S. Barton, and J. D. Jones, “Transient deformation measurement with electronic speckle pattern interferometry and a high-speed camera,” Appl. Opt. 38, 1159–1162 (1999).

6. F. Chen, G. M. Brown, and N. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39, 10–22 (2000).

7. B. Bowe, S. Martin, V. Toal, A. Langhoff, and M. Whelan, “Dual in-plane electronic speckle pattern interferometry system with electro-optical switching and phase shifting,” Appl. Opt. 38, 666–673 (1999).

8. P. Etchepareborda, A. Bianchetti, A. L. Vadnjal, A. Federico, and G. H. Kaufmann, “Simplified phase-recovery method in temporal speckle pattern interferometry,” Appl. Opt. 53, 7120–7128 (2014).

9. J. Burke and H. Helmers, “Performance of spatial vs. temporal phase shifting in ESPI,” Proc. SPIE 3744, 188–199 (1999).

10. X. Xie, L. Yang, X. Chen, N. Xu, and Y. Wang, “Review and comparison of temporal- and spatial-phase shift speckle pattern interferometry for 3D deformation measurement,” Proc. SPIE 8916, 89160D (2013).

11. V. Madjarova, H. Kadono, and S. Toyooka, “Dynamic electronic speckle pattern interferometry (DESPI) phase analyses with temporal Hilbert transform,” Opt. Express 11, 617–623 (2003).

12. X. Li, Z. Huang, M. Zhu, J. He, and H. Zhang, “Spatio-temporal phase retrieval in speckle interferometry with Hilbert transform and two-dimensional phase unwrapping,” Opt. Eng. 53, 124104 (2014).

13. F. A. Marengo Rodriguez, A. Federico, and G. H. Kaufmann, “Hilbert transform analysis of a time series of speckle interferograms with a temporal carrier,” Appl. Opt. 47, 1310–1316 (2008).

14. N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N. C. Yen, C. C. Tung, and H. H. Liu, “The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis,” Proc. R. Soc. A 454, 903–995 (1998).

15. S. Equis and P. Jacquot, “Phase extraction in dynamic speckle interferometry with empirical mode decomposition and Hilbert transform,” Strain 46, 550–558 (2010).

16. P. Etchepareborda, A. Bianchetti, F. E. Veiras, A. L. Vadnjal, A. Federico, and G. H. Kaufmann, “Comparison of real-time phase-reconstruction methods in temporal speckle-pattern interferometry,” Appl. Opt. 54, 7663–7672 (2015).

17. Y. H. Huang, Y. S. Liu, S. Y. Hung, C. G. Li, and F. Janabi-Sharifi, “Dynamic phase evaluation in sparse-sampled temporal speckle pattern sequence,” Opt. Lett. 36, 526–528 (2011).

18. R. Kulkarni and P. Rastogi, “Optical measurement techniques – a push for digitization,” Opt. Lasers Eng. 87, 1–17 (2016).

19. T. Statsenko, V. Chatziioannou, T. Moore, and W. Kausel, “Deformation reconstruction by means of surface optimization. Part I: Time-averaged electronic speckle pattern interferometry,” Appl. Opt. 56, 654–661 (2017).

20. T. R. Moore and S. A. Zietlow, “Interferometric studies of a piano soundboard,” J. Acoust. Soc. Am. 119, 1783–1793 (2006).

21. A. Morrison and T. D. Rossing, “The extraordinary sound of the hang,” Phys. Today 62(3), 66–67 (2009).

22. J. N. Petzing and J. R. Tyrer, “Recent developments and applications in electronic speckle pattern interferometry,” J. Strain Anal. 33, 153–169 (1998).

23. T. Statsenko, “Computer aided testing methods,” Ph.D. thesis (Universität für Musik und darstellende Kunst Wien, 2017).

24. R. A. Finkel and J. L. Bentley, “Quad trees: a data structure for retrieval on composite keys,” Acta Inform. 4, 1–9 (1974).

25. M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming Theory and Algorithms (Wiley, 2006).


