Single-camera microscopic stereo digital image correlation using a diffraction grating

Abstract

A simple, cost-effective yet practical microscopic 3D-DIC method using a single camera and a transmission diffraction grating is proposed for surface profile and deformation measurement of small-scale objects. When a test sample is illuminated with a quasi-monochromatic source, the transmission diffraction grating placed in front of the camera produces two laterally spaced first-order diffraction views of the sample surface on the two halves of the camera sensor. A single image comprising the negative and positive first-order diffraction views can be used to reconstruct the profile of the test sample, while two such images acquired before and after deformation can be employed to determine the 3D displacements and strains of the sample surface. The basic principles and implementation procedures of the proposed technique for microscopic 3D profile and deformation measurement are described in detail. The effectiveness and accuracy of the presented microscopic 3D-DIC method are verified by measuring the profile and 3D displacements of a regular cylinder surface.

© 2013 Optical Society of America

1. Introduction

Digital image correlation (DIC) [1, 2] has been established as a popular and powerful non-interferometric optical metrology for full-field motion, deformation and shape measurement at various temporal and spatial scales, and has found numerous successful applications in scientific research and engineering. In general, DIC techniques can be classified into 2D-DIC [2] and 3D-DIC [3]. 2D-DIC using a single camera is easy to implement, cost-effective and very useful, but it is generally limited to in-plane deformation measurement of nominally planar objects, and the measured in-plane displacements are susceptible to out-of-plane displacements of the test sample that occur after loading [4, 5]. To overcome the limitations of 2D-DIC, 3D-DIC based on two synchronized digital cameras and the principle of binocular stereovision has been developed [6, 7]. Compared with 2D-DIC, 3D-DIC is considered to be more accurate and practical, because it can simultaneously measure the shape and all three displacement components of both planar and non-planar specimen surfaces.

It is worth noting that, to meet the requirement of measuring the shape and deformation of small-scale objects, 3D-DIC has been implemented on light stereo microscopes as well as fluorescent stereo microscopy systems [8–10], which enables shape and deformation measurement of small objects with sizes ranging from several millimeters to several centimeters. Based on this principle, commercial microscopic 3D-DIC systems (e.g., the Vic-3D Micro system, Correlated Solutions, Inc., USA) are also available on the market [11]. However, apart from the complicated optical setup and relatively high cost of an optical stereo microscope, accurate calibration of the extrinsic and intrinsic parameters of a stereo microscope has been considered a cumbersome and challenging task. Considering the great potential for applications in various research areas, such as mechanics, materials research and biological engineering, a low-cost, easy-to-implement yet effective microscopic 3D-DIC technique for measuring the shape and 3D deformation of small objects is highly desirable and undoubtedly has important scientific value and application prospects.

Recently, a novel diffraction assisted image correlation (DAIC) method using a single camera and a transmission diffraction grating was proposed for shape and 3D deformation measurement of small objects with sizes ranging from sub-millimeters to a few centimeters [12]. Compared with existing microscopic stereo-DIC methods using two digital cameras and a light stereo microscope, the DAIC method offers two attractive advantages. (1) A simple and low-cost measuring system: the optical system only requires a single camera, a light microscope, a quasi-monochromatic source and a transmission diffraction grating, so the overall system is much cheaper than existing microscopic 3D-DIC systems using a light stereo microscope and two digital cameras. In addition, since the two first-order diffracted images of the specimen are recorded by a single camera, hardware or software synchronization of two cameras in time-resolved measurements is not needed. (2) Easy and accurate measurement without stereovision calibration: in the DAIC method, the coordinate correspondence of points in the two diffraction images is governed by the diffraction rule, which eliminates the need for calibrating the extrinsic and intrinsic parameters of the imaging system and greatly simplifies microscopic 3D shape and deformation measurement.

However, in the initial work presented by Xia et al. [12], the DAIC method was only used to measure the 3D deformation of a flat transparent sample. In their work, although simple formulations were derived for determining all three displacement components of a flat sample from the image displacements computed by 2D-DIC, we found that the assumed optical imaging model, mathematical derivations and final formulas are less rigorous and may lead to large measurement errors, especially in the out-of-plane displacement component. To extend the applicability of this single-camera microscopic 3D-DIC method and make it more rigorous and accurate, we demonstrate that the DAIC method can be greatly improved by extending its scope from flat transparent samples to more general non-planar opaque objects. Also, by using the pinhole imaging model, correct equations that provide accurate 3D deformation measurements are derived. The rest of this paper is organized as follows. In Section 2, the measuring system, basic principles and measurement procedures of the single-camera microscopic 3D-DIC method are described in detail. In Section 3, two experiments are presented that verify the effectiveness and accuracy of the proposed microscopic 3D-DIC technique for shape and 3D displacement measurement, respectively.

2. Principles of the single-camera microscopic 3D-DIC method

2.1 Measuring system

Figure 1 schematically illustrates the optical arrangement of the single-camera microscopic 3D-DIC method for shape and 3D deformation measurement of small-scale objects. It consists of a digital CCD camera, a long working distance micro lens, a transmission diffraction grating and a quasi-monochromatic light source. The diffraction grating is vertically placed between the micro lens and the test object, and is aligned to be perpendicular to the optical axis of the imaging system. When the object is illuminated with the quasi-monochromatic light, high-order (i.e., positive and negative first-order) diffraction images of the object appear symmetrically replicated with respect to the ordinary (zero-order) image. In this work, we focus on the positive and negative first-order images, which can be thought of as virtual images formed from two virtual objects placed symmetrically beside the real object [12, 13] as shown in Fig. 1.

Fig. 1 Schematic diagram of the optical arrangement of the single-camera microscopic 3D-DIC method.

For the purpose of clarity, a coordinate system is established with its origin O(0, 0, 0) located at the intersection of the optical axis and the diffraction grating. Assume that the XY plane coincides with the grating plane, with the X axis perpendicular and the Y axis parallel to the grating rulings. The positive direction of the Z axis is defined to point from the grating to the test object. Using this coordinate system, the real coordinates of a point on the test object surface are denoted as $P(X, Y, Z)$, and the coordinates of its two corresponding first-order diffracted points are represented by $P_{-1}(X_{-1}, Y_{-1}, Z_{-1})$ and $P_{+1}(X_{+1}, Y_{+1}, Z_{+1})$, respectively. Using a backward ray-tracing method, the distances between the grating and the points $P_{-1}$, $P_{+1}$ are found to be [12]

$Z_{\pm 1} = Z\cos^3\theta$    (1)
where $\theta = \sin^{-1}(\lambda/p)$ is the first-order diffraction angle, with $\lambda$ being the wavelength of the quasi-monochromatic light source and $p$ denoting the pitch of the grating.

Since the grating rulings are parallel to the Y axis and diffraction therefore occurs along the X direction, the X and Y coordinates of the points $P_{-1}$ and $P_{+1}$ can be determined as

$X_{\pm 1} = X \mp Z\tan\theta$    (2)
$Y_{\pm 1} = Y$    (3)

By combining Eqs. (1)-(3), the real coordinates of the two first-order diffraction points $P_{-1}$ and $P_{+1}$ are $P_{\pm 1}(X \mp Z\tan\theta,\ Y,\ Z\cos^3\theta)$.
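To make the diffraction geometry concrete, the short Python sketch below (our own illustration, not code from the paper) evaluates the first-order diffraction angle and the virtual-point coordinates of Eqs. (1)-(3); the wavelength and groove density are taken from Section 3.1, and the sample point is arbitrary.

```python
import numpy as np

def first_order_angle(wavelength_mm, pitch_mm):
    """First-order diffraction angle theta = arcsin(lambda / p)."""
    return np.arcsin(wavelength_mm / pitch_mm)

def virtual_points(P, theta):
    """Coordinates of the +1 / -1 virtual points of P = (X, Y, Z), Eqs. (1)-(3)."""
    X, Y, Z = P
    P_plus = (X - Z * np.tan(theta), Y, Z * np.cos(theta) ** 3)   # +1 order
    P_minus = (X + Z * np.tan(theta), Y, Z * np.cos(theta) ** 3)  # -1 order
    return P_plus, P_minus

# 632.8 nm illumination and a 110 lines/mm grating, as in Section 3.1
theta = first_order_angle(632.8e-6, 1.0 / 110.0)
print(np.degrees(theta))                         # ~3.99 deg, matching the value quoted later
print(virtual_points((0.1, 0.2, 32.0), theta))   # arbitrary point 32 mm behind the grating
```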

In the following derivation, the ideal pinhole camera model, without considering lens distortion, is used to describe the coordinate correspondence between an object point and its image point, as shown in Fig. 2. Here, we assume that the intersection of the optical axis and the image plane is located at the image center $(c_x, c_y)$, the image distance of the optical system is $L_{img}$, and the object distance is $Z_{obj}$. It is also important to note that, with a long working distance microscope, the object distance $Z_{obj}$ is much larger than the height variations of the small-scale sample surface. For this reason, it is reasonable to assume that the object distance of each point on the sample surface equals $Z_{obj}$. Based on these assumptions and approximations, the image coordinates (in units of pixels) of $P$, $P_{-1}$, $P_{+1}$, i.e., $\mathbf{x}_0$, $\mathbf{x}_{-1}$, $\mathbf{x}_{+1}$, take the following forms:

$\mathbf{x}_0 = (x_0 - c_x,\ y_0 - c_y) = \left(\dfrac{L_{img}X}{Z_{obj}P_{sz}},\ \dfrac{L_{img}Y}{Z_{obj}P_{sz}}\right) = (MX,\ MY)$    (4)
$\mathbf{x}_{-1} = (x_{-1} - c_x,\ y_{-1} - c_y) = \left(\dfrac{L_{img}(X + Z\tan\theta)}{Z_{obj}P_{sz}},\ \dfrac{L_{img}Y}{Z_{obj}P_{sz}}\right) = \big(M(X + Z\tan\theta),\ MY\big)$    (5)
$\mathbf{x}_{+1} = (x_{+1} - c_x,\ y_{+1} - c_y) = \left(\dfrac{L_{img}(X - Z\tan\theta)}{Z_{obj}P_{sz}},\ \dfrac{L_{img}Y}{Z_{obj}P_{sz}}\right) = \big(M(X - Z\tan\theta),\ MY\big)$    (6)
where $(x_{+1}, y_{+1})$ and $(x_{-1}, y_{-1})$ are the image coordinates of the two virtual points located in the positive first-order (+1) and negative first-order (−1) diffracted images of the test specimen, respectively; $P_{sz}$ is the pixel size of the camera; and $M = L_{img}/(Z_{obj}P_{sz})$ (in units of pixel/mm) is the magnification factor of the optical system, which can be calibrated in advance. Note that the coordinate correspondence of the two diffracted points can be obtained by matching the positive and negative first-order images of the test specimen using a well-established subset-based 2D-DIC algorithm [2, 14].

Fig. 2 The imaging model of the single-camera microscopic 3D-DIC method.

2.2 3D profile measurement

To reconstruct the 3D shape of a test sample, the 3D coordinates $P(X, Y, Z)$ of each point on the test sample surface must be obtained. By adding and subtracting the x-coordinate of Eq. (5) to and from that of Eq. (6), and then rearranging the resulting equations, the X and Z coordinates can be determined. Likewise, the Y coordinate can be determined by adding the y-coordinate of Eq. (6) to that of Eq. (5). In this way, the real 3D coordinates of a point are written as

$X = \dfrac{x_{+1} + x_{-1} - 2c_x}{2M}$    (7)
$Y = \dfrac{y_{+1} + y_{-1} - 2c_y}{2M}$    (8)
$Z = \dfrac{x_{-1} - x_{+1}}{2M\tan\theta}$    (9)
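As a sanity check on Eqs. (4)-(9), the following sketch (our own, under the sign conventions reconstructed above) projects a synthetic object point through the pinhole model and then recovers its 3D coordinates from the matched pair of diffracted image points; the image center $(c_x, c_y)$ is simply assumed to be the middle of a 1624 × 1236 sensor.

```python
import numpy as np

def project_first_order(P, M, theta, cx, cy):
    """Image coordinates of the -1 and +1 diffracted views of P, Eqs. (5)-(6)."""
    X, Y, Z = P
    x_m1 = (M * (X + Z * np.tan(theta)) + cx, M * Y + cy)  # negative first order
    x_p1 = (M * (X - Z * np.tan(theta)) + cx, M * Y + cy)  # positive first order
    return x_m1, x_p1

def reconstruct(x_m1, x_p1, M, theta, cx, cy):
    """3D coordinates from a matched pair of diffracted image points, Eqs. (7)-(9)."""
    X = (x_p1[0] + x_m1[0] - 2 * cx) / (2 * M)
    Y = (x_p1[1] + x_m1[1] - 2 * cy) / (2 * M)
    Z = (x_m1[0] - x_p1[0]) / (2 * M * np.tan(theta))
    return X, Y, Z

M, theta = 168.14, np.radians(3.99)      # calibrated values reported in Section 3.1
cx, cy = 812.0, 618.0                    # assumed image center of a 1624 x 1236 sensor
P = (0.25, -0.10, 32.0)                  # arbitrary point (mm)
x_m1, x_p1 = project_first_order(P, M, theta, cx, cy)
print(reconstruct(x_m1, x_p1, M, theta, cx, cy))   # recovers P up to rounding
```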

2.3 3D deformation measurement

Assume that the displacement vector of a measurement point $P(X, Y, Z)$ is denoted as $\mathbf{d} = (U, V, W)$. After loading, $P$ moves to $P'(X+U,\ Y+V,\ Z+W)$. Then, according to the grating equation and Eq. (1), the coordinates of the corresponding first-order diffracted points after deformation can be written as $P'_{\pm 1}\big(X+U \mp (Z+W)\tan\theta,\ Y+V,\ (Z+W)\cos^3\theta\big)$. According to the pinhole imaging model described above, the image coordinates of these diffracted points after deformation are expressed as

$\mathbf{x}'_0 = (x'_0 - c_x,\ y'_0 - c_y) = \left(\dfrac{L_{img}(X+U)}{(Z_{obj}+W)P_{sz}},\ \dfrac{L_{img}(Y+V)}{(Z_{obj}+W)P_{sz}}\right)$    (10)
$\mathbf{x}'_{-1} = (x'_{-1} - c_x,\ y'_{-1} - c_y) = \left(\dfrac{L_{img}\big(X+U+(Z+W)\tan\theta\big)}{(Z_{obj}+W\cos^3\theta)P_{sz}},\ \dfrac{L_{img}(Y+V)}{(Z_{obj}+W\cos^3\theta)P_{sz}}\right)$    (11)
$\mathbf{x}'_{+1} = (x'_{+1} - c_x,\ y'_{+1} - c_y) = \left(\dfrac{L_{img}\big(X+U-(Z+W)\tan\theta\big)}{(Z_{obj}+W\cos^3\theta)P_{sz}},\ \dfrac{L_{img}(Y+V)}{(Z_{obj}+W\cos^3\theta)P_{sz}}\right)$    (12)

Based on the wavelength of the quasi-monochromatic illumination and the pitch of the grating used in this work, the first-order diffraction angle $\theta$ of the grating is calculated as 3.99°; thus $\cos\theta = 0.9976 \approx 1$ and hence $\cos^3\theta \approx 1$. In addition, the object distance $Z_{obj}$ of the long working distance microscope being used is much larger than the out-of-plane displacement component W (i.e., $W \ll Z_{obj}$). Based on these approximations, Eqs. (11) and (12) can be simplified as

$(x'_{-1} - c_x,\ y'_{-1} - c_y) \approx \left(\dfrac{L_{img}\big(X+U+(Z+W)\tan\theta\big)}{Z_{obj}P_{sz}}\Big(1-\dfrac{W}{Z_{obj}}\Big),\ \dfrac{L_{img}(Y+V)}{Z_{obj}P_{sz}}\Big(1-\dfrac{W}{Z_{obj}}\Big)\right) = \left(M\big(X+U+(Z+W)\tan\theta\big)\Big(1-\dfrac{W}{Z_{obj}}\Big),\ M(Y+V)\Big(1-\dfrac{W}{Z_{obj}}\Big)\right)$    (13)
$(x'_{+1} - c_x,\ y'_{+1} - c_y) \approx \left(\dfrac{L_{img}\big(X+U-(Z+W)\tan\theta\big)}{Z_{obj}P_{sz}}\Big(1-\dfrac{W}{Z_{obj}}\Big),\ \dfrac{L_{img}(Y+V)}{Z_{obj}P_{sz}}\Big(1-\dfrac{W}{Z_{obj}}\Big)\right) = \left(M\big(X+U-(Z+W)\tan\theta\big)\Big(1-\dfrac{W}{Z_{obj}}\Big),\ M(Y+V)\Big(1-\dfrac{W}{Z_{obj}}\Big)\right)$    (14)

Subtracting the x-directional displacement of the negative first-order diffracted image (i.e., $u_{-1}$) from that of the positive first-order diffracted image (i.e., $u_{+1}$) gives

$u_{+1} - u_{-1} = (x'_{+1} - x_{+1}) - (x'_{-1} - x_{-1}) = -2M\tan\theta\,(Z+W)\Big(1-\dfrac{W}{Z_{obj}}\Big) + 2M\tan\theta\,Z = -2M\tan\theta\left[(Z+W)\Big(1-\dfrac{W}{Z_{obj}}\Big) - Z\right] = -2M\tan\theta\Big(1-\dfrac{Z+W}{Z_{obj}}\Big)W \approx -2M\tan\theta\Big(1-\dfrac{Z}{Z_{obj}}\Big)W$    (15)

In a similar manner, by adding the x- and y-directional displacements of the negative first-order diffracted image (i.e., $u_{-1}$, $v_{-1}$) to those of the positive first-order diffracted image (i.e., $u_{+1}$, $v_{+1}$), and taking into account Eqs. (5), (6), (13) and (14) as well as the assumption $W \ll Z_{obj}$, we have

$u_{+1} + u_{-1} = (x'_{+1} - x_{+1}) + (x'_{-1} - x_{-1}) = -2MX + 2M(X+U)\Big(1-\dfrac{W}{Z_{obj}}\Big) \approx -\dfrac{2MXW}{Z_{obj}} + 2MU \approx 2MU$    (16)
$v_{+1} + v_{-1} = (y'_{+1} - y_{+1}) + (y'_{-1} - y_{-1}) = 2M(Y+V)\Big(1-\dfrac{W}{Z_{obj}}\Big) - 2MY \approx 2MV - \dfrac{2MYW}{Z_{obj}} \approx 2MV$    (17)

Rearranging Eqs. (15)-(17), the three displacement components of a measurement point $P$ are obtained as

$W \approx \dfrac{u_{-1} - u_{+1}}{2M\tan\theta}\left(\dfrac{Z_{obj}}{Z_{obj} - Z}\right)$    (18)
$U \approx \dfrac{u_{+1} + u_{-1}}{2M}$    (19)
$V \approx \dfrac{v_{+1} + v_{-1}}{2M}$    (20)
where Z is the distance from each measurement point to the grating, which is obtained from the shape reconstruction, and $Z_{obj}$ is the object distance of the imaging system, which can be determined by the calibration approach described in Section 2.5.
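A minimal sketch of how Eqs. (18)-(20) would be applied in code, assuming the image displacements of the ±1 views have already been obtained by 2D-DIC and Z from the shape reconstruction (variable names are ours; the inputs may be scalars or per-point NumPy arrays):

```python
import numpy as np

def displacements_3d(u_p1, u_m1, v_p1, v_m1, Z, M, theta, Z_obj):
    """3D displacements (U, V, W) from the image displacements of the +1 / -1 views.

    u_p1, u_m1, v_p1, v_m1 : image displacements in pixels
    Z     : point-to-grating distance from the shape reconstruction (mm)
    M     : magnification factor (pixel/mm); Z_obj : object distance (mm)
    """
    U = (u_p1 + u_m1) / (2.0 * M)                                        # Eq. (19)
    V = (v_p1 + v_m1) / (2.0 * M)                                        # Eq. (20)
    W = (u_m1 - u_p1) / (2.0 * M * np.tan(theta)) * Z_obj / (Z_obj - Z)  # Eq. (18)
    return U, V, W
```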

We note that the final formulas derived in Ref [12] were based on a simple but less practical parallel projection. In this work, the formulas given in Eqs. (18)-(20) are derived from a perspective projection using the pinhole imaging model; consequently, more parameters (i.e., Z, M, $Z_{obj}$) are involved in determining the three displacement components of the measured object surface. As will be shown below, the formulas derived in this work provide more accurate results.

2.4 Determine image displacements of the diffracted images using 2D-DIC

To reconstruct the profile of a sample surface, a region of interest (ROI), within which all the calculation points are defined, is first specified in the left (negative first-order) diffraction image. These calculation points, with known image coordinates $(x_{-1}, y_{-1})$, are then searched in the right (positive first-order) diffraction image using subset-based 2D-DIC to determine their corresponding image coordinates $(x_{+1}, y_{+1})$, as schematically shown in the top half of Fig. 3. Afterwards, the matched image coordinates are substituted into Eqs. (7)-(9) to reconstruct the profile of the ROI.

Fig. 3 Calculation of image displacements for profile and 3D displacement measurement using subset-based 2D-DIC.

Figure 3 also indicates the detailed procedure for 3D displacement measurement. After defining a ROI in the negative first-order diffraction image of the reference image, each measurement point within the ROI is searched in the other three diffraction images (i.e., the +1 diffraction image in the reference state, and the −1 and +1 diffraction images in the deformed state) using the subset-based matching method. Clearly, the difference between $(x'_{-1}, y'_{-1})$ and $(x_{-1}, y_{-1})$ provides the in-plane displacements $(u_{-1}, v_{-1})$ of the point in the negative first-order images, while the difference between $(x'_{+1}, y'_{+1})$ and $(x_{+1}, y_{+1})$ gives the desired in-plane displacements $(u_{+1}, v_{+1})$ of the point in the positive first-order images.
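The actual matching in this work uses a subset-based 2D-DIC algorithm with a ZNSSD criterion, a second-order shape function and a Newton-Raphson optimizer [14–16]. As a much-simplified, hedged stand-in that only conveys the matching idea, the sketch below performs an integer-pixel ZNCC search of one 31 × 31 subset (no shape function, no subpixel refinement; boundary handling is omitted):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_subset(ref_img, tar_img, x, y, half=15, search=30):
    """Integer-pixel search of the subset centred at (x, y) of ref_img in tar_img.

    Returns (best ZNCC coefficient, x-displacement, y-displacement) in pixels.
    The subset size is (2*half + 1) x (2*half + 1), i.e. 31 x 31 by default.
    """
    ref = ref_img[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best = (-2.0, 0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            tar = tar_img[y + dy - half:y + dy + half + 1,
                          x + dx - half:x + dx + half + 1].astype(float)
            if tar.shape != ref.shape:   # skip windows that fall off the image
                continue
            c = zncc(ref, tar)
            if c > best[0]:
                best = (c, dx, dy)
    return best
```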

2.5 Calibration of the imaging system

As shown in the final formulas given in Eqs. (7)-(9) and Eqs. (18)-(20), two parameters of the imaging system, the magnification factor M and the object distance $Z_{obj}$, must be calibrated in order to determine the surface morphology and three-dimensional deformation of a small object. In practice, M can easily be determined by measuring the image displacements of a speckle pattern subjected to prescribed in-plane translations. Specifically, a glass plate decorated with a speckle pattern is translated along the horizontal direction (in millimeters) using a precision translation stage, and the in-plane motions (in pixels) of the zero-order image of the sample are detected by the regular 2D-DIC method. Afterwards, the magnification M of the imaging system is computed by fitting the measured displacements against the applied ones using linear least squares.

In addition, to determine the object distance $Z_{obj}$ of the imaging system, the sample is translated sequentially along the Z direction by distances $\Delta Z_i$. When the sample moves towards or away from the camera, the recorded image is uniformly enlarged or reduced. By measuring the average tensile or compressive normal strains across the image at each translation using 2D-DIC, which are related to the out-of-plane motion and the object distance by $\varepsilon_{xx,i} = \varepsilon_{yy,i} \approx -\Delta Z_i/Z_{obj}$, as derived in previous works [4, 5], the object distance $Z_{obj}$ can also be accurately determined using linear least squares.
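Both calibration fits reduce to one-parameter linear regressions. The sketch below (our own illustration; the translation and strain values are placeholders, not measured data) shows how they could be carried out with NumPy's least-squares polynomial fit:

```python
import numpy as np

# --- magnification factor M (pixel/mm) from prescribed in-plane translations ---
applied_x = np.array([0.0, 0.1, 0.2, 0.3, 0.4])          # stage translations (mm), placeholders
measured_u = np.array([0.0, 16.8, 33.6, 50.5, 67.2])     # zero-order image motion (pixels), placeholders
M = np.polyfit(applied_x, measured_u, 1)[0]              # slope of the linear fit

# --- object distance Z_obj (mm) from prescribed out-of-plane translations ---
applied_dz = np.array([0.0, 0.2, 0.4, 0.6])                      # stage translations (mm), placeholders
apparent_strain = np.array([0.0, -9.3e-4, -1.87e-3, -2.80e-3])   # eps_xx from 2D-DIC, placeholders
slope = np.polyfit(applied_dz, apparent_strain, 1)[0]
Z_obj = 1.0 / abs(slope)                                 # |eps| ~ dZ / Z_obj

print(M, Z_obj)   # with these placeholders: ~168 pixel/mm and ~214 mm
```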

3. Experimental validation

3.1 Experimental details

Two experiments were performed to verify the effectiveness and accuracy of the proposed single-camera microscopic 3D-DIC method for profile and deformation measurement. A photograph of the established measuring system is shown in Fig. 4. Quasi-monochromatic light, used to illuminate the test specimen surface during the experiments, was provided by a dual gooseneck fiber optic illuminator equipped with two ultra-narrow bandpass optical filters (632.8 ± 1.0 nm bandpass, NT65-753, Edmund Optics, NJ, USA). It is worth noting that the bandwidth of the bandpass filter used to produce the quasi-monochromatic illumination has an important influence on the quality of the diffraction images, because quasi-monochromatic light with a broader bandwidth tends to yield blurred first-order diffracted images of the specimen. A transmission diffraction grating with a groove density of 110 lines/mm (NT46073, Edmund Optics, NJ, USA) was used to obtain multiple views of the specimen. The first-order diffraction angle of the grating was calculated as 3.99° using the grating equation. The distance between the sample and the grating was fixed at about 32 mm, which ensured that the diffracted images did not overlap. The diffraction images were captured by a long working distance micro lens (12X Zoom, working distance 165 mm, Navitar Inc., USA) connected to a 1624 × 1236 pixel, 8-bit industrial digital CCD camera (TXG20, Baumer Company, Switzerland). Using the calibration procedure described above, the magnification factor M and object distance $Z_{obj}$ of the imaging system were calibrated to be 168.14 pixel/mm and 214.46 mm, respectively.

Fig. 4 Experimental setup for the proposed single-camera microscopic 3D-DIC method.

In the validation experiments, a regular cylinder with a nominal diameter of 2.00 mm was used as the test sample. Prior to the measurements, random speckle patterns were applied to the cylinder surface by spraying white and black paints. The shape of the cylinder was measured first, and the fitted radius of the reconstructed profile was compared with the value measured by a caliper to verify the accuracy of the proposed technique for shape measurement. Out-of-plane (Z-directional) rigid body translations were then applied to the same cylinder using a two-axis translation stage with a positioning accuracy of 5 μm, and the 3D displacements were calculated using the presented technique. Finally, the measured displacements were compared with the applied values to validate the effectiveness and accuracy of the proposed technique for 3D deformation measurement.

3.2 Shape measurement of a cylinder surface

Using the established measuring system shown in Fig. 4, the zero-order image and the positive and negative first-order diffraction images of the cylinder surface can be recorded simultaneously in a single image, as shown in Fig. 5(a). To measure the shape of the cylinder, a rectangular ROI comprising 12648 = 186 × 68 regularly spaced calculation points was specified in the middle of the negative first-order diffraction image, i.e., the left part of Fig. 5(a). These calculation points were then searched in the positive first-order diffraction image, i.e., the right part of Fig. 5(a), using 2D-DIC with a subset size of 31 × 31 pixels and a grid step of 5 pixels. To determine the image displacements more accurately and robustly, a zero-mean normalized sum of squared difference (ZNSSD) criterion, which is insensitive to offset and scale changes in the intensity of the target subset [15], was employed as the similarity measure. Also, a twelve-parameter second-order shape function [16], which is capable of approximating more complicated deformations of a target subset, was adopted to relate the correspondence of the same points in the reference and target subsets. The displacements of each calculation point were then computed using the fast Newton-Raphson algorithm we proposed recently [14]. The computed x- and y-directional displacement fields and the zero-mean normalized cross-correlation (ZNCC) coefficient map are shown in Figs. 5(b)-5(d). (Note that the ZNCC coefficient, which falls within the range [−1, 1], gives a more straightforward indication of the similarity and matching quality between the reference and target subsets; it can be directly converted from the ZNSSD coefficient.) It is clear from Fig. 5 that the u-displacements are much larger than the v-displacements due to the lateral diffraction effect of the vertically placed diffraction grating. The ZNCC coefficients shown in Fig. 5(d) are all larger than 0.95, indicating that reliable and accurate matching has been obtained.
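The ZNSSD-to-ZNCC conversion mentioned in the parentheses follows the equivalence derived in Ref [15], $C_{ZNSSD} = 2(1 - C_{ZNCC})$; as a one-line helper (ours, for illustration):

```python
def znssd_to_zncc(c_znssd):
    """Convert a ZNSSD coefficient to the equivalent ZNCC coefficient, per Ref [15]."""
    return 1.0 - 0.5 * c_znssd
```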

Fig. 5 (a) Diffraction images of a cylinder rod; (b) computed x-directional image displacements of the ROI; (c) computed y-directional image displacements of the ROI; (d) computed ZNCC coefficients of the ROI.

Based on these computed image-plane displacements and the calibrated magnification of the imaging system, the profile of the specified ROI can be reconstructed using Eqs. (7)-(9), as shown in Fig. 6(a). For better viewing, a plane fitting approach was used to remove the non-parallelism between the cylinder and the camera, and the Z-coordinates of the 3D surface were then inverted. The final profile, shown in Fig. 6(b), is in good agreement with the actual geometry, confirming the correctness of the proposed technique for profile measurement. The radius of the cylinder was computed by fitting the reconstructed height data using the nonlinear iterative least-squares approach given in Ref [17] and was calculated to be 2.015 mm. By comparison, the relative percentage error is found to be less than 1.0%, verifying the accuracy of the proposed technique for 3D shape measurement.
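Ref [17] describes the cylinder fit as an iterative nonlinear least-squares problem. A hedged sketch of such a fit (not the authors' implementation) is given below; it assumes that, after the plane-fitting alignment, the cylinder axis is approximately parallel to the Y axis, so only the axis position in the XZ plane and the radius need to be estimated.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_cylinder_radius(X, Z, r_guess=1.0):
    """Fit a cylinder with axis parallel to Y to reconstructed points.

    X, Z    : flattened coordinate maps from Eqs. (7)-(9) (mm)
    Returns : (x0, z0, R) - axis position in the XZ plane and radius (mm)
    """
    def residuals(p):
        x0, z0, R = p
        return np.sqrt((X - x0) ** 2 + (Z - z0) ** 2) - R

    # crude initial guess: the axis lies roughly r_guess behind the visible surface patch
    p0 = np.array([X.mean(), Z.mean() + r_guess, r_guess])
    return least_squares(residuals, p0).x
```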

Fig. 6 (a) Reconstructed surface topography of the cylinder rod using Eqs. (7)-(9); (b) inverted surface topography after applying a coordinate transformation based on best plane fitting.

3.3 3D displacement measurement

To verify the validity and accuracy of the proposed single-camera microscopic 3D-DIC method for displacement measurement, rigid body translations of the same cylinder sample along the Z direction were applied using a two-axis translation stage. The prescribed Z-directional translations ranged from −0.5 mm to 0.5 mm with a 0.1 mm increment between consecutive translations. A diffraction speckle image of the cylinder sample was recorded at each translation. Note that the image at the original position (W = 0 mm) was used as the reference image. All of the remaining images were then compared with the reference image using the same calculation settings to determine the full-field image displacements within the specified ROI. Afterwards, the real displacements of each object point were obtained using Eqs. (18)-(20).

Figure 7(a) shows the mean values of the measured Z-directional displacements together with the prescribed displacements. For comparison, the averages of the Z-displacements calculated by Eq. (8) of Ref [12] are also plotted. The displacements measured by the technique proposed in this work are in excellent agreement with the prescribed displacements, while those obtained with the original equation of Ref [12] show a systematic error that grows with the out-of-plane (Z-directional) displacement. Figure 7(b) illustrates the displacement vector of each calculation point in the specified ROI when the applied Z-displacement is 0.5 mm. All of the measured displacement vectors point along the positive Z direction, and the magnitude of each vector approximately equals 0.5 mm, consistent with the prescribed displacement.

Fig. 7 (a) Comparison of the Z-displacements calculated by Eq. (18) of this work and Eq. (8) of Ref [12], respectively; (b) measured displacement vector field at a Z-displacement of 0.5 mm.

To quantitatively verify the accuracy of the theoretical formulas derived in this work, Table 1 lists the mean values of the Z-directional displacements measured by Eq. (18) of this work and by Eq. (8) of Ref [12] at the different translations. The mean absolute percentage error of the Z-displacements measured by the proposed technique is only 1.56%, with a maximum relative error of 4%. By contrast, the mean relative error of the displacements measured by the existing approach [12] is 14.95%, with a maximum relative error exceeding 18%. By comparing the two equations (i.e., Eq. (18) of this work and Eq. (8) of Ref [12]), it is found that the relative error of Eq. (8) in Ref [12] should be $Z/(Z_{obj} - Z)$. Substituting the corresponding values (i.e., $Z_{obj}$ = 214.46 mm, Z ≈ 32 mm) into this expression gives a factor of 17.58%, in approximate agreement with the percentage errors listed in the last column of Table 1.
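The error factor quoted above can be checked directly from the calibrated values in Section 3.1 (Z ≈ 32 mm is only approximate, so the result agrees to within rounding):

```python
Z_obj, Z = 214.46, 32.0     # mm, values reported in Section 3.1
print(Z / (Z_obj - Z))      # ~0.175, i.e. about 17.5%, close to the 17.58% quoted above
```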

Table 1. Comparison of the measured Z-displacements by the original and proposed methods for out-of-plane translations

4. Conclusion

In conclusion, we have described a single-camera microscopic 3D-DIC technique for measuring the shape and 3D deformation of small-scale objects. By combining quasi-monochromatic illumination with grating diffraction imaging, multiple diffracted views of a test object surface can be obtained. Based on the simple but practical pinhole imaging model, theoretical formulas for reconstructing the profile and measuring the 3D displacements of the test object surface are derived. Compared with existing microscopic 3D-DIC implemented on a stereo light microscope, the proposed technique features a simple and cost-effective optical arrangement and easy implementation with simple calibration. Moreover, since only one camera is used, the proposed technique is preferable for investigating the dynamic deformation behavior of small-scale objects. The validity and accuracy of the method have been demonstrated through 3D profile measurement of a regular cylinder and rigid-body translation tests of the same specimen. The good agreement between the prescribed displacements and the measured values clearly demonstrates the effectiveness and accuracy of the proposed technique.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant nos. 11002012, 11272032, 11322220 and 91216301), the Program for New Century Excellent Talents in University (Grant no. NCET-12-0023), and the Science Fund of the State Key Laboratory of Automotive Safety and Energy.

References and links

1. M. A. Sutton, J. J. Orteu, and H. W. Schreier, Image Correlation for Shape, Motion and Deformation Measurements (Springer, 2009).

2. B. Pan, K. M. Qian, H. M. Xie, and A. Asundi, "Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review," Meas. Sci. Technol. 20(6), 062001 (2009).

3. J. J. Orteu, "3-D computer vision in experimental mechanics," Opt. Lasers Eng. 47(3-4), 282–291 (2009).

4. M. A. Sutton, J. H. Yan, V. Tiwari, H. W. Schreier, and J. J. Orteu, "The effect of out-of-plane motion on 2D and 3D digital image correlation measurements," Opt. Lasers Eng. 46(10), 746–757 (2008).

5. B. Pan, L. P. Yu, and D. F. Wu, "High-accuracy 2D digital image correlation measurements with bilateral telecentric lenses: error analysis and experimental verification," Exp. Mech., doi:.

6. P. F. Luo, Y. J. Chao, M. A. Sutton, and W. H. Peters III, "Accurate measurement of three-dimensional displacement in deformable bodies using computer vision," Exp. Mech. 33(2), 123–132 (1993).

7. B. Pan, D. F. Wu, and L. P. Yu, "Optimization of a three-dimensional digital image correlation system for deformation measurements in extreme environments," Appl. Opt. 51(19), 4409–4419 (2012).

8. H. W. Schreier, D. Garcia, and M. A. Sutton, "Advances in light microscope stereo vision," Exp. Mech. 44(3), 278–288 (2004).

9. M. A. Sutton, X. Ke, S. M. Lessner, M. Goldbach, M. Yost, F. Zhao, and H. W. Schreier, "Strain field measurement on mouse carotid arteries using microscopic three-dimensional digital image correlation," J. Biomed. Mater. Res. 84A(1), 178–190 (2008).

10. Z. X. Hu, H. Y. Luo, Y. J. Du, and H. B. Lu, "Fluorescent stereo microscopy for 3D surface profilometry and deformation mapping," Opt. Express 21(10), 11808–11818 (2013).

11. http://www.correlatedsolutions.com

12. S. Xia, A. Gdoutou, and G. Ravichandran, "Diffraction assisted image correlation: a novel method for measuring three-dimensional deformation using two-dimensional digital image correlation," Exp. Mech. 53(5), 755–765 (2013).

13. M. Trivi and H. J. Rabal, "Stereoscopic uses of diffraction gratings," Appl. Opt. 27(6), 1007–1009 (1988).

14. B. Pan and K. Li, "A fast digital image correlation method for deformation measurement," Opt. Lasers Eng. 49(7), 841–847 (2011).

15. B. Pan, H. M. Xie, and Z. Y. Wang, "Equivalence of digital image correlation criteria for pattern matching," Appl. Opt. 49(28), 5501–5509 (2010).

16. H. Lu and P. D. Cary, "Deformation measurement by digital image correlation: implementation of a second-order displacement gradient," Exp. Mech. 40(4), 393–400 (2000).

17. P. F. Luo and J. N. Chen, "Measurement of curved-surface deformation in cylindrical coordinates," Exp. Mech. 40(4), 345–350 (2000).
