Abstract
We present a method for full-field 3D measurement of substrate warpage and ball grid array (BGA) coplanarity that is suitable for inline back-end inspection and process monitoring. To evaluate the performance of the proposed system, the linearity between our system and a reference confocal microscope is studied by repeating measurements 35 times with a particular substrate sample. For the warpage measurement, the point-to-point correlation coefficient and the difference between the two methods are evaluated, and the repeatability of the substrate warpage is 4.2 μm. For the BGA coplanarity inspection, the bump-level correlation coefficient and difference are likewise evaluated, and the repeatability of the BGA coplanarity is 3.7 μm. Data acquisition takes about 0.2 s for a full-field measurement.
© 2014 Optical Society of America
1. Introduction
In the semiconductor industry, electronic packaging plays an essential role in improving the performance of electronic devices. The goal in producing a high-performance electronic system is to package devices as densely as possible in order to minimize circuit path length [1]. To achieve this goal, the trend in integrated circuit (IC) packaging is to increase the input/output (I/O) count and to decrease the package size [2]. The ball grid array (BGA) is the most common packaging technique used in industry because of its high I/O density and shorter electrical paths. Due to high-density packaging, however, process controls for assembly become critical for reducing problems such as connection failures between the BGA and a circuit board. It is therefore important to measure the IC package surface profile to reduce device failures.
Two important quality metrics for package inspection are the substrate warpage and the BGA coplanarity. Figure 1 shows the schematics of an IC package. Due to thermal cycling during the manufacturing process and to materials with different expansion rates, the substrate becomes warped. In order to calculate the BGA coplanarity, the coordinates of each ball are required, and a regression plane is fitted to these locations. Coplanarity is defined as the distance between the maximum and the minimum deviations from this best-fit plane. The BGA coplanarity directly affects solder joint reliability, and the causes of large coplanarity are substrate warpage and ball height differences. The substrate warpage is typically the major contributor to any lack of coplanarity, since the solder ball heights are relatively uniform [3]. Therefore, the substrate warpage is one of the key metrics for the quality control of IC packages.
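To make the coplanarity definition above concrete, the following minimal numpy sketch (with a hypothetical `coplanarity` helper) fits a least-squares regression plane to the ball coordinates and returns the peak-to-valley residual; vertical residuals are used here as an approximation of the distance from the plane:

```python
import numpy as np

def coplanarity(points):
    """Fit a least-squares plane z = a*x + b*y + c to the ball
    coordinates and return the peak-to-valley residual from it."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coef           # vertical deviations
    return residuals.max() - residuals.min()   # peak-to-valley
```

For a perfectly planar ball array the result is zero; a single low or high ball immediately shows up as nonzero coplanarity.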
Optical profilers have long been used for nondestructive measurement. Common optical inspection tools for IC package characterization are confocal microscopes, white-light interferometers (WLI), laser devices [4,5], fringe projection devices [6], and machine vision techniques [7]. Depending on the purpose of the measurement, an appropriate metrology should be employed in order to maximize performance. For example, confocal microscopes and WLI are widely used in laboratories to characterize sampled IC packages, because measurement accuracy is more important there than throughput. Factories, on the other hand, use machine vision systems for large-volume inspection because of their high throughput and cost advantage. From a quality-control perspective, the high-speed inspection systems used in factories play a key role in monitoring production yield. We therefore focus on developing an inline inspection system for factory use, rather than laboratory use, to meet the demand for measuring high-density BGA packages.
Stereo vision is used to reconstruct a 3D object by finding matching pixels (point correspondences) between images captured by two cameras from different view angles and converting these 2D pixel coordinates into the 3D depth. In computer vision, the point correspondence algorithm has been one of the most widely studied subjects [8–10]. For accurate reconstructions, transformation relationships between a camera lens and an image plane as well as between a camera and a scene should be determined. This process is called camera calibration. Tsai [11] and Zhang [12,13] have developed the most commonly used calibration methods in computer vision. Although there are a number of applications for the 3D measurements [14–20], the studies of the BGA coplanarity, substrate warpage, and bump height measurements using stereo vision are limited [21,22].
In this paper, we propose an inline stereo vision system for BGA coplanarity and substrate warpage inspection. In Section 2, the theoretical aspects of stereo vision are discussed. In Section 3, we describe the hardware setup and calibration procedure, as well as the computer simulations and experimental results for the substrate warpage and the BGA coplanarity. Finally, the conclusion is given in Section 4.
2. Theory
Figure 2 shows the epipolar geometry [23]. Stereo vision employs two cameras viewing an object from different angles. The world coordinates are given by $X$, $Y$, and $Z$. The camera coordinates are given by $x_1$, $y_1$ and $x_2$, $y_2$ for Camera 1 and Camera 2, respectively. The points $C_1$ and $C_2$ are the camera centers. An object point in the world coordinate system is imaged to $\mathbf{x}_1$ by Camera 1 and to $\mathbf{x}_2$ by Camera 2. The object point together with $C_1$ and $C_2$ constructs the plane called the epipolar plane. The line connecting $C_1$ and $C_2$ is called the baseline, and its intersection points with the two image planes are called the epipoles $e_1$ and $e_2$. The epipolar plane intersects the image planes, and these intersections are called the epipolar lines $l_1$ and $l_2$.
We can write the relationship between an image point $\mathbf{x}$, a world point $\mathbf{X}$, and the camera as follows:

$$ \mathbf{x} = P\mathbf{X}, \qquad (1) $$

where $P$ is known as the homogeneous camera projection matrix, which maps a point $\mathbf{X}$ in the world coordinate system to the corresponding point $\mathbf{x}$ in the camera coordinate system. Given known point correspondences $\mathbf{X}_i$ and $\mathbf{x}_i = (u_i, v_i, 1)^T$, the matrix $P$ can be reconstructed by using the direct linear transformation (DLT) [24] as

$$ \begin{bmatrix} \mathbf{0}^{T} & -\mathbf{X}_i^{T} & v_i \mathbf{X}_i^{T} \\ \mathbf{X}_i^{T} & \mathbf{0}^{T} & -u_i \mathbf{X}_i^{T} \end{bmatrix} \mathbf{p} = \mathbf{0}, $$

where $\mathbf{p}$ is the 12-element vector formed by stacking the rows of $P$,
or simply, $A\mathbf{p} = \mathbf{0}$. This can be solved by singular value decomposition (SVD), $A = UDV^{T}$; then $\mathbf{p}$ is the last column of $V$ [25]. Before applying the SVD, it is important to perform appropriate normalization to obtain meaningful results [26].

Once the system parameters are determined, object heights can be reconstructed from these $P$ matrices and a set of corresponding points $\mathbf{x}_1$ and $\mathbf{x}_2$ at each image plane. The simplest approach for height reconstruction is linear triangulation [27]. For each camera, we have $\mathbf{x}_1 = P_1\mathbf{X}$ and $\mathbf{x}_2 = P_2\mathbf{X}$, which can also be expressed as $\mathbf{x}_1 \times (P_1\mathbf{X}) = \mathbf{0}$ and $\mathbf{x}_2 \times (P_2\mathbf{X}) = \mathbf{0}$. These equations can be combined as

$$ A\mathbf{X} = \mathbf{0}, \qquad A = \begin{bmatrix} u_1\mathbf{p}_1^{3T} - \mathbf{p}_1^{1T} \\ v_1\mathbf{p}_1^{3T} - \mathbf{p}_1^{2T} \\ u_2\mathbf{p}_2^{3T} - \mathbf{p}_2^{1T} \\ v_2\mathbf{p}_2^{3T} - \mathbf{p}_2^{2T} \end{bmatrix}, $$
where $\mathbf{p}_k^{iT}$ denotes the $i$th row of the matrix $P_k$ ($k = 1, 2$) and $\mathbf{x}_k = (u_k, v_k, 1)^T$. Similarly, this equation can be solved by SVD.

Now consider a ray that is back-projected from an image point into the 3D scene (A″–A′–A) in Fig. 2. Given a point $\mathbf{x}_1$ at the image plane, we want to find the set of points that construct a ray passing through the camera center $C_1$. To construct a ray in space, we need two points. One is the camera center $C_1$; the other point can be obtained from Eq. (1) as

$$ \mathbf{X} = P_1^{+} \mathbf{x}_1, $$
where $P_1^{+}$ is the pseudoinverse of $P_1$. Since $P_1 (P_1^{+}\mathbf{x}_1) = \mathbf{x}_1$, the point $P_1^{+}\mathbf{x}_1$ projects to $\mathbf{x}_1$ and thus lies on the ray. This ray is imaged by Camera 2 through the camera center $C_2$ and constructs the line $l_2$. The line $l_2$ can be written as

$$ l_2 = (P_2 C_1) \times (P_2 P_1^{+} \mathbf{x}_1). \qquad (7) $$

Since the projection of $C_1$ onto Camera 2 is the epipole $e_2$, Eq. (7) becomes

$$ l_2 = [e_2]_{\times} P_2 P_1^{+} \mathbf{x}_1 = F \mathbf{x}_1. \qquad (8) $$

The matrix $F = [e_2]_{\times} P_2 P_1^{+}$ is called the fundamental matrix in the machine vision community. Since the point $\mathbf{x}_2$ lies on the line $l_2$, we can write

$$ \mathbf{x}_2^{T} l_2 = 0. \qquad (9) $$

From Eqs. (8) and (9),

$$ \mathbf{x}_2^{T} F \mathbf{x}_1 = 0. $$

For a point correspondence $\mathbf{x}_1$ and $\mathbf{x}_2$, the fundamental matrix satisfies the above condition, which is called the epipolar constraint.

3. Simulation and Experimental Results
A. Hardware Setup
Figure 3 illustrates the system setup. The system uses two CMOS cameras operating at 25 fps. Three diffuse illumination sources are used in this setup. An on-axis light source is located above the IC package and is used for masking during image processing; the centroid of its reflected light is used to locate the $x$ and $y$ coordinates of each bump. Two additional sources, Light 1 and Light 2, are used to obtain good-contrast images. The angle and height of these two light sources need to be adjusted in order to obtain optimal image contrast.
B. System Calibration
System calibration is carried out in order to determine the $P$ matrices for both cameras. We use a calibration board with uniformly spaced cross targets. Two image distortions should be corrected: one is perspective distortion and the other is radial distortion. In order to calculate the transformation matrix, or homography, we use the four crosses at the corners and their corresponding ideal points; from these pairs, the perspective distortion is corrected. The next correction is the radial distortion. Again, a set of measured points and ideal points must be determined; Fig. 4(a) shows these sets. We assume that the image center, or principal point, is near the center of the image and that the radial distortion is negligible around the center. With this assumption, the ideal locations (green circles) are calculated from the unit square near the image center. The red dots are the centroids of the crosses. Figure 4(b) is the image after the radial distortion is corrected. As indicated in the image, the red dots and the green circles are aligned after the transformation is applied to the image.
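The perspective correction from the four corner crosses amounts to a standard four-point homography estimate, sketched below with numpy (the actual target processing, including cross detection, is not shown, and the helper name is hypothetical):

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping four (or more) measured
    points to their ideal locations via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each pair contributes two rows of the homogeneous system.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 3)   # null vector, up to scale
```

With exactly four non-collinear point pairs the solution is exact; applying the result to any other image point performs the perspective correction.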
Once the image aberrations are corrected, the next step is to calculate the $P$ matrix for each camera. The calibration target is used again to obtain sets of world points $\mathbf{X}_i$ and image points $\mathbf{x}_i$. First, the target is positioned at a nominal height $Z_1$, and a single image is taken by each camera. Then the stage is moved to the next position $Z_2$, and another image is taken. The process is repeated to obtain a sufficient number of correspondence sets $\mathbf{X}_i$ and $\mathbf{x}_i$. Given these correspondences, the $P$ matrix can be calculated.
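The multi-height acquisition and the DLT solve of Section 2 can be sketched as follows; `detect_centroids` is a hypothetical stand-in for the cross-detection image processing, and the normalization of [26] is omitted for brevity:

```python
import numpy as np

def calibrate_camera(grid_xy, heights, detect_centroids):
    """Accumulate correspondences from a planar grid target imaged at
    several stage heights Z_k (so the points span a 3D volume), then
    solve the DLT system for the 3x4 projection matrix P.
    grid_xy: (N, 2) known target cross positions (mm);
    heights: stage Z positions (mm);
    detect_centroids: callable(z) -> (N, 2) measured pixel centroids."""
    rows = []
    for z in heights:
        uv = detect_centroids(z)
        for (X, Y), (u, v) in zip(grid_xy, uv):
            Xh = np.array([X, Y, z, 1.0])
            rows.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
            rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)   # P, up to scale
```

The key point is that a single target plane is degenerate for the DLT; imaging it at two or more heights makes the system well posed.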
C. Measurements
Figure 5 shows the two camera centers and the world coordinate system. From the calculated $P$ matrices, the camera centers $C_1$ and $C_2$ and the image field of view at this camera location can be determined in millimeter units.
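Each camera center can be extracted directly from its $P$ matrix as the right null vector ($PC = 0$); a small sketch with a hypothetical helper name:

```python
import numpy as np

def camera_center(P):
    """The camera center C is the right null vector of P (P C = 0),
    recovered here as the last right singular vector of P."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C[:3] / C[3]   # dehomogenize (finite camera assumed)
```

For a matrix of the form $P = K[I \mid -C]$ this returns $C$ exactly.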
The image acquisition procedures are as follows.
- (1) First, Lights 1 and 2 are turned off, and images are captured by Cameras 1 and 2 with the on-axis light.
- (2) Turn off the on-axis light and turn on Light 1. Capture an image by Camera 2.
- (3) Turn off Light 1 and turn on Light 2. Take an image by Camera 1.
In order to reconstruct the 3D coordinates, point correspondences must be identified. The first step is to determine the corresponding bump pairs between the two images; for this purpose, the on-axis light is used. Figure 7 shows the BGA side of the IC package sample (top), an image captured using Light 1 (bottom left), and an image captured using the on-axis light (bottom right).
The image captured with Light 1 shows a brighter background reflection from the substrate surface than the image captured with the on-axis light. If the background reflection has intensity values similar to those of the bumps, each ball cannot be isolated properly; this is why the on-axis image is needed for bump masking. Figure 8 shows the masked image. The BGA image taken with the on-axis light is used to make the mask, which is then applied to the images captured with Lights 1 and 2.
Figure 9 is the masked image with bump numbers. The top image is from Camera 1 and the bottom is from Camera 2. Because the cameras look at the object from different angles, the labels in the two images do not match each other, and thus a reordering process is necessary in order to have the same labeling in both images.
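A minimal sketch of the masking and consistent row-major relabeling, assuming a regular BGA grid and using scipy's connected-component labeling (the threshold and row pitch are illustrative parameters, and the helper name is hypothetical):

```python
import numpy as np
from scipy import ndimage

def label_bumps(on_axis_img, threshold, row_pitch):
    """Threshold the on-axis image to isolate the bumps, then return
    their centroids sorted in row-major order so that bump k in
    Camera 1 matches bump k in Camera 2 (regular grid assumed)."""
    mask = on_axis_img > threshold
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    # Sort by row (y quantized to the grid pitch), then by column (x).
    order = sorted(range(n),
                   key=lambda k: (round(centroids[k][0] / row_pitch),
                                  centroids[k][1]))
    return [centroids[k] for k in order], mask
```

Because both cameras apply the same ordering rule, the kth centroid in each image refers to the same physical ball, which removes the need for an explicit relabeling table.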
D. Substrate Warpage Measurement
Once the corresponding bump pairs between the two images are determined, the substrate warpage measurement can be performed. Since there are no specific features or texture on the substrate that can be used for locating point correspondences, the ball edges are used to obtain these pairs. First, the fundamental matrix $F$ is calculated using point correspondences obtained from the edge of each bump, as illustrated in Fig. 10. The $y$ coordinate of each edge is determined as the position where the ball has its maximum diameter, and the $x$ coordinate is defined from the intensity profile of this cross section by using an intensity threshold.
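Estimating the fundamental matrix from such point correspondences can be sketched with the normalized eight-point algorithm [26]; the helper name is hypothetical:

```python
import numpy as np

def eight_point_fundamental(pts1, pts2):
    """Normalized eight-point estimate of the fundamental matrix from
    >= 8 correspondences (x1_i, x2_i) satisfying x2^T F x1 = 0."""
    def normalize(pts):
        pts = np.asarray(pts, dtype=float)
        mean = pts.mean(axis=0)
        scale = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - mean, axis=1))
        T = np.array([[scale, 0, -scale * mean[0]],
                      [0, scale, -scale * mean[1]],
                      [0, 0, 1]])
        return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T
    p1, T1 = normalize(pts1)
    p2, T2 = normalize(pts2)
    # Each correspondence gives one row of the homogeneous system A f = 0.
    A = np.array([[x2[0] * x1[0], x2[0] * x1[1], x2[0],
                   x2[1] * x1[0], x2[1] * x1[1], x2[1],
                   x1[0], x1[1], 1.0] for x1, x2 in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint on F.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1   # undo the normalization
```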
Once the fundamental matrix F is obtained, point correspondences on the substrate can be calculated. Figure 11 illustrates how to determine these pairs.
First, a reference point, shown as the red dot, is chosen from the Camera 1 image (top). The $y$ coordinate of this reference point is taken from the edge locations previously determined; as a result, the $y$ coordinates of the red dot and the green arrow (maximum diameter) are identical. The $x$ coordinate of the red dot is defined as 8 pixels away from the edge of the ball in this case. Once the point in Camera 1 is defined, we can calculate the epipolar line by using Eq. (8). We know that the corresponding point must lie somewhere along this line. To identify it, the $y$ coordinate is again chosen from the maximum-diameter position in the Camera 2 image; the green dot is the corresponding point. Since reference points can be defined on each side of the ball, we have two reference points per ball.
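Combining this epipolar search with the linear triangulation of Section 2 can be sketched as follows (hypothetical helper names; the epipolar-line intersection assumes the line is not parallel to the image rows):

```python
import numpy as np

def match_on_epipolar_line(F, x1, v2):
    """Given a reference point x1 = (u1, v1) in Camera 1, compute the
    epipolar line l2 = F x1 in Camera 2 and intersect it with the
    image row v2 (the maximum-diameter row of the same ball)."""
    a, b, c = F @ np.array([x1[0], x1[1], 1.0])
    return (-(b * v2 + c) / a, v2)   # solve a*u + b*v2 + c = 0 for u

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: recover the 3D point whose projections
    under P1 and P2 are x1 and x2 (pixel coordinates)."""
    (u1, v1), (u2, v2) = x1, x2
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)   # null vector of the 4x4 system
    X = Vt[-1]
    return X[:3] / X[3]
```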
Once the point correspondences are defined, we can calculate the $Z$ coordinates. First, it should be noted that the disparity of the substrate changes slowly almost everywhere; in other words, the substrate surface should be smooth. Thus, to calculate the $Z$ coordinate of a single point, we take the average of the four nearest points around it. We define the substrate warpage as

$$ W = \overline{Z}_{\max 5} - \overline{Z}_{\min 5}, $$
where $\overline{Z}_{\max 5}$ indicates the mean of the five largest $Z$ values and $\overline{Z}_{\min 5}$ the mean of the five smallest $Z$ values from the measurements. Figure 12 illustrates the 3D profile of the IC package and clearly shows the warped shape. Each dot indicates the $Z$ coordinate of a sampled point, and the color plane shows the regression plane based on the $Z$ coordinates.

To evaluate our results, we use a confocal microscope as the reference. The measurements are repeated 35 times consecutively. The mean substrate warpage measured by our system differs from the 215.2 μm measured by the reference confocal microscope by a bias of about 11 μm for this IC package. Another metric for evaluating the system performance is the linearity between our system and the reference. One of the parameters for measuring linearity is the correlation coefficient, defined as

$$ r = \frac{\operatorname{cov}(X, Y)}{\sigma_X \, \sigma_Y}, $$
where cov is the covariance and $\sigma$ is the standard deviation; $X$ is the set of data from our system and $Y$ is that of the reference tool. Figure 13 is the point-to-point correlation plot between the two systems. The blue center line shows the regression line, and the two black lines illustrate the upper and lower limits.

E. BGA Coplanarity Measurement
In order to determine the BGA coplanarity, the bump heights must be calculated. We use a 3D bump model, a hemi-ellipsoid as shown in Fig. 14, to estimate the bump heights. A single-ball area is defined to match the real image area captured by the cameras. From the $P$ matrices obtained in the experiment and the $X$, $Y$, $Z$ coordinates of the model, we can calculate the expected 2D captured images shown in Fig. 15.
The two green circles indicate the edge of the bump, which is determined by the same method used in the warpage measurement, and a straight line defines the diameter in pixels. From this model the relationship between the ball height and the diameter in pixels can be obtained, which is shown in Fig. 16.
From the two edge locations determined for the warpage measurements, we can obtain the diameter of each ball and convert it to a ball height using this relationship. To reconstruct the BGA coplanarity distribution, the calculated ball heights are added to the Z coordinates of the substrate warpage. The results are shown in Figs. 17 and 18.
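The diameter-to-height conversion can be sketched as a monotone interpolation of the model curve; the table values below are illustrative placeholders, not numbers from Fig. 16:

```python
import numpy as np

# Illustrative model curve: ball heights (um) and the pixel diameters
# the hemi-ellipsoid model predicts for them (placeholder values).
model_heights = np.array([150.0, 160.0, 170.0, 180.0, 190.0])
model_diams = np.array([30.0, 31.2, 32.3, 33.3, 34.2])

def height_from_diameter(diam_px):
    """Invert the model curve: interpolate the ball height for a
    measured pixel diameter (the curve is assumed monotonic)."""
    return np.interp(diam_px, model_diams, model_heights)
```

Each measured diameter is converted to a height this way and then added to the substrate $Z$ coordinate of that ball.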
The mean BGA coplanarity measured by our system is compared with the 222.8 μm measured by the reference confocal microscope. Since the BGA ball heights are estimated from the model, the system gives measurement outliers if the shape of a ball deviates from the model due to process issues; thus both the correlation coefficient and the difference for the BGA coplanarity are worse than the corresponding values for the warpage measurement. Yet the proposed method gives approximately the same standard deviation as the substrate warpage measurement. The two measurement results (Figs. 12 and 17) are obtained from a series of raw images on a laptop (Intel Core i7, 2.4 GHz, 8 GB of memory) in a MATLAB environment. We have validated that the proposed method also works for IC package samples with concave warpage by measuring 30 different samples.
To evaluate the effect of BGA surface reflectivity on the height reconstruction, two different illumination conditions are compared. Figure 19 shows the same bump under nominal intensity (left) and under brighter illumination (right), the latter creating intensity saturation at the top of the BGA ball. The mean pixel-diameter difference ((R1−L1)−(R2−L2)) between the two illumination conditions is evaluated for 35 randomly chosen bumps; from Fig. 16, a 0.8 pixel change in diameter corresponds to a 4 μm change in height.
4. Conclusion
We have demonstrated a method for substrate warpage and BGA coplanarity inspection using a stereo vision system. The system allows fast, full-field measurements and is therefore suitable for inline back-end inspection and process monitoring. To evaluate its performance, a particular IC sample was measured 35 times and compared with a reference confocal microscope. The mean substrate warpage measured by our system differs from the 215.2 μm measured by the reference confocal microscope by a bias of about 11 μm for this IC package, and the point-to-point correlation coefficient and difference between the two methods are evaluated for the warpage measurement. The mean BGA coplanarity measured by our system is compared with the 222.8 μm measured by the reference, together with the bump-level correlation coefficient and difference. Data acquisition takes about 0.2 s for a full-field measurement.
The authors gratefully acknowledge the support of Intel Corporation.
References
1. W. D. Brown, Electronic Packaging (IEEE, 2006).
2. W. J. Greig, Integrated Circuit Packaging, Assembly and Interconnections (Springer, 2007).
3. Texas Instruments, “Flip chip ball grid array package reference guide” (2005), http://www.ti.com/lit/ug/spru811a/spru811a.pdf.
4. H. Tsukahara, Y. Nishiyama, F. Takahashi, and T. Fuse, “High-speed solder bump inspection system using a laser scanner and CCD Camera,” Systems and Computers in Japan 31, 94–102 (2000). [CrossRef]
5. P. Kim and S. Rhee, “Three-dimensional inspection of ball grid array using laser vision system,” IEEE Trans. Electron. Packag. Manufact. 22, 151–155 (1999). [CrossRef]
6. H. N. Yen and D. M. Tsai, “A fast full-field 3D measurement system for BGA coplanarity inspection,” Int. J. Adv. Manuf. Technol. 24, 132–139 (2004). [CrossRef]
7. V. Bartulovic, M. Lucic, and G. Zacek, “Inspection of ball grid arrays (BGA) by using shadow images of the solder balls,” U.S. Patent 6,177,682 B1 (23 January 2001).
8. D. Marr and T. Poggio, “Cooperative computation of stereo disparity,” Science 194, 283–287 (1976). [CrossRef]
9. U. R. Dhond and J. K. Aggarwal, “Structure from stereo—a review,” IEEE Trans. Syst. Man Cybern. 19, 1489–1510 (1989). [CrossRef]
10. M. Z. Brown, D. Burschka, and G. D. Hager, “Advances in computational stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 993–1008 (2003). [CrossRef]
11. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Autom. 3, 323–344 (1987). [CrossRef]
12. Z. Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” in Proc. 7th Int. Conference on Computer Vision (IEEE, 1999), pp. 666–673.
13. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000). [CrossRef]
14. P. Luo, Y. Chao, and M. Sutton, “Application of stereo vision to three-dimensional deformation analyses in fracture experiments,” Opt. Eng. 33, 981–990 (1994). [CrossRef]
15. J. J. Aguilar, F. Torres, and M. A. Lope, “Stereo vision for 3D measurement: accuracy analysis, calibration and industrial applications,” Measurement 18, 193–200 (1996). [CrossRef]
16. C. J. Tay, X. Kang, C. Quan, X. Y. He, and H. M. Shang, “Height measurement of microchip connecting pins by use of stereovision,” Appl. Opt. 42, 3827–3831 (2003). [CrossRef]
17. Y. J. Xiao and Y. F. Li, “Optimized stereo reconstruction of free-form space curves based on a nonuniform rational B-spline model,” J. Opt. Soc. Am. A 22, 1746–1762 (2005). [CrossRef]
18. Z. Ren and L. Cai, “Three-dimensional structure measurement of diamond crowns based on stereo vision,” Appl. Opt. 48, 5917–5932 (2009). [CrossRef]
19. Z. Ren, J. Liao, and L. Cai, “Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision,” Appl. Opt. 49, 1789–1801 (2010). [CrossRef]
20. Z.-Z. Tang, J. Liang, Z. Xial, C. Guo, and G. Hu, “Three-dimensional digital image correlation system for deformation measurement in experimental mechanics,” Opt. Eng. 49, 103601 (2010). [CrossRef]
21. C. J. Tay, X. He, X. Kang, C. Quan, and H. M. Shang, “Coplanarity study on ball grid array packaging,” Opt. Eng. 40, 1608–1612 (2001). [CrossRef]
22. M. Dong, R. Chung, E. Y. Lam, and K. S. M. Fung, “Height inspection of wafer bumps without explicit 3-D reconstruction,” IEEE Trans. Electron. Packag. Manufact. 33, 112–121 (2010). [CrossRef]
23. C. Steger, Handbook of Machine Vision (Wiley-VCH, 2006).
24. J. Heikkila and O. Silven, “A four-step camera calibration procedure with implicit image correction,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (1997), pp. 1106–1112.
25. K. F. Riley, M. P. Hobson, and S. J. Bence, “Matrices and vector spaces,” in Mathematical Methods for Physics and Engineering (Cambridge University, 2002).
26. R. Hartley, “In defense of the eight-point algorithm,” IEEE Trans. Pattern Anal. Mach. Intell. 19, 580–593 (1997). [CrossRef]
27. R. Hartley, “Triangulation,” Comput. Vis. Image Underst. 68, 146–157 (1997). [CrossRef]