
Concentric circle scanning system for large-area and high-precision imaging

Open Access

Abstract

Large-area manufacturing surfaces containing micro- and nano-scale features, as well as large-view biomedical targets, motivate the development of large-area, high-resolution, and high-speed imaging systems. Compared with constant-linear-velocity scans and raster scans, constant-angular-velocity scans can significantly attenuate transient behavior while increasing imaging speed. In this paper, we theoretically analyze and evaluate the speed, acceleration, and jerk of concentric circular trajectory sampling (CCTS). We then present a CCTS imaging system that exhibits less vibration and lower mapping errors than raster scanning when creating a Cartesian composite image, while maintaining comparably fast scanning speeds over large scan areas.

© 2015 Optical Society of America

1. Introduction

Demand for miniature and low-cost devices, along with advances in materials, drives manufacturing toward micro- and nano-scale patterns over large areas. Flexible electronics (FE) [1–3], for example, offers a ubiquitous platform for diverse products in displays, solar panels, and healthcare, with advantages that include low cost, compact form, light weight, and high performance. Inspecting high-resolution patterns over a large range requires high-precision imaging technologies [4]. Similarly, such large-view, high-precision microscopes are also desirable for biological and medical imaging. The invention of a variety of fast frame grabbers and optical microscopy techniques facilitates imaging at the micro- and nanometer scales. The field of view (FOV) of a high-resolution microscope, however, fundamentally limits detailed pattern imaging over a large area. The straightforward solution to this problem is to use large-FOV, high-resolution optical sensors, such as higher-powered optics and larger CCD arrays. However, these sensors are normally prohibitively expensive. Recently, lens-free large-area imaging systems with simple, compact, and lightweight geometries have been proposed for large FOV, such as computational on-chip imaging tools [5] and miniaturized mirror optics [6]. On-chip imaging uses a digital optoelectronic sensor array to directly sample the light transmitted through a large-area specimen without lenses between the specimen and the sensor chip. Miniaturized mirror optics uses various mirror shapes and projective geometries to reflect light rays from a larger FOV into the smaller FOV of a camera. Both of these large-area imaging systems have limited spatial resolution. Moreover, on-chip imaging tools are generally limited to transmission microscopy modalities, and mirror optics suffers from distortion and low contrast due to manufacturing problems, such as imperfect mirror surfaces. In practice, an alternative approach to large-area microscopy is to implement high-precision scanners at an effective scanning rate and stitch individual FOV images together into a wide view. During this process, fast scanners can acquire multiple frames over a region of the workpiece. Using the variation between the sequential low-resolution (LR) frames, super-resolution (SR) methods [7] can further reduce aliasing and under-sampling.

Raster scanning is the conventional method for scanning small-scale features over large areas [2,8]: samples are scanned back and forth along one Cartesian coordinate and shifted in discrete steps along the other. Fast and accurate scanning requires precise positioning with low vibration and short settling times. Unfortunately, fast positioning relies on high velocities and high accelerations that often induce mechanical vibrations. In general, there are three main approaches to reducing vibration in a raster scan. The first is to build large mechanical structures to withstand the discrete start and stop motions of the raster scan, which increases the mass and cost of the mechanical support structure. The second is to apply control techniques to the actuators to reduce vibration during motion. This approach requires large control gains at high frequencies, which is a challenge when using piezoelectric actuators with low gain margins [4]. Improving the closed-loop bandwidth can be achieved by either inverting or damping the resonant dynamics [8]. The former requires an accurate model of the complete nonlinear dynamics; its implementation is hampered by the inherent model complexity and uncertainty [9]. The latter is implemented by mechanical resonance damping [10], active vibration control [11], and integral resonant damping [12]. These feedback controllers generally suffer from measurement noise during a scan.

The third approach to reducing mechanical vibrations is to design smooth trajectories that limit jerk and acceleration without additional large mechanical structures or complex control techniques. Well-known trajectories of this kind include spiral [13], cycloid [14], and Lissajous [15] scan patterns. These trajectories offer high imaging speeds without exciting scanner resonances and with little control effort, but they sacrifice a uniform spatial distribution of sample points in Cartesian coordinates. Acquiring a uniform Cartesian composite image requires a well-designed trajectory whose sample points are evenly distributed in Cartesian coordinates. Concentric circular trajectory sampling (CCTS) [16] offers such an optimized sampling distribution for Cartesian image reconstruction. In this paper, we further investigate the performance of CCTS for imaging. To image high-resolution patterns over a large area at high speed, we design a vision system that synchronizes a camera and a rotary motor on an air-bearing stage and can accurately acquire fine-detailed images at the sampling positions. The synchronization strategy follows from a conclusion in [8] that an optimized trapezoidal velocity profile can guarantee linear alignment of sample points, and hence avoids distortion and degradation in image reconstruction. We demonstrate that the transient behavior in inspection can be attenuated by using CCTS in place of raster sampling. We evaluate two rotational velocity profiles: constant linear velocity (CLV) and constant angular velocity (CAV). The latter allows higher-speed scanning over larger areas without increasing motor speed and with lower vibration. Note that our concentric circular scan trajectory is a general method for imaging micro- and nano-scale features over large scan areas, typically on the order of a few square millimeters or larger, at high scan speeds on the order of mm/s. The presented approach is cost-effective and provides a high-speed solution to both large FOV and high resolution. The proposed CAV imaging solves the conventional problem in raster scanning wherein the speed is limited by the linear motor's speed and the scanner's resonance frequency. However, the precision of the approach is still constrained by the response of the stage and the sensitivity of the target to motion, such as cell specimens.

In section 2, we analyze the generation of the CCTS trajectory by comparison with the raster scan trajectory. In section 3, we elaborate on the setup of the CCTS inspection system, including camera-stage synchronization and imaging techniques. A variety of preliminary experimental results are presented in section 4. We summarize conclusions and future work in section 5.

2. Theory

A raster scan trajectory is composed of a series of scan lines and turnaround points at the ends of the lines, which cause jerks and limit the smoothness of the trajectory. A traditional solution in industry is to overshoot the scan region and avoid imaging at the jerk points, since all the jerk points are near the scan-line endpoints. This solution is easy to implement, but it increases scan time and does not fundamentally address the root cause of vibration near the natural resonance frequency. To overcome this limitation, we consider a circular trajectory that maintains continuity in the high-order derivatives by sliding smoothly along the tangential direction.

In the following, we briefly discuss rotational scanning techniques from the viewpoints of acceleration, velocity, position and scan time. A rotation path can be described by the instantaneous radius r and azimuthal angle θ. Let r(t)=αt, where t denotes time, and α denotes radial-motion speed. Then we have rotational acceleration,

$$a = \dot{\theta}^2 \alpha t. \tag{1}$$
To maintain a constant acceleration, we have the rotational speed
$$\dot{\theta} = \left(\frac{a}{\alpha t}\right)^{1/2} = \left(\frac{a}{\alpha}\right)^{1/2} t^{-1/2}, \tag{2}$$
and the rotational angle
$$\theta = 2\left(\frac{a}{\alpha}\right)^{1/2} t^{1/2} + C, \tag{3}$$
where C is a constant of integration. Constant acceleration reduces jerk. CCTS has a constant radius and constant acceleration in each circle when the rotational speed is kept constant.
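For concreteness, the following Python sketch (not the authors' implementation) generates Cartesian sample positions for a simplified CCTS layout in which circle i has radius i·p and the tangential spacing is also roughly one pitch; the optimized neighborhood constraints of Ref. [16] are omitted.

```python
import numpy as np

def ccts_samples(n_circles, pitch):
    """Generate Cartesian sample positions on concentric circles.

    Each circle i has radius r_i = i * pitch and is sampled so that the
    tangential spacing is also roughly one pitch, i.e. the sample count
    grows with the circumference (a simplification of the optimized
    CCTS spacing in Ref. [16])."""
    xs, ys = [0.0], [0.0]              # include the center point
    for i in range(1, n_circles + 1):
        r = i * pitch
        n_samples = max(1, int(np.ceil(2 * np.pi * r / pitch)))
        theta = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
        xs.extend(r * np.cos(theta))
        ys.extend(r * np.sin(theta))
    return np.array(xs), np.array(ys)

if __name__ == "__main__":
    x, y = ccts_samples(n_circles=10, pitch=1.0)
    print(f"{x.size} sample points, outermost radius = {np.hypot(x, y).max():.1f}")
```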

In the following, we conduct a theoretical performance analysis of the translational scan trajectory and a concentric circular trajectory in terms of scan time, turnaround points, and scan area, given a constant acceleration, and minimum spatial spectra/samples. To simplify the calculation, the radii are incremented linearly from central circles to outer circles. We consider two types of targets: rectangular area and circular area (see Figs. 1(a) and 1(d)), and two types of constant rotation velocity: CLV and CAV. Detailed calculations are included in Appendix A.


Fig. 1 (a) Rectangular target. (b) A raster scan of (a). (c) A concentric-circle scan of (a). (d) Round target. (e) A raster scan of (d). (f) A concentric-circle scan of (d). Here turnaround points and scan motion directions are respectively marked by dots and arrows.


2.1 Total scan time of rectangular area: raster versus CLV concentric-circle scan

Assuming a rectangular target is scanned (see Fig. 1(a)), the raster scan area is
$$A_{raster} = S^2, \tag{4}$$
where S is the side length of the square area. Given a pitch p, the number of lines in the raster scan (see Fig. 1(b)) is
$$N_{line} = \left\lceil \frac{S}{p} \right\rceil, \tag{5}$$
where $\lceil\cdot\rceil$ denotes the ceiling function.

Given an acceleration time $T_{AT}$ and a translation speed v, the translational scan time is

$$t_T = \left(\frac{S}{v} + T_{AT}\right) N_{line}. \tag{6}$$
For concentric-circle rotation (see Fig. 1(c)), the outermost circle needs a minimum radius of
$$r_{maxA} = \frac{\sqrt{2}}{2} S \tag{7}$$
to reach all of the rectangular area. The number of circles is then

$$N_{circleA} = \frac{\sqrt{2}\, S}{2p}. \tag{8}$$

Given an acceleration time $T_{AR}$ and a maximum rotation speed v in the tangential direction along each circle, the concentric-circle scan time is (see Appendix A)

$$t_R = \left(\frac{\sqrt{2}\,\pi S}{2v} + \frac{\pi p}{v} + T_{AR}\right) N_{circleA}. \tag{9}$$

Note that the acceleration time can be set the same for the different cases in the controller in a real implementation. The acceleration time is on the order of milliseconds, which is negligible compared to the motion time on the order of seconds. Ignoring the acceleration time, the rotation-scan time, obtained from Eq. (9), is π/2 (~1.57) times the translation-scan time, obtained from Eq. (6), because of the redundant scanning of blank areas shown in Fig. 1(c). The number of turnaround points (see the dots in Figs. 1(b) and 1(c)) is reduced from 2N_line − 2 in the raster scan to N_circleA in the concentric-circle scan, reducing the number of jerks by approximately 65%, as each turnaround point produces two jerks.
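As a rough numerical check of the π/2 ratio, the following Python sketch evaluates Eqs. (5)–(9) for an illustrative square target; the side length, pitch, and speed are assumed values, and rounding the circle count up to an integer is our own implementation choice.

```python
import math

def raster_time(S, p, v, T_A=0.0):
    """Raster scan time of an S x S square, Eq. (6): (S/v + T_A) * N_line."""
    n_line = math.ceil(S / p)
    return (S / v + T_A) * n_line, n_line

def clv_circle_time(S, p, v, T_A=0.0):
    """CLV concentric-circle scan time of the same square, Eqs. (8)-(9)."""
    n_circle = math.ceil(math.sqrt(2) * S / (2 * p))   # rounding up is our choice
    t = (math.sqrt(2) * math.pi * S / (2 * v) + math.pi * p / v + T_A) * n_circle
    return t, n_circle

S, p, v = 5.0, 0.01, 20.0     # side length (mm), pitch (mm), speed (mm/s): assumed values
t_r, n_line = raster_time(S, p, v)
t_c, n_circle = clv_circle_time(S, p, v)
print(f"raster {t_r:.0f} s, concentric circles {t_c:.0f} s, "
      f"ratio {t_c / t_r:.2f} (expected ~pi/2 = {math.pi / 2:.2f})")
print(f"turnaround points: raster {2 * n_line - 2}, circles {n_circle}")
```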

2.2 Total scan time of round area: raster versus CLV concentric-circle scan

Assume a circular target with a diameter of S is scanned (see Fig. 1(d)). Using the same acceleration and speed strategies as in section 2.1, the raster scan has the same scan area, scan lines, and scan time as in Eqs. (4)-(6). However, the radius of the outermost circle is reduced to

$$r_{maxB} = \frac{1}{2} S. \tag{10}$$
The number of circles is then
$$N_{circleB} = \frac{S}{2p}. \tag{11}$$
The concentric-circle scan time is

$$t_R \approx \left(\frac{\pi S}{2v} + \frac{\pi p}{v} + T_{AR}\right) N_{circleB}. \tag{12}$$

Ignoring the acceleration times in Eqs. (6) and (12), the rotation-scan time is approximately π/4 (~0.785) times the translation-scan time. This yields a 21.5% decrease in scan time by not scanning the blank areas in Fig. 1(e). The number of turnaround points (see the dots in Figs. 1(e) and 1(f)) is reduced from 2S/p − 2 in the raster scan to S/(2p) in the concentric-circle scan, reducing the number of jerks by approximately 75%.

2.3 Total scan time of round area: raster versus CAV concentric-circle scan

Assuming the concentric-circle scan has a CAV $\dot{\theta}$ and the raster scan keeps all of the parameters of section 2.1, we analyze in detail the two trajectories scanning the same round area as in section 2.2. Let

$$v = r_R \dot{\theta}, \tag{13}$$
where $r_R$ is the circle radius at which the linear velocity v is achieved at the angular velocity $\dot{\theta}$. Then, the CCTS time can be calculated as
$$t_R = \left(\frac{2\pi}{\dot{\theta}} + T_{A\theta}\right) N_{circleB}. \tag{14}$$
Here, we use $T_{A\theta}$ for the CAV angular acceleration time to distinguish it from the acceleration time of the CLV scan. Substituting Eq. (11) into Eq. (14), we have

$$t_R \approx \frac{\pi S}{\dot{\theta}\, p} + \frac{S}{p}\, T_{A\theta}. \tag{15}$$

Ignoring the acceleration times in Eq. (6) and Eq. (15), we find that when

$$\dot{\theta} = \pi v / S, \tag{16}$$
the two scan trajectories have the same scan time. If $\dot{\theta} > \pi v / S$, the concentric-circle scan is faster than the raster scan. A larger size S lowers the required $\dot{\theta}$ in this inequality; in other words, the larger the scan area, the faster the CAV circular scan. The same conclusion applies to scanning a rectangular target. Note that the high speed of a large-area circular scan with CAV does not rely on frequent shifts between circles and hence avoids exciting high-frequency resonances.
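The break-even condition of Eq. (16) is easy to evaluate numerically. The sketch below assumes the 20 mm/s linear speed used later in section 4.1 and a few illustrative target sizes; it only restates Eq. (16), with a unit conversion to degrees per second.

```python
import math

def breakeven_angular_speed(v_mm_s, S_mm):
    """Angular speed (deg/s) at which CAV CCTS matches the raster scan time,
    Eq. (16): theta_dot = pi * v / S (rad/s), converted to deg/s."""
    return math.degrees(math.pi * v_mm_s / S_mm)

v = 20.0                                   # raster linear speed in mm/s (from Sec. 4.1)
for S in (1.0, 4.578, 10.0, 50.0):         # target diameters in mm (illustrative)
    print(f"S = {S:6.3f} mm -> break-even omega = "
          f"{breakeven_angular_speed(v, S):8.1f} deg/s")
```

As the loop shows, doubling S halves the angular speed needed to match the raster scan, which is the scaling behind the statement that larger scan areas favor the CAV circular scan.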

2.4 Mapping CCTS for Cartesian imaging

Established image-processing techniques are most developed for Cartesian composite images, in which pixels are uniformly distributed along the X and Y axes. To generate such images, the rotational sample points are mapped to Cartesian coordinates, and each pixel is generated by interpolating its neighboring sample points. The mapping error is the main cause of image distortion, owing to the non-uniform sampled spatial positions in Cartesian coordinates. To reduce distortion in the interpolated images, we optimized CCTS to achieve uniform sampling positions in Cartesian coordinates [16]. Our optimized CCTS applies neighborhood constraints on the tangential and radial motion to maintain uniform Cartesian sampling positions. In this paper, we evaluate the performance of our CCTS and image reconstruction using two popular interpolation methods: nearest-neighbor (NN) interpolation and linear interpolation. The former is the easiest to implement in non-raster sampling [13]; the latter has been demonstrated theoretically and experimentally for post-correction of image distortions [17]. The NN method assigns to each query pixel the value of the nearest sample point, yielding a piecewise-constant reconstruction; the linear interpolation method assigns to each query pixel a weighted combination of its neighboring sample points, yielding a continuous reconstruction.
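The two mappings can be prototyped with SciPy's scattered-data interpolator, whose 'nearest' and 'linear' modes are comparable to (though not identical with) the NN and linear interpolation described here; the synthetic scene and the simplified sample layout below are assumptions for illustration only, not the optimized trajectory of Ref. [16].

```python
import numpy as np
from scipy.interpolate import griddata

# Scattered sample positions on concentric circles (see Sec. 2) and their
# intensities sampled from a known synthetic scene, so mapping error can be
# measured directly against ground truth.
def scene(x, y):
    return np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

pitch, n_circles = 0.02, 40
pts = [(0.0, 0.0)]
for i in range(1, n_circles + 1):
    r = i * pitch
    th = np.linspace(0, 2 * np.pi, int(np.ceil(2 * np.pi * r / pitch)), endpoint=False)
    pts.extend(zip(r * np.cos(th), r * np.sin(th)))
pts = np.array(pts)
vals = scene(pts[:, 0], pts[:, 1])

# Cartesian query grid inside the sampled disc
g = np.linspace(-0.5, 0.5, 101)
gx, gy = np.meshgrid(g, g)
inside = np.hypot(gx, gy) <= n_circles * pitch

for method in ("nearest", "linear"):
    mapped = griddata(pts, vals, (gx, gy), method=method)
    err = np.abs(mapped - scene(gx, gy))[inside]
    err = err[np.isfinite(err)]        # 'linear' leaves NaN outside the convex hull
    print(f"{method:8s}: mean |error| = {err.mean():.4f}")
```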

Moreover, we implement CCTS for SR and for mosaicing, addressing high resolution and large area, respectively. In SR, we employ robust iterative back-projection (IBP) [18,19] to achieve sub-pixel resolution from the relative motion of LR images; in mosaicing, we fuse limited FOVs into one wide-view composite image [20,21]. In this paper, we actively generate the homography matrices for mosaicing according to the known motion. The mosaics in the global coordinates are then reprojected onto a synthetic manifold through a rendering transformation, and the unfolded manifold forms the overview of the scene. Since we fix the views perpendicular to the scene, as described for the interpolation above, the reprojection calculation is avoided.
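As an illustration of the SR step, the sketch below implements a plain (not robust) iterative back-projection loop with known sub-pixel shifts, a box-blur detector model, and a decimation factor of 2; all of these modeling choices are simplifying assumptions rather than the authors' forward model.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, uniform_filter, zoom

def simulate_lr(hr, s, factor):
    """Forward model: shift the HR scene, box-blur over the detector footprint,
    then decimate by `factor` (an assumed, simplified imaging model)."""
    blurred = uniform_filter(nd_shift(hr, s, order=1), size=factor)
    return blurred[::factor, ::factor]

def ibp_sr(lr_images, shifts, factor, n_iter=20, step=1.0):
    """Plain iterative back-projection: start from an upsampled average and
    repeatedly add back-projected residuals of each LR frame."""
    hr = zoom(np.mean(lr_images, axis=0), factor, order=1)
    for _ in range(n_iter):
        for lr, s in zip(lr_images, shifts):
            err = lr - simulate_lr(hr, s, factor)
            up = zoom(err, factor, order=1)            # upsample the residual
            hr += step / len(lr_images) * nd_shift(up, [-s[0], -s[1]], order=1)
    return hr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = uniform_filter(rng.random((64, 64)), size=3)       # synthetic HR scene
    factor = 2
    shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]  # sub-pixel offsets
    lrs = [simulate_lr(truth, s, factor) for s in shifts]
    sr = ibp_sr(lrs, shifts, factor)
    base = zoom(lrs[0], factor, order=1)
    print("RMSE upsampled LR:", np.sqrt(np.mean((base - truth) ** 2)).round(4))
    print("RMSE IBP SR      :", np.sqrt(np.mean((sr - truth) ** 2)).round(4))
```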

3. Experimental setup

We developed a low-cost imaging system that integrates the CCTS algorithm on a planar X-Y air-bearing stage [22]. As shown in Fig. 2, the inspection system consists of a high-speed camera, a controller, a rotation stage, an X-Y air-bearing stage, and a light source. The air-bearing stage moves the target along the X and Y axes with a workspace of 220 × 220 mm2, a resolution of 20 nm, and 40 MHz-bandwidth feedback of the X and Y positions. A Newport rotation stage (RGV100HL) stacked on top of the X-Y air-bearing stage provides rotation about the A-axis with a resolution of 0.00001°, a repeatability of 0.0003°, and an absolute accuracy of 0.01°. The X, Y, and A axes are orthogonal. The stage has a programmed maximum velocity of 40 mm/s along the X and Y axes and 720°/s about the rotary A-axis. The imaging system consists of two parts: a high-speed CMOS area-scan camera (Basler A504K) and a high-speed frame grabber (BitFlow). The camera has a resolution of 1024 × 1280 pixels (monochrome), a pixel pitch of 12 μm, and a maximum frame rate of 500 fps. The viewing axis of the camera upon the part is configured parallel to the A-axis of the air-bearing stage. The camera and lens apparatus is kept stationary during scans, given its large mass. To demonstrate the cost-effective nature of the presented design, we list the costs of the key components in US dollars in Table 1. The total cost of the listed components is $44,400.


Fig. 2 The setup of imaging system.



Table 1. The cost of key components of the designed imaging system

For imaging, we devise a rotational scanning process comprising stage-camera calibration, synchronous motion of the stage, image acquisition, image preprocessing, image reconstruction, and the SR/mosaicing algorithm (see Fig. 3). Stage-camera calibration includes the registration of the camera coordinates and stage coordinates and the standardization of the illumination. The working stage and the camera have independent coordinates. To sample the motion of the stage in the scanned images, the stage coordinates are registered in the camera coordinates by aligning the camera and stage and registering the rotation center in the image. The magnification factor (MF) and illumination system are specifically customized. We use a gooseneck fiber-optic light source for illumination. In this study, we use a video camera with white light to image microstructures, because such cameras have become ubiquitous in microscopy labs owing to their relatively low cost, high frame rates, and easy implementation.


Fig. 3 Workflow of rotational imaging system.


3.1 Stage-camera synchronization

In the synchronization process, the control system coordinates the rotation speed of the stage and the frame-grab rate of the camera to guarantee that images are acquired precisely at predefined positions in the peripheral direction along all circles. Meanwhile, the control system moves the stage along the radial direction to extend the FOV. We synchronize the stage motion and the camera in a host-PC driver. Users design the image sampling steps and send control commands through the host-stage user interface (UI), via the controller, to the stage. The stage motion is fed back through the encoder and the host-stage UI to the host-PC driver. When the stage reaches the desired position, it immediately triggers the frame-grabber driver and camera to take an image.

As shown in Fig. 4, the camera control signal takes less than Δt = 1.33 ms to react to the trigger from the controller. The Basler A504K can achieve a maximum exposure time of Texpmax = 1996 µs at a frame rate of 500 fps at full AOI. The sum of the maximum camera reaction time and the exposure time, Tupper, constitutes an upper bound on the frame time and a lower bound on the trigger interval in our imaging system. Therefore, we design the trigger signals so that each starts after a time interval Texsync that is larger than Tupper, to obtain reliable synchronous timing. The calculation of the signal delay time Tupper allows the accurate design of imaging positions and the reduction of positioning errors.
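Using the timing figures quoted above (Δt = 1.33 ms, Texpmax = 1996 µs), a short calculation gives the lower bound Tupper on the trigger interval and, for an assumed CAV, an upper bound on the number of triggers per circle; the 180°/s value below is illustrative rather than a system setting.

```python
# Timing budget for position-triggered acquisition (values from the text;
# the derived limits are illustrative, not the authors' exact settings).
dt_react = 1.33e-3       # worst-case camera reaction to the trigger, s
t_exp    = 1996e-6       # maximum exposure time at full AOI, s
t_upper  = dt_react + t_exp            # lower bound on the trigger interval, s

omega_deg = 180.0                      # assumed constant angular velocity, deg/s
t_circle  = 360.0 / omega_deg          # time per revolution, s

max_triggers_per_circle = int(t_circle / t_upper)
print(f"T_upper = {t_upper * 1e3:.2f} ms -> at {omega_deg} deg/s, "
      f"at most {max_triggers_per_circle} triggers per circle")
```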


Fig. 4 Time control of CCTS imaging (TA: acceleration time. θ˙: maximum rotation speed.).


We control the stages so that the circular rotation is smooth, although shifts are necessary to extend the radius between concentric rings. We decrease the acceleration and deceleration times for these shifts and for initializing the next circle's rotation start to reduce wasted time. Meanwhile, as shown in Fig. 4, the trigger signal always starts after acceleration, and image acquisition ends before deceleration in each circle. Additionally, our CCTS has sample counts that increase progressively with the radii of the outer circles. Given a CAV, acquiring all of the sample points in the outermost circle therefore sets the minimum required frame rate among all rings (see details in Appendix B).

3.2 Imaging

The USAF 1951 test target (QA30) [23] is used to analyze the sampling, mosaicing, and SR image reconstruction. The QA30 target is printed on film. To acquire reflective images of QA30, we use premium cotton white paper as the background. Implementing our CCTS, we shift and rotate the stage so that the camera acquires a group of images for mosaicing. Meanwhile, we can use one camera pixel with CCTS to acquire four LR images of each target. For the same target, the LR images differ from one another by regular, distinct small angles; such angular variations are achieved by adding small regular offsets to the initial angle of each circle. We then apply IBP algorithms for SR.

4. Experimental results and discussion

4.1 Trajectory performance

Two low-frequency, high-sensitivity, filtered accelerometers (799M) are mounted on the stage to measure the vibration in the X and Y dimensions at a sampling rate of 1000 Hz. Figure 5 shows the acceleration/deceleration and velocity profiles measured while scanning a round target with a diameter of 3.2 mm using three scan trajectories: raster, CLV CCTS, and CAV CCTS (referring to Figs. 1(d)–1(f)). We employ an S-curve velocity profile with linearly varying acceleration. To clearly observe the performance of the scans, we illustrate the profiles over a one-second time series in the top three rows of Fig. 5. All of the scans use the same maximum linear velocity of 20 mm/s and acceleration/deceleration time of 4 ms, while the maximum CAV is 720°/s with an acceleration/deceleration time of 24 ms. CLV is achieved by linearly blending and circularly interpolating the moves of the X and Y axes of the air-bearing stage; CAV is achieved by linearly interpolating the rotation about the A-axis of the rotation stage. In columns (a) and (b) of Fig. 5, acceleration and vibration occur in both the X and Y directions, whereas in column (c) they are limited to the X (circle-shift) direction. In one cycle, the raster scan and the CLV scan show six and four jerks, respectively, accounting for both the X and Y directions, while the CAV scan shows only one jerk. The raster scan completes five scan lines in 1.25 s; the CLV CCTS and CAV scans complete two circles in 1.1 s and 1.68 s, respectively. The CAV scan takes longer in these experiments because the scan area is very small. Given the same velocity and acceleration, both the raster and CLV scan times per cycle increase as the scan area grows, whereas the CAV scan time per cycle remains the same for any scan area. In the fourth row of Fig. 5, the vibrations of the raster scan and the CLV circular scan are dominated by fundamental low frequencies below 10 Hz, while the CAV scan has no significant fundamental frequencies. Moreover, the CLV circular scan shows more low-frequency accelerations (< 100 Hz) because of the varying linear accelerations and velocities of the X and Y linear motors required to maintain a constant blended velocity in each cycle. The vibration magnitude in the CAV scan is an order of magnitude smaller than that in the CLV and raster scans. Owing to these advantages, we implement the CAV scan for imaging in the following sections to test its image reconstruction ability.
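For reference, the sketch below generates a generic S-curve velocity profile with linearly varying (triangular) acceleration, using the 20 mm/s maximum velocity and 4 ms acceleration time quoted above; it is a simplified stand-in for the stage controller's actual profile, not a reproduction of it.

```python
import numpy as np

def s_curve_velocity(v_max, t_acc, t_const, dt=1e-4):
    """Velocity profile whose acceleration ramps linearly up and back down
    (triangular acceleration -> S-shaped velocity), followed by a constant-
    velocity phase and a mirrored deceleration."""
    t = np.arange(0.0, t_acc, dt)
    half = t_acc / 2.0
    # piecewise-quadratic velocity during acceleration
    v_up = np.where(t < half,
                    2 * v_max * (t / t_acc) ** 2,
                    v_max - 2 * v_max * ((t_acc - t) / t_acc) ** 2)
    v_flat = np.full(int(t_const / dt), v_max)
    v_down = v_up[::-1]
    return np.concatenate([v_up, v_flat, v_down])

v = s_curve_velocity(v_max=20.0, t_acc=4e-3, t_const=0.1)   # mm/s, s, s
a = np.diff(v) / 1e-4
print(f"peak velocity {v.max():.1f} mm/s, peak acceleration {a.max():.0f} mm/s^2")
```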


Fig. 5 Trajectory performance of raster scan and concentric circular scan on round target: row 1 – acceleration time series along X axis; row 2 – acceleration time series along Y axis; row 3 – velocity time series along Y axis; row 4 – spectra of acceleration along Y axis; column (a) - raster scan; column (b) - concentric CLV circular scan; column (c) - concentric CAV circular scan.


4.2 Sampling, mosaicing and SR image reconstruction

First, the error introduced by mapping the concentric circular sample points into Cartesian coordinates is evaluated. In Fig. 6(a), we generate a star target (500 × 500 pixels) and use a 4 × 4 averaging window to simulate a pixel of an average filter that scans the target using CCTS. Figures 6(b) and 6(c) show the mapping results (125 × 125 pixels) using NN and linear interpolation, respectively. The histograms of mapping errors (see Fig. 6(d)) quantitatively demonstrate that linear-interpolation mapping has significantly lower errors than NN-interpolation mapping. Hence, we implement linear interpolation for Cartesian image reconstruction in this paper. Additionally, the Voronoi diagram of our sample points, the dual of the Delaunay triangulation, shows that the Voronoi cells have approximately the size of one Cartesian pixel; in other words, the sampling points have a near-uniform distribution of influence areas.
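A uniformity check of this kind can be reproduced with SciPy's Voronoi routine. The sketch below reuses the simplified uniform-pitch CCTS layout of the earlier sketches (not the optimized trajectory of Ref. [16]) and evaluates only the bounded interior cells.

```python
import numpy as np
from scipy.spatial import Voronoi

def polygon_area(verts):
    """Shoelace formula for a 2-D polygon given as an (N, 2) array."""
    x, y = verts[:, 0], verts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Simplified CCTS sample layout with pitch 1 (arbitrary units)
pts = [(0.0, 0.0)]
for i in range(1, 31):
    th = np.linspace(0, 2 * np.pi, int(np.ceil(2 * np.pi * i)), endpoint=False)
    pts.extend(zip(i * np.cos(th), i * np.sin(th)))
pts = np.array(pts)

vor = Voronoi(pts)
areas = []
for p_idx, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if -1 in region or not region:
        continue                        # skip unbounded cells at the boundary
    if np.hypot(*pts[p_idx]) > 25:      # keep interior points only
        continue
    areas.append(polygon_area(vor.vertices[region]))

areas = np.array(areas)
print(f"mean Voronoi cell area {areas.mean():.2f} (pitch-squared pixel area 1.0), "
      f"std {areas.std():.2f}")
```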


Fig. 6 Mapping performance of the concentric circular scan. (a) Synthetic star target. (b) Linear interpolation mapping. (c) NN interpolation mapping. (d) Histograms of mapping errors.


Second, the mapping errors and imaging time are evaluated for high-speed tracking with CAV CCTS and raster scans. Both types of scans are set up to cover a round target with a diameter of 4.578 mm. Figure 7(l) shows the tracking spot used for sample-point positioning. This spot is one dot of the fourth zone of a dot-distortion target (AP-DD100-P-RM [23]) with a diameter of 0.2 mm. The dot target is fixed on the motion stage for imaging. The dot-center coordinates are extracted by the circle Hough transform [24] to track the sample-point position. To capture the dot throughout the scan, the FOV of the camera has to be larger than the dot motion area. Meanwhile, the buffer size of the frame grabber and the memory of the computer also limit the frame size and the number of frames. For example, exhausting the available buffer and memory in our experimental setup for the pitch p = 21.8 μm, we can achieve 210 × 210 sampling images for raster scanning, or 105 concentric circles for CCTS scanning, at an image size of 480 × 480 pixels. Using Eq. (30) and a rotational acceleration of 3000°/s², the required minimum frame rates are calculated to be 70, 325, and 715 fps for ω = 20, 90, and 180°/s, respectively (Table 2). The camera used in this paper achieves 500 fps for 1024 × 1280 pixel images and hence provides a maximum frame rate of 2844 fps for 480 × 480 pixel images.
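Dot-center tracking of this kind can be sketched with OpenCV's circle Hough transform. In the sketch below, the frame file names, Hough parameters, and the CSV of commanded positions are hypothetical placeholders, not the authors' processing pipeline.

```python
import glob
import cv2
import numpy as np

def dot_center(gray):
    """Locate the (single) tracking dot with the circle Hough transform.
    Parameters below are illustrative and would need tuning per setup."""
    blur = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=30, minRadius=5, maxRadius=80)
    if circles is None:
        return None
    x, y, _r = circles[0, 0]           # strongest detection
    return float(x), float(y)

# Hypothetical file layout: one frame per trigger, plus the commanded
# sample positions (in pixels) saved by the scan controller.
frames = sorted(glob.glob("scan_frames/*.png"))
desired = np.loadtxt("desired_positions_px.csv", delimiter=",")   # shape (N, 2)

measured = []
for f in frames:
    gray = cv2.imread(f, cv2.IMREAD_GRAYSCALE)
    c = dot_center(gray)
    measured.append(c if c is not None else (np.nan, np.nan))
measured = np.array(measured)

valid = ~np.isnan(measured).any(axis=1)
err = np.linalg.norm(measured[valid] - desired[valid], axis=1)
print(f"tracked {valid.sum()}/{len(frames)} frames, "
      f"RMS error {np.sqrt(np.mean(err ** 2)):.2f} px")
```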


Fig. 7 First two rows: (a)-(f) Tracking trajectories of CAV CCTS scans for ω = 20°, 40°, 90°, 180°, 360°, and 720°/s. Third row: (g)-(i) Tracking points of raster scanning. (l) Tracking spot. The pitch of the scans was 1 μm. Solid lines and '*' mark the desired CCTS trajectories and sample positions, respectively; 'o' and '+' mark the achieved sample positions and the Cartesian coordinates.



Table 2. RMS of sampling error and scanning time for raster (p = 21.8μm, 210 × 210 point grid) and CAV CCTS scans (pitch p = 21.8μm, 105 circles).

Figures 7(a)–7(f) show the sampled CCTS trajectories within ±6 μm for ω = 20°, 40°, 90°, 180°, 360°, and 720°/s. The sampled points and the desired points match accurately up to ω = 360°/s. To compare the mapping accuracy, Figs. 7(g)–7(i) illustrate the raster scanning trajectories within ±6 μm for v = 0.5, 2, and 4 mm/s. For these linear velocities, little difference can be seen between their mapping errors. To quantitatively evaluate the performance of the CCTS scans, the root-mean-square (rms) errors, Erms, between the desired and achieved CCTS trajectories are calculated and tabulated in Table 2. The mapping errors are generated when interpolating the sampled points to create a Cartesian composite image. Hence, for NN interpolation, the NN mapping errors (ENN) are measured by the Euclidean distance between the image Cartesian coordinates and their corresponding NN sample positions; for linear interpolation, the linear mapping errors (Elinear) are measured by the spatial-variation cost function in [16]. Table 2 shows that Erms increases as the CCTS rotation speed increases (see also Figs. 7(a)–7(f)). This increase is mainly induced by eccentricity and wobble. Nevertheless, Erms remains low compared with the mapping errors Elinear and ENN of CCTS. The mapping errors of the raster scans show no advantage over the CCTS scans because the motion stages have accuracies of 1 μm per 100 mm of travel. Using Eq. (16), for each raster scanning speed in Table 2, v = 0.5, 2, and 4 mm/s, the CCTS scanning speed that achieves the same scanning time can be calculated as ω = 19.66°, 39.32°, and 176.93°/s, respectively. Our experimental speeds are slightly faster than these calculated speeds and result in shorter scanning times.

Third, the performance of CCTS in generating images is investigated by scanning the USAF 1951 resolving-power target (QA30) [23]. QA30 consists of six groups, and each group consists of six elements. The number of lines per millimeter increases progressively within each group and doubles every six target elements. For instance, the first and sixth elements of group 6 (group 7) have 64 (128) and 114 (228) lines per millimeter, respectively; in other words, the line widths in group 6 (group 7) decrease from 7.8 μm (3.9 μm) to 4.4 μm (2.2 μm). We compare the mosaiced and SR images with a conventional image based on pixel arrays for evaluation. All of the following CCTS scans are controlled with a rotation speed of 180°/s.
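These values follow from the standard USAF 1951 relation, resolution = 2^(group + (element − 1)/6) line pairs per millimeter, with the line width being half of one line pair; the short sketch below reproduces the numbers quoted above.

```python
def usaf_lp_per_mm(group, element):
    """Resolution of a USAF 1951 element in line pairs per millimetre."""
    return 2 ** (group + (element - 1) / 6)

def usaf_line_width_um(group, element):
    """Width of a single line in micrometres (half of one line pair)."""
    return 1000.0 / (2 * usaf_lp_per_mm(group, element))

for g in (6, 7):
    for e in (1, 6):
        print(f"group {g} element {e}: {usaf_lp_per_mm(g, e):6.1f} lp/mm, "
              f"line width {usaf_line_width_um(g, e):.1f} um")
```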

Figures 8(a)–8(f) show the stitching and mosaicing results for target QA30 using the optimized CCTS. We actively control the positioning of each sampled patch and start the rotation process at the target center. To clearly demonstrate the stitching result within the limited space of this paper, we use a patch size of 80 × 60 pixels for stitching, with the radii incremented in steps of 30 pixels. Sequential images are then projected onto a global coordinate system. Note that we use conventional backward projection, which re-interpolates the stitched image each time a new image is added; hence, the latest-stitched regions are sharper. As shown in Figs. 8(a)–8(c), the stitched images are more blurred in the center regions. The mosaiced image, however, has no such uneven-sharpness problem as the image size grows (see Fig. 8(e)). Figures 8(e) and 8(f) also demonstrate that the concentric circular mosaiced images contain high-frequency features as detailed as those acquired by large-sensor cameras of similar resolution (Figs. 8(g) and 8(h)), while Fig. 8(a) demonstrates that the stitching process can capture as many features as a larger-sensor camera (Fig. 8(g)).


Fig. 8 Mosaicing and SR imaging: (a) Stitching 2971 10×-images of groups 2-7 in target QA30. (b) Stitching 148 10×-images of the 4th-7th groups (the highlighted center of image (a)) in target QA30. (c) Stitching 7 10×-images of the 6th and 7th groups (the highlighted center of image (b)) in target QA30. (d) One 80 × 64 image patch for stitching that includes the 6th group (the highlighted region in image (c)). (e) Concentric circle sampling and mosaicing of 70483 10×-pixels of target QA30. (f) Zoomed-in highlighted center of image (e). (g) One 1024 × 1024 10×-image of target QA30. (h) Zoomed-in highlighted center of image (g). (i) QA30-SR (interpolation factor = 2) using four mosaiced 10×-images acquired by CCTS. (j) Zoomed-in highlighted center of image (i). (k) 20×-image of groups 4-7 in the target. (l) Zoomed-in highlighted center of image (k).


Moreover, Figs. 8(i) and 8(j) demonstrate the SR results for Figs. 8(e) and 8(f). Figure 8(f) highlights the ROI where high frequencies are missing in the LR images. The HR image in Fig. 8(k) is acquired with a 20× lens so that the corresponding peak frequencies in Fig. 8(l) can serve as references. In all experiments, we use a 3-sigma Gaussian white-noise model for SR. The SR result shows smoothness, high resolution, and de-aliasing. As shown in Fig. 8(i), the high frequencies (shown in Fig. 8(j)) are partially recovered using the robust IBP-SR technique. As highlighted in Fig. 8(j), all the elements in group 6 and the first two elements in group 7 are recovered, with an achieved resolution of 3.5 μm at the second element of group 7. Meanwhile, those elements' labels show distinctive patterning in the SR image, though they appear smeared and coarse in Fig. 8(f). In the interest of brevity, we summarize that our rotational imaging technique can be combined with SR algorithms to reconstruct HR patterns without increasing hardware cost.

5. Conclusions

We implement a concentric-circle trajectory scan for high-resolution and large-area imaging. The method reduces vibration during scanning and, by combining CCTS with mosaicing, overcomes the tradeoff between resolution and FOV in conventional imaging systems. The imaging system integrates the capabilities of the motion stage, controller, and sensor well in terms of resolution, repeatability, and speed. We analyze and evaluate the CAV and CLV CCTS. The CAV scan has significant advantages over the CLV method in ease of control, speed, and low vibration, especially for large sample areas. Given a sufficient rotation radius, the CAV imaging speed is not limited by the scanner's motor speed. The CAV scan solves the conventional problem in raster scanning wherein the speed is limited by the linear motor's speed and the scanner's resonance frequency. In addition, both the simulations and the practical experiments indicate that mosaiced and SR images are achievable using high-speed LR area-scan cameras in conjunction with our rotational sampling procedure and algorithms.

In this paper, we implemented a commonly used camera to test the concentric circular scan for imaging, wherein image reconstruction is limited by the rectangular pixel shape. We have not yet explored the implementation of the trajectory in scanning probe microscopy, such as atomic force or scanning tunneling microscopy. If traditional position sensors are implemented for the scan, the image quality and tracking performance need further investigation. Implementing the imaging system for manufacturing inspection and medical imaging applications also requires the integration of image-processing and domain-specific techniques.

Additionally, this imaging platform is applicable to other imaging modalities such as dark-field or fluorescence microscopy. We have successfully imaged target ISO_12233 (QA72) [23] in dark field on the same platform to visualize its chrome on opal glass; the experimental results were as satisfactory as those for target QA30, though we do not present them here because of limited space. To image a fluorescence target, the specific light source and camera required for that mode can be mounted. The current platform allows the integration and synchronization of multimodal imaging sensors with the designed CCTS motion trajectory in the same user interface, since mainly the light sources and sensors differ between those modes.

Appendix A The calculation of scan time

To simplify the calculation of scan time, we consider the trapezoidal velocity profile in Fig. 9 with three phases: a constant-acceleration time (tT1/tR1), a constant-velocity time (tT2/tR2), and a constant-deceleration time (tT3/tR3). The profile corresponds to one circle or one scan line with a maximum velocity v and the same constant acceleration and deceleration magnitude a. In Fig. 9, tT1 and tR1 denote the constant-acceleration period of the first phase for translation and rotation, respectively; they do not necessarily have the same values, or even units, in practice. The motion times in phases 1 and 3 are

$$T_A = t_{T1/R1} = t_{T3/R3} = \frac{v}{a}, \tag{17}$$
and the motion distance in phases 1 and 3 is
$$S_{T1/R1} = \frac{1}{2}\, a\, (t_{T1/R1})^2 = \frac{v^2}{2a}. \tag{18}$$
The motion distance in phase 2 is
$$S_{T2/R2} = v\, t_{T2/R2}, \tag{19}$$
and the motion time in phase 2 is
$$t_{T2/R2} = \frac{1}{v}\left(S - S_{T1/R1} - S_{T3/R3}\right) = \frac{1}{v}\left(S - \frac{v^2}{a}\right), \tag{20}$$
where S is the whole distance of one circle or one scan line. Then, for one circle or scan line,
$$t_{Tc} = 2\left(\frac{1}{v}\left(S - \frac{v^2}{a}\right) + 2\frac{v}{a}\right) = 2\left(\frac{S}{v} + T_A\right). \tag{21}$$
Then, the whole raster scan time for an S × S area is


Fig. 9 Velocity profile in one circle or one scan line.


$$t_T = 2 N_{line}\left(\frac{S}{v} + T_A\right). \tag{22}$$

For the concentric circular trajectory in [16], to simplify the calculation we consider a regular radius $r_i = ip$, so that the trajectory has the same pitch as the raster scan for comparison; note that $p = dx = dy$. For the ith circle, $r_i$ meets the radial constraints, although it may not be optimized. Then, for circle i, $i = 1, \ldots, N_{circleA}$, the diameter is $2ip$ and the circumference is $2\pi i p$. Summing the per-circle times over all circles gives

$$t_R = \frac{1}{v}\left(2\pi p - \frac{v^2}{a}\right) + \cdots + \frac{1}{v}\left(2\pi i p - \frac{v^2}{a}\right) + \cdots + \frac{1}{v}\left(2\pi N_{circleA}\, p - \frac{v^2}{a}\right) + 2 N_{circleA} \frac{v}{a} = \left(\frac{\pi p\,(N_{circleA}+1)}{v} + T_A\right) N_{circleA}. \tag{23}$$
Substituting Eq. (8) into Eq. (23), we have
$$t_R \approx \left(\frac{\sqrt{2}\,\pi S}{2v} + \frac{2\pi p}{v} + T_A\right) N_{circleA}. \tag{24}$$
Ignoring the term $2\pi p/v$, Eq. (24) can be approximated by
$$t_R \approx \left(\frac{\sqrt{2}\,\pi S}{2v} + T_A\right) N_{circleA}. \tag{25}$$
When S is large enough, we can also ignore the acceleration time in Eqs. (24) and (25); we then find that the rotation time is about π/2 times the translation time. For the round area, substituting $N_{circleB}$ into Eq. (23), we have
$$t_R = \left(\frac{\pi p\,(N_{circleB}+1)}{v} + T_A\right) N_{circleB}. \tag{26}$$
Substituting Eq. (11) into Eq. (26), we have
$$t_R \approx \left(\frac{\pi S}{2v} + \frac{2\pi p}{v} + T_A\right) N_{circleB}. \tag{27}$$
Ignoring the term $2\pi p/v$ and the acceleration time when S is large enough, we find that the rotation time is about π/4 times the translation time.

Appendix B The calculation of minimum frame rate

Given a rotation speed ω at the outermost circle, its rotation time is

$$t_{circle} = \pi/\omega. \tag{28}$$
Then the average effective sampling rate is
$$s_{rate} = \frac{\#sample}{t_{circle} - 2T_A}, \tag{29}$$
where $T_A$ is the acceleration/deceleration time and $\#sample$ is the number of samples in the circle. Substituting Eq. (28) into Eq. (29), we have
$$s_{rate} = \frac{\#sample}{\pi/\omega - 2T_A}. \tag{30}$$
To acquire all the samples in the outermost circle, the frame rate $f_{rate}$ should be faster than $s_{rate}$.
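The sketch below evaluates Eqs. (28)–(30) as written (with ω converted to rad/s). The pitch, circle count, outermost sample count, and angular acceleration are chosen in the spirit of Table 2, so the resulting frame rates are indicative rather than exact reproductions of the tabulated values.

```python
import math

def min_frame_rate(omega_deg_s, n_sample, t_acc):
    """Minimum frame rate for the outermost circle, Eqs. (28)-(30):
    s_rate = N_sample / (pi/omega - 2*T_A), with omega in rad/s as written."""
    omega = math.radians(omega_deg_s)
    return n_sample / (math.pi / omega - 2 * t_acc)

# Illustrative numbers in the spirit of Table 2: pitch 21.8 um, 105 circles,
# angular acceleration 3000 deg/s^2 (so T_A = omega / 3000).
pitch_mm, n_circles, alpha = 0.0218, 105, 3000.0
r_max = n_circles * pitch_mm
n_sample = math.ceil(2 * math.pi * r_max / pitch_mm)   # samples on the outermost circle
for omega in (20.0, 90.0, 180.0):
    t_acc = omega / alpha
    print(f"omega = {omega:5.0f} deg/s -> minimum frame rate "
          f">= {min_frame_rate(omega, n_sample, t_acc):6.0f} fps")
```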

Acknowledgments

This work was supported by the National Science Foundation (NSF) Center for Hierarchical Manufacturing (CHM) under NSF Award Number CMMI-1025020.

References and links

1. A. Nathan, A. Ahnood, M. T. Cole, S. Lee, Y. Suzuki, P. Hiralal, F. Bonaccorso, T. Hasan, L. Garcia-Gancedo, A. Dyadyusha, S. Haque, P. Andrew, S. Hofmann, J. Moultrie, D. P. Chu, A. J. Flewitt, A. C. Ferrari, M. J. Kelly, J. Robertson, G. A. Amaratunga, and W. I. Milne, “Flexible electronics: the next ubiquitous platform,” Proc. IEEE 100, 1486–1517 (2012). [CrossRef]  

2. K. Jain, M. Klosner, M. Zemel, and S. Raghunandan, “Flexible electronics and displays: high-resolution, roll-to-roll, projection lithography and photoablation processing technologies for high-throughput production,” Proc. IEEE 93(8), 1500–1510 (2005). [CrossRef]  

3. J. A. Rogers, T. Someya, and Y. Huang, “Materials and mechanics for stretchable electronics,” Science 327(5973), 1603–1607 (2010). [CrossRef]   [PubMed]  

4. S. Devasia, E. Eleftheriou, and S. O. R. Moheimani, “A Survey of Control Issues in Nanopositioning,” IEEE Trans. Control Syst. Tech. 15(5), 802–823 (2007). [CrossRef]  

5. A. Greenbaum, W. Luo, T. W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9(9), 889–895 (2012). [CrossRef]   [PubMed]  

6. H. Nagahara and Y. Yagi, “Lensless imaging for wide field of view,” Opt. Eng. 54(2), 025114 (2015). [CrossRef]  

7. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution Image Reconstruction: a Technical Overview,” IEEE Signal Process. Mag. 20(3), 21–36 (2003). [CrossRef]  

8. A. J. Fleming and K. K. Leang, “An experimental comparison of PI, inversion, and damping control for high performance nanopositioning,” in American Control Conference (ACC) (2013), pp. 6027–6032. [CrossRef]  

9. K. K. Leang and S. Devasia, “Feedback-linearized inverse feedforward for creep, hysteresis, and vibration compensation in AFM piezoactuators,” IEEE Trans. Control Syst. Tech. 15(5), 927–935 (2007). [CrossRef]  

10. A. A. Eielsen, M. Vagia, J. T. Gravdahl, and K. Y. Petersen, “Damping and tracking control schemes for nanopositioning,” IEEE/ASME Trans. Mechatron. 19(2), 432–444 (2013).

11. B. Babakhani, T. J. A. Vries, and J. V. Amerongen, “A comparison of the performance improvement by collocated and noncollocated active damping in motion systems,” IEEE/ASME Trans. Mechatron. 18(3), 905–913 (2013).

12. G. Y. Gu, L. M. Zhu, and C. Y. Su, “Integral resonant damping for high-bandwidth control of piezoceramic stack actuators with asymmetric hysteresis nonlinearity,” Mechatronics 24(4), 367–375 (2014). [CrossRef]  

13. I. A. Mahmood, S. Moheimani, and B. Bhikkaji, “A new scanning method for fast atomic force microscopy,” IEEE Trans. NanoTechnol. 10(2), 203–216 (2011). [CrossRef]  

14. Y. K. Yong, S. O. R. Moheimani, and I. R. Petersen, “High-speed cycloid-scan atomic force microscopy,” Nanotechnology 21(36), 365503 (2010). [CrossRef]   [PubMed]  

15. T. Tuma, J. Lygeros, V. Kartik, A. Sebastian, and A. Pantazi, “High-speed multiresolution scanning probe microscopy based on Lissajous scan trajectories,” Nanotechnology 23(18), 185501 (2012). [CrossRef]   [PubMed]  

16. X. Du, N. Kojimoto, and B. W. Anthony, “Concentric circular trajectory sampling for super-resolution and image mosaicing,” J. Opt. Soc. Am. A 32(2), 293–304 (2015). [CrossRef]  

17. K. K. L. Cheo, Y. Du, G. Zhou, and F. S. Chau, “Post-corrections of image distortions in a scanning grating-based spectral line imager,” IEEE Photon. Technol. Lett. 25(12), 1103–1106 (2013). [CrossRef]  

18. M. Irani and S. Peleg, “Improving resolution by image registration,” Graph. Models Image Proc. 53(3), 231–239 (1991). [CrossRef]  

19. A. Zomet, A. Rav-Acha, and S. Peleg, “Robust super-resolution,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2001), pp. I-645–I-650.

20. D. P. Capel, Image mosaicing and super-resolution (Springer Science & Business Media, 2004).

21. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60(2), 91–110 (2004). [CrossRef]  

22. D. M. Ljubicic, High Speed Instrumentation for Inspection of transparent parts, Doctoral dissertation, (Massachusetts Institute of Technology, 2013).

23. http://www.appliedimage.com.

24. E. R. Davies, Machine Vision: Theory, Algorithms, Practicalities (Morgan Kauffman Publishers, 2005).
