Multi-image position detection


Abstract

The exact measurement of positions is of fundamental importance in a multitude of image-sensor-based optical measurement systems. We propose a new method for enhancing the accuracy of image-sensor-based optical measurement systems by using a computer-generated hologram in front of the imaging system. Thereby, the measurement spot is replicated into a predefined pattern. Given enough light to correctly expose the sensor, the position detection accuracy can be considerably improved compared to the conventional one-spot approach. For the evaluation of the spot position we used center-of-gravity-based averaging. We present simulated as well as experimental results showing an improvement by a factor of 3.6, to a positioning accuracy of better than three thousandths of a pixel, for a standard industrial CCD sensor.

© 2014 Optical Society of America

1. Introduction

Many different optical measurement methods are based on the detection of spot locations. Well-known examples are triangulation sensors, autocollimators, Shack-Hartmann wavefront sensors and typical imaging-based geometry methods, e.g. photogrammetry. Figure 1 shows some measurement geometries where a spot is imaged onto a position sensing device. For some applications, the light field is separated in an intermediate plane, most often a plane conjugate to the pupil of the optical system. This is the case, e.g., in Shack-Hartmann sensing (SHS) or in some plenoptic cameras. Instead of one image point, multiple image points are generated and multiple position measurements are performed. Typically, the measurement uncertainty of the overall measurement principle is directly related (often proportional) to the uncertainty of the position detection. As a result, high positioning accuracies for spots are of considerable practical importance for many applications.

Fig. 1. Typical imaging geometries for point measurements.

Improvement of the positioning accuracy by extended patterns is employed in different contexts. In some applications (e.g. stereo vision or scene-based SHS [1, 2]) the shift of extended images is to be detected, often by cross-correlation [3, 4]. Speckle-based correlation techniques employ the speckle pattern generated by the surface under investigation. Typical machine-vision applications most often rely on edges and geometrical primitives (e.g. circles) to achieve high positioning accuracies. In close-range photogrammetry, apart from different point-like object targets [5] and edges [6], specialized extended patterns [7] often yield a good positioning accuracy and an unambiguous identification of the target points. The disadvantage of these methods is that extended patterns on the object are necessary and, therefore, the lateral resolution is limited. In addition, processing is more complicated if the tilt of the object cannot be assumed to be constant.

In this contribution we propose a method which addresses the precise position measurement of single spots on the object by the exact measurement of multiple images on a pixelated image sensor. The method can be used in general for improving the positioning accuracy by introducing a pattern generating hologram in the imaging path of an optical measurement system.

2. State of the art

In accordance with, e.g., [8–10], we describe the accuracy of the position measurement by the root-mean-square (RMS) error σ of L repeated measurements xi with respect to the correct position xc:

$$\sigma=\sqrt{\frac{1}{L}\sum_{i=1}^{L}\left(x_i-x_c\right)^2},\qquad(1)$$

Using N photons and, therefore, N individual photodetection events that lead to the sensor output, the best achievable σ is given by [11]

$$\sigma=\frac{\sigma_A}{\sqrt{N}},\qquad(2)$$

where σA denotes the root-mean-square size of the spot. This can be understood simply by assuming that the mean of N position measurements of individual photons is used. For these N measurements, the standard deviation of the mean result (the average position) decreases with the square root of N [12]. This applies equally to Gaussian and Poissonian probability distributions. One could therefore theoretically achieve unlimited positioning accuracy given an unlimited number of photons, infinitely small pixels, and the elimination of other sources of noise.
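The scaling of Eq. (2) is easy to verify numerically. The following minimal sketch (our illustration, not part of the original work; the spot size and trial count are arbitrary assumptions) averages N simulated single-photon position measurements and compares the spread of the mean with σA/√N:

```python
# Minimal Monte-Carlo sketch (our illustration; sigma_A and the trial count are
# arbitrary assumptions): average the positions of N individual photon events
# and compare the spread of the mean with sigma_A / sqrt(N) from Eq. (2).
import numpy as np

rng = np.random.default_rng(0)
sigma_A = 2.0                      # RMS spot size in pixels (assumed)
trials = 2000                      # repeated position measurements

for N in (10, 100, 1000, 10000):
    centroids = rng.normal(0.0, sigma_A, size=(trials, N)).mean(axis=1)
    print(f"N = {N:5d}: measured sigma = {centroids.std():.4f}, "
          f"theory = {sigma_A / np.sqrt(N):.4f}")
```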

Of course, in practical applications this is not the case due to additional noise and limitations apart from the photon noise. Important error contributions are due to the sensor. The read-out noise, dark-current noise, pixelation and quantization noise are important for every pixelated sensor [8]. Also, especially for CMOS sensors, the fixed pattern noise and the photo response non-uniformity have to be taken into account [13]. Fortunately, these two noise contributions can be strongly reduced by post-processing or calibration. In addition, accuracy might be limited by pixel crosstalk, jitter, transfer-noise, a non-flat pixel sensitivity or due to the pixel geometry and microlenses in front of the pixels. For a detailed noise model see [14].

For a very low light flux of only a few tens to some thousands of photons per exposure, it is typically best to use as few pixels per spot as possible in order to reduce the total read-out noise [15]. In SHS, e.g., this leads to designs using quad-cells.

For most industrial applications, enough light is available so that the quantum well capacity of many pixels can be saturated at the desired exposure time. In this case, the maximum number of photons per pixel is limited by the quantum well capacity (and the quantum efficiency) of the pixel. Assuming a typical mid-range area sensor with several million pixels, the quantum well capacity is in the range of up to some tens of thousands of electrons. If we assume a high quantum efficiency, this corresponds to some tens of thousands of photons, and as a result the photon noise is in the range of some hundreds of photons (√N ≈ 100). In this case, it might be advantageous to use many pixels for the signal acquisition in order to achieve the best overall positioning accuracy. Of course, the information carried by the pixels should not be redundant. Therefore, to achieve the optimum position detection it is important not to use just a large oversaturated spot on the sensor, but rather an extended pattern.

Apart from the noise introduced by the sensor, additional sources of error are misalignment (see e.g. Pfund et al. for SHS [16]) and the aberrations of the wavefront itself which lead to distorted spot shapes that are hard to evaluate. One can iteratively improve on such errors in adaptive optics [17]. Deconvolution can be used to improve the results in non-adaptive systems [18].

Concerning the precise location of target points, much investigation has been undertaken in the close-range photogrammetry and vision metrology communities as well as in the field of SHS. An extensive overview of different results on the achievable accuracy is given by Shortis et al. [9]. RMS deviations of up to 1/100 of a pixel are reported using different algorithms and extended targets. Neal et al. achieved experimental results of up to 1/150 of a pixel [8]. Trinder et al. have shown simulated results with accuracies of up to 1/100 of a pixel using different approaches [6]. The results shown by Bo et al. [10] and Takita et al. [4] are worse than 1/100 of a pixel. Shaoqiong et al. compared different algorithms and achieved results between 1/20 and 1/100 of a pixel [19]. Rufino and Accardo simulated a position detection system for a star tracker with position improvements from 1/100 of a pixel to 1/2000 of a pixel using a neural network correction [20]. One should keep in mind, however, that the different results are often not directly comparable because the performance strongly depends on the details of the simulations or the experimental conditions.

3. Core concept

The proposed approach to improving the position accuracy is depicted in Fig. 2. A computer-generated hologram located in front of the objective lens of the imaging system leads to a fan-out of M spots in the image sensor plane. Instead of one image of the object point, M laterally shifted images are detected. Each of the detected spots has a position uncertainty of σD. According to Eq. (2), we can improve the position accuracy by the square root of the number of averaged positions if σD is dominated by statistical noise (due to electronic noise and photon noise) and other spatially varying errors (pixelation). For M = 16 this should result in an overall improvement of the accuracy by a factor of 4.

Fig. 2. By use of a hologram, each image point is replicated to M points.

The method can be used in combination with all principal imaging types depicted in Fig. 1. In the following, we concentrate on the standard one-point imaging geometry.

3.1. Simulation

We tested different evaluation strategies and different holograms by simulation. The simulations have been performed for one fixed wavelength and correspond to the fully coherent case, since we are dealing with point objects. For a realistic simulation of the image sensor output, we assumed a circular aperture with a diameter of 100 pixels embedded in a 1024 × 1024 field. This 10-fold zero-padding results in a 10 × 10 subpixel reconstruction of the holograms. Therefore, a clear aperture without a hologram results in an Airy pattern with a diameter of 10 pixels. For the hologram, a complex superposition of M individual blazed gratings is used. This leads to a replication of the conventional image point into M copies at different shifted positions. The positions are determined by the periods of the blazed gratings.
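A minimal sketch of this pupil construction is given below (our Python/NumPy reconstruction of the description above; the spot positions are assumed example values):

```python
# Sketch of the simulated pupil (our reconstruction of the description above;
# the spot positions are assumed example values): a circular aperture of 100 px
# diameter in a 1024 x 1024 field, multiplied by the complex superposition of
# M linear-phase (blazed) gratings.
import numpy as np

FIELD, APERTURE_D = 1024, 100
y, x = np.mgrid[-FIELD // 2:FIELD // 2, -FIELD // 2:FIELD // 2]
aperture = (x**2 + y**2) <= (APERTURE_D / 2)**2

# one blazed grating per spot; (fx, fy) is the spot position in FFT bins,
# i.e. in sub-pixels of the 10x-oversampled image plane
spot_positions = [(fx, fy) for fx in (-150, -50, 50, 150)
                  for fy in (-150, -50, 50, 150)]          # M = 16 spots
hologram = np.zeros((FIELD, FIELD), dtype=complex)
for fx, fy in spot_positions:
    hologram += np.exp(2j * np.pi * (fx * x + fy * y) / FIELD)

pupil_field = aperture * hologram
```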

If aberrations are present in the optical setup, it might be advantageous to use a dynamic hologram in order to iteratively improve the spot quality. This approach has been undertaken e.g. by Seifert et al. for an adaptive SHS [17].

Then, an additional tilt is added to the wavefront in order to simulate the local wavefront tilt due to different object point positions. Forty different small tilts are used for each simulation in order to test for position-dependent noise, which is especially important due to the sensor pixelation. The reconstructed intensity is computed using the fast Fourier transform (FFT) followed by taking the modulus squared of the resulting field. The central part of the intensity pattern is cut out and resampled by summing 3 × 3 patches of the subpixels. The resulting intensity pattern then corresponds to the relevant region of interest of the image sensor signal.
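Continuing the sketch above, these steps can be expressed as follows (the tilt value and window size are our assumptions):

```python
# Continuing the sketch above (tilt value and window size are assumptions):
# a linear phase ramp models the sub-pixel object shift, the FFT gives the
# image-plane intensity, and 3 x 3 sub-pixel binning emulates the sensor pixels.
tilt = 0.37                                   # object shift in sub-pixels
field = pupil_field * np.exp(2j * np.pi * tilt * x / FIELD)

intensity = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2   # modulus squared

c, r = FIELD // 2, 195                        # central 390 x 390 sub-pixel region
roi = intensity[c - r:c + r, c - r:c + r]
sensor = roi.reshape(130, 3, 130, 3).sum(axis=(1, 3))        # 3 x 3 binning
```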

After that, photon noise is added to each pixel. Since we are far away from low-light conditions, we replace the Poisson distribution by a Gaussian distribution for the number of photons of each pixel, with the noise-free photon number as the mean and its square root as the standard deviation. The photon numbers are scaled such that the maximum corresponds to the quantum well capacity (at a quantum efficiency of 1). Background light, e.g. due to scattering, is neglected.

For the sensor noise, we limit ourselves to one noise term with Gaussian distribution and zero mean. We include all relevant electronic camera noise in this term, especially dark-current noise, read-out noise and uncorrected fixed pattern noise. Before adding all noise contributions, the image values are normalized to a maximum of 255. After adding the noise, each pixel is clipped to the range of 0 to 255 before quantizing to integer values.
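This noise model can be summarized in a few lines (a sketch following our reading of the text; the parameter values are the ones assumed for the hypothetical sensor in Section 3.2):

```python
# Sketch of the noise model described above (parameter values are the ones
# assumed for the hypothetical sensor in Section 3.2).
def add_sensor_noise(image, full_well=10_000, noise_e=20, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    photons = image / image.max() * full_well                     # peak = full well
    photons = photons + rng.normal(0.0, np.sqrt(np.maximum(photons, 0.0)))
    gray = photons / full_well * 255.0                            # normalize to 255
    gray = gray + rng.normal(0.0, noise_e / full_well * 255.0, gray.shape)
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)      # clip + quantize

noisy = add_sensor_noise(sensor)
```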

Three related but different classes of algorithms can be used to achieve a subpixel position measurement of the image shift of spots and patterns. Centroid-based computation is the cornerstone for most applications and gives the best results under most conditions. Fitting a model of the spot (typically a parabola or a Gaussian) leads to similar results, and correlation-based approaches are especially interesting if strong noise is present [2].

For individual spots, Shortis et al. analyzed different well-known evaluation strategies [21]. Binary algorithms (e.g. ellipse fitting of edge points [22, 23]) lead to a reduced accuracy compared to different center-of-gravity-based techniques and Gaussian or parabolic fitting methods.

For most algorithms, the elimination of the background clutter is of major importance. In our simulations and practical experiments we use the center-of-gravity (COG) with thresholding. Before computing the COG, the image is thresholded according to

$$I'(x,y)=I(x,y)-T\quad\text{for}\quad I(x,y)>T\qquad(3)$$
$$I'(x,y)=0\quad\text{for}\quad I(x,y)\le T\qquad(4)$$

Of course, the optimum choice of the threshold is critical and different methods have been proposed [24].

In accordance with Shortis et al. [9] and Thomas [11], we employed T = 3σN with the standard deviation σN of the total noise contribution in the digital image.
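A minimal implementation of this thresholded COG is shown below (our sketch; sigma_n stands for the measured standard deviation of the background noise):

```python
# Minimal sketch of the thresholded COG of Eqs. (3) and (4) with T = 3*sigma_n.
import numpy as np

def centroid(image, sigma_n):
    img = image.astype(float) - 3.0 * sigma_n   # subtract threshold T
    img[img < 0.0] = 0.0                        # clip everything below T to zero
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (img * xx).sum() / total, (img * yy).sum() / total   # (x, y) in pixels
```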

We have tested other thresholds, but they resulted in poorer positioning accuracy. Noise outside the central part of the spot is weighted quite strongly and therefore leads to a comparatively large error. This is especially important for COG-based positioning methods. Weighted COG methods [25] are another option to reduce this noise influence, but did not lead to improved accuracy in the numerical experiments conducted by Shortis et al. [9].

Discretization and averaging due to the pixelated nature of the camera lead to different errors depending on the details of the pixel geometry and response. Systematic errors will be present for simple COG evaluations even for a flat and uniform pixel response (where the pixel gray-level value is proportional to the number of collected photons). In this case, the COG of each pixel does not necessarily correspond to the geometric center of the pixel. The effect can be reduced by having a larger number of pixels per spot (oversampling) or by software [19].

The performance of the algorithm might be affected by additional preprocessing that can be employed to reduce the noise [25]. Typical linear and non-linear noise removal filters can be used. In this study, we tested an optional Gaussian filter with varying radius. In our simulated and practical experiments, however, no improvement could be achieved using this kind of preprocessing, probably due to the nearly perfect spot images.

To obtain the position of the original spot, we take the average of all spot positions of the multiplexed spot images. In order to assign each spot to its corresponding zero-shift reference spot, we first cross-correlate the image with the corresponding reference image. The position of the correlation maximum corresponds to the unknown spot shift. Indeed, this correlation peak position can be used as a quite exact spot location in itself and improves position sensing compared to the simple one-spot COG reference method if considerable sensor noise is present. The zero-shift reference spot can be obtained by simulation if the point spread function of the imaging system is known. In most applications, however, it will simply be a measured spot for a defined reference position.
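The following sketch outlines this two-step evaluation (the window size, the helper names and the use of scipy.signal.fftconvolve are our assumptions; centroid() is the thresholded COG from above):

```python
# Sketch of the two-step evaluation (window size, helper names and the use of
# scipy are our assumptions; centroid() is the thresholded COG from above).
import numpy as np
from scipy.signal import fftconvolve

def multi_spot_position(frame, reference, ref_spots, sigma_n, win=15):
    # step 1: integer shift of the whole pattern via cross-correlation
    corr = fftconvolve(frame.astype(float), reference[::-1, ::-1].astype(float),
                       mode='same')
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = dy - frame.shape[0] // 2, dx - frame.shape[1] // 2

    # step 2: thresholded COG in a window around each shifted reference spot
    positions = []
    for ry, rx in ref_spots:                 # nominal zero-shift spot centers
        cy, cx = int(ry + dy), int(rx + dx)
        window = frame[cy - win:cy + win + 1, cx - win:cx + win + 1]
        px, py = centroid(window, sigma_n)
        positions.append((cx - win + px, cy - win + py))
    return np.mean(positions, axis=0)        # averaged (x, y) spot position
```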

Conventional DFT-based correlation partly leads to a staircase effect [4]. The strength of the effect strongly depends on the pattern to be correlated, the noise and the size of the area used for COG determination. The main cause, nevertheless, is the discrete nature of the correlation result. Therefore, the staircase effect can be reduced by a factor α if the patterns to be correlated are first interpolated α-times.

For most sensors, however, correlation only leads to a minor improvement. For the correlation of more extended spot patterns, the correlations in our simulations were quite noisy due to correlation clutter. As a result, the position accuracy was limited. Accuracies of up to 1/100 of a pixel have been achieved, but the full advantage of the multi-spot technique has not been gained. Therefore, we use correlation only as the first step for assigning each spot to its corresponding zero-shift base spot.

Then, the position is determined for each spot by the COG method that we also use for the one-spot reference method. Since the noise introduced by the sensor and the whole process is random, we can assume that the result is improved by a factor of √M if M spots can be evaluated.

3.2. Results

At high resolution and low to moderate cost, CCD sensors typically have a quite low quantum well capacity. For the numerical results, we use a hypothetical CCD sensor with a quantum well capacity of 10,000 electrons and a sensor noise contribution of 20 electrons (RMS). We assume a quantum efficiency near one, so that we directly take the photon number to be equal to the number of electrons. The exposure time is set such that the center of the spot (maximum irradiance on the sensor) corresponds to 10,000 electrons. For a typical signal of 5,000 photons we therefore have √5000 ≈ 71 noise electrons and quantization noise [26] of (10,000/256)/√12 ≈ 12 electrons for an 8-bit signal output. The SNR of this signal would be SNR = 5000/√(71² + 12² + 20²) ≈ 67, which corresponds to approximately 6 effective bits, assuming that the different noise terms are statistically independent.
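This noise budget can be verified with a few lines of arithmetic (values as stated above):

```python
# Numeric check of the noise budget above (values as stated in the text).
import numpy as np

signal = 5_000                                # photo-electrons in a bright pixel
photon_noise = np.sqrt(signal)                # ~71 e- RMS
quant_noise = 10_000 / 256 / np.sqrt(12)      # ~11 e- RMS for 8-bit quantization
sensor_noise = 20                             # e- RMS, electronic noise term
snr = signal / np.sqrt(photon_noise**2 + quant_noise**2 + sensor_noise**2)
print(f"SNR = {snr:.0f} -> {np.log2(snr):.1f} effective bits")   # SNR = 67, 6.1 bits
```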

The results can be improved by a small amount if the sensor data is first interpolated to a subpixel grid (using cubic interpolation in the freely available image processing library OpenCV [27]) before computing the COG. Interpolation by more than a factor of four only leads to minor improvements. Therefore, we show results only for 4-times interpolation. Also, the Lanczos interpolation [27] has been tested against the employed cubic interpolation but resulted in a slightly decreased accuracy.
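A sketch of this interpolation step using OpenCV's cubic resampling (the helper name upsample4 is ours):

```python
# Sketch of the 4x cubic interpolation step using OpenCV (the helper name
# upsample4 is ours); centroid coordinates found on the upsampled grid are
# divided by 4 to get back to sensor pixels.
import cv2
import numpy as np

def upsample4(window):
    h, w = window.shape
    return cv2.resize(window.astype(np.float32), (4 * w, 4 * h),
                      interpolation=cv2.INTER_CUBIC)
```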

Figure 3 shows the average root-mean-square (RMS) error σ of the position detection as a function of the number of measurement spots for two different runs of the simulation. The error bars depict the statistical variation of the result when running the simulation multiple times. The RMS error is computed with respect to a straight line fitted through the data.

Fig. 3. Simulated results for the position error, defined as the root-mean-square deviation from the perfect position for a linear movement, using M spots. All errors are mean errors for 20 individual simulations, and each simulation consists of the RMS errors for 40 different positions. The error bars denote the standard deviation of the mean values. The theoretical curve shows the expected behavior proportional to 1/√M with a start value (1 point) of 0.018.

The error decreases from 0.016 pixel to 0.0052 pixel for 16 spots and to 0.0040 pixel for 25 spots. This corresponds to an improvement by a factor of 3, compared to the theoretical value of √16 = 4, and by a factor of 4, compared to √25 = 5. A further increase of the number of measurement spots only leads to minor improvement, probably because of systematic error terms due to pixelation and perhaps quantization.

For other sensor configurations with different levels of electronic noise and different quantum well capacities, similar results have been achieved. Of course, for sensors with a large quantum well capacity and given sufficient light to saturate many pixels, the improvement is limited because even with the single-spot technique the signal-to-noise ratio is very good and, therefore, the theoretical position detection accuracy is high.

4. Experimental results

4.1. Setup

We use the setup depicted in Fig. 4 to perform a practical evaluation of the achievable positioning accuracy. A DPSS laser (Changchun New Industries Optoelectronics Technology CNI-532-400-10, λ = 532 nm, TEM00 with M² < 1.2, pointing stability better than 0.05 mrad) is used to illuminate a 50 μm pinhole which acts as the object point. This object point is imaged onto an industrial CCD image sensor (SVS-Vistek eco655 based on the Sony ICX655AQA sensor, monochrome, Gigabit Ethernet (GigE) interface, 2448 × 2050 pixels, 3.45 μm pixel pitch, 60 MHz pixel clock, read-out noise below 10 electrons, quantum well capacity 8000 electrons, used in 8 bit mode) using a conventional objective lens (Lensagon CMFA1520ND with a focal length of 15 mm). The chosen F-number of K ≈ 32 leads to a spot size (Airy diameter) of 12 image sensor pixels. Diffraction-limited imaging is easily achieved for the limited field in combination with the large F-number and the monochromatic light (Fig. 5). The hologram is an experimental binary multi-spot hologram manufactured in polycarbonate and optimized for λ = 532 nm.

Fig. 4. Setup for the measurement of the subpixel centroid shift. A tilted parallel plate is used in transmission and in reflection to obtain very accurate shifts in the image sensor plane.

Fig. 5. Typical experimental spot image. Left: without processing; right: linear renormalization (oversaturation) to enhance the Airy structure.

Very small shifts of the image position are achieved using a plane parallel plate of thickness d = 1.05 mm in combination with a simple mechanical tilt stage. The sensitivity with which the tilt can be controlled is better than 350 μrad, leading to a shift of 1.3 nm in the image plane.

The lateral shift of a ray at the plane parallel plate due to the tilt φ is given by [28]

$$\delta x=d\,\sin(\phi)\left[1-\frac{\cos(\phi)}{\sqrt{n^2-\sin^2(\phi)}}\right]\qquad(5)$$

The tilt φ of the plate with thickness d is easily measured using an additional alignment laser (λ = 635 nm, 5 mW, fiber-coupled with collimation optics) which is reflected at the plate. The deflection of the laser is measured at a distance of l0 = 555 (±5) mm on a conventional meter stick with the help of an analog video camera which magnifies the spot. The measurement uncertainty of this spot shift xs is estimated to be δxs = 0.2 mm. The tilt φ is related to xs by

$$\tan(\phi)=\frac{x_s}{l_0}\qquad(6)$$
The combination with Eq. (5) gives
$$\delta x=\frac{d\,x_s\left[1-\left(n^2+\frac{n^2 x_s^2}{l_0^2}-\frac{x_s^2}{l_0^2\left(1+\frac{x_s^2}{l_0^2}\right)}-\frac{x_s^4}{l_0^4\left(1+\frac{x_s^2}{l_0^2}\right)}\right)^{-1/2}\right]}{l_0\sqrt{1+x_s^2/l_0^2}}\qquad(7)$$

This lateral shift δx of a ray leads to a shift of the ray's intersection with the image plane. Of course, for an object point located at infinity, the image plane would correspond to the focal plane of the objective lens and no shift would occur. Since the image plane is shifted by Δz away from the focal plane, the image point is shifted laterally by

$$\Delta x=\frac{\Delta z}{f'}\,\delta x,\qquad(8)$$

as depicted in the lower part of Fig. 4. Δz = a′ − f′ is computed by the ordinary imaging equation 1/a′ = 1/f′ + 1/a with focal length f′, (negative) object distance a and image-side distance a′. As always, all these distances are defined with respect to their corresponding principal planes.

Combined with Eq. (7) this finally results in

$$\Delta x_{pp}=\frac{d\,x_s\left(1-\left(n^2+\frac{n^2 x_s^2}{l_0^2}-\frac{x_s^2}{l_0^2+x_s^2}-\frac{x_s^4}{l_0^2\left(l_0^2+x_s^2\right)}\right)^{-1/2}\right)\left(a'-f'\right)}{f'\,l_0\,pp\,\sqrt{1+x_s^2/l_0^2}}\qquad(9)$$
$$\approx\frac{d\left(1-1/n\right)\left(a'-f'\right)}{f'\,l_0\,pp}\,x_s\qquad(10)$$

Here, Δxpp is the shift in the image plane in pixels, and pp denotes the pixel pitch. The approximation in Eq. (10) is better than 2/1000 pixel for our range of measurements (xs < 100 mm). Therefore, a linear movement of the spot position on the image sensor is expected with respect to the spot position of the alignment laser on the meter stick.
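For illustration, the following sketch evaluates the exact shift via Eqs. (5), (6) and (8) and compares it with the linear approximation of Eq. (10). The plate thickness, lever arm and pixel pitch follow the text; the refractive index n and the defocus a′ − f′ are assumed example values, the latter chosen so that a 350 μrad tilt gives roughly the quoted 1.3 nm image shift:

```python
# Numeric sketch of Eqs. (5)-(10): exact image shift versus the linear
# approximation. d, l0 and the pixel pitch follow the text; the refractive
# index n and the defocus a' - f' are assumed example values.
import numpy as np

d, l0, pp = 1.05e-3, 0.555, 3.45e-6    # plate thickness, lever arm, pixel pitch [m]
n, f, dz = 1.52, 15e-3, 0.15e-3        # index (assumed), f', a' - f' (assumed) [m]

def shift_px(xs):
    """Exact shift in pixels for a meter-stick reading xs [m], Eqs. (5)-(9)."""
    phi = np.arctan(xs / l0)                                      # Eq. (6)
    dx_ray = d * np.sin(phi) * (1 - np.cos(phi) / np.sqrt(n**2 - np.sin(phi)**2))
    return dz / f * dx_ray / pp                                   # Eqs. (8), (9)

def shift_px_linear(xs):
    return d * (1 - 1 / n) * dz / (f * l0 * pp) * xs              # Eq. (10)

for xs in (0.01, 0.05, 0.10):
    print(f"xs = {xs * 1e3:5.1f} mm: exact = {shift_px(xs):.4f} px, "
          f"linear = {shift_px_linear(xs):.4f} px")
```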

A simulation of the optical system has been performed using raytracing (Zemax) in order to verify the setup. Due to the extremely small shifts introduced by the plane parallel plate in combination with the large F-number, practically no aberrations are introduced. Also, the angle of the ray bundle falling onto the microlenses of the image sensor stays practically constant, so that, apart from position effects, no change of the microlens behavior will arise.

The employed measurement method, therefore, is capable of yielding very accurate shifts in the nanometer range without expensive equipment. The dynamic range can be easily adapted by the geometry of the setup, especially the thickness of the plane parallel plate and the distance between the plane parallel plate and the meter stick.

All experiments have been conducted in the dark, without background illumination, on a large air-damped optical table resting on a separate foundation in the basement of our building. For the measurements it was necessary to shield the setup against air turbulence using a cylindrical beam enclosure. Compared to all other possible measurement influences, turbulence had by far the strongest effect (compare Section 4.3). The unwanted residual background intensity (including dark current) in the captured image outside the spot is in the range of 0 to 3 gray values, with an average gray level of 1.1.

As investigated by Shortis et al. [9], saturation of the pixels can lead to a decrease in accuracy. For the practical experiments we therefore set the maximum signal on the image sensor to be in the range of 220 to 250 gray values to avoid saturation. The settings of the image sensor have been kept constant for all experiments. The intensity has been controlled using the current of the employed laser.

4.2. Results

Figure 6 shows a typical measurement result for the single spot, compared to the average of 14 spots using the holographic approach. For every position in the image plane (realized by tilting the plane parallel plate), we recorded 30 images in order to test for the variation due to the statistical noise of the camera, environment and light (photon noise). The exposure time was 20 ms without gain. Measurements with the camera set to 12 bit mode did not result in any noticeable improvement compared to 8 bit. This is in accordance with the template-matching-based results shown by Trinder et al. [6]. For the quantitative evaluation, we always fitted a line through the data points and computed the standard deviation of the difference of the actually measured points to the line. The averaging of the 14 spot positions leads to a standard deviation of this mean of 0.0076 pixel, compared to the standard deviation for a single spot of 0.014 pixel. The correlation between the individual spot curves is 0.24, which indicates that some disturbing environmental influences are still present.

Fig. 6. Comparison of measured centroid positions for one spot and the average of 14 spots for a single image. S1, S2, and S3 show the measured positions for three of the individual spots. The thick line shows the average position based on 14 spots and the very thin straight line shows the linear regression.

Due to the remaining statistical errors, it is advantageous to apply additional averaging over time. Results are shown in Fig. 7, and the standard deviation of the measurement result is reduced to 0.0028 pixels compared to 0.010 pixels without spatial averaging. This corresponds to an improvement by a factor of 3.6 and should be compared with the theoretical expectation of √14 ≈ 3.7.

Fig. 7. Comparison of measured centroid positions for one spot and the average of 14 spots. All centroids have been computed based on a time-averaged image (30 recordings). S1, S2, and S3 show the measured positions for three of the individual spots. The thick line shows the average position based on 14 spots and the very thin straight line shows the linear regression.

4.3. Measurement errors

Figure 8 shows a typical time sequence of measurements used to test the repeatability. It turns out that for a single spot the standard deviation of the position is 0.008 pixels, which corresponds to 28 nm. By averaging over time (t = 30 s) and space (multiple spots), this error can be reduced to better than 0.003 pixels for the proposed hologram method. We assume that part of the remaining difference of 0.005 pixels might be due to residual environmental errors and residual statistical errors because of the read-out noise and the photon noise.

Fig. 8. Repeatability measurement for one fixed position and one short exposure. S1, S2, and S3 show the measured positions for three of the individual spots. The thick line shows the average position based on 14 spots.

Based on our measurement geometry, we estimated errors using Gaussian error propagation based on Eqs. (9) and (10). In addition, there are a number of further factors that might have to be taken into account. A detailed modelling of all these effects is beyond the scope of this contribution, but we want to at least comment on them and give estimates where possible. In the following, we assume a residual error of 0.002 pixel (6.9 nm) due to external influences.

Air turbulence: A complete shielding of the beam from any air currents has not been accomplished in our experiments. Most probably, the remaining air turbulence leads to a residual small error in our measurements.

Plane parallel plate: The error due to the tilt of the plane parallel plate is not of main concern, because the measurement uncertainty of xs due to statistical errors of δxs = 0.2 mm results in a statistical error of the spot position on the image sensor of σx = (∂Δxpp/∂xs) · δxs ≈ 2.7 · 10⁻⁴ pixel.

Variation of source: Possible causes are relative statistical variations of the lateral position of the pinhole with respect to the camera (including the objective lens) in the range of 0.7 μm, or pointing or mechanical mounting instabilities of the laser of the same order of magnitude. The pointing instability of the laser is specified to be better than ±50 · 10⁻⁶ rad. In combination with the microscope objective lens of the spatial filter system, with a focal length of 15 mm, this leads to a lateral shift at the pinhole of 15 mm · (±50 · 10⁻⁶) = ±0.75 μm. Power fluctuations might lead to minor inaccuracies for temporal averaging but become very important for interlaced CCD sensors.

Hologram vibration: A remaining angular vibration of the hologram in the range of 42 · 10⁻⁶ rad will lead, according to the diffraction equation at a tilted grating (sin α + sin β = λ/d), to the 6.9 nm error in the image plane. Here, we denoted the incoming and outgoing angles during diffraction by α and β and the grating period (hologram) by d.

Vibration of camera (components): Mechanical vibrations affecting the relative position of the image sensor with respect to the objective lens in the range of 6.9 nm laterally or 16 μm axially (computed by Eq. (10)) also result in the estimated residual error.

Pixel jitter: Pixel clock jitter is not specified for the used camera, but is e.g. 1 ns for a 5 MPixel Aptina chip. With 0.6 · 10⁸ pixels/second this would correspond to 0.06 pixel. Of course, parts of this error will be canceled by the COG computation.

Dark current and read-out noise: The dark current noise is negligible at the employed short exposure. Also, the read-out noise (below 10 electrons) is much less than the photon noise which is in the range of 100 electrons.

Spatial noise: The fixed pattern noise (FPN) is specified to be less than 3 bits in 12 bit mode. Therefore, in 8 bit mode it should be less than 1 bit and negligible. The photo response non-uniformity (PRNU) is specified unrealistically high (±10%) in the datasheet of the camera. Typically, it should be in the range of 1% or below for a CCD sensor [29] and, therefore, again, smaller than the photon noise. It should be noted, however, that in our study FPN and PRNU have not been tested to the full extent because we limited ourselves to subpixel shifts without calibration. For larger shifts, it might be necessary to correct for these two noise contributions prior to the COG computation [30].

Thermal expansion: Thermal expansion was not a problem for our short-time measurements. But for a practical system the induced error can well exceed our accuracies [31].

Change of refractive index: The change of refractive index of the parallel plate is of minor concern. For the 6.9 nm, a change of Δn = 0.008 would be necessary. With temperature coefficients of the refractive index in the range of some 10⁻⁶ per Kelvin for glass, this can be neglected. Also, the effect would be partly canceled by the thermal expansion of the plate.

Aberrations due to the objective lens: Since we achieve extremely good and, more importantly, constant spot qualities due to the large F-number in our experiments, we do not think that this is an important source of error. However, for practical systems that use larger apertures due to the necessity of better light efficiency, this indeed might be a problem.

Of course, all other parameters of the setup also influence the total measurement uncertainty, especially the systematic errors. For our system, these parameters are kept highly constant during measurement and we are only interested in the statistical errors.

To conclude, most probable candidates for the remaining small statistical error are remaining air turbulence/temperature fluctuation, remaining mechanical vibrations and pixel jitter.

5. Conclusions

We have presented a method for improving the position detection accuracy of image-sensor-based spot measurements. The main idea is to introduce a computer-generated hologram in front of the camera lens in order to replicate a single spot onto multiple positions on the image sensor. If enough light is available, this leads to an averaging of pixelation, quantization and electronic noise. Therefore, the overall positioning accuracy can be increased. This increase in accuracy is achieved without reducing the lateral resolution on the object, as is the case, e.g., in photogrammetry using extended targets. In the proposed method, the extension of the detected pattern is realized only in the detection system. Therefore, the position of arbitrarily small object points can be determined.

We carefully tried to reduce all possible sources of mechanical errors and could achieve an improvement consistent with the expected square-root behavior for the 14-spot averaging. In this case the standard deviation of the deviation from the expected position could be reduced from 0.01 pixel to 0.0028 pixel using a conventional CCD image sensor with a low quantum well capacity. It can be stated that under optimum conditions the typically reported position deviations in the range of tenths of a pixel to one hundredth of a pixel can be considerably improved using a conventional CCD sensor in combination with the hologram.

All results have been achieved for pixelated sensors. In principle, the method is also applicable to quad-cell detectors. But since the error contributions are different (no pixelation error), the achievable improvement might be different.

For practical applications where enough light is available, it is recommended to use the method in combination with temporal averaging over multiple frames. Compared to single frames with longer exposure times, this effectively increases the quantum well capacity and, in addition, averages possible pixel jitter. Such an approach is possible using programmable regions of interest even at high frame rates. Of course, for practical measurement systems, a very careful calibration is necessary to be able to really exploit the extremely small statistical errors.

In addition, we also presented a simple method for the accurate shifting of the spots on the CCD camera based on a tilted glass plate which is used in reflection as well as in transmission. This way, shifts in the range of nanometers can be easily controlled by eye and without expensive calibrated equipment.

Acknowledgments

We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support under the grant DFG OS 111/42-1.

References and links

1. T. R. Rimmele, “Solar adaptive optics,” Proc. SPIE 4007, 218–231 (2000). [CrossRef]  

2. L. A. Poyneer, “Scene-based Shack-Hartmann wave-front sensing: Analysis and simulation,” Appl. Opt. 42, 5807–5815 (2003). [CrossRef]   [PubMed]  

3. F. Ackermann, “Digital image correlation: Performance and potential application in photogrammetry,” The Photogrammetric Record 11, 429–439 (1984). [CrossRef]  

4. K. Takita, M. Muquit, T. Aoki, and T. Higuchi, “A sub-pixel correspondence search technique for computer vision applications,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E87-A, 1913–1923 (2004).

5. T. A. Clarke, “Analysis of the properties of targets used in digital close-range photogrammetric measurement,” Proc. SPIE 2350, 251–262 (1994). [CrossRef]  

6. J. Trinder, J. Jansa, and Y. Huang, “An assessment of the precision and accuracy of methods of digital target location,” Journal of Photogrammetry and Remote Sensing 50, 12–20 (1995). [CrossRef]  

7. Z. Xiao, J. Liang, D. Yu, Z. Tang, and A. Asundi, “An accurate stereo vision system using cross-shaped target self-calibration method based on photogrammetry,” Optics and Lasers in Engineering 48, 1252–1261 (2010). [CrossRef]  

8. D. R. Neal, J. Copland, and D. A. Neal, “Shack-Hartmann wavefront sensor precision and accuracy,” Proc. SPIE 4779, 148–160 (2002). [CrossRef]  

9. M. R. Shortis and T. A. Clarke, “Practical testing of the precision and accuracy of target image centring algorithms,” Proc. SPIE 2598, 65–76 (1995). [CrossRef]  

10. L. Bo, D. Mingli, J. Wang, and Y. Bixi, “Sub-pixel location of center of target based on Zernike moment,” Proc. SPIE 7544, 75443A (2010). [CrossRef]  

11. S. Thomas, “Optimized centroid computing in a Shack-Hartmann sensor,” Proc. SPIE 5490, 1238–1246 (2004). [CrossRef]  

12. B. Saleh and M. Teich, Fundamentals of Photonics, Wiley Series in Pure and Applied Optics (Wiley, 2007).

13. C. Li, M. Xia, Z. Liu, D. Li, and L. Xuan, “Optimization for high precision Shack-Hartmann wavefront sensor,” Opt. Commun. 282, 4333–4338 (2009). [CrossRef]  

14. M. V. Konnik and J. S. Welsh, “On numerical simulation of high-speed CCD/CMOS-based wavefront sensors in adaptive optics,” Proc. SPIE 8149, 81490F (2011). [CrossRef]  

15. A. Vyas, M. Roopashree, and B. Prasad, “Performance of centroiding algorithms at low light level conditions in adaptive optics,” in 2009 International Conference on Advances in Recent Technologies in Communication and Computing (2009), pp. 366–369.

16. J. Pfund, N. Lindlein, and J. Schwider, “Misalignment effects of the Shack-Hartmann sensor,” Appl. Opt. 37, 22–27 (1998). [CrossRef]  

17. L. Seifert, J. Liesener, and H. Tiziani, “The adaptive Shack Hartmann sensor,” Opt. Commun. 216, 313–319 (2003). [CrossRef]  

18. W.-Y. Leung, M. Tallon, and R. Lane, “Centroid estimation by model-fitting from undersampled wavefront sensing images,” Opt. Commun. 201, 11–20 (2002). [CrossRef]  

19. S. Wang, B. Yan, M. Dong, J. Wang, and P. Sun, “An improved centroid location algorithm for infrared LED feature points,” Proc. SPIE 8916, 891619 (2013). [CrossRef]  

20. G. Rufino and D. Accardo, “Enhancement of the centroiding algorithm for star tracker measure refinement,” Acta Astronautica 53, 135–147 (2003). [CrossRef]  

21. M. R. Shortis, T. A. Clarke, and T. Short, “Comparison of some techniques for the subpixel location of discrete target images,” Proc. SPIE 2350, 239–250 (1994). [CrossRef]  

22. J. Yu, S. R. Kulkarni, and H. V. Poor, “Robust ellipse and spheroid fitting,” Pattern Recognition Letters 33, 492–499 (2012). [CrossRef]  

23. Z. Jiandong, Z. Liyan, and D. Xiaoyu, “Accurate 3D target positioning in close range photogrammetry with implicit image correction,” Chinese Journal of Aeronautics 22, 649–657 (2009). [CrossRef]  

24. J. Arines and J. Ares, “Minimum variance centroid thresholding,” Opt. Lett. 27, 497–499 (2002). [CrossRef]  

25. K. L. Baker and M. M. Moallem, “Iteratively weighted centroiding for Shack-Hartmann wave-front sensors,” Opt. Express 15, 5147–5159 (2007). [CrossRef]   [PubMed]  

26. R. Fiete, Modeling the Imaging Chain of Digital Cameras, Tutorial Text Series (SPIE Press, 2010).

27. D. G. R. Bradski and A. Kaehler, Learning OpenCV (O’Reilly, 2008).

28. H. Gross, Handbook of Optical Systems: Fundamentals of Technical Optics (Wiley-VCH, 2005). [CrossRef]  

29. B. Jähne, Practical Handbook on Image Processing for Scientific Applications (CRC Press, 1997).

30. T. Li, M. He, N. Lei, C. Li, and Q. Wang, “TDI CCD non-uniformity correction algorithm,” in “4th IEEE Conference on Industrial Electronics and Applications, ICIEA,” (2009), pp. 1483–1487.

31. S. Ma, J. Pang, and Q. Ma, “The systematic error in digital image correlation induced by self-heating of a digital camera,” Meas. Sci. Technol. 23, 025403 (2012). [CrossRef]  
