
Diffraction effects for interferometric measurements due to imaging aberrations


Abstract

Aspheric surfaces are often measured using interferometers with null correctors, either refractive or diffractive. The use of null correctors allows high accuracy in the measurement, but also introduces imaging aberrations, such as mapping distortion and field curvature. These imaging aberrations couple with diffraction effects and limit the accuracy of the measurements, causing high frequency features in the surface under test to be filtered out and creating artifacts near boundaries, especially at edges. We provide a concise methodology for analyzing these effects, using the astigmatic field curves to define the aberration and showing how it couples with diffraction as represented by the Talbot effect and Fresnel edge diffraction. The resulting relationships are validated with both computer simulations and direct measurements from an interferometer with a CGH null corrector.

©2012 Optical Society of America

1. Introduction

Interferometry is used to measure optical systems and surfaces because it is often the only way to achieve the desired measurement precision and accuracy. Commercial interferometers with standard optics can measure spherical or flat surfaces to an accuracy of a few nanometers. Aspheric surfaces, such as many telescope mirrors, are commonly measured using interferometry with custom-made null correctors, either refractive or diffractive, as shown in Fig. 1. The null corrector creates a wavefront that matches the aspheric surface under test, so that the light reflects from the test surface at normal incidence and retraces its path back to the interferometer. This creates a null test, where any features in the data should be caused by irregularities in the surface under test. A null result, showing no features, would indicate a perfect surface [1]. The refractive null corrector, which uses lenses to create the desired aspheric wavefront, is a proven technology and widely used [2]. The diffractive null corrector, also known as a computer-generated hologram (CGH), has a high degree of flexibility in generating complex wavefronts and has been demonstrated to provide quick, inexpensive, and highly accurate measurements of aspheric surfaces and complex optical systems [3]. Both refractive and diffractive null correctors provide good wavefront performance, but both provide poor quality imaging.

Fig. 1 The refractive a) and diffractive b) null correctors are often used to test aspheric surfaces. The null corrector is designed for wavefront performance, but not for good imaging. Therefore, the system suffers imaging aberrations that limit the measurement accuracy.

The imaging performance for null correctors (either refractive or diffractive) is typically very poor because the aspheric surface under test must be viewed through the null corrector that was intentionally designed to introduce aberrations to match the surface’s aspheric departure. The image of the surface under test will suffer aberrations that can be classified using standard definitions [4]. But since the interferometer uses coherent light, the aberrations do not follow the same form as classical aberrations for incoherent systems. The lowest order (zero-order) effect of the null corrector provides the desired wavefront phase necessary for matching the aspheric surface. The next order (first-order) effect can be described as tilt variations that cause mapping distortion between the optic under test and the image in the interferometer. In practice, this effect is compensated by processing the data mapping according to reference fiducials placed at known locations on the surface [5]. After correction of mapping distortion, the image degradation is dominated by the quadratic (second-order) imaging aberrations of focus and astigmatism. These aberrations have particular variations across the image, defined by the second derivatives of the null corrector phase function. They couple with diffraction to cause edge ripples and phase smoothing of the high frequency features in the surface under test [6].

In previous papers, the basic method of analyzing the diffraction effects in interferometry was introduced and some limited examples for computer-generated holograms were presented [7–9]. This paper provides the first comprehensive analysis of the general coupling of quadratic aberrations with diffraction for interferometers, and provides verification with computer simulation and experiments. A brief introduction to imaging aberrations in interferometry in Section 2 defines the method for quantifying the aberrations. The use of equivalent propagation to quantify the diffraction effects for coherent systems is reviewed in Section 3. Section 4 develops the specific coupling between the quadratic imaging aberrations of defocus and astigmatism and diffraction, resulting in a generalized relationship that predicts the effects of the aberrations. Section 5 gives further insight for understanding the diffraction effects in the presence of imaging aberrations. Finally, an experimental demonstration shows the diffraction effects in interferometry due to imaging aberrations.

2. Imaging aberrations in interferometry

Imaging aberrations affect the ability of an interferometer to correctly measure the surface height of the optic under test, especially for features with high spatial frequencies. A measurement transfer function, sometimes called the instrument transfer function (ITF), is often used to quantify an interferometer’s ability to measure a surface in terms of spatial frequency, and it depends on the quality of the imaging optics, detector, and other components. The ITF of a particular interferometer can be measured using standard phase surfaces or steps [10]. Interferometer imaging optics can be optimized to assure good imaging performance over the whole spatial frequency range. However, when testing an asphere using a null corrector, the measurement may suffer significant imaging aberrations introduced by the null optics.

Figure 1 shows the concept of using refractive and diffractive null correctors to transform a spherical wavefront to one that propagates to match the aspheric surface under test. The imaging performance of such a system is evaluated by modeling the surface under test as the object, then analyzing the image through the null corrector and onto the detector plane inside the interferometer. The image analysis must respect the coherent nature of this system, which requires evaluation of both amplitude and phase.

Another difference between coherent and incoherent imaging is the aperture stop. For incoherent imaging, light coming from each point in the object propagates in all directions such that it fills the pupil. The stop in the incoherent system defines which rays from the object will be used to create the image [4]. The imaging for most interferometers is fundamentally different. For a region on the test surface that is nearly perfect and sufficiently far from an edge, the propagation from a point on the test surface can be defined using a single ray. As long as the ray is not blocked by the aperture stop, the ray will intersect the image plane at a well-defined point. The imaging is much more interesting when some feature is present in the surface, or near a boundary such as an edge. Diffraction from a feature in the surface or from a boundary causes the light to propagate with an angular spectrum that can be represented with a small bundle of rays. The angular spectrum is defined by the feature and the geometry, but it is typically very small for any interferometer making precision measurements. Any rays with slopes that exceed the limit defined by the aperture in the interferometer will be blocked [8]. Since the angles are proportional to the spatial frequency of the feature, the stop acts as a binary low pass filter. But the bundle of rays from each point on the optic under test also illuminates a small area on the null corrector, so the imaging aberrations with linear or quadratic pupil dependence will dominate. For example, if a 500 mm F/1 parabola is tested with a 100 mm CGH using a system that can resolve features with spatial frequency up to 500 cycles per diameter, the effective beam size on the CGH is about 1 mm in diameter. Over this small area, the imaging aberration with quadratic form (focus and astigmatism) is dominant, with a maximum value of 0.13 µm. The imaging aberrations with cubic (coma) and quartic (spherical aberration) pupil dependence are negligible in comparison, only 0.0015 µm and 0.00016 µm respectively.
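A rough sanity check of this footprint can be made from the diffraction angle of the smallest resolved period. The sketch below is only an order-of-magnitude estimate under assumed values (a HeNe wavelength and a mirror-to-CGH distance of roughly the parabola's radius of curvature); the exact geometry behind the numbers quoted above may differ.

```python
# Order-of-magnitude estimate of the diffracted-bundle footprint on the CGH.
# Assumed values (not from the paper): HeNe wavelength, CGH placed roughly one
# radius of curvature (~1 m) from the 500 mm F/1 parabola.
wavelength = 0.633e-6        # m, assumed test wavelength
mirror_diameter = 0.5        # m
nu_max = 500                 # resolved cycles per diameter
distance_to_cgh = 1.0        # m, assumed propagation distance from mirror to CGH

min_period = mirror_diameter / nu_max          # 1 mm, smallest resolved period
diffraction_angle = wavelength / min_period    # angle of the +/-1 orders, rad
footprint = 2 * diffraction_angle * distance_to_cgh
print(f"Bundle diameter on the CGH: {footprint * 1e3:.1f} mm")   # ~1.3 mm
```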

The first order effect of imaging through null correctors is mapping distortion, which means the mapping from the test optic to the interferogram is not linear. The mapping distortion from a null corrector causes two complications:

  • The surface defects appear shifted. This can be mitigated by remapping the data using fiducial marks, which are placed on the optic under test to find the mapping relation;
  • Lower order alignment errors appear as higher order wavefront errors. This has been discussed in detail by Selberg and Murphy [11, 12].

Both effects can be corrected by re-mapping or “morphing” the data to correct for the distortion. As long as all data processing is performed after morphing, the two complications listed above are avoided.

The second order effect of imaging aberrations through null correctors can be fully described using astigmatic field curves, which give the field dependence of the astigmatism and power. These aberrations couple with diffraction to cause two principal problems:

  • High frequency data is filtered out. This phase smoothing can be treated using a small-phase approximation to the well-known Talbot imaging relations [6].
  • Diffraction ripples from the edges introduce measurement artifacts. This effect can be approximated using Fresnel integrals for a knife edge [13].

In this paper, we evaluate the coupling of the quadratic imaging aberration with the diffraction effects for interferometric measurements, and provide tools to predict these diffraction effects.

3. Diffraction effects in interferometry

3.1 Phase smoothing analysis using Talbot model

For ideal imaging, the field distribution is correctly imaged from one space to another with only magnification. When a focus error is present (the image plane does not coincide with the detector plane), the coherent image can be found by propagating the field from the position of the ideal image to the position of the detector plane. The aberration of focus can be treated for an interferometer by simply evaluating the change in the phase function as the light propagates from the actual focus to the detector plane.

We have previously developed a simple method of evaluating this propagation using the Talbot effect [6]. Talbot imaging is a well-known effect that causes periodic patterns to be re-imaged by diffraction with a characteristic longitudinal period that varies inversely with both the wavelength and the square of the spatial frequency. Structure in the surface creates a phase distribution that is treated as a superposition of sinusoidal ripples in the phase. For the case where the phase ripples have small magnitude, the use of partial Talbot cycles allows the development of a transfer function that reduces the magnitude of the ripple as a function of spatial frequency and propagation distance. The effect is called “phase smoothing” because the filtering is much more pronounced at higher frequencies.

According to the Talbot relations, if a wavefront with small sinusoidal phase ripples of W waves (2πW << 1) propagates a distance of L in a collimated space, then the magnitude of the ripples will be attenuated to W’. A transfer function is defined as

$$ TF = \frac{W'}{W} = \cos\left(\frac{2\pi L}{z_T}\right) = \cos\left(\frac{\pi\lambda L}{p^2}\right), $$
which can be used to predict the attenuation of high frequency (small scale) ripples due to propagation. The Talbot distance zT = 2p²/λ is defined for collimated light, where p is the period of the sinusoidal pattern and λ is the wavelength.
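As a quick numerical illustration of Eq. (1), the following sketch propagates a weak sinusoidal phase ripple with an FFT-based Fresnel (angular spectrum) kernel and compares the surviving ripple amplitude to cos(πλL/p²). The wavelength, period, and distance are placeholder values.

```python
import numpy as np

wavelength = 0.633e-6    # m (assumed)
period = 1.0e-3          # m, ripple period p
W0 = 0.01                # ripple amplitude in waves (2*pi*W0 << 1)
L = 0.5                  # m, propagation distance in collimated space

# 1-D grid holding an integer number of ripple periods (clean FFT wrap-around)
N = 4096
dx = period / 32
x = (np.arange(N) - N // 2) * dx
field = np.exp(1j * 2 * np.pi * W0 * np.sin(2 * np.pi * x / period))

# Fresnel (paraxial) transfer function applied in the frequency domain
fx = np.fft.fftfreq(N, dx)
H = np.exp(-1j * np.pi * wavelength * L * fx**2)
field_L = np.fft.ifft(np.fft.fft(field) * H)

# Project the propagated phase onto the original ripple to get W'
phase = np.angle(field_L)
W_prop = 2 * np.mean(phase * np.sin(2 * np.pi * x / period)) / (2 * np.pi)

print(W_prop / W0)                                 # numerical transfer function
print(np.cos(np.pi * wavelength * L / period**2))  # Eq. (1): ~0.55 for these values
```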

The diffraction pattern for a converging or diverging spherical wavefront is the same as that observed for a collimated beam, except for two simple effects. Consider a converging spherical wavefront that propagates from position 1 to position 2 as shown in Fig. 2. The transverse dimension of the diffraction pattern is scaled according to the geometry. This effect is handled by normalizing the spatial frequencies to cycles per aperture, which are invariant under propagation. Mathematically,

Fig. 2 Propagation in a converging space is converted to equivalent propagation in a collimated space. The period and amplitude of wavefront ripples at R1 will change due to diffraction as they propagate to R2. The change in the period scales according to geometry. The change in magnitude can be evaluated using Eq. (4), which was also derived and validated in Ref. 6.

$$ \frac{p_1}{a_1} = \frac{p_2}{a_2} \quad \text{and} \quad \frac{a_1}{R_1} = \frac{a_2}{R_2}. $$

The second effect to accommodate the spherical wave propagation utilizes the effective propagation distance Le, defined as equivalent propagation in collimated space.

As shown in Fig. 2, a converging wavefront starting with a radius of curvature R1 propagates to the position where it has a radius of curvature R2. This is equivalent to the propagation from Z1′ to Z2′ in a collimated beam created by an ideal lens, where Z1′ and Z2′ are conjugate to the locations at R1 and R2. The phase and amplitude features in the converging wavefront are faithfully reproduced by the ideal lens at the conjugate positions in the collimated space.

The propagation distance in the collimated space is calculated using first order optics [6] to be

$$ Z_2' - Z_1' = f^2\left(\frac{1}{R_2} - \frac{1}{R_1}\right), $$
and it can be used in Eq. (1) to calculate the transfer function for the wavefront that propagates in a converging or diverging beam. With the use of normalized frequencies, the result is independent of the diameter of the collimated beam and has no dependency on the focal length of the arbitrary lens used to create the equivalent propagation.

The attenuation of the phase ripple with period p1 due to this propagation is

$$ TF = \frac{W'}{W} = \cos\left(\frac{\pi\lambda\,(Z_2' - Z_1')}{(p_1')^2}\right) = \cos\left(\frac{\pi\lambda\, R_1 (R_1 - R_2)}{R_2\, p_1^2}\right) = \cos\left(\frac{\pi\lambda\, L_e\, \nu_n^2}{4 a_1^2}\right), $$
where the equivalent propagation distance Le is defined as R1(R1 − R2)/R2, and νn is the normalized spatial frequency, defined as the number of cycles per diameter at position 1.
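A minimal sketch of Eqs. (3)-(4), with placeholder radii, wavelength, and beam size, showing how the equivalent propagation distance and the resulting transfer function would be evaluated:

```python
import numpy as np

def equivalent_propagation(R1, R2):
    """Equivalent collimated-space propagation distance, Le = R1*(R1 - R2)/R2."""
    return R1 * (R1 - R2) / R2

def talbot_tf(wavelength, Le, nu_n, a1):
    """Eq. (4): attenuation of a ripple with nu_n cycles per diameter,
    where 2*a1 is the beam diameter at position 1 (all lengths in mm)."""
    return np.cos(np.pi * wavelength * Le * nu_n**2 / (4 * a1**2))

# Placeholder geometry: a converging beam propagates from R1 = 1000 mm to R2 = 900 mm
Le = equivalent_propagation(1000.0, 900.0)        # ~111 mm of equivalent propagation
print(Le, talbot_tf(0.633e-3, Le, 100, 50.0))     # TF ~ 0.98 for 100 cycles/diameter
```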

3.2 Edge effects

Edge diffraction from the test surface occurs when its edge is not in focus. The diffraction at the edge of the aperture can be modeled as the Fresnel knife-edge diffraction. The real and imaginary parts of the electric field distribution from an edge can be found by evaluating the Fresnel integrals C(u) and S(u) [13]:

$$ U = \frac{1}{2}\left[ C(u) - C(-\infty) + i\left( S(u) - S(-\infty) \right) \right], $$
where C(−∞) and S(−∞) equal −0.5 and u is dimensionless, equal to x√(2/(λL)) (x is the lateral coordinate in the field). Figure 3 shows the amplitude and phase fluctuations due to the edge diffraction. Both amplitude and phase oscillate rapidly as the distance from the edge becomes large. The diffraction effects scale with √(λL/2), which converts u to the lateral coordinate x, where L is the defocus error in a collimated space and can be replaced by the effective propagation distance Le in a non-collimated space.
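Eq. (5) can be evaluated directly with the Fresnel integrals available in SciPy; the defocus and wavelength below are placeholder values, and the normalization follows Eq. (5) as written above.

```python
import numpy as np
from scipy.special import fresnel

wavelength = 0.633e-6     # m (assumed)
L = 0.5                   # m, defocus (or equivalent propagation distance Le)

x = np.linspace(-2e-3, 5e-3, 2000)            # lateral coordinate across the edge, m
u = x * np.sqrt(2.0 / (wavelength * L))       # dimensionless argument of Eq. (5)

S, C = fresnel(u)                             # note: scipy returns (S(u), C(u))
U = 0.5 * ((C + 0.5) + 1j * (S + 0.5))        # Eq. (5), with C(-inf) = S(-inf) = -0.5

amplitude = np.abs(U)                         # oscillates on the bright side of the edge
phase = np.angle(U)                           # the ripple that shows up in the phase map
```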

Fig. 3 The amplitude and phase variation calculated from the Fresnel integrals for the case of diffraction of collimated light from a knife edge.

The phase smoothing transfer function and the edge effects are evaluated for a defocused image in the interferometer by converting the defocus – the distance from the true focus to the sensor – to an equivalent propagation distance. Defocus that varies across the image, due to tilt in the detector or field curvature from the imaging optics, is evaluated in the same way using the appropriate field dependency for the equivalent propagation distance. The issue becomes more complicated with the presence of astigmatism.

4. Coupling of quadratic imaging aberration with diffraction effects

The evaluation of focus aberration is handled simply using the equivalent propagation distance. Since this aberration is axisymmetric, the transfer function and the edge effects have no dependence on the orientation of the ripples or the edge. The aberration of astigmatism is treated in a similar way, but the orientation of the ripples or the edge with respect to the principal axes of the astigmatism is included.

4.1 Smoothing effect due to field curves and mapping distortion

It is always possible to perform a ray trace analysis of a particular test, including optics in the interferometer, to determine the field curves at the focal plane. But in most cases, the aberrations from the null corrector are much larger than those from the interferometer itself, so a complete ray trace is not required. The field curvatures created by the null corrector, and presented to the interferometer, are determined in the ray trace model by inserting an ideal lens that creates a collimated wavefront and reimages the surface under test, as shown in Fig. 4 . This construction converts the wavefront to a collimated beam so we can use the Talbot effect to predict the wavefront change due to propagation. The ideal lens preserves the field curves and does not introduce any additional imaging aberration. In the optical space reimaged by the ideal lens, the equivalent propagation distance in Eq. (4) comes from the amount of defocus, and the spatial period of the phase ripples must correctly include the magnification of the image.

Fig. 4 An ideal lens can be used to convert the converging/diverging wavefront to a collimated one, and then we can use the Talbot effect to analyze the phase smoothing due to defocus error. (a) diffractive null optics; (b) refractive null optics. Solid line: wavefront model showing that rays leave the test surface in the normal direction and come to an ideal focus after the null optics. Dotted line: imaging model showing that a point on the test surface is imaged through the null optics; the interferometer stop limits the illuminated area on the null optics and the spatial frequencies passed by the system.

The combined quadratic imaging aberrations of focus and astigmatism are quantified using the generalized Coddington relations [14] and are represented using field curves that show the loci of the line focus positions for the two principal axes of the astigmatism. Fundamentally, the field curves show focus error in the image space as functions of field position and orientation. For an axisymmetric system, the two field curves represent locations where the tangential or sagittal rays come to focus. At the tangential focus, ring features on the object (which is the surface under test in interferometry) are in focus. The spoke features are in focus at the sagittal focus. For a non-axisymmetric system, the two principal curves correspond to the focus positions for features along the two principal directions. We lose no generality by naming the principal field curves s and t as long as the true orientation of the principal axes is maintained. Features along an arbitrary direction use a combination of the two principal field curves, as addressed in Section 4.3.

The diffraction effects can be treated at each field point for the two orientations defined by the principal axes of the astigmatism. Features or ripples in the direction aligned with one of the principal axes of the astigmatism will come to a sharp focus at a position defined by the field curves. We define zt(x,y) and zs(x,y) as the geometric difference from the two field curves to the focal plane, which provides an equivalent propagation distance for evaluation of the diffraction effects for these particular features. We evaluate the phase smoothing effect at an arbitrary plane with the spatial frequency normalized to cycles/diameter. The transfer function becomes

$$ TF_t(x,y) = \cos\left[\frac{\pi\lambda\,\nu_n^2}{4a^2}\, m_t(x,y)^2\, z_t(x,y)\right], \qquad TF_s(x,y) = \cos\left[\frac{\pi\lambda\,\nu_n^2}{4a^2}\, m_s(x,y)^2\, z_s(x,y)\right], $$

where

  • zt(x, y) and zs(x, y) give the focus error at a field point (x, y) given by the t and s field curves
  • 2a is the diameter of the surface under test, and
  • mt(x, y) and ms(x, y) are the local magnification for the image in the t and s directions.

The transfer function depends on the field curves and the local magnification at each field point. The magnification varies across the field due to the presence of mapping distortion.

4.2 Edge effect due to field curves and mapping distortion

Because of the presence of field curves, the interferometer cannot focus the whole surface of the optic under test. If the edge of the test optic is not in focus, then the measurement suffers from edge diffraction as discussed in Section 3.2. The mapping distortion also has an impact on the edge diffraction because the edge diffraction pattern calculated using the Fresnel integral has to be morphed to correct the mapping distortion.

The magnitude of the edge diffraction for features that are aligned with one of the principal astigmatism axes follows the same type of relationships given above, where the equivalent propagation distance for any point is the defocus defined by the particular field curve, s or t. The case of general orientation is provided in the next section.

For null testing of typical aspheric surfaces such as telescope mirrors, the non-linear mapping from the null correctors creates distorted imaging where features near the outer edge are proportionally larger. Near the center, the actual features are greatly de-magnified, as shown in Fig. 5. Therefore, for an axisymmetric test optic with a center hole, if the inner edge is in focus, then the diffraction pattern from the outer edge is pushed outward and the edge diffraction gets better. Similarly, if the outer edge is in focus, then the diffraction from the inner edge is pushed toward the outer edge and the edge diffraction gets worse. Clearly, the edge diffraction is reduced by focusing at the inner edge in this case.

Fig. 5 Edge diffraction from the outer/inner edge, showing that the edge diffraction gets better by focusing at the inner edge in this particular case. (a) Mapping distortion from object to image; (b) Focus at inner edge; (c) Focus at outer edge.

4.3 Diffraction effects for features with general orientation

For features along the principal directions, diffraction effects, either phase smoothing or edge diffraction, can be evaluated using the two principal field curves. The principal field curves can be obtained from a lens design software package or calculated using the generalized Coddington equations [14]. For features with general orientation, the diffraction calculation must use a combination of the two field curves. Consider feature variations in a direction oriented at an angle θ to the principal direction s in the local coordinates; then the equivalent propagation distance used for calculating diffraction effects with this orientation is

$$ z(x,y)_\theta = z_s(x,y)\cos^2(\theta) + z_t(x,y)\sin^2(\theta) = \frac{1}{2}\left[ z_s(x,y) + z_t(x,y) \right] + \frac{1}{2}\cos(2\theta)\left[ z_s(x,y) - z_t(x,y) \right], $$
where zt(x,y) and zs(x,y) give the two principal field curves labeled t and s. The local magnification becomes
$$ m(x,y)_\theta = m_s^2(x,y)\cos^2(\theta) + m_t^2(x,y)\sin^2(\theta) = \frac{1}{2}\left[ m_s^2(x,y) + m_t^2(x,y) \right] + \frac{1}{2}\cos(2\theta)\left[ m_s^2(x,y) - m_t^2(x,y) \right], $$
where mt(x,y) and ms(x,y) are the two local magnifications along the principal t and s directions. Here the direction of a feature is defined as the direction normal to the feature itself, as shown in Fig. 6. Figure 6 shows the field curves for an axisymmetric system, whose principal axes are along the tangential and radial directions. If there is a sinusoidal phase ripple across the whole surface, then its angle to the principal axes changes because the sagittal and tangential directions are not constant in Cartesian coordinates, and in fact rotate as a function of position.

Fig. 6 The field curves for an axisymmetric system. The short solid lines in (b) and (c) represent the magnitude and orientation of the two principal axes.

The transfer function for features with general orientation can be calculated with the modified field curve and magnification using Eqs. (5-7). Also, the edge effects for non-axisymmetric optics are treated by defining the direction of the edge with respect to the principal astigmatic axes and applying Eqs. (6-7).

4.4 Validation for diffraction effects for features with general orientation

A computer simulation of a coherent imaging system was developed using the physical optics modeling capabilities in Zemax [15]. A 4f system was set up to evaluate the angular dependence of the field curves. Phase ripples with a magnitude of 0.2 radians P-V and a spatial period of 0.5 mm were placed at the front focal plane of the first lens. The wavelength of the test is 1 µm. Two waves of astigmatism were added to the first lens to introduce some imaging aberrations. A separate Zemax imaging model provides the two field curves zt and zs as planes at −190 mm and 138 mm respectively. A layout of the imaging model and the resulting field curves are shown in Fig. 7.

Fig. 7 The Zemax simulation of a coherent imaging system allows the geometric analysis of the field curves and complete diffraction propagation analysis as the incident phase ripple is rotated away from the principal axes.

A wavefront model was used to simulate the diffraction using the physical optics propagation capabilities in Zemax, which provides an evaluation of the phase ripple magnitude and thus the transfer function. Rotation of the incident phase ripple pattern allows direct evaluation of the dependency of the transfer function on the orientation of the ripples with respect to the principal astigmatism axes. We can also use Eqs. (1) and (7) to calculate the transfer function analytically. Figure 8 shows that the results of this simulation match the analytic prediction using the directional equivalent propagation distance from Eq. (6). A phase ripple with a different spatial frequency will have a different transfer function.
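The analytic curve of Fig. 8 can be reproduced in a few lines: Eq. (6) gives the directional equivalent propagation distance from the two field-curve values, and Eq. (1) then gives the transfer function for each ripple orientation. The sketch below assumes the quoted field-curve values apply directly as collimated-space defocus.

```python
import numpy as np

wavelength = 1.0e-3        # mm, as in the simulation
period = 0.5               # mm, ripple period
z_t, z_s = -190.0, 138.0   # mm, principal field curves from the imaging model

theta = np.linspace(0.0, np.pi / 2, 91)                     # ripple orientation w.r.t. s axis
z_theta = z_s * np.cos(theta)**2 + z_t * np.sin(theta)**2   # Eq. (6)
tf = np.cos(np.pi * wavelength * z_theta / period**2)       # Eq. (1) with L = z_theta

for angle, value in zip(np.degrees(theta[::15]), tf[::15]):
    print(f"{angle:5.1f} deg  TF = {value:+.3f}")
```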

Fig. 8 Comparison of the computer simulation of a coherent imaging system with the theoretical calculation based on Talbot effects. The phase ripple has a spatial period of 0.5 mm and the wavelength of this test is 1 µm. The imaging astigmatism creates two principal focus positions, represented by field curves zt and zs, which are at −190 mm and 138 mm respectively. The analytical transfer function can be calculated using Eqs. (1) and (7). The Zemax diffraction simulation uses the model shown in Fig. 7. The simulation matches the analytical prediction. The Zemax simulation has no data points near where the transfer function is close to zero because of the low signal-to-noise ratio.

Assume that an imaging system has a constant astigmatism across the field, causing the transfer function to have an angular dependence as shown in Fig. 8. The principal directions are constant across the whole field. A circular ring phase ripple, however, has a direction that rotates as a function of position. A 20 nm P-V circular ring phase ripple with a period of 0.5 mm will therefore be filtered by the imaging aberration as shown in Fig. 9.
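A sketch of this filtering, assuming the Fig. 8 field-curve values hold over the whole field and applying the small-ripple transfer function pixel by pixel along the local radial direction (the field size and sampling are arbitrary choices):

```python
import numpy as np

wavelength = 1.0e-3          # mm
period = 0.5                 # mm
z_t, z_s = -190.0, 138.0     # mm, constant over the field (as in Fig. 8)

n, field_size = 512, 20.0    # samples, field width in mm (arbitrary)
y, x = (np.mgrid[0:n, 0:n] - n / 2) * (field_size / n)
r = np.hypot(x, y)
ripple = 10e-6 * np.sin(2 * np.pi * r / period)       # 20 nm P-V ring ripple, in mm

# A ring feature varies along the radial direction; theta is the angle between
# that direction and the s principal axis (taken along x here).
theta = np.arctan2(y, x)
z_theta = z_s * np.cos(theta)**2 + z_t * np.sin(theta)**2   # Eq. (6)
tf = np.cos(np.pi * wavelength * z_theta / period**2)       # Eq. (1)

filtered = ripple * tf       # the smoothed ring ripple seen at the detector
```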

Fig. 9 Simulation of a 20 nm P-V phase ripple with a period of 0.5 mm passing through an imaging system with a transfer function shown in Fig. 8: a) original phase ripple; b) phase ripple after passing the aberrated imaging system. (Note: although the images have the appearance of an interferogram, the gray scale is used here to depict phase, not intensity.)

5. Physical insight into the imaging aberrations

The interesting coupling of imaging aberration with the diffraction effects can be explained with the Talbot diffraction model. The Talbot effect can be derived by decomposing the sinusoidal amplitude or phase ripple into plane waves corresponding to the 0, +1, and −1 diffraction orders, then using geometric propagation to define the phase shift between them. The familiar Talbot relations are found by simply adding the complex amplitudes [6]. The same diffraction effect can be observed by creating the phase shift between these orders using means other than the simple geometric propagation. To demonstrate this, we use an ideal 4f system and add a phase plate at the intermediate wavefront focus to represent the imaging aberrations. The wavefront phase ripple at the front focal plane of the first lens is transformed to three images at the intermediate focus, corresponding to the 0, +1, and −1 diffraction orders. For the case with no aberrations, the phase ripple is correctly imaged at the back focal plane of the second lens with no phase degradation. When a quadratic phase variation is added to the phase plate at the intermediate focus, +1 and −1 orders experience a phase shift with respect to the zero order. The effect is identical to the propagation for the Talbot relations, and can be modeled as equivalent propagation.
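This three-order picture is easy to check numerically: write the weak phase grating as a 0 order plus two first orders, apply the quadratic phase delay to the ±1 orders, and recombine. The recovered ripple amplitude reproduces the cos(πλL/p²) factor of Eq. (1). The values below are placeholders.

```python
import numpy as np

wavelength = 0.633e-6   # m (assumed)
period = 1.0e-3         # m
W = 0.01                # ripple amplitude in waves, 2*pi*W << 1
L = 0.5                 # m, equivalent propagation distance

x = np.linspace(0.0, 4 * period, 2000, endpoint=False)

# Weak-grating decomposition: exp(i*2*pi*W*sin(kx)) ~ 1 + pi*W*(e^{ikx} - e^{-ikx})
k = 2 * np.pi / period
order_0 = np.ones_like(x)
order_p1 = np.pi * W * np.exp(1j * k * x)
order_m1 = -np.pi * W * np.exp(-1j * k * x)

# Quadratic phase delay of the +/-1 orders relative to the 0 order after distance L
delay = np.exp(-1j * np.pi * wavelength * L / period**2)
field = order_0 + delay * (order_p1 + order_m1)

# Ripple amplitude of the recombined field vs. the Talbot prediction of Eq. (1)
W_out = 2 * np.mean(np.angle(field) * np.sin(k * x)) / (2 * np.pi)
print(W_out / W, np.cos(np.pi * wavelength * L / period**2))
```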

Any quadratic aberration can be decomposed into x², y², and 2xy terms, i.e. two cylinder terms and a 45° cross term. In Fig. 10, the sinusoidal phase ripple is along the y direction, so the first and zero orders also focus along the y axis. If we introduce a quadratic phase at the intermediate focus, then only the cylinder in the y direction changes the phase shift between the first and zero orders, creating equivalent propagation and degrading the image (the image plane is shifted while the detector plane remains the same). The other two quadratic terms (cylinder in x and 45° astigmatism) do not introduce any phase shift that would appear as equivalent propagation and degrade the image. For a phase ripple along any direction, the component of quadratic aberration that causes a phase shift between the 0 and +/−1 orders is used to define the equivalent propagation distance. This leads directly to Eq. (6).

Fig. 10 A 4f system is used to show the effects of the quadratic imaging aberration on the phase. The three dots superimposed on the three phase plates are the three diffraction orders (+1, 0, and −1).

In general, the quadratic aberration is not limited to the intermediate focus, but the phase shift between zero and first orders is maintained. An additional effect of anamorphic image distortion is created by this more general case. The result of a Zemax simulation of this general case is shown in Fig. 11 . The three quadratic phase terms are placed one at a time at the first lens, rather than at the intermediate focus. All three terms show anamorphic distortion. The cylinder aligned with the ripples also shows decreased amplitude predicted by the transfer function. The 45° astigmatism shows the predicted rotation of the phase ripples.

Fig. 11 The mapping distortion caused by the three quadratic phase terms. (a) 2 λ P-V Cylinder x; (b) 2 λ P-V Cylinder y; (c) 2 λ P-V Astigmatism 45°.

Note that the 4f model used here is specific, but the physical insight into the coupling of imaging aberrations with diffraction effects is general and can be used to explain the diffraction effects in any real system.

6. Experimental verification

To demonstrate the coupling of imaging aberrations with diffraction effects in interferometers, a hologram was designed and fabricated to test a concave cylindrical surface. The cylindrical surface has a radius of 516.8 mm and is 25.4 mm in diameter. The two principal axes of the cylindrical surface, and thus of the imaging aberrations, are horizontal and vertical, and they are the same over the whole field. The experimental setup is shown in Fig. 12. A 4-inch F/11 transmission sphere provides a spherical wavefront, which passes through the CGH and becomes a cylindrical wavefront that matches the surface under test. Imaging the cylindrical surface through the hologram creates astigmatism in the image, as represented by the field curves shown in Fig. 13.

Fig. 12 Experimental setup: a hologram is used to convert the spherical wavefront from the interferometer to a cylindrical wavefront that matches the cylindrical mirror under test.

Fig. 13 The two field curves are almost parallel because the hologram has power in one direction and no power in the other direction. A paraxial lens with a focal length of 516.8 mm is used to convert the beam to a collimated one.

The field curves, which represent the principal focal surfaces for astigmatism, are simply parallel planes. When the interferometer is set to focus in the vertical direction, the horizontal direction has a focus error of about 2000 mm due to the curvature difference of the hologram in these two directions.

6.1 Phase smoothing

To demonstrate the phase smoothing due to imaging aberrations in the interferometer, ideally a sinusoidal phase plate would be placed on the test surface, allowing observation of the reduced magnitude of the phase ripple through focus. However, it is difficult to make a phase plate with a specific frequency, so we placed a binary amplitude mask in front of the cylindrical surface. As we showed in our previous paper, for a given spatial frequency the transfer function of a sinusoidal amplitude pattern is the same as that of a sinusoidal phase pattern [6]. Instead of looking at the phase change through focus, we look at the change in the intensity pattern on the detector inside the interferometer.

One of the amplitude masks is designed to have a period that gives a reverse-contrast intensity pattern at one principal focus when the period is along the other principal direction. Figure 14 shows the intensity pattern on the interferometer detector at different focus planes. A rectangular opaque mask (shown as a red rectangle in the image) was placed in front of the amplitude mask to help visualize the reverse contrast at the horizontal focus. Note that the intensity fringes at both sides of the red rectangle change from bright to dark as we change the focus from vertical to horizontal, which agrees with our design. At the intermediate focus, the image becomes blurry and has almost no contrast.
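For reference, a hedged estimate of the required mask period, assuming a HeNe wavelength and taking the ~2000 mm focus error quoted above as the equivalent propagation distance: the fundamental of the pattern reverses contrast when the argument of Eq. (1) reaches π, i.e. when p = √(λL).

```python
import numpy as np

wavelength = 0.633e-3      # mm, assumed HeNe test wavelength
defocus = 2000.0           # mm, separation of the two principal foci quoted above

period = np.sqrt(wavelength * defocus)    # cos(pi*lambda*L/p^2) = -1  ->  p = sqrt(lambda*L)
print(f"Mask period for reversed contrast: {period:.2f} mm")   # ~1.1 mm
```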

Fig. 14 Images of a binary mask with the interferometer focused at different planes for the CGH test of the cylinder: a) at the vertical focus, showing sharp contrast; b) at the intermediate focus, where the contrast is predicted to go to zero; c) at the horizontal focus, where the transfer function predicts high contrast, but with a phase reversal. A rectangular opaque mask (shown as a red rectangle in the image) was added in front of the amplitude mask to help visualize the reverse contrast at the horizontal focus.

6.2 Edge effect

Figure 15 shows the intensity pattern of a circular aperture, which is placed in front of the cylindrical mirror. As we change the focus from horizontal to vertical, strong edge diffraction appears at different locations.

Fig. 15 Images of a circular aperture stop at different focal planes. Red arrows indicate the positions where diffraction occurs strongly: a) at the horizontal focus; b) at the intermediate focus; c) at the vertical focus.

At the horizontal focus, the top and bottom edges are sharp while the left and right edges show strong edge diffraction. At the intermediate focus, the two principal field curves have the same magnitude but opposite signs, so the edges at 45° are in focus, which is clearly shown in Fig. 15(b). A close-up view of diffraction from a vertical knife edge is shown in Fig. 16.

Fig. 16 A close-up phase map due to diffraction from a vertical knife edge around x = 0 mm.

A profile of measured edge diffraction is compared with a calculation using the equivalent propagation distance from Eq. (6) in Fig. 17. Although there are clearly some limitations with the measurement, most notably the ability to measure high spatial frequencies, the character and scale of the observed diffraction effects match well with the predictions.

Fig. 17 A line profile of edge diffraction from a vertical knife edge around x = 0 mm. The experiment is in good agreement with the theoretical calculation.

7. Conclusion

The use of interferometry with refractive or diffractive null correctors allows high accuracy for measuring aspheric surfaces. However, the null correctors also introduce imaging aberrations, such as field curvature and mapping distortion. The coupling of these imaging aberrations with diffraction effects causes high frequency features to be smoothed and creates artifacts at edges. One should therefore be cautious when measuring high spatial frequency errors or when high resolution is needed near the edge. We present and demonstrate a methodology for analyzing these effects using the Talbot effect and Fresnel diffraction.

References and links

1. D. Malacara, Optical Shop Testing, 3rd ed. (Wiley, 2007).

2. J. M. Sasian, "Design of null lens correctors for the testing of astronomical optics," Opt. Eng. 27, 1051 (1988).

3. J. H. Burge, "Applications of computer-generated holograms for interferometric measurement of large aspheric optics," Proc. SPIE 2576, 258–269 (1995).

4. C. Zhao and J. H. Burge, "Imaging aberrations from null correctors," Proc. SPIE 6723, 67230L (2007).

5. M. Novak, C. Zhao, and J. H. Burge, "Distortion mapping correction in aspheric null testing," Proc. SPIE 7063, 706313 (2008).

6. P. Zhou and J. H. Burge, "Analysis of wavefront propagation using the Talbot effect," Appl. Opt. 49(28), 5351–5359 (2010).

7. P. Zhou and J. H. Burge, "Diffraction effects in interferometry," in Optical Fabrication and Testing, OSA Technical Digest (CD) (Optical Society of America, 2010), paper OMA3.

8. P. Zhou, J. H. Burge, and C. Zhao, "Imaging issues for interferometric measurement of aspheric surfaces using CGH null correctors," Proc. SPIE 7790, 77900L (2010).

9. J. H. Burge, C. Zhao, and P. Zhou, "Imaging issues for interferometry with CGH null correctors," Proc. SPIE 7739, 77390T (2010).

10. E. Novak, C. Ai, and J. C. Wyant, "Transfer function characterization of laser Fizeau interferometer for high spatial frequency phase measurements," Proc. SPIE 3134, 114–121 (1997).

11. L. A. Selberg, "Interferometer accuracy and precision," Proc. SPIE 1400, 24–32 (1990).

12. P. E. Murphy, T. G. Brown, and D. T. Moore, "Measurement and calibration of interferometric imaging aberrations," Appl. Opt. 39(34), 6421–6429 (2000).

13. J. Goodman, Introduction to Fourier Optics (Roberts and Company, 2005), pp. 88–91.

14. C. Zhao and J. H. Burge, "Generalization of the Coddington equations to include hybrid diffractive surfaces," Proc. SPIE 7652, 76522U (2010).

15. Zemax, "Design tools," http://www.zemax.com/.
