
Superresolution imaging via ptychography

Open Access

Abstract

Coherent diffractive imaging of objects is made considerably more practicable by using ptychography, where a set of diffraction patterns replaces a single measurement and introduces a high degree of redundancy into the recorded data. Here we demonstrate that this redundancy allows diffraction patterns to be extrapolated beyond the aperture of the recording device, leading to superresolved images, improving the limit on the finest feature separation by more than a factor of 3.

© 2011 Optical Society of America

1. INTRODUCTION

Ptychography is a form of coherent diffractive imaging (CDI, the process of recovering an image of a specimen from diffraction data) in which a specimen is stepped through a localized coherent “probe” wavefront, generating a series of diffraction patterns at the plane of a detector [1]. By stepping the specimen such that the illuminated area at each position overlaps with its neighbors, redundancy is introduced into ptychographical data that can be exploited during the reconstruction of an image. Although the principle of ptychography was first discussed in the late 1960s, indicating that this type of data could provide a solution to the phase-retrieval problem, it was many years before a computational inversion process (in this case, a form of deconvolution) was suggested and implemented for light and x rays (for reviews of this early work see [1, 2]). A variation of ptychography was first demonstrated using high-energy electrons at subnanometer resolution for the simplified case of a crystalline object [3]. Since then, much faster and more robust iterative methods for solving this type of phase problem have been developed [4, 5, 6, 7, 8]. Iterative phase-retrieval ptychography was first demonstrated experimentally with visible light [9], then with hard x rays [10], and, recently, with electrons [11]. It has been implemented extensively at third-generation x-ray sources, where it has become a valuable research tool (see, for example, [12, 13]). But often in ptychographical experiments, where the illuminated areas of the specimen overlap by around 70%, redundancy in the recorded data is under-utilized by existing reconstruction algorithms. We make better use of it here by extrapolating each diffraction pattern beyond the aperture of the detector to provide greatly improved image resolution.
One use of our method at visible light wavelengths is to provide long working distances while retaining a high numerical aperture (NA); in our experiments we have achieved a resolution of 406 line pairs per millimeter (lp/mm) at a working distance of 95 mm, more than tripling the effective NA of the detector.

Three mechanisms, applicable to ptychography and already used to enhance resolution in other imaging modalities, suggest that ptychographic superresolution may meet with a degree of success:

  1. The “synthetic aperture”: in digital Fourier holography, a synthetic aperture having an effective spatial cutoff frequency higher than the cutoff of the optical system can be realized by recording a series of Fourier holograms, each corresponding to illumination of the specimen by a plane wave incident at a different angle [14]. Each illumination condition allows a different range of scattering angles to pass through the optical system, and can be considered to translate different areas of a much larger “synthetic” hologram onto the area of the detector. The recorded data can be combined to recreate this larger hologram, whose Fourier transform produces a superresolved image of the specimen. Similar ideas are used for tilt series imaging in electron microscopy [15] and to obtain superresolution using sinusoidally structured illumination in conventional white-light microscopy [16]. A roughly analogous relationship exists in ptychography, where the probe in a ptychographic experiment can be considered an amalgamation of localized phase gradients, each approximating a plane wave incident at a different angle. A lateral translation causes a given region of the specimen to be illuminated by a different phase gradient, resulting in a different part of its scattering cross section being directed onto the detector and contributing to the recorded data. This synthetic aperture effect is the primary contributor to the success of our experiments and we exploit it to the full by introducing a diffuser into our experimental setup to broaden the spatial frequency content of the illuminating probe.
  2. Analytic continuation: it has long been known that, in theory at least, a measurement of a finite object’s spatial frequency spectrum over a given area can be extrapolated beyond this range by analytic continuation [17]. According to this theory, a measured diffraction pattern can be considered as the complete spatial frequency spectrum of the specimen, convolved with the Fourier transform of the optical system’s exit pupil and multiplied by the aperture of the detector. Since the exit pupil is of finite extent, its Fourier transform is not band limited; the convolution operation then ensures that data in the recorded region of the diffraction pattern contain information from the entire spectrum of the specimen. Extrapolation of this sort is known to be inherently ill conditioned, susceptible to failure given only minute levels of noise or distortion [18]; nevertheless, Gerchberg proposed a method of exploiting this property in CDI to iteratively retrieve unmeasured higher spatial frequencies [19]. In practice, Gerchberg’s method cannot extrapolate more than a couple of pixels beyond the recorded part of the spectrum. We greatly improve upon this limit here by employing ptychography to extrapolate diffraction patterns out to as much as 4 times the extent of the measured data, and by using the diffuser to strengthen the influence of the unrecorded region of the diffraction pattern on the recorded part.
  3. Subpixel shifting: in conventional imaging, improved resolution can be achieved using a series of images of a static specimen that are laterally offset by a noninteger number of pixels [20] (a technique that might better be described as de-aliasing rather than superresolution). Although not strictly analogous, the specimen in a ptychographic experiment is translated in just such a manner, and so it is reasonable to expect correct (and fractional) encoding of the specimen or probe movements in the reconstruction algorithm to also enhance resolution. Below, we will see that convergence of our superresolution algorithm depends on the use of these subpixel shifts.
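The analytic continuation mechanism of item 2 can be made concrete with a minimal one-dimensional Gerchberg-style loop (a hypothetical toy example of our own, assuming numpy, not code from this work): alternate between enforcing a known object support in real space and re-imposing the measured low-frequency band in Fourier space. Consistent with the ill conditioning noted above, the error decreases but recovery of the unmeasured frequencies is only partial.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
support = np.zeros(n, dtype=bool)
support[96:160] = True                     # known finite object support

obj = np.zeros(n)
obj[96:160] = rng.random(64)               # toy specimen
spectrum = np.fft.fft(obj)                 # full (unknown) spectrum

freqs = np.arange(n)
band = (freqs < 40) | (freqs >= n - 40)    # low frequencies actually "recorded"

est = np.where(band, spectrum, 0)          # start from the measured band alone
for _ in range(500):
    g = np.fft.ifft(est).real
    g[~support] = 0                        # impose the real-space support
    est = np.fft.fft(g)
    est[band] = spectrum[band]             # re-impose the measured frequencies

err = np.linalg.norm(est - spectrum) / np.linalg.norm(spectrum)
print(f"relative error of the extrapolated spectrum: {err:.3f}")
```

Each pass projects onto one of two constraint sets, so the distance to the true spectrum never increases; the slow, incomplete convergence of this toy loop mirrors the limitation of Gerchberg's method described above.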

This paper explores the possibilities of ptychographic superresolution at optical wavelengths using an experimental geometry inspired by the list above. Section 2 explains the reconstruction process and the modifications we have made to an existing ptychographical algorithm to incorporate superresolution. Section 3 details the experimental setup and the data collection process, the results from which are presented in Section 4. In Section 5, a preliminary assessment of the limits of the superresolution method is presented before concluding.

2. RECONSTRUCTION PROCESS

Our superresolution algorithm is a modification of the extended ptychographical iterative engine (ePIE) [8], a reconstruction algorithm able to recover from a set of ptychographical data both the complex-valued transmission function of the specimen and the complex-valued illuminating probe wavefront. The superresolution ptychographical iterative engine (SR-PIE) also produces specimen and probe reconstructions and uses identical data, but, in addition, attempts to significantly improve their resolution by extrapolating the recorded diffraction patterns beyond the aperture of the detector. The required inputs to the SR-PIE are:

  • a set of J diffracted intensities, I_j(u), recorded by a detector of M×N pixels on a pitch Δp. Here u = [u, v] is a coordinate vector addressing the pixels of each recording and j = 1, 2, …, J. Our goal will be to recover the data that would have been captured were we to have a detector c times larger, spanning cM×cN pixels on the same Δp pitch;
  • rough initial guesses of the probe and specimen, P_0(r) and O_0(r), where r = [x, y] is a coordinate vector addressing the pixels of the reconstruction, which are set on a pitch of
    [\Delta_x, \Delta_y] = \frac{\lambda z}{\Delta p}\left[\frac{1}{cM}, \frac{1}{cN}\right].
    Here λ is the illumination wavelength and z is the distance between the specimen and the detector (see Fig. 2). P_0(r) will span cM×cN pixels, but O_0(r) will be somewhat larger to allow for the specimen translations. For example, the experimental results in Figs. 8, 10 used c = 4, giving a 512×512 pixel “virtual” detector extrapolated from the 128×128 pixel measured data, with the resulting images spanning 1088×1088 pixels; and
  • the J measured positions of the specimen, R_j = [R_{x,j}, R_{y,j}]. Since it is unlikely that these positions will fall exactly at integer values of the pixel pitch in the reconstruction, when converted into this form they will consist of an integer pixel shift, p_j = [p_{x,j}, p_{y,j}], plus a fractional pixel shift, q_j = [q_{x,j}, q_{y,j}], so that
    R_j = \left[\Delta_x\,(p_{x,j} + q_{x,j}),\ \Delta_y\,(p_{y,j} + q_{y,j})\right].
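As a numerical check on the reconstruction pixel pitch defined above, the following sketch evaluates Δx for the dataset 2 parameters quoted later in the text (the 118.4 μm effective pitch assumes the 2048-pixel detector output was binned 16× down to 128 pixels, which is our reading of the down-sampling described in Section 3):

```python
# Reconstruction pixel pitch: [dx, dy] = (lambda*z/dp) * [1/(c*M), 1/(c*N)]
wavelength = 675e-9             # m, diode laser wavelength
z = 94.4e-3                     # m, specimen-to-detector distance (dataset 2)
dp = (2048 / 128) * 7.4e-6      # m, effective detector pitch after binning
c, M, N = 4, 128, 128           # extrapolation factor and detector pixels

dx = wavelength * z / (dp * c * M)
print(f"effective detector pitch: {dp * 1e6:.1f} um")    # 118.4 um
print(f"reconstruction pixel pitch: {dx * 1e6:.2f} um")  # ~1.05 um
```

Note that 512 of these effective detector pixels span about 60.6 mm, consistent with the virtual detector size quoted in Section 4.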

In a single iteration of the SR-PIE, these inputs are used to update images of the probe and specimen a number of times equal to the number of recorded diffraction patterns. We will follow the progress of one of these updates, forming the jth estimates from the (j−1)th, as illustrated in Fig. 1. The diffraction patterns are addressed in a random sequence, s(j): the first update step uses diffraction pattern s(1), the next s(2), and so on. To carry out the update, a cM×cN pixel region, denoted o_{j−1}(r), whose central pixel is at [p_{x,s(j)}, p_{y,s(j)}], is extracted from O_{j−1}(r). The probe estimate is subpixel shifted by q_{s(j)} and multiplied by o_{j−1}(r) to form an exit wave, ψ_j(r). To implement the fractional shift, P_{j−1}(r) is Fourier transformed and the result multiplied by a linear phase ramp whose phase is calculated according to

\phi_j(u) = 2\pi\left(\frac{q_{x,s(j)}\,u}{cM} + \frac{q_{y,s(j)}\,v}{cN}\right).
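In implementation terms, this phase-ramp shift can be sketched as follows (assuming numpy; a simplified illustration rather than the published code, with a sign convention chosen so that positive q shifts the field forward):

```python
import numpy as np

def subpixel_shift(field, qx, qy):
    """Shift a complex 2-D field by a fractional number of pixels (qx, qy)
    by applying a linear phase ramp to its Fourier transform."""
    cM, cN = field.shape
    u = np.fft.fftfreq(cM)[:, None]    # frequency coordinate u/cM
    v = np.fft.fftfreq(cN)[None, :]    # frequency coordinate v/cN
    ramp = np.exp(-2j * np.pi * (qx * u + qy * v))
    return np.fft.ifft2(np.fft.fft2(field) * ramp)
```

For integer shifts the operation reduces to an exact circular shift, which gives a quick sanity check; for fractional shifts it performs the band-limited interpolation that the algorithm relies on.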

Next, ψ_j(r) is Fourier transformed to give Ψ_j(u), an estimate of the wavefront at the plane of the detector that resulted in the recorded intensity I_{s(j)}(u). Ψ_j(u) extends over cM×cN pixels, with the central M×N pixels corresponding to the area of the detector. The moduli of the pixels in this region are replaced by the square root of the recorded data, √I_{s(j)}(u), while their phases are retained. The modulus and phase of the remaining pixels are left unchanged, as described by Gerchberg. However, in ptychography at least, without an additional constraint the intensity at the edges of the extrapolated diffraction patterns tends to build up as the reconstruction progresses, causing an encroachment of Fourier repeats and introducing noise into the reconstructed images (an effect illustrated in Section 4). To counter this, the additional step of forcing the border of Ψ_j(u) to zero ensures that each Fourier repeat also falls to zero at its extremities. The width of this border is nominally a single pixel, but it can be increased to reduce high-frequency noise in the reconstruction at the expense of resolution; the border can also be tapered in a fashion similar to that reported by Guizar-Sicairos and Fienup in a slightly different context [18].
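The Fourier-domain constraint just described might be sketched like this (assuming numpy and even detector dimensions; a simplified sketch of the step, not the published implementation):

```python
import numpy as np

def fourier_update(psi_f, measured_I, border=1):
    """Replace the moduli of the central M x N pixels of the cM x cN
    wavefront estimate with the square root of the recorded intensity,
    keep their phases, leave the extrapolated pixels unchanged, and
    clamp an outer border to zero to suppress the highest frequencies."""
    cM, cN = psi_f.shape
    M, N = measured_I.shape
    r0, c0 = (cM - M) // 2, (cN - N) // 2
    out = psi_f.copy()
    centre = out[r0:r0 + M, c0:c0 + N]
    out[r0:r0 + M, c0:c0 + N] = np.sqrt(measured_I) * np.exp(1j * np.angle(centre))
    out[:border, :] = 0          # zero the outer border so each Fourier
    out[-border:, :] = 0         # repeat falls to zero at its extremities
    out[:, :border] = 0
    out[:, -border:] = 0
    return out
```

Widening `border` trades resolution for reduced high-frequency noise, as described above.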

The revised version of Ψ_j(u) is inverse Fourier transformed to produce an updated exit wave, ψ′_j(r). New specimen and probe estimates are then calculated according to the two update functions:

o_j(r) = o_{j-1}(r) + \frac{P_{j-1}^{*}(r - q_{s(j)})}{\left|P_{j-1}(r - q_{s(j)})\right|_{\max}^{2}}\,\bigl(\psi_j'(r) - \psi_j(r)\bigr),
P_j(r - q_{s(j)}) = P_{j-1}(r - q_{s(j)}) + \frac{o_{j-1}^{*}(r)}{\left|o_{j-1}(r)\right|_{\max}^{2}}\,\bigl(\psi_j'(r) - \psi_j(r)\bigr).
To complete the update, the region of O_{j−1}(r) from which o_{j−1}(r) was extracted is replaced by o_j(r), and the probe estimate is recentered by applying the conjugate of the phase ramp ϕ_j(u) to its Fourier transform. The undoing of the previous subpixel shift and the application of the next can be combined into a single step by adding the two appropriate phase ramps, minimizing the number of fast Fourier transforms (FFTs) and inverse FFTs (IFFTs) needed for each iteration of the algorithm. This is the practical basis for subpixel shifting the probe rather than the extracted region of the specimen estimate, a choice supported by the fact that the probe is band limited, so its interpolation is likely to be more accurate [21].
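The two update functions above can be sketched as follows (assuming numpy; variable names are our own, and the update step constants often used in ePIE have been set to 1, matching the equations as written):

```python
import numpy as np

def epie_update(obj_patch, probe, psi, psi_prime):
    """Sketch of the two SR-PIE update functions.
    obj_patch : o_{j-1}(r), the extracted region of the specimen estimate
    probe     : P_{j-1}(r - q_s(j)), the subpixel-shifted probe estimate
    psi       : exit wave before the Fourier-domain update
    psi_prime : exit wave after the Fourier-domain update"""
    diff = psi_prime - psi
    new_obj = obj_patch + np.conj(probe) / np.max(np.abs(probe)) ** 2 * diff
    new_probe = probe + np.conj(obj_patch) / np.max(np.abs(obj_patch)) ** 2 * diff
    return new_obj, new_probe
```

When the Fourier update leaves the exit wave unchanged, both estimates are left untouched, so the update is a fixed point at a consistent solution.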

The process described in Fig. 1 is repeated until O_J(r) and P_J(r) have been calculated, completing a single iteration of the SR-PIE. The next iteration can then begin, using O_J(r) and P_J(r) as the initial specimen and probe estimates and a fresh random sequence to address the diffraction patterns.

3. EXPERIMENT

Figure 2 shows the experimental setup used to collect ptychographical data. The expanded and collimated beam from a fiber-coupled 675 nm diode laser was used as the source of illumination. The probe was formed using two doublet lenses of 3 cm focal length in a 4f configuration to image a 100 μm pinhole, covered by a diffuser, onto the specimen. Layers of a thin plastic film were used as the diffuser. The strength of the diffuser could be increased by adding layers of film: a single layer produced a moderate effect, such that the intensity of each diffraction pattern fell to a low value within the area of the detector [Fig. 3a], while adding a second layer produced highly diffuse diffraction patterns whose speckles remained almost uniform in intensity across the detector area [Fig. 3b]. Specimens were mounted on an x/y stage with a specified practical resolution of 0.1 μm and bidirectional repeatability of 0.3 μm. Each ptychographical scan consisted of 400 diffraction patterns collected from a grid of 20×20 specimen positions on a nominal pitch of 30 μm, with the addition of a ±5 μm random x/y offset to avoid the so-called “raster grid pathology” [22]. The detector was an AVT Pike F421B 16-bit CCD with 2048×2048 pixels on a 7.4 μm pitch, the output of which was down-sampled after collection to 128×128 pixels. The use of a diffuser conferred the appreciable advantage of being able to capture diffraction patterns in a single CCD exposure [23]. The patterns nevertheless contained significant readout noise, artifacts generated by dust on the sensor and probe-forming optics, and reflections, primarily from the chrome surface of the resolution target used as a specimen. The robustness of the SR-PIE to this noise is remarkable given the sensitivity of Gerchberg’s original method.

We have obtained optimal results from the SR-PIE when the recorded diffraction patterns consist of uniform speckle that decays to zero within the larger area of the virtual detector. This ensures consistency of the real diffracted intensity with the high spatial frequency suppression carried out during the algorithm’s Fourier update step. For this reason, two ptychographical scans were carried out using a positive chrome-on-glass resolution target as a test specimen: one for evaluation purposes and a second to demonstrate a large NA at a long working distance. In the evaluation scan (referred to as dataset 1), the weak diffuser was used and z was set at 86 mm, producing diffraction patterns such as the example shown in Fig. 3a. The central 32×32 pixels of these diffraction patterns (with approximately uniform speckle intensity) were input to the SR-PIE, which then attempted to recover the remaining portion of the recorded data (which fell to a low intensity at the edge of the detector). The extrapolated and recorded diffraction patterns could then be compared to assess the performance of the algorithm. In the high-NA scan (dataset 2), the system was set up using the strong diffuser, such that the resulting diffraction patterns exhibited approximately uniform speckle intensity across the entire area of the detector [Fig. 3b]. A value of z = 94.4 mm then ensured that this intensity dropped to a low level at the perimeter of the extrapolation area, since the NA of the probe-forming optics fell between the edges of the true and virtual detectors. Further scans were subsequently carried out using this setup: first, to demonstrate our method for less strongly diffracting specimens by replacing the resolution target with a prepared microscope slide containing lily pollen grains, and second, to verify the predictions made by a preliminary theory concerning the fundamental limits of the technique (presented in Section 5).

4. RESULTS

To provide a reference from which to assess the reconstructions produced by the SR-PIE, 100 iterations of the conventional ePIE were carried out using the full 128×128 pixel extent of the diffraction patterns comprising dataset 1. Free space was used as the initial estimate of the specimen and an aperture of 100 μm diameter as the initial estimate of the probe. Figure 4a shows the modulus of the resulting reconstruction; this would be the result were the SR-PIE to achieve a perfect extrapolation of the central 32×32 pixels of the diffraction patterns. In this figure and in the reconstructions appearing in subsequent figures, a 350 μm square crop from the center of the full 1 mm square image is shown. To provide initial inputs to the SR-PIE, a second ePIE reconstruction was carried out, this time using only the central 32×32 pixels of the recorded data. Figure 4b shows a crop from the modulus of this reconstruction; this would be the approximate result were no accurate extrapolation realized by the SR-PIE. The image appears noisy because the intensity remains high at the edges of the diffraction patterns, resulting in aliasing problems [18]; however, the finest resolved features here are in group 5, element 2 (36 lp/mm), which agrees with the spatial frequency at the edge of the 32×32 pixel diffraction patterns (33 lp/mm), given the offset to the centers of the bar features that generally afflicts coherent imaging [24].

The low-resolution specimen estimate of Fig. 4b, together with the low-resolution probe reconstruction also generated by the ePIE, were up-sampled by 4 times using a bicubic interpolator and provided as the O_0(r) and P_0(r) inputs to the SR-PIE. Four versions of the algorithm were tested over 1000 iterations. In the first, neither subpixel shifting of the probe nor suppression of high spatial frequencies in the Fourier update step was implemented, resulting in the reconstructed modulus shown in Fig. 4c. Although the resolution here is clearly much improved over Fig. 4b, the image is degraded by a high level of background noise due to the detrimental influence of Fourier repeats. In the second version of the SR-PIE, subpixel shifting was introduced, producing the modulus shown in Fig. 4d, where a small increase in resolution is evident but a high level of random noise has been retained. In the third version, the subpixel shifting was deactivated, but the high spatial frequency suppression was included, with a single-pixel border of the extrapolated diffraction patterns clamped at zero. This produced Fig. 4e, where resolution has been reduced slightly from Fig. 4d, but background noise has also been considerably reduced. In the final implementation, both subpixel shifts and high spatial frequency suppression were included, leading to the low background noise and high resolution shown in Fig. 4f. In each of Figs. 4c, 4d, 4e, 4f, the resolution has been increased by a considerable margin: at least 2.24 times, from 36 to 80.6 lp/mm (group 6, element 3), or 0.65× the resolution achieved in Fig. 4a.

The performance of each version of the algorithm was quantified using the error metric:

E = \frac{\sum_{j=1}^{J}\sum_{u} S_{dp}(u)\,\bigl(\sqrt{I_{s(j)}(u)} - \left|\Psi_j(u)\right|\bigr)^{2}}{\sum_{j=1}^{J}\sum_{u} S_{dp}(u)\, I_{s(j)}(u)},
where S_{dp}(u) is 1 in the extrapolated region of the diffraction patterns and 0 on their constrained border and in the area where measured data were used. Figure 5 plots the evolution of E over the 1000 iterations of each algorithm implementation. Only with the inclusion of both the high spatial frequency constraint and subpixel shifting of the probe does E converge. In each of the alternative cases, the error begins to diverge after around 100 iterations. Note, however, that the smallest error is achieved when the high spatial frequency suppression is omitted from the algorithm, although the error rapidly increases after this minimum. We have observed that, over several thousand iterations, the fully implemented SR-PIE converges for every dataset we have collected, but a mathematical proof of its behavior is still required to confirm our experimental findings.
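A sketch of this error metric follows (assuming numpy; the array layout is our own choice, with the recorded full-detector patterns supplying the reference values in the extrapolated region):

```python
import numpy as np

def extrapolation_error(measured_I, psi_estimates, mask):
    """Sketch of the metric E: compare recorded and extrapolated moduli
    only where mask == 1 (the unconstrained, extrapolated region).
    measured_I, psi_estimates : arrays of shape (J, cM, cN)
    mask                      : array of shape (cM, cN), values 0 or 1"""
    num = np.sum(mask * (np.sqrt(measured_I) - np.abs(psi_estimates)) ** 2)
    den = np.sum(mask * measured_I)
    return num / den
```

E falls to zero only when the extrapolated moduli match the recorded data everywhere the mask is nonzero, so plotting it against iteration number reveals the convergence or divergence behavior described above.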

The additional features introduced into the ePIE have, then, resulted in an apparently stable and robust algorithm that considerably improves image resolution. Concentrating on this fully implemented algorithm, Fig. 6 provides further detail of its performance. Figures 6a, 6b give a visual comparison of a randomly chosen recorded diffraction pattern (its square root is shown) and the modulus of the corresponding pattern extrapolated by the SR-PIE. The white square demarcates the 32×32 pixel region of the measured data used in the reconstruction. There is a clear correlation between the speckle structure extrapolated by the SR-PIE and the actual diffracted intensity, but the intensity of the recovered diffraction pattern has a more rapid radial decay than the measured data. This is attributable to the outer-border suppression enacted by the algorithm, combined with the requirement for smoothness in the Fourier transform of a band-limited function.

Further detail of the SR-PIE’s performance was gained using the following error metric, which compares the recorded and extrapolated diffraction patterns at each pixel location:

E_{\Psi}(u) = \frac{\sum_{j=1}^{J}\bigl(\sqrt{I_{s(j)}(u)} - \left|\Psi_j(u)\right|\bigr)^{2}}{\sum_{j=1}^{J} I_{s(j)}(u)}.
This metric is plotted in Fig. 6c. The circle represents the spatial frequency of group 6, element 3 of the resolution target, the finest well-resolved feature in Fig. 4f. Figure 6d shows a radial average of Fig. 6c, with the dashed line representing the extent of the measured data used in the reconstruction. The smear of high-error pixels seen in Fig. 6c is a common feature of our experiments that we attribute to inaccuracies in the measurement of specimen positions.

Having investigated the performance of the SR-PIE using dataset 1, dataset 2 was used to attempt extrapolation beyond the extent of the detector and realize a high NA at a 95 mm working distance. A conventional ePIE reconstruction using the full extent of the diffraction patterns was up-sampled and used as the seed input to the SR-PIE. Figures 7a, 7b show the modulus and phase of the seed probe estimate, and Fig. 8a shows the modulus of the seed specimen estimate. The specimen reconstruction is again noisy because the diffraction patterns have significant intensity at their edges, but the finest resolved features in Fig. 8a belong to group 7, element 1 (128 lp/mm), in line with the highest spatial frequency captured by the detector. This figure can be compared with the resolution obtained in Fig. 8b, showing the modulus of the image recovered after 150 iterations of the SR-PIE. A single-pixel border of the extrapolated patterns was clamped at zero for this reconstruction, which used c = 4 and so extrapolated the diffraction patterns from 128×128 to 512×512 pixels. In physical terms, this equates to a 60.6 mm square virtual detector. The inset of Fig. 8b shows that element 5 of group 8 of the target, whose features are at 406 lp/mm, is resolved, giving a resolution gain of 3.17×; it should be noted, however, that the resolution can be increased further by reducing the illuminating wavelength. The superresolved reconstruction of the probe shown in Figs. 7c, 7d is consistent with a slightly defocused and aberrated image of the pinhole and diffuser. In fact, the probe reconstruction can be backpropagated and a spherical aberration term removed to show a reasonably sharp-edged pinhole.

Figure 9 provides further details of the superresolved image. Figure 9a is an example of the extrapolated diffraction patterns the SR-PIE produces (the modulus is shown here), where the square indicates the extent of the detector. An interesting possibility is that the data shown here are not those that would have been recorded by a flat detector equal in size to the extrapolated diffraction patterns. This could be the case if the real detector falls in the Fresnel zone of diffraction, where wavefront curvature is accurately approximated by a parabolic phase, but the virtual detector extends beyond this region, where a Fourier transform relationship exists between the specimen plane and a spherical shell of radius z [25]. This may mean that the SR-PIE solves for the intensity that would have been recorded on a curved detector array, although we have not investigated this idea fully. Figure 9b plots on a log scale the power spectrum of the recovered image, where the circle represents the spatial frequency of group 8, element 5 of the resolution target. The power in the higher diffraction orders here is minute: a diffraction peak near the plotted circle is approximately 10^8 times less intense than the zeroth order. This lends credence to our assertion that accurate extrapolation of the diffraction patterns is made possible by the synthetic aperture effect discussed in the introduction, and not by the convolution argument on which Gerchberg’s method relies.

While a resolution target is a good way to quantify the various aspects of our method, using it as the sole specimen in these experiments could be somewhat misleading, since it diffracts strongly and into distinct orders. A further experiment was therefore undertaken using an identical configuration to that of dataset 2 and a more representative specimen: a sample of lily pollen mounted on a microscope slide. Figures 10a, 10b show a crop from the modulus and phase, respectively, of a conventional ePIE reconstruction carried out on the pollen data. These initial reconstructions were up-sampled and used as inputs to the SR-PIE, again using a value c = 4. A first reconstruction contained high-frequency noise, especially evident in the featureless regions of the specimen and attributable to the very weak scatter of this sample to higher diffraction angles. To counteract this, the constrained region of each extrapolated diffraction pattern was extended from a single-pixel border to one of 16 pixels, which resulted in the images in Figs. 10c, 10d. The structure of the exine layer, the tough outer shell that protects the lily pollen as it passes through the anther, has been revealed in both the modulus and phase of the superresolved reconstruction, neither of which has suffered an appreciable increase in background noise. It is difficult to estimate the resolution of these images accurately, but the spars visible in the exine layer of the pollen grains are roughly 5 μm apart.

5. DISCUSSION AND CONCLUSIONS

It is clear from the results presented above that ptychographic data encodes a great deal of untapped information—but how much? We offer here a preliminary commentary on the degree of this redundancy.

In conventional CDI, where a single diffraction pattern is recorded and the additional constraint of a known specimen support conditions the phase-retrieval process, a minimum degree of redundancy is ensured provided the over-sampling ratio

\sigma = \frac{\text{number of pixels in diffraction pattern}}{\text{number of pixels in support}}
is greater than 2 [26]. This metric can be misleading, since a diffraction pattern sampled well above the Nyquist sampling rate will have a high over-sampling ratio, but contains no new information since the additional samples can be recovered from the Nyquist-sampled pattern by interpolation. Recently, Elser and Millane [27] introduced the constraint ratio:
\Omega = \frac{\text{number of pixels in autocorrelation of support}}{2\,(\text{number of pixels in support})},
which does not vary with the degree of over-sampling, but does increase when the support is triangle shaped, for example, reflecting the improved performance of phase-retrieval algorithms in this case [28]. Note that σ = 2Ω for a rectangular support and a Nyquist-sampled diffraction pattern.
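The relationship σ = 2Ω for a rectangular support can be checked numerically (a toy check of our own, assuming numpy, exact up to the (2M−1)/2M discretization factor); this sketch counts the pixels in the support's autocorrelation using a zero-padded FFT:

```python
import numpy as np

M, N = 16, 16                                   # rectangular support
support = np.ones((M, N))

# Count the pixels in the support's autocorrelation: zero-pad to 2M x 2N
# so the circular correlation computed by the FFT equals the linear one.
F = np.fft.fft2(support, (2 * M, 2 * N))
auto = np.fft.ifft2(F * np.conj(F)).real
autocorr_pixels = np.count_nonzero(auto > 0.5)  # (2M-1)(2N-1) for a rectangle

omega = autocorr_pixels / (2 * M * N)           # constraint ratio
sigma = (2 * M) * (2 * N) / (M * N)             # Nyquist-sampled over-sampling
print(f"Omega = {omega:.3f}, sigma = {sigma:.0f}, 2*Omega = {2 * omega:.3f}")
```

As M and N grow, 2Ω approaches σ = 4, recovering the stated relationship.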

Arguments along similar lines can be used to give a rough estimate of the redundancy in a ptychographical dataset. The most straightforward measure, similar to the over-sampling ratio, is

\sigma_{\mathrm{pty}} = \frac{J \times (\text{pixels per diffraction pattern})}{2\,(\text{pixels in specimen and probe reconstructions})}
(recalling that J is the total number of diffraction patterns recorded). The requirement for an accurate reconstruction is σ_pty > 1. Taking dataset 2 as an example, the full reconstruction from which the crop shown in Fig. 8b was extracted consisted of 1088×1088 pixels. Each diffraction pattern used to carry out this reconstruction consisted of 512×512 pixels, of which only the central 128×128 pixels corresponded to measured data. The number of measured data points was therefore 400 × 128² (400 recorded diffraction patterns, each of 128×128 pixels), while the number of unknown variables solved for by the SR-PIE was 2(1088² + 512²): the number of unknown pixels in the specimen reconstruction, plus the unknown pixels in the probe reconstruction, multiplied by two to account for the fact that both probe and specimen are complex valued and, thus, each has a real and an imaginary part. For dataset 2, this gives a redundancy value of 2.3, implying the phase-retrieval problem is well conditioned, and perhaps that, given a higher NA in the illumination optics, a larger degree of superresolution could be achieved.
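Plugging the dataset 2 numbers quoted above into the metric reproduces the stated redundancy value (a simple arithmetic check using the figures in the text):

```python
# Redundancy of dataset 2, using the numbers quoted in the text
J = 400                  # recorded diffraction patterns
det = 128                # measured pixels per side of each pattern
specimen = 1088          # pixels per side of the specimen reconstruction
probe = 512              # pixels per side of the probe reconstruction

measured_points = J * det ** 2
unknowns = 2 * (specimen ** 2 + probe ** 2)   # x2: real and imaginary parts
sigma_pty = measured_points / unknowns
print(f"sigma_pty = {sigma_pty:.2f}")         # -> 2.27, i.e. about 2.3
```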

σ_pty takes no account of the independence of the measurements in each diffraction pattern, nor does it consider the fact that the areas of the probe and specimen reconstructions that are accurately recovered by the SR-PIE do not span every pixel. A further subtlety not addressed by the metric is the independence not of the measurements within a single diffraction pattern, but of the measurements in several patterns recorded from neighboring specimen positions. As such, Eq. (9) is intended only to give a useful indication of the degree of redundancy in ptychographical data, and further research is needed in this area to expand upon this initial discussion. Nevertheless, Eq. (9) has some interesting implications. For example, decreasing the specimen translation step size should increase the obtainable degree of superresolution, since this will give a smaller number of pixels in the specimen reconstruction for the same number of diffraction patterns. To test this theory, scans were carried out using the strong-diffuser experimental setup (realignment of the system led to a slightly smaller value of z = 80.1 mm) and the resolution target as a specimen, with average step sizes of 30, 20, and 10 μm. Table 1 summarizes the parameters for these datasets. Central squares of decreasing size were taken from the recorded data and input to the SR-PIE, each time extrapolating the diffraction patterns to 512×512 pixels. Figure 11 plots the resolution observed in the reconstructions as the extent of the data used by the algorithm was reduced. Clearly, the maximum resolution (governed by the NA of the illumination optics) can be realized from significantly less data when the step size is reduced, as suggested by Eq. (9); the point at which σ_pty = 1 for each set of data is indicated by the dashed lines.
The redundancy metric underestimates the point at which the clarity of the reconstructions begins to degrade, as should be expected given diffraction pattern noise and other experimental inaccuracies. It is interesting that, although noise increases substantially and resolution is reduced, the reconstruction does not fail completely when σ_pty < 1.

To conclude, we have shown in this paper that superresolved imaging using ptychographic data is not only possible, but also practical, and can be carried out robustly. Two modifications to a conventional ptychographic algorithm have been described that control the convergence of its superresolution extension: subpixel translations and a modified Fourier modulus update step. Our surprising findings are that, by using this algorithm, large increases in resolution, of over 3 times, can be achieved without the introduction of substantial noise, and that diffraction orders containing very little power can be accurately recovered. An initial study of the limits on the process has been presented that suggests extrapolation by larger factors should be possible. In fact, very recent work has demonstrated resolution improvements of more than 5×, realizing a resolution of 367 lp/mm at a 191 mm working distance and a resolution of over 645 lp/mm at a 71 mm working distance. The methods presented here have applications beyond long working distance optical microscopy, including, for example, solving for dark-field data in electron microscopy, replacing missing data due to sectioned detectors or beam stops in x-ray microscopy, or broader applications in imaging [29].

ACKNOWLEDGMENTS

The authors thank Phase Focus Ltd for the use of their equipment and for technical assistance, and gratefully acknowledge the support of the Engineering and Physical Sciences Research Council (EPSRC), which funded this work as part of the Basic Technology Ultimate Microscopy grant (EP/E034055/1).

Table 1. Parameters for SR-PIE Reconstructions Using Varying Step Sizes

Fig. 1 The superresolution algorithm addresses diffraction patterns sequentially—estimates of the probe and specimen are updated a number of times equal to the number of diffraction patterns during a single iteration. This flow diagram illustrates one of these update steps, using the s(j)th diffraction pattern to form the jth probe and specimen estimates from the (j−1)th. These new estimates are, in turn, updated using the s(j+1)th diffraction pattern, and so on, until each diffraction pattern has been addressed, completing a single iteration of the algorithm.

Fig. 2 Experimental setup used for the presented results. The specimen was mounted on a computer-controlled motorized x/y stage and the lenses were achromatic doublets. Alignment of the system led to different values for z for the various experiments, as detailed in the text.

Fig. 3 Example of a diffraction pattern recorded for (a) dataset 1, using a weak diffuser, and (b) dataset 2, using a strong diffuser.

Fig. 4 (a) Crop corresponding to the central 350 μm² of the 1 mm² ePIE reconstruction using the full 128 × 128 extent of each diffraction pattern in dataset 1 (subsequent reconstructions are similarly cropped). The scale bar here and in every figure is 100 μm. (b) ePIE reconstruction using only the central 32 × 32 pixel region of each diffraction pattern in dataset 1. (c)–(f) SR-PIE reconstructions using only the 32 × 32 central pixels from the diffraction patterns in dataset 1 to attempt recovery of the remaining recorded data. The algorithm was implemented with and without restriction of high spatial frequencies and subpixel shifting of the probe: (c) without either constraint; (d) without high spatial frequency restriction, but with subpixel shifts; (e) with spatial frequency restriction, without subpixel shifts; (f) with both constraints.

Fig. 5 Evolution of the error metric E over 1000 iterations of the SR-PIE. Circles, no subpixel shift or high spatial frequency suppression. Squares, subpixel shift, no high spatial frequency suppression. Triangles, no subpixel shift, including high spatial frequency suppression. Crosses, both subpixel shifts and high spatial frequency suppression.

Fig. 6 Analysis of the high spatial frequency content recovered by the SR-PIE. (a) The square root of a randomly chosen diffraction pattern as recorded by the detector. The square indicates the extent of the data that was extracted and input to the SR-PIE. (b) Estimate of the same diffraction pattern recovered by the superresolution process, showing speckle structure in agreement with (a). (c) Combined error, EΨ, at each pixel location of every recovered diffraction pattern. The circle corresponds to the spatial frequency of the finest features well resolved in Fig. 4(f) (group 6, element 3). (d) Radial average of (c); the dashed line indicates the extent of the recorded data.

Fig. 7 (a) and (b) Modulus and phase of the up-sampled ePIE probe reconstruction used as the seed input to the SR-PIE. (c) and (d) Modulus and phase, respectively, of the superresolved probe function.

Fig. 8 (a) Crop from the modulus of the ePIE reconstruction using the full 128 × 128 pixel extent of the diffraction patterns comprising dataset 2, up-sampled by a factor of 4 using a bicubic spline method. This result was used as the seed input to the SR-PIE. (b) Crop from the same area of the superresolved reconstruction and (inset) a magnification from the center of this image showing that group 8, element 5 is resolved. The inset has been up-sampled by a factor of 4 to show clearly the finest resolved features.

Fig. 9 (a) Example of an extrapolated diffraction pattern that resulted from the SR-PIE. The square indicates the extent of the detector. (b) Power spectrum of the reconstructed image of the resolution target, plotted on a log scale—the circle has a radius of 406 lp/mm, equating to group 8, element 5 of the target.

Fig. 10 (a) and (b) Modulus and phase of the lily pollen images reconstructed using the conventional ePIE. (c) and (d) Superresolved modulus and phase images produced by the SR-PIE.

Fig. 11 The degree of superresolution that can be achieved by the SR-PIE is a function of the amount of overlap between the areas of the specimen illuminated by the probe. To generate this figure, ptychographical scans using step sizes of 30 (triangles), 20 (squares), and 10 μm (circles) were recorded and reconstructions were carried out using a central square of each diffraction pattern, whose width was as indicated on the x axis. The insets show crops from the modulus of the reconstructions resulting from the scans indicated by arrows. The dashed vertical lines indicate where σpty = 1.

1. J. M. Rodenburg, “Ptychography and related diffractive imaging methods,” in Advances in Imaging and Electron Physics, P. W. Hawkes, ed. (Elsevier, 2008), Vol. 150, pp. 87–184.

2. W. Hoppe, “Trace structure analysis, ptychography, phase tomography,” Ultramicroscopy 10, 187–198 (1982).

3. P. D. Nellist, B. C. McCallum, and J. M. Rodenburg, “Resolution beyond the ‘information limit’ in transmission electron microscopy,” Nature 374, 630–632 (1995).

4. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93, 023903 (2004).

5. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85, 4795–4797 (2004).

6. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16, 7264–7278 (2008).

7. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-resolution scanning x-ray diffraction microscopy,” Science 321, 379–382 (2008).

8. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109, 1256–1262 (2009).

9. J. M. Rodenburg, A. C. Hurst, and A. G. Cullis, “Transmission microscopy without lenses for objects of unlimited size,” Ultramicroscopy 107, 227–231 (2007).

10. J. M. Rodenburg, A. C. Hurst, A. G. Cullis, B. R. Dobson, F. Pfeiffer, O. Bunk, C. David, K. Jefimovs, and I. Johnson, “Hard-x-ray lensless imaging of extended objects,” Phys. Rev. Lett. 98, 034801 (2007).

11. F. Hüe, J. M. Rodenburg, A. M. Maiden, F. Sweeney, and P. A. Midgley, “Wave-front phase retrieval in transmission electron microscopy via ptychography,” Phys. Rev. B 82, 121415 (2010).

12. M. Dierolf, A. Menzel, P. Thibault, P. Schneider, C. M. Kewish, R. Wepf, O. Bunk, and F. Pfeiffer, “Ptychographic x-ray computed tomography at the nanoscale,” Nature 467, 436–439 (2010).

13. A. Schropp, P. Boye, A. Goldschmidt, S. Hönig, R. Hoppe, J. Patommel, C. Rakete, D. Samberg, S. Stephan, S. Schöder, M. Burghammer, and C. G. Schroer, “Non-destructive and quantitative imaging of a nano-structured microchip by ptychographic hard x-ray scanning microscopy,” J. Microsc. 241, 9–12 (2011).

14. V. Mico, Z. Zalevsky, P. García-Martínez, and J. García, “Synthetic aperture superresolution with multiple off-axis holograms,” J. Opt. Soc. Am. A 23, 3162–3170 (2006).

15. A. Kirkland, W. Saxton, K. L. Chau, K. Tsuno, and M. Kawasaki, “Super-resolution by aperture synthesis: tilt series reconstruction in CTEM,” Ultramicroscopy 57, 355–374 (1995).

16. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198, 82–87 (2000).

17. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts, 2005), Chap. 6, pp. 162–167.

18. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with Fourier-weighted projections,” J. Opt. Soc. Am. A 25, 701–709 (2008).

19. R. W. Gerchberg, “Super-resolution through error energy reduction,” Opt. Acta 21, 709–720 (1974).

20. H. Ur and D. Gross, “Improved resolution from subpixel shifted pictures,” CVGIP Graph. Models Image Process. 54, 181–186 (1992).

21. G. R. Brady, M. Guizar-Sicairos, and J. R. Fienup, “Optical wavefront measurement using phase retrieval with transverse translation diversity,” Opt. Express 17, 624–639 (2009).

22. M. Dierolf, P. Thibault, A. Menzel, C. M. Kewish, K. Jefimovs, I. Schlichting, K. von König, O. Bunk, and F. Pfeiffer, “Ptychographic coherent diffractive imaging of weakly scattering specimens,” New J. Phys. 12, 035017 (2010).

23. A. M. Maiden, J. M. Rodenburg, and M. J. Humphry, “Optical ptychography: a practical implementation with useful resolution,” Opt. Lett. 35, 2585–2587 (2010).

24. G. O. Reynolds, J. B. DeVelis, G. B. Parrent, and B. J. Thompson, The New Physical Optics Notebook: Tutorials in Fourier Optics (American Institute of Physics, 1998), Chap. 13, p. 107.

25. Y. Takaki and H. Ohzu, “Fast numerical reconstruction technique for high-resolution hybrid holographic microscopy,” Appl. Opt. 38, 2204–2211 (1999).

26. J. Miao, D. Sayre, and H. Chapman, “Phase retrieval from the magnitude of the Fourier transforms of nonperiodic objects,” J. Opt. Soc. Am. A 15, 1662–1669 (1998).

27. V. Elser and R. P. Millane, “Reconstruction of an object from its symmetry-averaged diffraction pattern,” Acta Crystallogr. A 64, 273–279 (2008).

28. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4, 118–123 (1987).

29. J. R. Fienup, “Lensless coherent imaging by phase retrieval with an illumination pattern constraint,” Opt. Express 14, 498–508 (2006).





Equations (9)


$$[\Delta x, \Delta y] = \frac{\lambda z}{\Delta p}\left[\frac{1}{cM}, \frac{1}{cN}\right].\tag{1}$$

$$\mathbf{R}_j = \left[\Delta x\,\bigl(p_{(x,j)} + q_{(x,j)}\bigr),\ \Delta y\,\bigl(p_{(y,j)} + q_{(y,j)}\bigr)\right].\tag{2}$$

$$\phi_j(\mathbf{u}) = 2\pi\left(\frac{q_{(x,s(j))}\,u}{cM} + \frac{q_{(y,s(j))}\,v}{cN}\right).\tag{3}$$

$$o_j(\mathbf{r}) = o_{j-1}(\mathbf{r}) + \frac{P_{j-1}^{*}(\mathbf{r} - \mathbf{q}_{s(j)})}{\left|P_{j-1}(\mathbf{r} - \mathbf{q}_{s(j)})\right|^{2}_{\max}}\bigl(\psi'_j(\mathbf{r}) - \psi_j(\mathbf{r})\bigr),\qquad
P_j(\mathbf{r} - \mathbf{q}_{s(j)}) = P_{j-1}(\mathbf{r} - \mathbf{q}_{s(j)}) + \frac{o_{j-1}^{*}(\mathbf{r})}{\left|o_{j-1}(\mathbf{r})\right|^{2}_{\max}}\bigl(\psi'_j(\mathbf{r}) - \psi_j(\mathbf{r})\bigr).\tag{4}$$

$$E = \frac{\displaystyle\sum_{j=1}^{J}\sum_{\mathbf{u}} S_{dp}(\mathbf{u})\left(\sqrt{I_{s(j)}(\mathbf{u})} - \left|\Psi_j(\mathbf{u})\right|\right)^{2}}{\displaystyle\sum_{j=1}^{J}\sum_{\mathbf{u}} S_{dp}(\mathbf{u})\, I_{s(j)}(\mathbf{u})},\tag{5}$$

$$E_{\Psi}(\mathbf{u}) = \frac{\displaystyle\sum_{j=1}^{J}\left(\sqrt{I_{s(j)}(\mathbf{u})} - \left|\Psi_j(\mathbf{u})\right|\right)^{2}}{\displaystyle\sum_{j=1}^{J} I_{s(j)}(\mathbf{u})}.\tag{6}$$

$$\sigma = \frac{\text{number of pixels in diffraction pattern}}{\text{number of pixels in support}},\tag{7}$$

$$\Omega = \frac{\text{number of pixels in autocorrelation of support}}{2\,(\text{number of pixels in support})},\tag{8}$$

$$\sigma_{\text{pty}} = \frac{J\,(\text{pixels per diffraction pattern})}{2\,(\text{pixels in specimen \& probe reconstructions})}.\tag{9}$$
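The linear phase ramp of Eq. (3) implements the subpixel probe translations in the Fourier domain. The sketch below is our own minimal helper illustrating the shift theorem it relies on; the function name and the default c = 1 are assumptions for illustration.

```python
import numpy as np

def subpixel_shift(field, shift_rows, shift_cols, c=1):
    """Translate a complex field by a (possibly subpixel) number of pixels
    by multiplying its Fourier transform with the linear phase ramp of
    Eq. (3). `c` is the up-sampling factor appearing in Eq. (1)."""
    M, N = field.shape
    u = np.fft.fftfreq(M)[:, None]  # cycles per pixel, row direction
    v = np.fft.fftfreq(N)[None, :]  # cycles per pixel, column direction
    ramp = np.exp(-2j * np.pi * (shift_rows * u + shift_cols * v) / c)
    return np.fft.ifft2(np.fft.fft2(field) * ramp)
```

For integer shifts this reduces to a circular shift of the array; for fractional shifts it interpolates the field with full Fourier accuracy, which is what allows probe positions to fall between pixels of the reconstruction grid.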