Optica Publishing Group

Phase retrieval applied to coherent Fourier scatterometry using the extended ptychographic iterative engine

Open Access

Abstract

We have demonstrated that the extended ptychographic iterative engine (ePIE) algorithm can be applied to retrieve the phase information of the vectorial scattered field of a subwavelength object with the amplitude of the scattered field as input. We applied this technique combined with coherent Fourier scatterometry to determine the phase of the scattered field of a subwavelength grating, illuminated by a focused laser beam.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Fourier scatterometry is a well-known technique for retrieving information about an unknown object. The technique works by measuring the light scattered by an object that is illuminated by a known light field, followed by inverse reconstruction methods. It is widely used in the semiconductor industry to retrieve grating parameters in order to check the quality of the lithographic process. In recent years, an alternative to traditional (incoherent) Fourier Scatterometry has been introduced, namely Coherent Fourier Scatterometry (CFS) [1]. In this technique, the object is illuminated not at a single angle of incidence but by a focused laser beam, such that the intensity of the scattered field at all angles within the numerical aperture of the collection lens is acquired in one shot using a CCD camera. With this information on the scattered field combined with a priori information about the object, certain features of the object can be determined with subwavelength accuracy. CFS has been applied to reconstruct several parameters of gratings printed on wafers, such as period, critical dimension, height and side wall angle [2]. It has also been shown in Ref. [3] that if CFS could be extended to measure not only the amplitude but also the phase of the scattered field, a gain in sensitivity in the grating parameter retrieval could be achieved. The latter becomes important in applications such as parameter retrieval of subwavelength gratings, where accurate determination of the parameters with small error margins is very hard. The most conventional method to obtain the phase information in this case would be interferometry; however, it is well known that its implementation requires very stable setups, since it is very sensitive to vibrations. Another option to obtain phase information is to use phase retrieval techniques [4–6].
With the fast development of computing power, computational algorithms combined with relatively simple optical systems can be used to retrieve the phase information using only intensities measured in the far field. Among these techniques, ptychography has proven successful in terms of robustness to noise and convergence to the correct solution. It has been widely used at visible wavelengths, with x-rays and with electron beams [7–12]. Ptychography relies on partially illuminating the object with a probe function and obtaining several images corresponding to different overlapping probe positions. Among the ptychography algorithms encountered in the literature, the ptychographic iterative engine (PIE) is very popular [6,13]. However, when the probe function is not exactly known, it is possible to modify PIE so that both the object and the probe function are reconstructed [7]. This technique, called the extended PIE (ePIE), is preferred here since it makes the experiment simpler.

In this paper, we combine CFS with ptychography to determine the amplitude and phase of the scattered field from a subwavelength object using only far-field intensity measurements. The paper is organized as follows: in Section 1 we introduce the general ptychography method, the Ptychographic Iterative Engine (PIE), and the extended Ptychographic Iterative Engine (ePIE), the latter being the method that we have used to reconstruct the phase and amplitude of the scattered field. In Section 2 we describe the experimental setup as well as the obtained data. In Section 3 we explain the computational procedure, followed by the results in Section 4. Finally, in Section 5 we present our conclusions and outlook.

1. Ptychography

In 1972, the term ptychography was coined by Hoppe [14,15]. Hoppe reconstructed an object using two far-field intensity measurements corresponding to two different illumination settings. Later on, Rodenburg’s team combined ptychography with an iterative scheme and proposed the ptychographic iterative engine (PIE) [6,13]. In PIE an object is illuminated by a light spot (also called the ‘probe’) that is translated to different positions such that the probe at adjacent positions overlaps, creating redundancy in the measured far fields. The optimum overlap between neighboring probe positions is 60% [16]. For each position, the far-field intensity pattern is recorded and, using these measurements, the object’s complex-valued transmission function is computed iteratively. In this section, we start by explaining how the ptychographic iterative engine (PIE) works, followed by the extended PIE (ePIE). The latter is the method we have used for the reconstruction. The reason for this choice is that ePIE relaxes the requirement of accurate knowledge of the probe, since the algorithm reconstructs the probe as well, which makes the experiment easier.

1.1 Ptychographic iterative engine (PIE)

In order to explain the PIE process, let us consider an object $O(\textbf {r})$ and a probe $P(\textbf {r})$ that is shifted to the $j^{\textrm{th}}$ position $\textbf {R}_j$. Here, $\textbf {r}$ is a 2D coordinate vector in real space. The exit wave can be written as

$$\psi_j(\textbf{r})=O(\textbf{r})P(\textbf{r}-\textbf{R}_j).$$
Hence, the measured intensity in the far-field will be
$$I_j(\textbf{r}') = |\mathscr{F}\{\psi_j(\textbf{r})\}|^2.$$
Here, $\textbf {r}'$ is a 2D coordinate vector in reciprocal space.

The algorithm starts with a guessed object function $O_g(\textbf {r})$, which could be a 2D random-valued matrix. The steps of the algorithm for the $k^{\textrm{th}}$ iteration and $j^{\textrm{th}}$ probe position are as follows:

  • 1. Calculate the estimated exit wave as
    $$\psi_{k,j}(\textbf{r})=O_k(\textbf{r})P(\textbf{r}-\textbf{R}_j).$$
    Here, $O_k(\textbf {r})$ is the estimated object for the $k^{th}$ iteration.
  • 2. Fourier transform the exit wave to obtain the guessed diffraction pattern.
    $$\Psi_{k,j}(\textbf{r}') = \mathscr{F} \{ \psi_{k,j}(\textbf{r}) \}(\textbf{r}').$$
  • 3. Update the amplitude of the guessed diffraction pattern with the square root of the measured intensity pattern and keep the phase as it is. Therefore,
    $$\Psi_{k,j}^{upd} (\textbf{r}') = \sqrt{I_j(\textbf{r}')}\frac{\Psi_{k,j}(\textbf{r}')}{|\Psi_{k,j}(\textbf{r}')|}.$$
  • 4. Perform inverse Fourier transform of the updated diffraction pattern to obtain an improved estimation of the exit wave function.
    $$\psi_{k,j}^{upd}(\textbf{r}) = \mathscr{F}^{-1} \{ \Psi_{k,j}^{upd}(\textbf{r}') \}.$$
  • 5. Update the estimated object as follows
    $$O_{k+1}(\textbf{r}) = O_k(\textbf{r}) + \frac{P(\textbf{r}-\textbf{R}_j)^{\ast}}{|P(\textbf{r}-\textbf{R}_j)|^2_{\textrm{max}}} \times [\psi_{k,j}^{upd}(\textbf{r}) - \psi_{k,j}(\textbf{r})].$$
  • 6. Move the probe to the next position.
Repeat steps 1 to 6 until convergence is reached. One way to verify convergence is to calculate the mean squared error (MSE) between the measured and estimated intensities; when this value stops changing, the algorithm has found its solution.
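The six steps above can be condensed into a short sketch. The following is a minimal Python/NumPy illustration (the paper's own implementation was in MATLAB); the function name, the use of `np.roll` for the probe shift, and the small regularizer in step 3 are choices of this sketch, not the authors':

```python
import numpy as np

def pie_reconstruct(intensities, probe, shifts, n_iter=30, alpha=1.0):
    """Minimal PIE loop: known probe, unknown object.

    intensities : list of measured far-field intensities I_j (2D arrays)
    probe       : known complex probe P (2D array, same shape as the object)
    shifts      : list of integer pixel shifts (dy, dx) representing R_j
    """
    # random initial guess O_g(r)
    obj = np.random.rand(*probe.shape) * np.exp(1j * np.random.rand(*probe.shape))
    for _ in range(n_iter):
        for I_j, (dy, dx) in zip(intensities, shifts):
            P_j = np.roll(probe, (dy, dx), axis=(0, 1))   # probe at position R_j
            psi = obj * P_j                                # step 1: exit wave
            Psi = np.fft.fft2(psi)                         # step 2: far field
            # step 3: replace modulus by measured amplitude, keep the phase
            Psi_upd = np.sqrt(I_j) * Psi / (np.abs(Psi) + 1e-12)
            psi_upd = np.fft.ifft2(Psi_upd)                # step 4: back-propagate
            # step 5: object update weighted by the conjugate probe
            obj = obj + alpha * np.conj(P_j) / (np.abs(P_j)**2).max() * (psi_upd - psi)
    return obj
```

The inner loop (step 6, moving the probe) is simply the iteration over `shifts`; the outer loop repeats until the MSE stops changing.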

1.2 Extended ptychographic iterative engine (ePIE)

In the previous section, the probe function $P(\textbf {r})$ has been assumed to be known and the object function $O(\textbf {r})$ unknown. However, the inverse case, where the probe function is unknown and the object is known, is also possible. On shifting the object by $-\textbf {R}_j$, we obtain the intensity pattern in the far field as:

$$I'_j(\textbf{r})=|\mathscr{F}\{ O(\textbf{r}+\textbf{R}_j)P(\textbf{r})\}|^2.$$
Thus, a probe estimate can be updated if the object is known, analogous to Eq. (7). If $P_k$ is the estimate of the probe for the $k^{\textrm {th}}$ iteration, the update function for the probe can be written as [17]
$$P_{k+1}(\textbf{r}) = P_k(\textbf{r}) + \frac{O_k^*(\textbf{r}+\textbf{R}_j)}{|O_k(\textbf{r}+\textbf{R}_j)|^2_{\textrm{max}}} \times [\psi_{k,j}^{upd}(\textbf{r}) - \psi_{k,j}(\textbf{r})].$$
Therefore, in the ePIE algorithm, the object function and the probe function can be updated using Eq. (7) and (9) respectively.
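The paired ePIE updates of Eqs. (7) and (9) can be sketched as below, again in Python/NumPy for illustration (the actual implementation was in MATLAB); the frame-shifting convention using `np.roll` is an assumption of this sketch:

```python
import numpy as np

def epie_update(obj, probe, psi, psi_upd, shift, alpha=1.0, beta=1.0):
    """One ePIE step for probe position R_j.

    obj, probe   : current estimates O_k, P_k (2D complex arrays, same shape)
    psi, psi_upd : exit wave before/after the modulus constraint (object frame)
    shift        : integer pixel shift (dy, dx) representing R_j
    """
    dy, dx = shift
    P_j = np.roll(probe, (dy, dx), axis=(0, 1))           # P(r - R_j)
    diff = psi_upd - psi
    # Eq. (7): object update, performed in the object frame
    obj_new = obj + alpha * np.conj(P_j) / (np.abs(P_j)**2).max() * diff
    # Eq. (9): probe update, performed in the probe frame,
    # i.e. with the object and correction shifted by +R_j
    O_shift = np.roll(obj, (-dy, -dx), axis=(0, 1))       # O(r + R_j)
    diff_shift = np.roll(diff, (-dy, -dx), axis=(0, 1))
    probe_new = probe + beta * np.conj(O_shift) / (np.abs(O_shift)**2).max() * diff_shift
    return obj_new, probe_new
```

When the modulus constraint is already satisfied (`psi_upd == psi`), both updates leave the estimates unchanged, which is the fixed point the iteration converges to.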

2. Experimental procedure

In this section we describe the setup built to demonstrate that the ePIE algorithm can retrieve the phase of the scattered field of a diffraction grating obtained using Coherent Fourier Scatterometry.

2.1 Experimental setup

An experimental setup, as shown in Fig. 1, was built in order to obtain the required measurements. In this configuration, a He-Ne laser of $\lambda =632.8$ nm is expanded, collimated and used to illuminate a reflective object. An objective of numerical aperture NA = 0.4 focuses the incoming laser beam onto the object plane. The object was a silicon-on-silicon square grating with a period of 500 nm, a height of 130 nm, a mid critical dimension of 216 nm, and a side wall angle of 85$^{\circ }$. The reflected beam passes through the same objective, and its back focal plane is imaged by two lenses of $f=25$ cm and $f'=10$ cm in a telescope configuration. At the position of the back focal plane of the second lens, a micro-controlled stage with a mask is placed. As mentioned in the previous sections, the mask works as the probe, blocking parts of the scattered field distribution. After the mask, a lens with a focal length of $20$ cm performs the Fourier transform of the mask plane onto the camera plane (CCD camera Prosilica GC1290 with $960\times 1280$ pixels and a square pixel size of $3.75\,\mu$m). The displacement of the stage and the image acquisition (exposure time and number of averaged frames) were controlled by a PC using Labview. Furthermore, the incident light was linearly polarized, with the polarization axis at the back focal plane of the objective perpendicular (TM) to the grating structure. This particular configuration was chosen because it is of interest for grating parameter reconstruction.


Fig. 1. Experimental setup to retrieve the amplitude and phase of the scattered field (as shown in Fig. 2) of a diffraction grating.


In Fig. 2 we show the scattered field of the grating (illuminated by a focused field) recorded at the plane of the mask. The red rectangle shows the scanned area of the scattered field, of which the amplitude and phase have been retrieved using ptychography. Since the grating is symmetric, recovering the amplitude and phase of only one quarter of the far field is sufficient. The feature on the lower left of the figure is due to dust or a defect in the optical path.


Fig. 2. Intensity distribution of the experimental scattered field of the grating at the pupil plane. The red square indicates the region that is scanned by the check-board mask.


In contrast to most ptychography schemes, which use a circular mask to scan the object, we use an unconventional mask to scan the scattered field area. The mask is a checkerboard-type amplitude mask with five square tiles: one in the center and four around it (see Fig. 4 in Sect. 4). This choice spreads the intensity pattern captured at the camera more broadly in the far field. The mask was fabricated using a 3D printer, and each tile is a square of $400\times 400$ $\mu m^2$.

2.2 Methodology

We restate that the goal of this paper is to retrieve the amplitude and phase of the scattered field (at the mask plane; see Fig. 1). To do so, the mask was translated transverse to the beam path so that it scanned the scattered field, and the corresponding far-field intensity patterns were measured with the camera. The mask scanned the scattered field such that the overlap between neighboring mask positions was 79%, with 12$\times$12 scan positions.

The captured far-field intensity corresponding to the $j^{\textrm {th}}$ mask position is $I_j(\textbf {r}')$, as in Eq. (2). We used a matrix with random values as the guessed scattered field $O_g(\textbf {r})$. The measured mask function (see Fig. 4a) is used as the initial estimate of the mask. Note that this mask plays the role of the probe function ($P$) in the algorithm. We reconstruct the scattered field $O(\textbf {r})$ iteratively using ePIE.

2.3 Dynamic range

Saturation of the camera was an important issue, because the diffraction pattern can contain very different levels of intensity. As mentioned before, the CCD had a 12-bit mode with a dynamic range from 0 to 4095. Fine tuning the exposure time to use the full dynamic range was not enough, because only the central pattern was distinguishable from the background noise while the information contained in the higher spatial frequencies was lost. To overcome this problem, we increased the dynamic range using the method explained in Ref. [18]. For each mask position, several intensity patterns were acquired at different exposure times. For example, in Fig. 3a, the intensity profile of the same cross section is shown for two different exposure times. For t = 2000 $\mu$s, the central peak is saturated; for t = 300 $\mu$s, it is not. Since the recorded intensity is linear in the input light power, the ratio between the lobes for different exposure times should be constant. Therefore, we first calculated the ratio between the lobes that were not saturated at the different exposure times. Using this ratio, we replaced the saturated peak of the overexposed cross section and kept the rest as it was. In this way, the intensity profile for the higher exposure was recovered.


Fig. 3. (a) Cross-sections of the same intensity pattern are shown for exposure times of $t=300\,\mu$s and $t=2000\,\mu$s, where the former is not overexposed unlike the latter. It can be seen here that the central peak of the modified intensity is several times higher than the measured intensity. (b) A zoom of the central peaks shows a clear saturation of the brightest peak.


This process was repeated about four times; thus, five intensity images were obtained for each mask position with gradually increasing exposure times, the longest being one hundred times the first.

Figures 3a and 3b illustrate the procedure mentioned above with two cross-sections corresponding to the same measurement but different exposure times. The lowest profile (continuous blue line) is a measurement in which the camera was not saturated, while the continuous black line represents the second measurement, in which the central part was overexposed. Finally, the red-dotted line is the modified cross section obtained with the method explained above. For the next step, the red-dotted line was used as the low-exposure cross-section and the corresponding calculation was performed with a third profile obtained at a higher exposure time, and so on.
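The exposure-stitching idea can be illustrated with a short sketch. This is hedged Python, not the authors' code; the 90% saturation threshold, the noise floor cut, and the median estimator for the exposure ratio are illustrative choices of this sketch:

```python
import numpy as np

def extend_dynamic_range(low, high, sat_level=4095):
    """Merge two exposures of the same pattern into one higher-range pattern.

    low  : frame with the shorter exposure (no saturation)
    high : frame with the longer exposure (bright peaks clipped at sat_level)
    The scaling ratio is estimated from pixels that are well exposed in
    `high` but still clearly above the noise floor in `low`.
    """
    good = (high < 0.9 * sat_level) & (low > 0.05 * sat_level)
    ratio = np.median(high[good] / low[good])      # approximates exposure-time ratio
    merged = high.astype(float)
    saturated = high >= 0.9 * sat_level
    merged[saturated] = low[saturated] * ratio     # replace clipped pixels only
    return merged
```

Applying the function repeatedly, each time feeding the previous merged result as `low` against the next longer exposure, reproduces the chained procedure described above.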

2.4 Mask shift

In order to have an overlap of about 80% between neighbouring mask positions, we chose an overlap value of 79%, since this corresponds to an integer number of pixels (3 pixels) per shift. A non-integer number of pixels per shift can degrade the reconstruction.

3. Computational procedure

For this work, we implemented the extended Ptychographic Iterative Engine (ePIE), as explained in Section 1, in MATLAB. The code was adapted to our experimental setup while allowing some computational parameters that may improve the reconstruction and computing time to be varied. In order to validate the implemented ePIE before applying it to experimental data, we used simulated intensity patterns calculated with rigorous electromagnetic simulations (see Figs. 4 and 5 in Ref. [19]). The simulation considered the nominal parameters of the fabricated grating and contains the main features of the far field expected from the experiment. Furthermore, we optimized the reconstruction (using simulated data) by varying initial parameters such as the mask shift and the overlap between neighbouring mask positions.


Fig. 4. (a) Direct image of the five-tile mask used in the experiment. (b) The mask reconstructed by the ePIE algorithm with 30 iterations and 79% overlap.



Fig. 5. Retrieved amplitude and phase of the scattered field, at the pupil plane, of a grating illuminated by a focused laser beam. The input polarization is TM (perpendicular to the grating grooves).


3.1 Mask function

The mask function can be obtained in two ways. In the first, an image of the mask is acquired by adjusting the position of the lens $f_1 = 20$ cm (see Fig. 1) such that the mask is at $2f_1$ in front of the lens and the camera at $2f_1$ behind it. A disadvantage of this method is that it involves moving and realigning the lens and the camera; however, the image is directly measured.

In the second way, we replace the lens ($f_1 = 20$ cm) with a lens ($f_2 = 10$ cm) and keep the rest of the setup as it is. However, the resulting image does not have the same magnification as the Fourier transform captured with the lens $f_1 = 20$ cm, so the image size must be multiplied by a magnification factor. To calculate that factor, let the pixel size of the detector be $\Delta x_d$, and let $\Delta m$ be the size of the mask image in one dimension in pixels when the $f_2 = 10$ cm lens was used. The size of the mask will then be $\Delta m \times \Delta x_d$. When the $f_1$ lens is used to capture the diffraction patterns, the pixel size in the mask plane will be $\Delta x_o$, given by:

$$\Delta x_o = \frac{\lambda f_1}{N\Delta x_d}.$$
Here, $N$ = 1024 is the number of pixels in the camera in one dimension. Hence, the correct size of the mask in one dimension (in pixels) will be
$$\Delta m' = \frac{\Delta m \times \Delta x_d}{\Delta x_o}.$$
In this work, we used the second way to obtain the mask function.
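As a worked example of Eqs. (10) and (11), using the wavelength, focal length, pixel count and detector pixel size quoted in the text (the measured mask width `dm` is a hypothetical value, introduced only to illustrate the rescaling):

```python
# Worked example of Eqs. (10) and (11) with the values quoted in the text:
# lambda = 632.8 nm, f1 = 20 cm, N = 1024 pixels, detector pixel 3.75 um.
lam = 632.8e-9       # wavelength (m)
f1 = 0.20            # focal length of the Fourier-transforming lens (m)
N = 1024             # number of camera pixels in one dimension
dx_d = 3.75e-6       # detector pixel size (m)

dx_o = lam * f1 / (N * dx_d)          # Eq. (10): pixel size in the mask plane

dm = 120                               # hypothetical mask-image width in pixels (f2 lens)
dm_corr = dm * dx_d / dx_o             # Eq. (11): corrected size in mask-plane pixels

print(f"mask-plane pixel: {dx_o * 1e6:.2f} um, corrected mask size: {dm_corr:.1f} px")
```

With these numbers the mask-plane pixel comes out to roughly 33 µm, so the mask image measured with the $f_2$ lens must be rescaled accordingly before being used as the probe estimate.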

In addition, we have calibrated the displacement of the mask reported by the stage’s software (given in steps) into microns, in order to work with real displacement units and to convert them from real to Fourier space units.

3.2 Intensity matrix acquisition

To minimize the camera noise and increase the dynamic range, we performed the following steps for each mask position:

  • 1. 30 images were taken in complete darkness and averaged; these determined the background noise distribution.
  • 2. 30 intensity patterns were captured for each exposure time and averaged.
  • 3. Average background noise was subtracted from the average intensity patterns for each exposure time.
  • 4. The resulting intensity patterns were combined to increase the dynamic range, as explained in Subsection 2.3.
Note that the above procedure was performed for each mask position.
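The four steps above can be summarized in a short sketch (illustrative Python, not the authors' MATLAB code; the function and argument names are ours):

```python
import numpy as np

def acquire_position(frames_dark, frames_by_exposure):
    """Per-mask-position averaging pipeline.

    frames_dark        : list of dark frames (e.g. 30) taken with no light
    frames_by_exposure : dict mapping exposure time -> list of frames (e.g. 30)
    Returns background-subtracted average patterns keyed by exposure time;
    these would then be merged into one high-dynamic-range pattern.
    """
    background = np.mean(frames_dark, axis=0)             # step 1: average dark frames
    patterns = {}
    for t, frames in frames_by_exposure.items():
        avg = np.mean(frames, axis=0)                     # step 2: average per exposure
        patterns[t] = np.clip(avg - background, 0, None)  # step 3: subtract background
    return patterns                                        # step 4 merges these (Subsect. 2.3)
```

Averaging 30 frames reduces the shot and read noise by roughly a factor of $\sqrt{30}$, which is what makes the faint high-spatial-frequency lobes usable after background subtraction.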

4. Results

We present the results in two parts: in the first part we present the optimization of the experimental parameters, namely the dynamic range, the overlapped area and the reconstruction of the probe function. In the second part, we present the reconstruction of the amplitude and phase of the far field of the grating with the optimal parameters. Convergence of the reconstruction was achieved within 15 iterations in simulation; therefore, we performed the analysis with 30 iterations.

4.1 Calibration

Dynamic range increase

We emphasize that the dynamic range played a major role in the success of the reconstruction and allowed us to obtain the expected results. The CCD camera dynamic range was 4095. We increased it using the method described in Section 2: the dynamic range was extended by two orders of magnitude, achieved with 5 exposure times (ranging from 300 $\mu$s to 30000 $\mu$s). Additionally, we found that increasing the dynamic range not only improved the reconstructed object but also reduced the computing time. The mean squared error converged quickly, so that only a few iterations were needed (fewer than 10 to reach convergence), whereas the intensity patterns with lower dynamic range needed a larger number of iterations to converge.

Overlapped area

The second parameter that we determined was the optimal displacement of the mask, quantified as the percentage of overlapped area between positions. We began with an arbitrary displacement value, which led to an overlap of 81% and a shift of 2.6 pixels in the object plane; in the simulation, this was rounded to 3 pixels. As discussed in previous sections, we also tested shifts of 3 and 4 pixels, corresponding to 79% and 71% overlap. We found that the optimal overlap between mask positions for this case was 79%.

Probe reconstruction

We also show briefly that the extended Ptychographic Iterative Engine (ePIE) works properly, in that it also reconstructs the mask used in the experimental setup. It can be seen from Fig. 4a that the mask had some imperfections, because the edges were not sharp. As the reconstructions were performed, we observed that those imperfections and the shape of each tile were retrieved accurately, as shown in Fig. 4b. Thus, the algorithm recovered the probe, which is crucial for the reconstruction of the scattered field.

4.2 Reconstruction of the scattered field of a diffraction grating

In this subsection, we show the reconstruction of the scattered field of a subwavelength diffraction grating illuminated with TM polarization. The results are shown in Fig. 5. The reconstruction of the scattered field amplitude has been successful, as it is close to the expected amplitude shown in Fig. 2. Note that even detailed features, such as the diffraction rings due to dust on the grating, were reconstructed. Furthermore, the reconstructed phase agrees with the expected theoretical results (see the bottom figures of Figs. 4 and 5 in Ref. [19]). The artifacts outside the pupil area (at the upper right and lower right corners of Fig. 5) occur because the intensity outside the pupil is zero, and therefore the phase is random in this area. The mean square error (MSE) is shown in Fig. 6. Note that the MSE here has been computed for the complete scanned area of the scattered field, which also includes the area outside the pupil; consequently, due to the random phase outside the pupil, the MSE is higher than if it were computed only over the pupil area. We also point out that there is a clear discontinuity in the phase at the right side of the reconstructed scattered field, as expected, showing that the method works even for large phase discontinuities.


Fig. 6. Mean square error of the retrieved scattered field as a function of the number of iterations.


5. Conclusions

In this work, we have shown the successful implementation of ePIE, a phase retrieval technique, to reconstruct the scattered field of a subwavelength grating in a coherent Fourier scatterometry setup. As expected, within 10 iterations the method found the correct solution, which resembles the expected amplitude and phase distribution of the scattered field of the subwavelength grating that was used. This iterative method can replace the interferometer, which is cumbersome to use, whereas the iterative method can be easily adapted to a scatterometry setup. If this method is applied in combination with a complete characterization of the setup and with precise forward calculations, it could be applicable to optical inspection in lithography. Additionally, we foresee that it can also be used for high-accuracy characterization of nanostructures, inspection of surfaces and defects on printed structures, and other measurements where the phase plays an important role.

Funding

Erasmus+; FP7 People: Marie-Curie Actions (PEOPLE) (PITN-GA-2013-608082).

Acknowledgments

We thank Matthias Strauch for his help, which was crucial for the accomplishment of this work. We also thank Thim Zuidwijk and Mauricio Larodé Díaz for designing and fabricating the mask with the 3D printer and R.C. Horsten for technical assistance with Labview. We also thank Europhotonics coordinators Dr. H. Giovannini, Dr. J. Natoli and Ms. N. Guillem.

References

1. O. El Gawhary, N. Kumar, S. F. Pereira, W. M. J. Coene, and H. P. Urbach, “Performance analysis of coherent Fourier scatterometry,” Appl. Phys. B 105(4), 775–781 (2011). [CrossRef]  

2. N. Kumar, P. Petrik, G. K. P. Ramanandan, O. El Gawhary, S. F. Pereira, W. M. J. Coene, and H. P. Urbach, “Reconstruction of sub-wavelength features and nano-positioning of gratings using coherent Fourier scatterometry,” Opt. Express 22(20), 24678–24688 (2014). [CrossRef]  

3. S. Roy, N. Kumar, S. F. Pereira, and H. P. Urbach, “Interferometric coherent Fourier scatterometry: a method for obtaining high sensitivity in the optical inverse-grating problem,” J. Opt. 15(7), 075707 (2013). [CrossRef]  

4. R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

5. J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. 3(1), 27–29 (1978). [CrossRef]  

6. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). [CrossRef]  

7. A. M. Maiden, J. M. Rodenburg, and M. J. Humphry, “Optical ptychography: a practical implementation with useful resolution,” Opt. Lett. 35(15), 2585–2587 (2010). [CrossRef]  

8. G. R. Brady, M. Guizar-Sicairos, and J. R. Fienup, “Optical wavefront measurement using phase retrieval with transverse translation diversity,” Opt. Express 17(2), 624–639 (2009). [CrossRef]  

9. J. M. Rodenburg, A. C. Hurst, A. G. Cullis, B. R. Dobson, F. Pfeiffer, O. Bunk, C. David, K. Jefimovs, and I. Johnson, “Hard-x-ray lensless imaging of extended objects,” Phys. Rev. Lett. 98(3), 034801 (2007). [CrossRef]  

10. M. Dierolf, A. T. P. Menzel, P. Schneider, C. M. Kewish, R. Wepf, O. Bunk, and F. Pfeiffer, “Ptychographic x-ray computed tomography at the nanoscale,” Nature 467(7314), 436–439 (2010). [CrossRef]  

11. F. Hüe, J. M. Rodenburg, A. M. Maiden, F. Sweeney, and P. A. Midgley, “Wave-front phase retrieval in transmission electron microscopy via ptychography,” Phys. Rev. B 82(12), 121415 (2010). [CrossRef]  

12. P. Wang, F. Zhang, S. Gao, M. Zhang, and A. I. Kirkland, “Electron ptychographic diffractive imaging of boron atoms in LaB6 crystals,” Sci. Rep. 7(1), 2857 (2017). [CrossRef]  

13. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

14. W. Hoppe, “Beugung im inhomogenen Primärstrahlwellenfeld. I. Prinzip einer Phasenmessung von Elektronenbeugungsinterferenzen,” Acta Crystallogr., Sect. A 25(4), 495–501 (1969). [CrossRef]  

15. R. Hegerl and W. Hoppe, “Dynamische theorie der kristallstrukturanalyse durch elektronenbeugung im inhomogenen primärstrahlwellenfeld,” Berichte der Bunsengesellschaft für Physikalische Chemie 74(11), 1148–1154 (1970). [CrossRef]  

16. O. Bunk, M. Dierolf, S. Kynde, I. Johnson, O. Marti, and F. Pfeiffer, “Influence of the overlap parameter on the convergence of the ptychographical iterative engine,” Ultramicroscopy 108(5), 481–487 (2008). [CrossRef]  

17. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

18. X. Xu, A. P. Konijnenberg, S. F. Pereira, and H. P. Urbach, “Phase retrieval of the full vectorial field applied to coherent Fourier scatterometry,” Opt. Express 25(24), 29574–29586 (2017). [CrossRef]  

19. N. Kumar, L. Cisotto, S. Roy, G. K. P. Ramanandan, S. F. Pereira, and H. P. Urbach, “Determination of the full scattering matrix using coherent Fourier scatterometry,” Appl. Opt. 55(16), 4408–4413 (2016). [CrossRef]  
