
Characterization of the reference wave in a compact digital holographic camera


Abstract

A hologram is a recording of the interference between an unknown object wave and a coherent reference wave. Provided that the object and reference waves are sufficiently separated in some region of space and the reference wave is known, a high-fidelity reconstruction of the object wave is possible. In traditional optical holography, high-quality reconstruction is achieved by careful reillumination of the holographic plate with exactly the same reference wave that was used at the recording stage. To reconstruct high-quality digital holograms, the exact parameters of the reference wave must be known mathematically. This paper discusses a technique that obtains the mathematical parameters that characterize a strongly divergent reference wave that originates from a fiber source in a new compact digital holographic camera. This is a lensless design that is similar in principle to a Fourier hologram, but because of the large numerical aperture, the usual paraxial approximations cannot be applied and the Fourier relationship is inexact. To characterize the reference wave, recordings of quasi-planar object waves are made at various angles of incidence using a Dammann grating. An optimization process is then used to find the reference wave that reconstructs a stigmatic image of the object wave regardless of the angle of incidence.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

Digital holography is a coherent imaging technique that, in contrast with conventional incoherent imaging methods, provides both phase and amplitude information that can be exploited in microscopy, vibration analysis, and shape and deformation measurements [1–5]. In comparison to traditional optical holography, digital recording not only removes the time-consuming steps involved in the chemical development of photographic plates but also greatly extends the capability of coherent imaging, since recorded fields can be further processed, combined, and compared numerically. The feasibility of digitally reconstructing a hologram was first demonstrated in 1967 with a lensless Fourier transform geometry [6]. Since then, developments in high-speed computing and semiconductor sensors have allowed digital holography to move into more mainstream applications.

A hologram (optical or digital) is essentially a recording of the interference between an unknown object wave and a reference wave over an area defined by the detector aperture [7]. Provided certain conditions are met (see Section 2), the resulting interferogram contains the information required to unambiguously reconstruct the phase and amplitude of the object wave at the detector. It is important to realize that the reference wave is in effect a comparator or datum from which phase and amplitude are measured, and failure to characterize this wave has been shown to result in wavefront aberrations similar to those encountered in conventional imaging systems [8].

Several methods have been considered to measure the reference wave in digital holography. One method is to record a known test object wave. Typically, a plane wave is used [9], since a well-collimated object beam can be generated and verified relatively easily [10]; however, the tilt of the object beam relative to the sensor is difficult to measure and can be considered unknown. In digital holographic microscopy, spherical waves diverging from a pinhole are often used as reference waves [11]. In this case, if the 3D position of the pinhole is not known precisely, both the divergence and the tilt of the test object wave at the detector are unknown. If, however, the detector is at a sufficient distance from the entrance pupil for the Fresnel approximation to be valid [i.e., a recording with low numerical aperture (NA)], it can be shown that unknown tilt and divergence of the test object wave result in a similar tilt and divergence in the estimated reference wave and have no apparent effect on image quality [12]. Furthermore, Qiu et al. note that an error in the estimate of the reference wave divergence can be deduced from consideration of the frequency content of the interferogram [13]. It is observed once again that these methods are applicable only for low NA recordings, when the Fresnel approximation is valid. For the case of an inline (Gabor) geometry at large NA, characterization of the illumination wave (which may also be considered as the reference) has been discussed by Riesenberg and Kanka [14]. In this case, the diverging wave from a pinhole was assumed to be spherical and the pinhole position was found by an iterative method according to the magnification of a known transmissive test object. The authors also show the dramatic effects of positional errors on image aberration in biological images recorded by a large NA system.

In this paper, we consider a lensless compact digital holographic camera that can make recordings over a comparably large NA (NA ≈ 0.5). In this case, the Fresnel approximation is not applicable. Although the reference wave used in the camera originates from a pinhole, reflection from the beam splitter means that a spherical wavefront is not guaranteed, and it is necessary to measure the deviation from this nominal form. For these reasons, we have developed a variation on the method of Hillman et al. [12]. The method is based on the fundamental assumption that there is only one reference wave that will reproduce stigmatic images from any point within the field of view. Accordingly, we make a test hologram of point-like objects and find the reference beam that optimizes imaging performance. Before discussing this method in detail, we begin by describing our compact camera design.

2. COMPACT DIGITAL HOLOGRAPHIC CAMERA

Figure 1 shows a schematic diagram of the compact digital holographic camera. The camera design is required to be compact, since it is to be used as a single unit in a synthetic aperture array of 225 similar cameras placed in close proximity to each other. Furthermore, the cameras must have a large NA (NA ≈ 0.5) to allow the synthetic array to provide high-resolution imaging (of the order of the wavelength) over a large field of view, as shown in Fig. 2. This configuration consists of an array of 225 digital holographic cameras offering a resolution of better than 1 μm over a field of view of 100 mm × 100 mm.

Fig. 1. Compact digital holographic camera.

Fig. 2. Array of 225 compact holographic cameras.

A. Design Considerations

Central to the compact camera is a Sony IMX219 sensor, an inexpensive 8 Mpixel CMOS device [15] that is used extensively in mid-priced mobile phones. The sensor provides an array of 3280 × 2464 pixels on a regular 1.12 μm grid, such that the sensor dimensions are 3.67 mm × 2.76 mm. Although it is usually supplied as a color sensor with integral Bayer and NIR-blocking filters, the IMX219 is available without the NIR-blocking filter and, at wavelengths greater than 780 nm, operates successfully as a monochrome sensor.

The compact camera design is built around a 5 mm cube beam splitter that provides a stable frame on which to mount the sensor and other components. In this design, the reference beam is a highly divergent beam originating from a Thorlabs SM600 monomode fiber. The effective NA of the fiber has been increased from its native value (NA = 0.12) to approximately NA = 0.5 in order to illuminate the entire sensor. This was achieved by in-house processing: coating the fiber end with silver and introducing a pinhole aperture of approximately 0.5 μm diameter at the center of the core region using focused ion beam (FIB) etching. FIB processing of optical fibers has been used in a wide variety of applications, from microlenses [16] to fiber Bragg gratings [17]; the effects of an aperture milled into the end of a tapered fiber have also been studied [18]. The coating of the fiber core creates back reflections that can affect the stability of the laser source, and an optical isolator was included to reduce this effect.

The beam splitter design effectively “folds” the reference beam such that it appears to originate from a point adjacent to a square aperture that defines the entrance pupil, as shown in Fig. 3. To avoid surface reflections, the sensor and reference fiber are cemented to the beam splitter with index-matching epoxy. The cube is subsequently encased in a potting compound, MG Chemicals 832, to provide an index-matched absorbing layer.

Fig. 3. Reference and entrance pupil.

Although the camera is ultimately designed to be used at NIR wavelengths, in the following investigation it was operated at a wavelength of 632.8 nm to facilitate an easier setup. At this wavelength the sensor’s Bayer filter blocks all but the “red” pixels, and the sensor resolution is effectively halved to a 1640 × 1232 array with 2.24 μm pixel spacing. Accordingly, the size of the entrance pupil was chosen to limit the maximum spatial frequency of the object wave at the sensor to below the Nyquist frequency. Noting that the aperture is 5 mm from the sensor and that the wavelength is reduced as the light propagates through the beam splitter (BK7), the entrance pupil dimensions are limited to 0.44 mm × 0.44 mm. The separation of the fiber and the entrance pupil is also an important parameter that influences the spatial frequencies recorded by the sensor and the SNR in the reconstruction process, as discussed in the following section.
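As a rough numerical check of this limit, the short sketch below (Python/NumPy, not part of the original work) compares the carrier fringe frequency produced by an object ray from the far edge of the pupil with the sensor Nyquist frequency; the refractive index of BK7 (≈1.515 at 632.8 nm) and the small-angle geometry of Fig. 3 are assumptions introduced here for illustration only.

```python
import numpy as np

# Quoted values from the text; n_bk7 is an assumed material constant.
wavelength_vac = 632.8e-9          # m, He-Ne wavelength used in the tests
n_bk7 = 1.515                      # approximate refractive index of BK7 at 632.8 nm
wavelength_med = wavelength_vac / n_bk7

pixel_pitch = 2.24e-6              # m, effective pitch with only the "red" pixels active
nyquist_freq = 1.0 / (2.0 * pixel_pitch)      # cycles per metre at the sensor

pupil_width = 0.44e-3              # m, entrance pupil width along one axis
z_pupil = 5e-3                     # m, pupil-to-sensor distance (beam splitter cube)

# Per-axis angle between a reference ray from the fibre at the pupil corner and an
# object ray from the far edge of the pupil (small-angle estimate).
max_angle = pupil_width / z_pupil
max_fringe_freq = np.sin(max_angle) / wavelength_med

print(f"Nyquist frequency:    {nyquist_freq / 1e3:.0f} cycles/mm")
print(f"Max fringe frequency: {max_fringe_freq / 1e3:.0f} cycles/mm")
# With these numbers the carrier stays just below Nyquist (~211 vs ~223 cycles/mm).
```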

B. Recording and Reconstruction

In this section, we briefly describe the reconstruction method used to calculate the optical field that passes through the entrance pupil of the compact camera illustrated in Fig. 1 from a recording of the intensity, $H$, in the plane of the sensor. Denoting the object and reference waves in this plane by $S_A$ and $R_A$, respectively, we have the relationship

$$ H = |S_A + R_A|^2 = |S_A|^2 + |R_A|^2 + S_A^* R_A + S_A R_A^*, \tag{1} $$
where the first and second terms on the right-hand side of Eq. (1) represent the intensity of the object and reference waves, respectively, while the final two terms are the well-known interference or conjugate image terms. By virtue of the off-axis recording geometry originally proposed by Leith and Upatnieks [19], the reference beam provides a carrier frequency and the terms in Eq. (1) can usually be separated in the frequency domain.

It is worth noting that the compact camera illustrated in Fig. 1 has a greater NA but is otherwise similar to the lensless Fourier transform holographic geometry described by Goodman [7]. To simplify our discussion for the moment, we will assume that it exhibits similar properties. As its name suggests, a Fourier transform hologram is one where the complex amplitudes of the fields in the plane of the recording medium are related to those in the object plane (here the plane of the entrance pupil) by a scaled Fourier transform; this is the case for a low NA setup where the Fresnel (paraxial) approximation can be applied faithfully. Under this assumption, Fourier transformation of Eq. (1) yields the autocorrelation of the object and reference fields in the plane of the entrance pupil [terms 1 and 2 in Eq. (1)] and the cross correlations of these fields (terms 3 and 4, respectively). As shown by Goodman [7], in this geometry the terms are completely separated in frequency space if the reference fiber is separated from the edge of the aperture by a distance at least as large as the maximum dimension of the entrance pupil. It is also noted that, for the case of a Fourier transform hologram, Fourier transformation of the recorded interferogram can be considered as back-propagation to the entrance pupil.

Although complete separation of the terms is desirable, it has been noted by others [20] that the first term in this equation can be neglected if the intensity of the reference wave is significantly greater than that of the object wave at every point in the sensor plane. In the geometry shown in Fig. 1, it is then sufficient to bring the reference beam to the corner of the entrance pupil (as in Fig. 3) for the remaining terms to be separated in frequency space. As noted previously, the relatively large NA of our geometry precludes the use of the Fresnel approximation, and the recording is not strictly a Fourier transform hologram; nevertheless, we make use of this idea in our reconstruction process as follows. With knowledge of the complex amplitude of the reference wave, we first subtract the second term in Eq. (1) and divide by the complex conjugate of the reference wave to give a complex field $U_A$ in the plane of the sensor, as follows:

$$ U_A = \frac{H - |R_A|^2}{R_A^*} \approx \frac{R_A}{R_A^*}\,S_A^* + S_A. \tag{2} $$

In a similar manner to the Fourier transform hologram, separation of these terms is best undertaken by back-propagation to give the complex amplitude, $U_B$, in the plane of the entrance pupil. Owing to the high NA of the compact camera, this is more precisely calculated by the convolution (denoted $\otimes$)

$$ U_B = U_A \otimes G = \iint U_A(x',y')\,G(x-x',\,y-y')\,\mathrm{d}x'\,\mathrm{d}y', \tag{3} $$

or, in the frequency domain, by the product

$$ \tilde{U}_B = \tilde{U}_A\,\tilde{G}, \tag{4} $$

where the tilde denotes Fourier transformation and $G(x,y)$ is the free-space Green’s function such that [7]

$$ \tilde{G}(k_x,k_y) = \iint G(x,y)\exp\!\left(-j2\pi(k_x x + k_y y)\right)\mathrm{d}x\,\mathrm{d}y, \tag{5} $$

$$ \tilde{G}(k_x,k_y) = \mathrm{circ}\!\left(\sqrt{(\lambda k_x)^2+(\lambda k_y)^2}\right)\times\exp\!\left(j2\pi k_0 z\sqrt{1-(\lambda k_x)^2-(\lambda k_y)^2}\right), \tag{6} $$

where $k_0$ is the wavenumber such that $k_0 = 1/\lambda$, $\lambda$ is the wavelength in the medium of the beam splitter, and “circ” is defined as follows [7]:

$$ \mathrm{circ}(r) = \begin{cases} 1 & 0 \le r < 1 \\ 0.5 & r = 1 \\ 0 & \text{otherwise.} \end{cases} \tag{7} $$
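To make the propagation step concrete, the following minimal sketch (Python/NumPy, not the authors’ MATLAB implementation) evaluates the transfer function of Eqs. (5)–(7) on a sampled grid and applies Eq. (4); the grid size, sample pitch, and wavelength are left as illustrative parameters.

```python
import numpy as np

def angular_spectrum_kernel(ny, nx, pitch, wavelength, z):
    """Free-space transfer function G~(kx, ky) of Eqs. (5)-(7).

    ny, nx     : grid size in pixels
    pitch      : sample spacing in the sensor plane (m)
    wavelength : wavelength in the propagation medium (m)
    z          : propagation distance (m); a negative z back-propagates
    """
    kx = np.fft.fftfreq(nx, d=pitch)        # spatial frequencies (cycles/m)
    ky = np.fft.fftfreq(ny, d=pitch)
    KX, KY = np.meshgrid(kx, ky)            # shape (ny, nx)
    rho2 = (wavelength * KX) ** 2 + (wavelength * KY) ** 2
    circ = (rho2 < 1.0).astype(float)       # circ function of Eq. (7); boundary value ignored
    kz = np.sqrt(np.maximum(1.0 - rho2, 0.0))
    return circ * np.exp(1j * 2.0 * np.pi * (z / wavelength) * kz)

def propagate(u, pitch, wavelength, z):
    """Propagate a sampled complex field u over a distance z via Eq. (4)."""
    G = angular_spectrum_kernel(u.shape[0], u.shape[1], pitch, wavelength, z)
    return np.fft.ifft2(np.fft.fft2(u) * G)
```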

Returning to the complex amplitude, $U_B$, in the plane of the entrance pupil, we can write

$$ U_B \approx \left(\frac{R_A}{R_A^*}\,S_A^*\right)\otimes G + \left(\frac{R_A^*}{R_A^*}\,S_A\right)\otimes G, \tag{8} $$

$$ U_B \approx \left(\frac{R_A}{R_A^*}\,S_A^*\right)\otimes G + S_B. \tag{9} $$

The second term in Eq. (9) yields the desired complex amplitude in the plane of the entrance pupil, while the first term represents the unwanted conjugate. By analogy with the Fourier transform geometry, the terms are separated if the illumination source is located at the corner of the entrance pupil, as shown in Fig. 3. Finally, we note once again that the reconstruction process relies on precise knowledge of the complex amplitude that characterizes the reference wave. The process of reference wave characterization is now described in detail.
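Combining Eq. (2) with the back-propagation above, a hedged sketch of the overall reconstruction might look as follows; it builds on the previous sketch (reusing propagate() and the NumPy import), and it assumes, as discussed above, that the $|S_A|^2$ term is negligible.

```python
def reconstruct_pupil_field(H, R_A, pitch, wavelength, z_pupil):
    """Sketch of the reconstruction described in Section 2.B.

    H        : recorded hologram intensity (2D real array)
    R_A      : estimated complex reference wave in the sensor plane (2D array)
    z_pupil  : sensor-to-entrance-pupil distance inside the beam splitter (m)

    Returns U_B, the field back-propagated to the entrance pupil plane, Eq. (9);
    the desired and conjugate terms remain spatially separated when the reference
    source sits adjacent to a corner of the pupil.
    """
    U_A = (H - np.abs(R_A) ** 2) / np.conj(R_A)          # Eq. (2), |S_A|^2 neglected
    U_B = propagate(U_A, pitch, wavelength, -z_pupil)    # back-propagation, Eqs. (3)-(4)
    return U_B
```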

3. CHARACTERIZATION OF THE REFERENCE WAVE

To characterize the reference wave, we first decompose it into its constituent parts: an ideal spherical wavefront modulated by amplitude and phase aberration functions given by $A(x,y)$ and $\phi(x,y)$, respectively, such that an estimate of the reference wave, $R_A^e(x,y)$, is given by

$$ R_A^e = A(x,y)\exp\!\left(j\left(2\pi k_0 r + \phi(x,y)\right)\right), \tag{10} $$

where, in terms of the position of the fiber defined by $x_0$, $y_0$, $z_0$, $r = \left[(x-x_0)^2 + (y-y_0)^2 + z_{AB}^2\right]^{1/2}$; it is noted that, since a cube beam splitter is used, $z_0 \approx z_{AB}$, and once again $k_0$ is the wavenumber within the beam splitter material. It is convenient to write the phase aberration, $\phi(x,y)$, in a parametric form. For this work, we have used a decomposition based on Zernike polynomials modified using the process defined by Mahajan [21] to provide an orthogonal basis over a rectangular aperture such that

$$ \phi(x,y) = c_1\phi_1(x,y) + c_2\phi_2(x,y) + \cdots + c_n\phi_n(x,y), \tag{11} $$

where $\phi_1,\ldots,\phi_n$ are the Zernike polynomials defined in Appendix A with coefficients $c_1,\ldots,c_n$. The advantage of using Zernike polynomials in this instance is that they can be identified easily with low-order aberrations (tilt, defocus, astigmatism, etc.). Since $|R_A(x,y)|^2 = A^2(x,y)$, the amplitude of the reference wave, $A(x,y)$, can be calculated from a straightforward recording of the reference beam intensity. To estimate the Zernike polynomial coefficients that define the phase aberration $\phi(x,y)$, together with the position variables $x_0$, $y_0$, and $z_0$, a more complex procedure was undertaken as follows. A set of point-like objects was generated using a diffraction grating illuminated with a plane wave, as shown in Fig. 4.
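A minimal sketch of this reference-wave model, again in Python, is given below; the basis of rectangular-aperture polynomials is passed in as a list of callables (their explicit forms are listed in Appendix A, but are not implemented here), and the unit amplitude is a placeholder for the square root of the measured reference intensity.

```python
import numpy as np

def reference_estimate(x, y, x0, y0, z0, coeffs, basis, k0):
    """Model reference wave R_A^e of Eqs. (10) and (11), sampled on the sensor grid.

    x, y       : sensor-plane coordinate arrays (m)
    x0, y0, z0 : estimated position of the fibre source (m)
    coeffs     : aberration coefficients c_1 ... c_n (rad)
    basis      : list of callables phi_i(x, y), e.g. the rectangular-aperture
                 polynomials of Appendix A (assumed supplied by the caller)
    k0         : wavenumber 1/lambda in the beam-splitter medium (1/m)
    """
    r = np.sqrt((x - x0) ** 2 + (y - y0) ** 2 + z0 ** 2)
    phi = sum(c * p(x, y) for c, p in zip(coeffs, basis))
    A = 1.0  # placeholder: in practice the square root of the measured reference intensity
    return A * np.exp(1j * (2.0 * np.pi * k0 * r + phi))
```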

Fig. 4. Experimental arrangement utilizing a diffraction grating as the object wave.

A Dammann grating (Laser Components DOE-258) was chosen as this type of phase-only diffraction grating offers greater efficiency and uniformity than other types of grating [22]. In this configuration, a grid of 11×11 diffracted orders was obtained. When used in the test configuration of Fig. 4, each diffracted order projects through the entrance pupil onto a region of the sensor defined by the diffraction angle where the field is mixed with the reference wave. This is illustrated in the hologram shown in Fig. 5, where it is noted that the reference beam intensity has been subtracted as described in Section 2.

Fig. 5. Test hologram with the $|R|^2$ term removed.

It can be seen that the zeroth order of the diffraction pattern has a greater intensity than the other orders. Interference with the reference beam results in circular fringes that can be seen within each projection of the entrance pupil (the pupil geometry is shown in Fig. 3). It is noted that not all of the sensor has been illuminated; this is due to the 14° maximum diffraction angle (at 635 nm) of the available Dammann grating.

For a test hologram recorded in this way, the field in the aperture can be written as the sum of plane wave components, such that

$$ S_B(x,y) = W(x,y)\times\sum_{m,n}\exp\!\left(j2\pi\left(mk_g x + nk_g y\right)\right), \tag{12} $$
where $W(x,y)$ is a window function that describes the aperture and $k_g$ is the spatial frequency of the grating. Let us now consider the reconstruction of this field using an estimate of the reference wave, $R_A^e(x,y)$. If it is assumed that the estimate is close to the correct form, such that the terms in the reconstructed field can be separated, then following a similar derivation to that used in Section 2 we can write
$$ U_B \approx (R_\Delta S_A)\otimes G, \tag{13} $$
where $R_\Delta = R_A/R_A^e$. Since the object is made up of plane waves, it is instructive to consider the power spectrum, $|\tilde{U}_B|^2$. Accordingly, denoting the Fourier transform by a tilde, we can write
$$ I = |\tilde{U}_B|^2 \approx \left|\left(\tilde{R}_\Delta \otimes \tilde{S}_A\right)\cdot\tilde{G}\right|^2, \tag{14} $$
$$ I \approx \left|\tilde{R}_\Delta \otimes \tilde{S}_A\right|^2. \tag{15} $$

With reference to Eq. (4), in the frequency domain the object or signal field, $\tilde{S}_A$, is related to that in the aperture by the transfer function, $\tilde{G}^*$, such that $\tilde{S}_A = \tilde{S}_B\tilde{G}^*$ and, consequently,

$$ I = |\tilde{U}_B|^2 \approx \left|\tilde{R}_\Delta \otimes \left(\tilde{S}_B\tilde{G}^*\right)\right|^2. \tag{16} $$

Finally, from Eq. (12), the spectrum $\tilde{S}_B$ can be calculated and we have

$$ I \approx \left|\tilde{R}_\Delta \otimes \left(\tilde{G}^*\times\sum_{m,n}\tilde{W}_{m,n}\right)\right|^2, \tag{17} $$

where $\tilde{W}_{m,n} = \mathrm{sinc}(k_x/w_x)\,\mathrm{sinc}(k_y/w_y)\otimes\delta(k_x - mk_g,\,k_y - nk_g)$ and $w_x$, $w_y$ are the half-widths of the entrance pupil in the $x$ and $y$ directions, respectively. Equation (17) describes the power spectrum of the field reconstructed in the entrance pupil of the holographic camera. If the estimate of the reference beam is perfect, $R_\Delta = R_A/R_A^e = 1$, $\tilde{R}_\Delta = \delta(k_x,k_y)$, and since $|\tilde{G}^*|^2 = 1$, the power spectrum reduces to

$$ I \approx \left|\sum_{m,n}\tilde{W}_{m,n}\right|^2. \tag{18} $$

This can be recognized as an image of the grating orders that is (diffraction) limited by the entrance pupil. Returning to the more general result defined by Eq. (17), it is noted that phase aberrations in the reference wave result in a finite kernel, $\tilde{R}_\Delta$, and hence aberration in the images of the diffracted orders. In this case, the typically large phase variations in the transfer function, $\tilde{G}^*$, result in aberrations that vary across the angular field. In other words, the quality of the image of a diffracted order depends on the estimate of the reference wave in a region defined by the projection of the entrance pupil onto the sensor by that particular order (see Section 2).
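For illustration, the sketch below constructs the ideal order spectrum of Eq. (18) as a grid of sinc envelopes centred on the grating orders; the particular sinc scaling follows the standard Fourier transform of a rectangular window of half-widths $w_x$, $w_y$ and is an assumption of this sketch rather than a result quoted above.

```python
import numpy as np

def ideal_order_spectrum(ny, nx, dk, k_g, w_x, w_y, orders=range(-5, 6)):
    """Ideal power spectrum of Eq. (18): sinc^2 peaks at the grating orders.

    ny, nx   : size of the frequency-domain grid
    dk       : frequency-domain sample spacing (cycles/m)
    k_g      : grating spatial frequency (cycles/m)
    w_x, w_y : half-widths of the entrance pupil (m)
    orders   : diffraction orders to include (11 x 11 grid by default)
    """
    kx = (np.arange(nx) - nx // 2) * dk
    ky = (np.arange(ny) - ny // 2) * dk
    KX, KY = np.meshgrid(kx, ky)
    W = np.zeros_like(KX)
    for m in orders:
        for n in orders:
            # each order contributes a sinc envelope centred at (m*k_g, n*k_g)
            W += np.sinc((KX - m * k_g) * 2 * w_x) * np.sinc((KY - n * k_g) * 2 * w_y)
    return np.abs(W) ** 2
```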

Using Eq. (17), it is possible to estimate the parameters that define the reference wave by optimizing the quality of the image. There are many suitable measures of image quality. For this work a merit function, Q, was used such that

$$ Q = \sum_{m,n}\max\left(I_{m,n}\right), \tag{19} $$
where $\max(I_{m,n})$ is the maximum intensity of the peak (2D sinc function) corresponding to order $m,n$. Using this merit function, the parameters were optimized using the MATLAB nonlinear multivariable minimization tool fminsearch [23].
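A hedged sketch of this optimization step is shown below, with SciPy’s Nelder–Mead simplex routine standing in for MATLAB’s fminsearch (both implement the same downhill simplex algorithm); the peak search windows, the starting point, and the helper functions reference_estimate() and reconstruct_pupil_field() from the earlier sketches are illustrative assumptions rather than the authors’ code.

```python
import numpy as np
from scipy.optimize import minimize

def merit_Q(params, H, x, y, basis, k0, pitch, wavelength, z_pupil, peak_ij):
    """Negative of the merit function Q of Eq. (19), for use with a minimizer.

    params  : [x0, y0, z0, c1, ..., cn] -- fibre position and aberration coefficients
    peak_ij : list of (row, col) pixel indices near each expected grating order in the
              (fftshifted) power spectrum, assumed known from the nominal geometry and
              assumed to lie away from the array edges
    """
    x0, y0, z0, *coeffs = params
    R_est = reference_estimate(x, y, x0, y0, z0, coeffs, basis, k0)
    U_B = reconstruct_pupil_field(H, R_est, pitch, wavelength, z_pupil)
    I = np.abs(np.fft.fftshift(np.fft.fft2(U_B))) ** 2    # power spectrum, Eq. (14)
    half = 5                                              # half-width of search window (pixels)
    Q = sum(I[i - half:i + half, j - half:j + half].max() for i, j in peak_ij)
    return -Q                                             # minimise -Q to maximise Q

# Usage sketch (all arrays assumed to be prepared as above):
# p0 = [0.22e-3, 0.22e-3, 5e-3] + [0.0] * len(basis)      # nominal starting point
# res = minimize(merit_Q, p0,
#                args=(H, x, y, basis, k0, pitch, wavelength, z_pupil, peak_ij),
#                method='Nelder-Mead')
```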

4. RESULTS

Figure 6 shows the power spectrum $I = |\tilde{U}_B|^2$ corresponding to an initial estimate of the fiber position based on the nominal camera design, $x_0 = 0.22$ mm, $y_0 = 0.22$ mm, $z_0 = 5$ mm, where (0, 0, 5 mm) is the center of the array.

Fig. 6. Reconstruction of the Dammann grating with initial reference beam parameters.

Using the nonlinear optimization described above, the estimate of the fiber position was improved to $x_0 = 0.22$ mm, $y_0 = 0.33$ mm, $z_0 = 5.03$ mm. The estimate was further improved using the rectangular-aperture Zernike polynomials 4–15. Figure 8 shows the resulting phase aberration $\phi(x,y)$, and the optimized reconstruction of the Dammann grating is shown in Fig. 7.

Fig. 7. Reconstruction of the Dammann grating after reference beam parameters have been optimized.

Fig. 8. Phase error (rad) when compared to a spherical wave.

Finally, to demonstrate that the compact camera can produce diffraction-limited images across its whole field of view, a fiber point source was positioned at the center and corners of the camera’s field of view using a linear translation stage. The point source was located in a plane 50 mm from the camera’s entrance pupil and was translated 40 mm in the x direction and 30 mm in the y direction. Holograms were recorded at each of the extremes of the scan region, with another taken at the center of the scan. These holograms were then reconstructed using the estimate of the reference wave. A composite image was obtained by summing the individual reconstructions; the result is shown in Fig. 9.

Fig. 9. Reconstruction of multiple point sources to compare the point spread function over the entire field of view of the camera.

5. DISCUSSION AND CONCLUSIONS

In this paper, we have shown that it is possible to retrieve the reference wave parameters for a high NA, large field of view compact digital holographic camera. This was achieved by making a hologram of a Dammann grating and subsequently calculating the reference wave that best produced stigmatic images across the field of view using a nonlinear, multivariable minimization tool. According to the design of the compact digital holographic camera used in this study, the reference wave was characterized in terms of the physical position of the fiber-optic source used to generate the reference wave together with phase aberrations that were defined in terms of Zernike polynomials, modified for use with a rectangular aperture.

The results presented in this paper show that, as required, the proposed method produces diffraction-limited images of point-like objects placed at the center and extremes of the camera’s field of view. It is observed that although the diffracted orders of the chosen Dammann grating do not cover the full NA of the holographic camera, the image quality (at the extremes of the image shown in Fig. 9) is remarkably good. It is noted, however, that the images shown in Figs. 6 and 7 show signs of distortion, as the diffracted orders are not equally spaced on a rectangular grid. Evaluation of the positions of the diffracted orders shows that the distortion largely corresponds to third-order Seidel aberration (approximately 12% at the extremes of the field of view). Although distortion is easily compensated and is not critical in our application, it is worth discussing this finding in a little more detail.

Owing to the design of our compact digital holographic camera, the origin of the distortion in the final image is third- and higher-order spherical aberration of the reference beam, caused, for example, by propagation through a layer of material within the camera whose refractive index differs substantially from that of the beam splitter. Although spherical aberration is accounted for by the Zernike polynomial terms used to describe the reference beam, it is evident that the merit function we have chosen is not sensitive to distortion. When a Dammann grating is used to define a set of object plane waves on a regular grid in frequency space, additional information concerning the relative positions of the peaks is available and, if required, this can be incorporated into the merit function.

APPENDIX A

Mahajan [21] defines a number of sets of orthonormal polynomials for use on different pupil shapes. For this set of polynomials, the half-widths of the pupil along the x and y axes are defined as $a$ and $\sqrt{1-a^2}$, respectively. The aspect ratio of the pupil is then $\sqrt{1-a^2}/a$. The following equations describe the polynomials used in Eq. (11):

$$\begin{aligned}
\phi_1 &= 1,\\
\phi_2 &= (\sqrt{3}/a)\,x,\\
\phi_3 &= \left[\sqrt{3}/\sqrt{1-a^2}\right] y,\\
\phi_4 &= \left[\sqrt{5}/\!\left(2\sqrt{1-2a^2+2a^4}\right)\right](3\rho^2-1),\\
\phi_5 &= \left[3/\!\left(a\sqrt{1-a^2}\right)\right] xy,\\
\phi_6 &= \left\{\sqrt{5}/\!\left[2a^2(1-a^2)\sqrt{1-2a^2+2a^4}\right]\right\}\left[3(1-a^2)^2x^2 - 3a^4y^2 - a^2(1-3a^2+2a^4)\right],\\
\phi_7 &= \left[\sqrt{21}/\!\left(2\sqrt{27-54a^2+62a^4}\right)\right](15\rho^2-9+4a^2)\,y,\\
\phi_8 &= \left[\sqrt{21}/\!\left(2a\sqrt{35-70a^2+62a^4}\right)\right](15\rho^2-5+4a^2)\,x,\\
\phi_9 &= \left\{\sqrt{5(27-54a^2+62a^4)/(1-a^2)}\Big/\!\left[2a^2(27-81a^2+116a^4+62a^6)\right]\right\}\left[27(1-a^2)^2x^2-35a^4y^2+a^2(9-39a^2+30a^4)\right]y,\\
\phi_{10} &= \left\{\sqrt{5}/\!\left[2a^3(1-a^2)\sqrt{35-70a^2+62a^4}\right]\right\}\left[35(1-a^2)^2x^2-27a^4y^2-a^2(21-51a^2+30a^4)\right]x,\\
\phi_{11} &= [1/(8\mu)]\left[315\rho^4-30(7+2a^2)x^2-30(9-2a^2)y^2+27+16a^2-16a^4\right],\\
\phi_{12} &= \left[3\mu/(8a^2\nu\eta)\right]\big[35(1-a^2)^2(18-36a^2+67a^4)x^4+630(1-2a^2+2a^4)x^2y^2-35a^4(49-98a^2+67a^4)y^4\\
&\quad -30(1-a^2)(7-10a^2-12a^4+75a^6-67a^8)x^2-30a^2(7-77a^2+189a^4-193a^6+67a^8)y^2\\
&\quad +a^2(1-a^2)(1-2a^2)(70-233a^2+233a^4)\big],\\
\phi_{13} &= \left[\sqrt{21}/\!\left(2a\sqrt{1-3a^2+4a^4-2a^6}\right)\right](5\rho^2-3)\,xy,\\
\phi_{14} &= 16\tau\big[735(1-a^2)^4x^4-540a^4(1-a^2)^2x^2y^2+735a^8y^4-90a^2(1-a^2)^3(7-9a^2)x^2\\
&\quad +90a^6(1-a^2)(2-9a^2)y^2+3a^4(1-a^2)^2(21-62a^2+62a^4)\big],\\
\phi_{15} &= \left\{\sqrt{21}/\!\left[2a^3(1-a^2)\sqrt{1-3a^2+4a^4-2a^6}\right]\right\}\left[5(1-a^2)^2x^2-5a^4y^2-a^2(3-9a^2+6a^4)\right]xy.
\end{aligned}$$

The constants used above are defined as follows:

$$\begin{aligned}
\mu &= \left(9-36a^2+103a^4-134a^6+67a^8\right)^{1/2},\\
\nu &= \left(49-196a^2+330a^4-268a^6+134a^8\right)^{1/2},\\
\tau &= 1/\!\left[128\,\nu\,a^4(1-a^2)^2\right],\\
\eta &= 9-45a^2+139a^4-237a^6+210a^8-67a^{10}.
\end{aligned}$$

Funding

Engineering and Physical Sciences Research Council (EPSRC) (EP/M020940/1); Horizon 2020 Framework Programme (H2020) (141ND09 MetHPM).

Acknowledgment

We are grateful to Taylor Hobson Ltd., EPSRC, and the EU for their support of this and closely related work.

REFERENCES

1. M. K. Kim, “Applications of digital holography in biomedical microscopy,” J. Opt. Soc. Korea 14, 77–89 (2010). [CrossRef]  

2. F. Zhang, J. D. R. Valera, I. Yamaguchi, M. Yokota, and G. Mills, “Vibration analysis by phase shifting digital holography,” Opt. Rev. 11, 297–299 (2004). [CrossRef]  

3. S. Seebacher, W. Osten, and W. P. O. Jueptner, “Measuring shape and deformation of small objects using digital holography,” Proc. SPIE 3479, 104–115 (1998). [CrossRef]  

4. X. Sang, “Applications of digital holography to measurements and optical characterization,” Opt. Eng. 50, 91311 (2011). [CrossRef]  

5. T. H. Jeong, “Basic principles and applications of holography,” in Fundamentals of Photonics (2008), pp. 381–417.

6. J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11, 77–79 (1967). [CrossRef]  

7. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).

8. R. Jozwicki and S. Pasko, “Influence of reference beam aberrations in digital holography on the image quality,” in Optical Measurement Systems for Industrial Inspection III (SPIE, 2003), Vol. 5144, pp. 132–137.

9. J. Hahn, D. L. Marks, K. Choi, S. Lim, and D. J. Brady, “Thin holographic camera with integrated reference distribution,” Appl. Opt. 50, 4848–4854 (2011). [CrossRef]  

10. D. Malacara, Optical Shop Testing, Wiley Series in Pure and Applied Optics (Wiley, 2007).

11. E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt. 38, 6994–7001 (1999). [CrossRef]  

12. T. R. Hillman, T. Gutzler, S. A. Alexandrov, and D. D. Sampson, “High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy,” Opt. Express 17, 7873–7892 (2009). [CrossRef]  

13. P. Qiu, Z. Mei, and T. Pang, “Extracting the parameters of digital reference wave from a single off-axis digital hologram,” J. Mod. Opt. 62, 816–821 (2016). [CrossRef]  

14. R. Riesenberg and M. Kanka, “Self-calibrating lensless inline-holographic microscopy by a sample holder with reference structures,” Opt. Lett. 39, 5236–5239 (2014). [CrossRef]  

15. Sony, “IMX219PQ,” http://www.sony-semicon.co.jp/products_en/new_pro/april_2014/imx219_e.html.

16. H. Melkonyan, K. Al Qubaisi, A. Khilo, and M. Dahlem, “Optical fibre lens with parabolic effective index profile fabricated using focused ion beam,” in CLEO: Science and Innovations (2016), paper SM1P.3.

17. J. Huang, A. Alqahtani, J. Viegas, and M. S. Dahlem, “Fabrication of optical fibre gratings through focused ion beam techniques for sensing applications,” in Photonics Global Conference (PGC) (2012), Vol. 3, pp. 1–4.

18. C. Obermüller and K. Karrai, “Far field characterization of diffracting circular apertures,” Appl. Phys. Lett. 67, 3408–3410 (1995). [CrossRef]  

19. E. N. Leith and J. Upatnieks, “Reconstructed wavefronts and communication theory,” J. Opt. Soc. Am. 52, 1123 (1962). [CrossRef]  

20. T. Fricke-Begemann and J. Burke, “Speckle interferometry: three-dimensional deformation field measurement with a single interferogram,” Appl. Opt. 40, 5011–5022 (2001). [CrossRef]  

21. V. N. Mahajan and G. Dai, “Orthonormal polynomials in wavefront analysis: analytical solution,” J. Opt. Soc. Am. A 24, 2994–3016 (2007). [CrossRef]  

22. J. Jahns, M. M. Downs, M. E. Prise, N. Streibl, and S. J. Walker, “Dammann gratings for laser beam shaping,” Opt. Eng. 28, 1267–1275 (1989). [CrossRef]  

23. MathWorks, “fminsearch,” https://uk.mathworks.com/help/matlab/ref/fminsearch.html.
