
Three-dimensional imaging of trapped cold atoms with a light field microscope


Abstract

This research demonstrates three-dimensional imaging of trapped atoms utilizing light field imaging. Such a system is of interest for the development of atom interferometer accelerometers on dynamic platforms, where strictly defined focal planes may be impractical. A light field microscope was constructed using a Lytro Development Kit micro lens array and sensor and used to image fluorescing rubidium atoms in a magneto optical trap. The three-dimensional (3D) volume of the atoms is reconstructed using a modeled point spread function (PSF), taking into consideration that the low magnification (1.25) of the system changes typical assumptions in the optics model for the PSF. The 3D reconstruction is analyzed against a standard off-axis fluorescence image. Optical axis separation between two atom clouds is measured to 100 μm accuracy in a 3 mm deep volume, with 16 μm in-focus standard resolution over a 3.9 mm by 3.9 mm field of view. Optical axis spreading is observed in the reconstruction and discussed. The 3D information can be used to determine properties of the atom cloud with a single camera and a single image, and the technique can be applied anywhere 3D information is needed but optical access is limited.

1. INTRODUCTION

In experimental science, observation of the phenomenon of interest is critical to the success of the research effort. Modern scientific instruments and experiments are of increasing complexity, and the available observation solid angle can be a constraint, especially for experiments that require environmental isolation, such as those in a vacuum. Laser-cooling experiments require environmental isolation in a vacuum as well as optical access for multiple large laser beams. In this paper, we demonstrate an imaging technique utilizing a light field microscope that greatly increases the information available to scientists when the observation solid angle is constrained. Furthermore, there is an effort to make laser-cooled systems used for inertial sensing and timing small enough to be portable, which leads to architectures where laser optics are located near the atomic or molecular sample. If the observer wants higher-resolution information about the sample volume, one must either install two orthogonal cameras or scan a focus plane through the sample, or both. The example motivating our work is an atom interferometer accelerometer on a dynamic platform. In this case, the atoms could move in the volume of interest relative to a pre-set focus plane, and the atomic distribution at measurement may have volumetric structure that must be observed to produce a useful accelerometer signal. For example, in both [1] and [2], useful interference patterns were observable only when the pattern was in the focus plane of the camera; the whole experiment therefore had to be designed around that constraint. The ability to observe similar information along the imaging axis, in addition to the information in the focus plane, would help extend atom interferometry to more complicated geometries. Furthermore, volumetric imaging could impact the observation of three-dimensional (3D) arrays of atoms or ions in some quantum computing architectures, and thus the scalability of the qubits in those systems, by allowing access to multiple planes of the array simultaneously [3]. In this paper, we demonstrate the capability to reconstruct volumetric information about cold atom clouds and describe the design considerations and accuracies important to understanding the technique, which may make this imaging approach useful for general scientific experimentation.

The ability to image a volume with a single-objective camera is found in light field imaging systems. The concept of a light field camera can be traced back to as early as 1908 [4], but a light field camera specifically for retrieving depth information from an image was proposed by Adelson and Wang [5] in 1992. Ng applied the technology to refocusing images in post-processing [6], and the concept has since been developed into a commercial product. A light field camera replaces the sensor plane with a micro lens array (MLA) and places the sensor one micro lens focal length behind the array. This arrangement records a sampling of the light field at the plane of the MLA. Each micro lens samples the two-dimensional (2D) distribution of the light, and the pixels behind each micro lens sample the angles of the rays hitting that micro lens. This added information allows depth information to be extracted from light field images.

This use of micro lenses was applied to microscopy by a group at Stanford [7], dubbed light field microscopy. Using the light field microscope (LFM), they developed methods for recovering a 3D volume [7–9]. Their technique recovers the structure in a volume by removing out-of-focus light via deconvolution with a point spread function (PSF). This allows 3D structure to be extracted with a LFM, as opposed to refocusing or obtaining surface 3D information, as done with light field cameras. The LFM deconvolution technique is used in this work.

The LFM was first applied to dilute atomic clouds in [10], which showed 3D structure in a cross pattern of fluorescing rubidium created by laser beams crossing in a dilute rubidium gas. This research takes that a step further, imaging fluorescing rubidium held in a magneto optical trap (MOT). In this paper, a Lytro Development Kit MLA and sensor are used in combination with a low-magnification microscope system to image a 3D cloud of cold rubidium atoms in a MOT. The basics of the LFM used are presented first, then the theory behind the deconvolution and the creation of the PSF. The experimental setup is briefly described, and the images and reconstructed 3D volumes are shown and compared to a reference image.

2. LIGHT FIELD MICROSCOPE

The properties of a LFM system are discussed in detail in [7–9]. Here, an overview of the optics of a LFM is given, followed by the details most relevant to the optical system developed in this work.

A. Optics Overview

The LFM for this experiment is composed of an objective, a tube lens, a MLA, and a sensor. The objective and tube lens work as a 4f optical system, which would typically image the object (specimen) to an eyepiece or an imaging sensor array. In a LFM, the MLA is placed at the image plane in place of the sensor array. The sensor array is then placed one micro lens focal length behind the MLA; a diagram of the optical system is shown in Fig. 1.


Fig. 1. Diagram of the optical layout of the light field microscope made for this experiment. The 4f optical system is simply the microscope objective and tube lens with focal lengths of fo and ftl, respectively. The system images onto the micro lens array, and the camera sensor is one micro lens focal length behind the micro lens array fml.


The MLA can be thought of as the new sensor plane. Each micro lens acts as a pixel, sampling the light field in space, (s,t), while the several sensor pixels behind each micro lens sample the angles of the rays that pass through it, (u,v). The result is that the sensor, once calibrated based on the micro lens center locations, collects a 4D block of data, tiled across the 2D sensor. The 4D data block has coordinates (s,t,u,v) representing the spatial location and the angle at which a ray hit the sensor. Figure 2 shows the tiling of the 4D data over the sensor.


Fig. 2. The sensor is a standard array of pixels represented here as a grid. The micro lenses (thick circles) are tiled across the surface in front of the sensor (as shown in Fig. 1) in a hexagon pattern and determine the (s,t) spatial positions. The pixels behind a given micro lens determine the (u,v) angular positions and are repeated for each micro lens.


Two types of images, referred to later, can easily be extracted from this data. The first is a standard image, created by summing over u and v, the pixels behind each micro lens. This gives the total intensity of the light that hit a micro lens (s0,t0), just as a standard pixel would in an image. The second is a 2D image created by choosing a single (u0,v0) coordinate for each (s,t) location. This is a perspective view based on the ray angle represented by that (u0,v0) coordinate. It is also called a sub-aperture view, because a given pixel behind a micro lens, when projected to the exit pupil of the objective, forms an aperture that is a fraction (set by the number of pixels behind the micro lens) of the full aperture of the system.
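As a concrete sketch, both views can be computed from the calibrated 4D data block with a few lines of NumPy. The array shape, the random placeholder data, and the choice of the central angular sample are illustrative assumptions, not the output of the Lytro calibration:

```python
import numpy as np

# Hypothetical calibrated 4D light field block L[s, t, u, v]:
# 200 x 200 micro lenses with 14 x 14 angular samples behind each.
rng = np.random.default_rng(0)
L = rng.random((200, 200, 14, 14))

# Standard image: sum the angular samples (u, v) behind each micro lens,
# giving the total intensity collected at each spatial location (s, t).
standard_image = L.sum(axis=(2, 3))

# Sub-aperture (perspective) view: keep a single angular sample (u0, v0)
# for every (s, t) location, here the central one.
u0, v0 = 7, 7
sub_aperture_view = L[:, :, u0, v0]
```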

In application, a physical 3D volume is being imaged, and standard (x,y,z) coordinates will be used to describe the object-side volume imaged by the LFM. The microscope system (including the tube lens) has an object-side focal plane and a corresponding image plane. Ultimately, the z axis of the volume to be reconstructed from the light field image is centered on the focal plane. The geometry is shown in Fig. 3, where the objective and tube lens are treated as a single lens, (x,y,0) is the focal plane, and (x3,y3,0) is the image plane.


Fig. 3. Geometric relation between the different coordinates used in calculating the point spread function. This represents only the microscope objective and tube lens portions of the optical system shown in Fig. 1. Here the coordinates are given, where the (x,y) plane is the object plane, and the (x3,y3) plane is the image plane, which corresponds to the MLA in Fig. 1.


B. Experimental Details of Optics

The optics system used here consisted of a Mitutoyo Plan Apo NIR Infinity Corrected Objective with a numerical aperture (NA) of 0.26, a focal length of fo = 20.0 mm, and a working distance of 30.5 mm, together with a tube lens with a focal length of ftl = 25 mm, giving a magnification of M = ftl/fo = 1.25. The micro lenses are f/2 lenses, 20 μm in diameter, with approximately 14 pixels behind each micro lens. These values are summarized in Table 1.

Table 1. Optical System Values

Objective NA: 0.26
Objective focal length, fo: 20.0 mm
Objective working distance: 30.5 mm
Tube lens focal length, ftl: 25 mm
Magnification, M: 1.25
Micro lens f-number: f/2
Micro lens diameter, Dml: 20 μm
Pixels per micro lens: ~14

The imaging capabilities of this system depend primarily on the NA and the magnification. Using the Sparrow limit, Robj, the number of resolvable spots behind a micro lens (along one dimension), Nu, can be determined from

$$R_{\mathrm{obj}} = \frac{0.47\,\lambda}{\mathrm{NA}}\,M, \tag{1}$$
$$N_u = \frac{D_{\mathrm{ml}}}{R_{\mathrm{obj}}}, \tag{2}$$
where Dml is the diameter of a micro lens, and λ is the wavelength of the light [7]. This gives 11 resolvable spots for the system used here, ensuring the optical system is not limited by pixel resolution.

The ideal depth of the reconstructed volume is given by the full depth of field (FDOF), which is the depth of field for a sub-aperture image, given by

$$\mathrm{FDOF} \approx \frac{(2 + N_u^2)\,\lambda\,n}{2\,\mathrm{NA}^2}, \tag{3}$$
where n is the index of refraction of the medium the object is in [7], which is typically air but can be different if water- or oil-immersion microscope techniques are used. This gives a FDOF of 745 μm for this system. The reconstruction is not limited to this region, but the lateral (x,y) resolution falls as the distance from the focal plane increases. The depth-dependent band limit (in cycles/m), derived in [8], is given by
$$\nu(z) = \frac{D_{\mathrm{ml}}}{0.94\,\lambda\,M\,|z|}, \tag{4}$$
for values of $|z| \geq D_{\mathrm{ml}}^2/(2M^2\lambda)$. The band limit corresponds to a resolution of about 70 μm at the greatest extent of the volume deconvolved here.
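A short worked example ties Eqs. (1)–(4) to the quoted numbers. The wavelength is taken as 780 nm for rubidium fluorescence, and the 20 μm micro lens diameter from Table 1 is used; small differences from the quoted values come from rounding:

```python
import numpy as np

lam = 780e-9    # rubidium fluorescence wavelength (m)
NA = 0.26       # numerical aperture
M = 1.25        # magnification
n = 1.0         # index of refraction (air)
D_ml = 20e-6    # micro lens diameter (m)

R_obj = 0.47 * lam * M / NA                  # Eq. (1): Sparrow limit at the MLA
N_u = D_ml / R_obj                           # Eq. (2): ~11 resolvable spots
FDOF = (2 + N_u**2) * lam * n / (2 * NA**2)  # Eq. (3): ~0.75 mm, near the quoted 745 um

def band_limit(z):
    """Eq. (4): depth-dependent lateral band limit (cycles/m)."""
    return D_ml / (0.94 * lam * M * abs(z))

# Lateral resolution ~1/nu at the edge of the 3 mm deep volume (~70 um):
print(N_u, FDOF * 1e6, 1 / band_limit(1.5e-3) * 1e6)
```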

A notable difference between this system and the LFM systems of [7,8] is the magnification, which is very small here at only 1.25. The system takes advantage of the high-resolution light field sensor from a Lytro Development Kit; the requirement of matching the f/# of the microscope optics to that of this sensor's micro lenses limits the magnification possible. A low magnification was desired in this system because the atom clouds were on the order of a millimeter in size. A low-magnification system was also examined in [11]. That work used a first-generation Lytro camera, including its objective lens system, in combination with a microscope objective, operating in what they call the inverse regime, where a virtual image is viewed by the Lytro objective. Here, because the stock camera objective did not need to be retained, the magnification could be adjusted directly through the tube lens of the microscope system. This reduced the complexity of the system, though the emphasis of [11] was direct use of an off-the-shelf Lytro camera. The low magnification also impacts the modeled PSF discussed in the next section.

3. POINT SPREAD FUNCTION MODEL

In order to extract the 3D distribution of the light, a deconvolution is used to reverse the effects of the PSF of the LFM system. The Richardson–Lucy deconvolution method for a LFM is developed in [8]. The basic idea is to use a model to produce an accurate PSF to describe the transfer of light from the 3D volume being imaged to the 2D sensor array. This can be represented as an operator, H, operating on a vector, g, which represents points in a volume being imaged. The resulting vector, f, represents the pixels on the sensor:

$$\mathbf{f} = \mathbf{H}\,\mathbf{g}. \tag{5}$$
A Richardson–Lucy deconvolution works to invert this equation and give an accurate volume, g, given f and H.

H is determined by making a model of the PSF, based on either a fast Fourier transform (FFT) method for propagating the light through the system, described in detail in [9], or on the analytical PSFs developed in Advanced Optical Imaging Theory by Min Gu [12], as discussed in [8]. This analytical model is used here and described in the following.

Considering the objective and tube lens as a single lens, Fig. 3 represents the geometry of the objective. Gu gives the scalar wave U3 in image space for a point source at the origin in object space, after several coordinate transforms. The apodization function of the lens, P(x2,y2), is assumed to be radially symmetric over an aperture of radius a and is written in terms of the normalized radius $\rho = \sqrt{x_2^2 + y_2^2}/a$, which ranges from 0 to 1. The image space coordinates (x3,y3,z3) are converted to (v,u), based on the NA and magnification of the system, as follows:

$$\mathrm{NA} = n\sin(\alpha_o) = M\,n\sin(\alpha_i), \tag{6}$$
$$v \equiv \frac{2\pi}{\lambda}\,r_3\sin(\alpha_i), \tag{7}$$
$$u \equiv \frac{8\pi}{\lambda}\,z_3\sin^2\!\left(\frac{\alpha_i}{2}\right), \tag{8}$$
where $r_3 = \sqrt{x_3^2 + y_3^2}$, and αo and αi are the acceptance angles of the objective in object and image space, respectively. Finally, U3 is given by
$$U_3(v,u) = \frac{M}{d_1^2\,\lambda^2}\,\exp\!\left(\frac{iu}{4\sin^2(\alpha_i/2)}\right)\exp\!\left[\frac{i v^2}{4N}\left(1 + \frac{1}{M}\right)\right]\int_0^1 P(\rho)\,\exp\!\left(\frac{iu\,\rho^2}{2}\right)J_0(\rho v)\,2\pi\rho\,\mathrm{d}\rho, \tag{9}$$
where d1 = fo + z, and J0 is the zeroth-order Bessel function of the first kind. The second exponential phase term in Eq. (9) is typically dropped but is important in low-NA and low-magnification situations.
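The integral in Eq. (9) is straightforward to evaluate numerically. The sketch below computes only the diffraction integral for a clear aperture, P(ρ) = 1, which is an assumption for illustration; the leading phase factors are omitted because they drop out of the bare intensity |U3|², although they must be retained when the field is subsequently propagated through the MLA:

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def psf_integral(v, u):
    """Diffraction integral of Eq. (9) for P(rho) = 1 (clear aperture).
    Split into real and imaginary parts since quad integrates real functions."""
    re, _ = quad(lambda r: np.cos(u * r**2 / 2) * j0(r * v) * 2 * np.pi * r, 0, 1)
    im, _ = quad(lambda r: np.sin(u * r**2 / 2) * j0(r * v) * 2 * np.pi * r, 0, 1)
    return re + 1j * im

# In-focus (u = 0) lateral profile, which reduces to the Airy pattern 2*pi*J1(v)/v.
vs = np.linspace(1e-6, 10, 100)
profile = np.array([abs(psf_integral(v, 0.0))**2 for v in vs])
```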

U3 is the 3D scalar wave in image space, and the 2D scalar wave at the MLA is U3(v,0). The wave then passes through the MLA and on to the sensor. The MLA is modeled by applying the phase of a single micro lens tiled across the full wave; the Fresnel transfer function is then used to propagate the wave to the sensor, where the intensity is sampled (a sketch of this step is given below). This produces the PSF for a point at the origin of the object-space volume. To create the full H matrix, the PSF must be calculated for every element of g, each representing a voxel of some finite size in the volume being imaged. Because U3 is space-invariant, it can be reused to determine the full matrix H, mapping every point in the volume to the pixels on the sensor.
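The MLA-to-sensor step can be sketched as follows. Square lenslet tiling is used for simplicity (the Lytro array is hexagonal), and the pitch and focal length assume the f/2, 20 μm lenslets of Table 1 (f_ml = 40 μm). This is an illustration of the propagation described above, not the authors' code:

```python
import numpy as np

def mla_to_sensor(U, dx, lam=780e-9, pitch=20e-6, f_ml=40e-6):
    """Apply a tiled micro lens phase to the field U sampled at the MLA,
    then Fresnel-propagate one lenslet focal length f_ml to the sensor."""
    ny, nx = U.shape
    y, x = np.meshgrid((np.arange(ny) - ny / 2) * dx,
                       (np.arange(nx) - nx / 2) * dx, indexing="ij")

    # Thin-lens phase, wrapped to the center of the nearest lenslet
    # (square tiling for simplicity; the real MLA is hexagonal).
    xl = (x + pitch / 2) % pitch - pitch / 2
    yl = (y + pitch / 2) % pitch - pitch / 2
    U = U * np.exp(-1j * np.pi / (lam * f_ml) * (xl**2 + yl**2))

    # Fresnel transfer function propagation to the sensor plane.
    fy, fx = np.meshgrid(np.fft.fftfreq(ny, dx),
                         np.fft.fftfreq(nx, dx), indexing="ij")
    H = np.exp(-1j * np.pi * lam * f_ml * (fx**2 + fy**2))
    U_sensor = np.fft.ifft2(np.fft.fft2(U) * H)
    return np.abs(U_sensor)**2  # intensity sampled by the pixels
```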

The Richardson–Lucy deconvolution can then be performed to solve Eq. (5) by the iterative process
$$\mathbf{g}^{(k+1)} = \mathrm{diag}(\mathbf{H}^T\mathbf{1})^{-1}\,\mathrm{diag}\!\left(\mathbf{H}^T\,\mathrm{diag}(\mathbf{H}\mathbf{g}^{(k)} + \mathbf{b})^{-1}\,\mathbf{f}\right)\mathbf{g}^{(k)}, \tag{10}$$
where b accounts for noise, and the diag(·) operator returns a matrix with the argument on the diagonal and zeros elsewhere [8]. Each iteration forward projects the current volume estimate through the calculated PSF, compares the result to the measured data, f, and back projects the ratio (the application of H^T) into the volume. As g^(k) approaches the correct value, diag(Hg^(k) + b)^{-1}f approaches a vector of ones, and g^(k+1) converges.
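With H assembled, Eq. (10) is only a few lines. The dense-matrix version below is a minimal sketch; a practical implementation applies H and H^T as convolution operators rather than storing them explicitly:

```python
import numpy as np

def richardson_lucy(f, H, b, n_iter=15):
    """Richardson-Lucy iteration of Eq. (10); H maps voxels to pixels."""
    g = np.ones(H.shape[1])           # nonnegative starting volume
    norm = H.T @ np.ones(H.shape[0])  # H^T 1, the diagonal normalizer
    for _ in range(n_iter):
        ratio = f / (H @ g + b)       # data over forward projection
        g = g * (H.T @ ratio) / norm  # back project and update
    return g
```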

Using the optical system described above, along with modeling the PSF described here, this Richardson–Lucy deconvolution was performed on images of fluorescing rubidium atoms cooled in a MOT. The description of the MOT, processing of the images, and analysis of the resulting 3D volumes is discussed next.

4. MAGNETO OPTICAL TRAP

The MOT was created using a simple setup intended to make the trapped atoms easy to access with the imaging system and to allow the objective to be placed as close as needed. The MOT was formed in a long rectangular glass chamber attached to a vacuum system and rubidium source. A single cooling beam was expanded to about 2.5 cm and split three ways using half-wave plates and polarizing beam splitters to create the optical portion of the trap. The long side of the chamber was used to bring one of the laser cooling beams in at a 45° angle to the chamber (see Fig. 4). A second beam ran down the length of the chamber, and the third came down from above, normal to the plane of the table. The anti-Helmholtz magnetic coils were oriented along this normal axis, along the third beam, creating the magnetic portion of the trap.


Fig. 4. Diagram of the magneto optical trap and representation of the light field and reference camera orientations. The main image shows a top-down view of the vacuum chamber, and corresponding lasers and optics needed to create the MOT. The lower beam is actually coming down from off of the page through the vacuum chamber, and the reference camera is also off of the page looking down at approximately 53°. The inset image looks down the optical axis of the light field camera and shows how the reference camera is positioned. The coordinate vectors match the light field camera’s reference frame in both images. This is the same reference frame used in the images.


A wire, oriented along the y axis, was placed in the path of the cooling beam running along the chamber (the x axis of the light field camera). This was done to create an atom distribution with clearly observable density variation along the optical axis. The wire's shadow was imaged onto the atoms using a 4f optical system, splitting the atoms along an xy plane and creating two clouds displaced along the optical z axis of the light field camera. Figure 4 is a diagram of the layout of the MOT system.

5. IMAGES AND 3D RECONSTRUCTIONS

Images of atoms were taken for three different MOTs. For each MOT, three pictures were taken in succession, giving a total of nine images. The exposure was adjusted for each MOT to get good images, resulting in exposures of 2, 4, and 3 s for MOT One, MOT Two, and MOT Three, respectively. Different exposure times were needed because each cloud contained a different number of atoms and therefore produced a different amount of fluorescence. The long exposures averaged over the atoms' fluorescence intensity, which oscillated slightly on sub-second timescales. The measured intensity can be used to calculate the number of atoms in a cloud, which was not important for this study but is an area for further research.

Each picture used for the deconvolution was the difference between an image taken with atoms present and one without atoms, which removed background light from the images. The magnetic field was turned off for the images without atoms. Figure 5 shows an image of MOT One after background subtraction and cropping, but before processing of the light field data. The MLA pattern can be seen in the inset.


Fig. 5. Raw sensor data after cropping and background subtraction; this is the sensor's measurement of the light field. Viewed as a whole, the micro lenses are small enough to appear as individual pixels. The inset is a close-up of the sensor and highlights the pattern produced by the micro lens array, as diagrammed in Fig. 2.


The Richardson–Lucy deconvolution was then performed on these images. The code used was developed by the LFM group at Stanford [7–9], who graciously provided a copy that was slightly modified for use here. The process involves a calibration step that finds the centers of the micro lenses on the sensor pixel grid and then creates the regular 4D block of data representing the spatial and angular samples. The desired depth and axial resolution of the deconvolved volume are prescribed, and during the calibration the necessary PSFs are calculated as described above. After the calibration, the images can be deconvolved using Eq. (10). Each volume reconstruction here was iterated 15 times; beyond this, there was little improvement in the residual norm. Figure 6 shows a contour surface of a deconvolved volume of MOT One with an axial depth of 3 mm and a resolution of 77 μm.


Fig. 6. Contour surface through the normalized deconvolved volume at 0.44 voxel intensity produced from the light field data in Fig. 5. The elongation effect is noticeable and is why part of one of the clouds extends out of the volume.


In order to assess the reliability of the reconstructed 3D volume and analyze the observed axial spreading, a reference image of each MOT studied was taken at the same time as the light field images. The inset in Fig. 4 represents the approximate orientations of the light field camera and the reference image with respect to the vacuum chamber. The deconvolution attempts to assign to each voxel in the volume the amount of light coming from only that voxel. In order to compare this data to the reference image, a projection image was made from the volume data. The angle of the projection was based on the physical location and angle of the reference camera, assuming the optical axis of the reference camera was in a single xy plane of the volume data.

The projection image was made using a Radon transform [13]. The transform was performed on each depth slice through the volume, creating a 2D projection of the volume at an angle matching the reference camera. Figure 7 shows the projected image and the corresponding reference image for MOT One, where the axes have been labeled to match the light field volume coordinates, but the (z,x) translation has not been accounted for. The z-axis shift in the centers of the two atom clouds can be seen in both.
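A sketch of this projection step is given below, assuming the volume is stored as a stack of slices and using scikit-image's Radon transform; the axis ordering and orientation conventions are illustrative, not the authors' code:

```python
import numpy as np
from skimage.transform import radon

def project_volume(volume, angle_deg):
    """Single-angle Radon projection of each slice through the volume,
    stacked into the 2D image compared against the reference camera."""
    lines = []
    for s in volume:  # iterate over slices along the untilted axis
        sino = radon(s, theta=[angle_deg], circle=False)
        lines.append(sino[:, 0])  # 1D projection at the camera angle
    return np.stack(lines, axis=0)
```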


Fig. 7. Left image is the projection created by performing a Radon transform on each z slice of the reconstructed volume at the angle of the reference image. The right image is the reference image taken at an orthogonal view to the light field camera. Both have been scaled and plotted over the same range for comparison.


The projections were compared to the reference images by looking at the relative locations of the two peaks. The centers of the two peaks were found by fitting two 2D Gaussians to the intensity data. First, a threshold was set in the image based on the maximum in order to create two regions around each peak separated by zeros. These two areas could then be identified as connected components by image processing software and the intensity-weighted centroids of each region calculated. These estimations of the centers were used as initial guesses for a fit of two 2D Gaussians to the data. The centers of the fitted Gaussians provided the locations of the centers. The relative distance between the peaks was found simply by subtracting one from the other.
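The peak-finding procedure can be sketched with SciPy. The isotropic Gaussian model and the fixed initial widths below are simplifications of the general 2D Gaussians used in the analysis:

```python
import numpy as np
from scipy.ndimage import label, center_of_mass
from scipy.optimize import curve_fit

def two_gaussians(coords, a1, x1, y1, s1, a2, x2, y2, s2):
    """Sum of two isotropic 2D Gaussians."""
    x, y = coords
    return (a1 * np.exp(-((x - x1)**2 + (y - y1)**2) / (2 * s1**2))
            + a2 * np.exp(-((x - x2)**2 + (y - y2)**2) / (2 * s2**2)))

def peak_centers(img, frac=0.5):
    """Threshold, label the two regions, seed a two-Gaussian fit with
    their intensity-weighted centroids, and return the fitted centers."""
    labels, n = label(img > frac * img.max())
    assert n == 2, "expected exactly two connected regions"
    (y1, x1), (y2, x2) = center_of_mass(img, labels, [1, 2])
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    p0 = [img.max(), x1, y1, 5.0, img.max() / 2, x2, y2, 5.0]
    popt, _ = curve_fit(two_gaussians, (xx.ravel(), yy.ravel()),
                        img.ravel(), p0=p0)
    return (popt[1], popt[2]), (popt[5], popt[6])
```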

As mentioned above, the resolution of reconstructed depth slices degrades for planes farther from the focal plane, making the data look smoother there. In contrast, planes at z depths inside the bound $|z| < D_{\mathrm{ml}}^2/(2M^2\lambda)$ [see Eq. (4)] suffer from reconstruction artifacts during the deconvolution [8]; these planes have been replaced with interpolated data. The planes within the depth of field but beyond the artifact region are at the highest resolution, and camera noise affects the data there. In the projection at planes near the focal plane, the SNR dropped as low as 8. To account for this uncertainty, depth-dependent weights were used when fitting the 2D Gaussians to the data.

The mean values of the peak displacement for each MOT were calculated from the three images taken of each; the results are shown in Table 2. The displacement between the two peaks is correct within error for MOTs Two and Three, but not for MOT One. This is the only MOT for which the reconstruction of the shifted peak was affected by edge effects, as shown in Fig. 8. Its two atom clouds are the farthest apart (see Table 2), and one is centered in the volume. Because of this, the intensity of the second peak does not drop off before the edge of the reconstruction volume is reached. Near the edge of the volume, the reconstructed intensity is artificially increased, which pulls the fitted Gaussian toward the edge and stretches it along the optical axis.


Fig. 8. There are two major areas where reconstruction artifacts affect the projection image data. The first is z-axis positions near the focal plane (z=0); as discussed in the text, this produces increases in intensity. The second is edge effects: near the z-axis edges of the volume, the intensity tends to increase. Shown are two z-axis slices through the data from MOT One, chosen to illustrate the effect.



Table 2. Calculated Relative Separation of the Two Peaks in the Projected Image and Reference Image

The errors in the location measurements primarily come from the error in the rotation angle used to project the volume into 2D and the resolution of the image. The error for MOT One tends to be higher than for MOTs Two and Three because there was greater uncertainty in the measurement of the reference camera angle, increasing the projection image error. Also in the case of MOT One, a different objective was used for the reference image; the magnification was lower, hence there is a larger error in the reference image as well.

There is clear spreading along the optical axis in the reconstruction, and Table 3 presents the optical axis spreading of the data compared to calculated predictions. The z-axis spreading of the reconstructed image was determined using the Gaussian fit values for the width, σ, along the z axis.


Table 3. Calculated Lengths of the Peaks Along the Optical Axis and Back-Projection-Based Estimations of the Optical Axis Lengths Given the Minor Axis of an Ellipse Fit to the FWHM Data of the Standard On-Axis Image of the Atoms

The length of the optical axis stretching was predicted using principles from tomography [14], which is closely related to this method of reconstruction [7]. The light field data from a sub-aperture view is the projection of the volume in the direction associated with that sub-aperture's angle. The projection angles captured by the LFM are set by the NA and range from −15° to 15°. A ray of light passing by the "edge" of the atom cloud crosses the center of the optical axis at $l = b\tan(\pi/2 - \alpha_o)$, where b is the semi-minor axis of an ellipse fit to the cloud peak, used to define the edge of the cloud. In a standard image created from the light field data, the peak intensity was found, and the two peaks were separated by thresholding the intensity values at half the maximum. The intensity-weighted centroids were then found and an ellipse fit to each resulting region. The minor axis of the fitted ellipse approximates the narrowest full width at half maximum (FWHM) across the atom cloud for the more intense peak. The predicted optical axis spreading is 2l, and Table 3 compares the measured and predicted spreading of each peak in each MOT cloud, where Peak 1 is the more intense peak. In the case of the second peak, the ellipse is not made at the FWHM, so the calculated prediction was adjusted by scaling by the ratio of the peak values of Peak 1 and Peak 2.
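Numerically, the prediction follows directly from the NA. The semi-minor axis used below is an illustrative value, not a measured one:

```python
import numpy as np

NA = 0.26
alpha_o = np.arcsin(NA)              # acceptance half-angle, ~15 degrees
b = 0.25e-3                          # example semi-minor axis of the FWHM ellipse (m)
l = b * np.tan(np.pi / 2 - alpha_o)  # axis-crossing distance, equal to b / tan(alpha_o)
spread = 2 * l                       # predicted optical-axis length, ~1.9 mm here
```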

The predicted length of MOT Three Peak 2 shows the greatest discrepancy, even after this adjustment. The stretching of the fit is caused by the reconstruction artifacts near the center of the volume; an example is shown in Fig. 8. In the case of MOT Three, the two atom clouds are closer together, Peak 1 is nearly centered, and Peak 2 has a much lower peak intensity. As a result, the reconstruction noise significantly widens the center of the fitted Gaussian. All the predicted values are estimates, as the actual cloud of atoms has neither sharp edges nor a true Gaussian distribution, but they match the measured values well.

The optical axis spreading does limit the ability to see objects directly behind one another. If the two peaks shown here were in line along the optical axis, they would spread over each other and could not be distinguished. This will need to be taken into account in any application of a similar system. Based on the reconstructions created here, the two improvements to consider are increasing the NA, to reduce optical axis spreading, and increasing the FDOF. Higher-NA objectives are available, but higher NA comes at the cost of working distance, and a long working distance was needed to observe an object inside a vacuum chamber. Increasing the diameter of the micro lenses at a fixed pixel size will increase the FDOF at the cost of spatial resolution. Here these parameters were fixed by the Lytro Development Kit sensor, but custom MLAs and sensors could be used.

6. CONCLUSION

The use of a low-magnification LFM to reconstruct a 3D volume containing cold atoms in a vacuum was demonstrated. The low magnification introduces a phase term in the PSF scalar wave that is not typically considered. The inclusion of the MLA to create the LFM trades spatial resolution for depth information. The limited acceptance angle of the optical system, along with the effects of the back-projection reconstruction method, produces artifacts along the optical axis of the full 3D reconstruction, but does not prevent extraction of the 3D orientation of two separate clouds. The use of a Lytro Development Kit sensor allowed for a simple optical system while providing high-resolution standard images and still allowing 3D reconstruction. This can aid in determining trap shape with a single camera and a single image. The LFM developed here can also be applied in any system where optical access is limited but 3D information is desired for analysis. Not pursued here were determining the atom number in a cloud and the impact of optical axis spreading and radiation trapping in the atom cloud; improvements might also be made to the back projection in the Richardson–Lucy deconvolution. These are subjects of further research.

Acknowledgment

The authors would like to thank Michael Broxton and Kaspar Sakmann for helpful communications on the subject, and the Air Force Research Laboratory for its support of this research.

REFERENCES

1. S. M. Dickerson, J. M. Hogan, A. Sugarbaker, D. M. Johnson, and M. A. Kasevich, "Multiaxis inertial sensing with long-time point source atom interferometry," Phys. Rev. Lett. 111, 083001 (2013).

2. J. Burke, B. Deissler, K. Hughes, and C. Sackett, "Confinement effects in a guided-wave atom interferometer with millimeter-scale arm separation," Phys. Rev. A 78, 023619 (2008).

3. K. D. Nelson, X. Li, and D. S. Weiss, "Imaging single atoms in a three-dimensional array," Nat. Phys. 3, 556–560 (2007).

4. G. Lippmann, "Epreuves reversibles donnant la sensation du relief," J. Phys. Theor. Appl. 7, 821–825 (1908).

5. E. H. Adelson and J. Y. A. Wang, "Single lens stereo with a plenoptic camera," IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).

6. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," Computer Science Technical Report CSTR 2 (2005).

7. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, "Light field microscopy," ACM Trans. Graph. 25, 924–934 (2006).

8. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, "Wave optics theory and 3-D deconvolution for the light field microscope," Opt. Express 21, 25418–25439 (2013).

9. N. Cohen, S. Yang, A. Andalman, M. Broxton, L. Grosenick, K. Deisseroth, M. Horowitz, and M. Levoy, "Enhancing the performance of the light field microscope using wavefront coding," Opt. Express 22, 24817–24839 (2014).

10. K. Sakmann and M. Kasevich, "Single-shot three-dimensional imaging of dilute atomic clouds," Opt. Lett. 39, 5317–5320 (2014).

11. L. Mignard-Debise and I. Ihrke, "Light-field microscopy with a consumer light-field camera," in International Conference on 3D Vision (3DV) (IEEE, 2015), pp. 335–343.

12. M. Gu, Advanced Optical Imaging Theory (Springer, 2000), Vol. 75.

13. S. R. Deans, The Radon Transform and Some of Its Applications (Courier Corporation, 2007).

14. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances (SPIE, 2003), Vol. 114.
