Optica Publishing Group

Speckle-structured illumination for 3D phase and fluorescence computational microscopy

Open Access

Abstract

High-content biological microscopy targets high-resolution imaging across large fields-of-view, often achieved by computational imaging approaches. Previously, we demonstrated 2D multimodal high-content microscopy via structured illumination microscopy (SIM) with resolution >2× the diffraction limit, using speckle illumination from Scotch tape. In this work, we extend the method to 3D by leveraging the fact that the speckle illumination is in fact a 3D structured pattern. We use both a coherent and an incoherent imaging model to develop algorithms for joint retrieval of the 3D super-resolved fluorescent and complex-field distributions of the sample. Our reconstructed images resolve features beyond the physical diffraction limit set by the system’s objective and demonstrate 3D multimodal imaging with 0.6×0.6×6 μm³ resolution over a volume of 314×500×24 μm³.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

High-content optical microscopy is a driving force for large-scale biological study in fields such as drug discovery and systems biology. With fast imaging speeds over large fields-of-view (FOV) and high spatial resolutions [1–8], one can visualize rare cell phenotypes and dynamics. The traditional solution for 2D high-content microscopy is to mechanically scan samples through the limited FOV of a high-NA (i.e. high resolution) imaging objective and then digitally stitch the images together. However, this scheme is limited in imaging speed due to the large-distance translations of the sample, as well as the need for auto-refocusing at each position [9]. These issues are further compounded when extending this high-content imaging strategy to 3D.

Recently, computational imaging has demonstrated efficient strategies for high-content 2D microscopy. In contrast with slide scanning, these strategies often employ a low-NA imaging objective to acquire low-resolution (large-FOV) measurements, then use computational techniques like synthetic aperture [10–12] and super-resolution (SR) [13–18] to digitally reconstruct a high-resolution image. This eliminates the requirement for large-distance mechanical scanning in high-content imaging, which results in faster acquisition and more cost-effective optical setups, while also relaxing the sample’s auto-refocusing requirements due to the low-NA objective’s longer depth-of-field (DOF) [19–36]. Examples of such approaches include lensless microscopy [19–21] and Fourier ptychography [22–28] for coherent absorption and quantitative phase imaging. For incoherent fluorescent imaging, micro-lenslet arrays [29–32], Talbot plane scanning [33–35], diffuse media [36], or meta-surfaces [37] have also been demonstrated. Among these examples, 3D high-content imaging capability has only been demonstrated in the coherent imaging context (quantitative phase and absorption) by Fourier ptychography [25, 27].

Our previous work demonstrated multimodal coherent (quantitative phase) and incoherent (fluorescence) imaging for high-content 2D microscopy [38]. Multimodal imaging is important for biological studies requiring cross-correlative analysis [39–43]. Structured illumination microscopy (SIM) [10, 16, 17, 44] with speckle illumination [36, 45–53] was used to encode 2D SR quantitative phase and fluorescence. However, because propagating speckle contains 3D features, it also encodes 3D information. Considering speckle patterns as random interference of multiple angled plane waves, the scattered light from interactions with the sample carries 3D phase (coherent) information, similar to the case of non-random angled illumination in diffraction tomography [54–57] and 3D Fourier ptychography [25, 27]. Simultaneously, the fluorescent (incoherent) light excited by the 3D speckle pattern encodes 3D SR fluorescence information as in the case of 3D SIM [58]. Combining these, we propose a method for 3D SR quantitative phase and fluorescence microscopy using speckle illumination.


Fig. 1 3D multimodal structured illumination microscopy (SIM) with laterally translating Scotch tape as the patterning element. The coherent arm (Sensor-C1 and Sensor-C2) simultaneously captures images with different defocus at the laser illumination wavelength (λex = 532 nm), used for both 3D phase retrieval and speckle trajectory calibration. The incoherent (fluorescence) arm (Sensor-F) captures low-resolution raw fluorescence acquisitions at the emission wavelength (λem = 605 nm) for 3D fluorescence super-resolution reconstruction. OBJ: objective, DM: dichroic mirror, SF: spectral filter, ND-F: neutral-density filter.


Experimentally, we position a Scotch tape patterning element just before the sample, mounted on a translation stage to generate a translating speckle field that illuminates the sample (Fig. 1). Because the speckle grain size is smaller than the PSF of the low-NA imaging objective (which provides large-FOV), the coherent scattered light from the speckle-sample interaction encodes 3D SR quantitative phase information. In addition to lateral scanning of the Scotch tape, axial sample scanning is necessary to efficiently capture 3D SR fluorescence information. Nonlinear optimization methods based on the 3D coherent beam propagation model [25, 59–61] and the 3D incoherent imaging model [58] were formulated to reconstruct the 3D speckle field and imaging system aberrations, which are subsequently used to reconstruct the sample’s 3D SR quantitative phase and fluorescence distributions. Since the Scotch tape is directly before the sample, the illumination NA is not limited by the objective lens, allowing for >2× lateral resolution gain across the entire FOV. This framework enables us to achieve 3D imaging at sub-micron lateral resolution and micron axial resolution across a half-millimeter FOV.

2. Theory

We start from the concept of 3D coherent and incoherent transfer functions (TFs), using the Born (weak scattering) assumption [54], to analyze the information encoding process. We then lay out our 3D coherent and incoherent imaging models and derive the corresponding inverse problems to extract SR quantitative phase and fluorescence from the measurements.


Fig. 2 3D coherent and incoherent transfer function (TF) analysis of the SIM imaging process. The 3D (a) coherent and (b) incoherent TFs of the detection system are cross-correlated with the 3D Fourier support of the (c) illumination speckle field and (d) illumination intensity, respectively, resulting in the effective Fourier support of 3D (e) coherent and (f) incoherent SIM. In (e) and (f), we display the decomposition of the cross-correlation in two steps: ① tracing the illumination support in one orientation and ② replicating this trace in the azimuthal direction.


First, we introduce linear space-invariant relationships between raw measurements and 3D coherent scattering and incoherent fluorescence [54, 58, 62, 63], by invoking the Born (weak scattering) approximation [54]. These relationships enable us to define TFs for the coherent and incoherent imaging processes. The supports of these TFs in 3D Fourier space determine how much spatial frequency content of the sample can be passed through the system (i.e. the 3D diffraction-limited resolution).

In a coherent imaging system with on-axis plane-wave illumination, the TF describes the relationship between the sample’s scattering potential and the measured 3D scattered field, taking the shape of a spherical cap in 3D Fourier space (Fig. 2(a)). In an incoherent imaging system, the TF is the autocorrelation of the coherent system’s TF [63], relating the sample’s fluorescence distribution to the 3D measured intensity. It takes the shape of a torus (Fig. 2(b)). The spatial frequency bandwidths of these TFs are summarized in Table 1, where the lateral resolution of the system is proportional to the lateral bandwidth of the TF. The incoherent TF has 2× greater lateral bandwidth than the coherent TF. Axial bandwidth generally depends on the lateral spatial frequency, so axial resolution is specified in terms of the best case. Note that the axial bandwidth of the coherent TF is zero, which means there is zero axial resolution for coherent imaging; hence the poor depth sectioning ability in 3D holographic imaging [41, 56, 64].


Table 1. Summary of spatial frequency bandwidths

SIM enhances resolution by creating beat patterns. When a 3D structured pattern modulates the sample, the sample’s sub-diffraction features create lower-frequency beat patterns which can be directly measured and used to reconstruct a SR image of the sample via post-processing [17, 58]. This process is generally applicable to both coherent and incoherent imaging [40–43], enabling 3D SR multimodal imaging. Mathematically, a modulation between the sample contrast and the illumination pattern in real space can be interpreted as a convolution in Fourier space. This convolution result is then passed through the 3D TF defined in Fig. 2(a,b). The effective support of information going into the measurements can be estimated by conducting cross-correlations between the 3D TFs and the Fourier content of the illumination patterns, as shown in Fig. 2(c,e) and 2(d,f) for coherent and incoherent systems, respectively. The lateral and axial spatial frequency bandwidths of both the illumination and the 3D SIM Fourier supports for coherent and incoherent imaging are summarized in Table 1. Assuming approximately equal excitation and emission wavelengths, the achievable lateral resolution gain of 3D SIM (the ratio between the lateral bandwidths of 3D SIM and the 3D TF) is $(\mathrm{NA}_{det}+\mathrm{NA}_{illum})/\mathrm{NA}_{det}$ for both coherent and incoherent imaging. Axially, coherent SIM builds up spatial frequency bandwidth in the axial direction, and incoherent SIM can achieve an axial resolution gain by a factor of $(2-\cos\theta_{det}-\cos\theta_{illum})/(1-\cos\theta_{det})$.
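As a numerical sanity check on these gain expressions, the short Python sketch below evaluates both factors. The detection NA (0.4) is taken from Section 3; the illumination NA is only bounded below in the text ($\mathrm{NA}_{illum} > 0.4$), so the value 0.5 used here is an assumption for illustration:

```python
import math

def lateral_gain(na_det, na_illum):
    # Lateral resolution gain of 3D SIM: (NA_det + NA_illum) / NA_det
    return (na_det + na_illum) / na_det

def axial_gain_incoherent(na_det, na_illum, n0=1.0):
    # Axial gain of incoherent 3D SIM:
    # (2 - cos(theta_det) - cos(theta_illum)) / (1 - cos(theta_det)),
    # with sin(theta) = NA / n0
    cos_det = math.sqrt(1.0 - (na_det / n0) ** 2)
    cos_illum = math.sqrt(1.0 - (na_illum / n0) ** 2)
    return (2.0 - cos_det - cos_illum) / (1.0 - cos_det)

# NA_det = 0.4 (Section 3); NA_illum = 0.5 is an assumed value (> 0.4)
print(lateral_gain(0.4, 0.5))           # 2.25, i.e. > 2x lateral gain
print(axial_gain_incoherent(0.4, 0.5))
```

With these values the lateral gain is 2.25, consistent with the >2× gain reported in the experiments.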

In this work, because the speckle from the Scotch tape does not pass through an objective, the tape can create high-resolution speckle illumination with $\mathrm{NA}_{illum} > \mathrm{NA}_{det}$, enabling >2× lateral resolution gain without sacrificing FOV [38]. From the TF analysis, we also see that information beyond the diffraction limit in the axial dimension is obtainable. The next sections outline our computational scheme for 3D SR phase and fluorescence reconstruction. To provide higher-quality reconstructions and more robust operation, our algorithm jointly estimates the illumination speckle field, the system pupil function (aberrations), the sample’s 3D transmittance function, and the sample’s 3D fluorescence distribution.


Fig. 3 3D multi-slice model: (a) coherent and (b) incoherent imaging models for the interaction between the sample and the speckle field as light propagates through the sample.


2.1. 3D super-resolution phase imaging

We adopt a multi-slice coherent scattering model to describe the 3D multiple-scattering process [25, 59–61] and solve for 3D SR quantitative phase. Our system captures intensity at two focus planes, zc1 and zc2, for every speckle-scanned point [38]. With these measurements and the multi-slice model, we are able to reconstruct the sample’s 3D SR complex-field and the scattered field inside the 3D sample, which is used in the fluorescence inverse problem.

2.1.1. Forward model for 3D coherent imaging

Figure 3(a) illustrates the 3D multi-slice coherent imaging model. Plane-wave illumination of the Scotch tape, positioned at the $l$-th scanned point, $\mathbf{r}_l$, creates the speckle field $p_c(\mathbf{r}-\mathbf{r}_l)$, where $\mathbf{r}=(x,y)$ is the lateral spatial coordinate. This speckle field propagates a distance $\Delta s_l$ to the sample. The field interacting with the first layer of the sample is described as:

$$f_{l,1}(\mathbf{r}) = C\left\{ p_c(\mathbf{r}-\mathbf{r}_l) * h_{\Delta s_l,\lambda_{ex}}(\mathbf{r}) \right\},$$
where $h_{z,\lambda}(\mathbf{r}) = \mathcal{F}^{-1}\left\{ \exp\left( i 2\pi z \sqrt{n_0^2/\lambda^2 - \|\mathbf{u}\|_2^2} \right) \right\}$ is the angular spectrum propagation kernel inside a homogeneous medium with refractive index $n_0$ [65], $\mathbf{u}=(u_x,u_y)$ is the spatial frequency coordinate, $*$ denotes 2D convolution, and $C\{\cdot\}$ is a cropping operator that selects the part of the speckle field that illuminates the sample. To model scattering and propagation inside the sample, the multi-slice model treats the 3D sample as multiple slices of complex transmittance function, $t_m(\mathbf{r})$ ($m=1,\ldots,M$), where $m$ is the slice index. As the field propagates through each slice, it is first multiplied by the 2D transmittance function at that slice, then propagated to the next slice. The spacing between slices is modeled as a uniform medium of thickness $\Delta z_m$. Hence, at each layer we have:
$$g_{l,m}(\mathbf{r}) = f_{l,m}(\mathbf{r})\, t_m(\mathbf{r}), \quad m=1,\ldots,M,$$
$$f_{l,m+1}(\mathbf{r}) = g_{l,m}(\mathbf{r}) * h_{\Delta z_m,\lambda_{ex}}(\mathbf{r}), \quad m=1,\ldots,M-1.$$

After passing through all the slices, the output scattered field, $g_{l,M}(\mathbf{r})$, propagates to the focal plane to form $G_l(\mathbf{r}) = g_{l,M}(\mathbf{r}) * h_{\Delta z_{M,l},\lambda_{ex}}(\mathbf{r})$ and is imaged onto the sensor (with defocus $z$), forming our measured intensity:

$$I_{c,l}^{z}(\mathbf{r}) = \left| G_l(\mathbf{r}) * h_c(\mathbf{r}) * h_{z,\lambda_{ex}}(\mathbf{r}) \right|^2, \quad l=1,\ldots,N_{img}, \;\; z=z_{c1},z_{c2},$$
where $h_c(\mathbf{r})$ is the system’s coherent point spread function (PSF). The subscripts $c$ and $l$ of the measured intensity denote the coherent imaging channel and the acquisition number, respectively, and $N_{img}$ is the total number of translations of the Scotch tape. Note that all the slice spacings, $\Delta z_m$, are independent of the axial scanned position, $\Delta s_l$, except for the distance to the focal plane, $\Delta z_{M,l} = \Delta s_l + z_0$, where $z_0$ is the distance from the last layer of the sample to the focal plane (before axial scanning). As the sample is scanned, we account for this shift by propagating the extra distance back to the focal plane.
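The forward model above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper’s implementation: it assumes an ideal (aberration-free) pupil, square pixels, and hypothetical function names:

```python
import numpy as np

def prop_kernel(shape, dx, dz, wavelength, n0=1.0):
    """Angular-spectrum kernel exp(i*2*pi*dz*sqrt(n0^2/lambda^2 - |u|^2)),
    with evanescent spatial frequencies suppressed."""
    uy = np.fft.fftfreq(shape[0], d=dx)
    ux = np.fft.fftfreq(shape[1], d=dx)
    UX, UY = np.meshgrid(ux, uy)
    arg = (n0 / wavelength) ** 2 - UX ** 2 - UY ** 2
    H = np.exp(2j * np.pi * dz * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0                    # drop evanescent components
    return H

def propagate(field, H):
    # convolution with h_{z,lambda}, implemented in the Fourier domain
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multislice_forward(speckle, slices, dz_list, ds, z0, dx, wavelength):
    """Multi-slice model: propagate the speckle to the sample (ds), multiply
    by each slice transmittance, propagate between slices (dz), then
    propagate ds + z0 back to the focal plane and take the intensity."""
    f = propagate(speckle, prop_kernel(speckle.shape, dx, ds, wavelength))
    g = f
    for m, t in enumerate(slices):
        g = f * t
        if m < len(slices) - 1:
            f = propagate(g, prop_kernel(g.shape, dx, dz_list[m], wavelength))
    G = propagate(g, prop_kernel(g.shape, dx, ds + z0, wavelength))
    return np.abs(G) ** 2               # in-focus (z = 0), ideal pupil
```

For a transparent sample ($t_m = 1$) under plane-wave illumination the model returns a uniform unit intensity, which provides a quick sanity check of the propagation steps.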

2.1.2. Inverse problem for 3D coherent imaging

We take the intensity measurements from both coherent cameras, $\{I_{c,l}^{z}(\mathbf{r}) \,|\, z = z_{c1}, z_{c2}\}$, and the scanning trajectory, $\mathbf{r}_l$ (calculated via standard rigid-body 2D registration [38, 66]), as inputs to jointly estimate the sample’s 3D SR transmittance function, $t_1(\mathbf{r}),\ldots,t_M(\mathbf{r})$, as well as the illumination complex-field, $p_c(\mathbf{r})$, and the system’s coherent PSF, $h_c(\mathbf{r})$, including aberrations.

Based on the forward model in the previous section, we formulate the inverse problem as:

$$\underset{t_1,\ldots,t_M,\,p_c,\,h_c}{\text{minimize}} \quad e_c(t_1,\ldots,t_M,p_c,h_c) = \sum_{l,z} e_{c,l}^{z}(t_1,\ldots,t_M,p_c,h_c),$$
$$\text{where} \quad e_{c,l}^{z}(t_1,\ldots,t_M,p_c,h_c) = \sum_{\mathbf{r}} \left| \sqrt{I_{c,l}^{z}(\mathbf{r})} - \left| G_l(\mathbf{r}) * h_c(\mathbf{r}) * h_{z,\lambda_{ex}}(\mathbf{r}) \right| \right|^2.$$

Here we adopt an amplitude-based cost function, ec, which minimizes the difference between the measured and estimated coherent amplitude in the presence of noise [67]. In order to solve this optimization problem, we use a sequential gradient descent algorithm [67, 68]. The gradient based on each single measurement is calculated and used to update the sample’s transmittance function, illumination speckle field, and coherent PSF. A whole iteration of variable updates is complete after running through all the measurements. In Appendix A, we provide a detailed derivation of the gradients and in Appendix B we lay out our reconstruction algorithm.
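To illustrate the sequential update strategy, the following toy example applies one amplitude-based gradient step per measurement on a deliberately simplified 1D problem (a single unknown field with known random phase masks, $y_l = \mathcal{F}\{p_l \cdot x\}$; the model and all names are ours, far simpler than the paper’s multi-variable solver):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 64, 8

# ground-truth phase-only "sample" and L random phase masks (toy speckles)
x_true = np.exp(1j * rng.uniform(-0.5, 0.5, N))
p = np.exp(1j * rng.uniform(-np.pi, np.pi, (L, N)))
I = np.abs(np.fft.fft(p * x_true, axis=-1)) ** 2   # intensity measurements

def amp_cost(x, p_l, I_l):
    # amplitude-based cost for one measurement: || sqrt(I) - |F{p*x}| ||^2
    y = np.fft.fft(p_l * x)
    return np.sum((np.sqrt(I_l) - np.abs(y)) ** 2)

def amp_grad(x, p_l, I_l):
    # Wirtinger gradient of the amplitude-based cost w.r.t. conj(x);
    # the factor N is the adjoint scaling of numpy's unnormalized FFT
    y = np.fft.fft(p_l * x)
    res = (np.abs(y) - np.sqrt(I_l)) * y / (np.abs(y) + 1e-12)
    return np.conj(p_l) * np.fft.ifft(res) * N

x = np.ones(N, dtype=complex)                      # "transparent" start
initial = sum(amp_cost(x, p[l], I[l]) for l in range(L))
for it in range(20):                               # sequential updates:
    for l in range(L):                             # one measurement at a time
        x = x - 0.5 / N * amp_grad(x, p[l], I[l])
final = sum(amp_cost(x, p[l], I[l]) for l in range(L))
```

The step size is scaled by $1/N$ because the spectral norm of the (unnormalized) FFT-based forward operator grows with the signal length; the total cost decreases steadily over the sequential passes.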

2.2. 3D super-resolution fluorescence imaging

Reconstruction of 3D SR images for the fluorescence channel involves an incoherent multi-slice forward model (Fig. 3(b)) and a joint inverse problem solver. The coherent result provides a good starting estimate of the 3D speckle intensity throughout the sample, which, together with the fluorescent channel’s raw data, is used to reconstruct the sample’s 3D SR fluorescence distribution and the system’s aberrations at the emission wavelength, λem.

2.2.1. Forward model for 3D fluorescence imaging

The 3D fluorescence distribution is also modeled by multiple slices of 2D distributions, $o_m(\mathbf{r})$ ($m=1,\ldots,M$), as shown in Fig. 3(b). Each layer is illuminated by the $m$-th layer’s excitation intensity, $|f_{l,m}(\mathbf{r})|^2$, for Scotch tape position $\mathbf{r}_l$. The excited fluorescent light is mapped onto the sensor through 2D convolutions with the incoherent PSF at different defocus distances, $z_{m,l}$. The sum of contributions from the different layers forms the measured fluorescence intensity:

$$I_{f,l}(\mathbf{r}) = \sum_{m=1}^{M} \left[ o_m(\mathbf{r})\, |f_{l,m}(\mathbf{r})|^2 \right] * \left| h_{f,z_{m,l}}(\mathbf{r}) \right|^2, \quad l=1,\ldots,N_{img},$$
where $h_{f,z_{m,l}}(\mathbf{r})$ is the coherent PSF at defocus distance $z_{m,l}$, which can be further decomposed as $h_{f,z_{m,l}}(\mathbf{r}) = h_f(\mathbf{r}) * h_{z_{m,l},\lambda_{em}}(\mathbf{r})$, where $h_f(\mathbf{r})$ is the in-focus system’s coherent PSF at $\lambda_{em}$. The incoherent PSF is the squared magnitude of the coherent PSF at $\lambda_{em}$. The subscript $f$ denotes the fluorescence channel, and the defocus distance, $z_{m,l}$, depends on the axial scanning position, $\Delta s_l$.
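A direct numpy transcription of this layered incoherent model for one speckle position might look as follows (a sketch with hypothetical names, using FFT-based circular convolution):

```python
import numpy as np

def fluorescence_forward(o_stack, f_stack, h_stack):
    """Incoherent forward model: I_f = sum_m (o_m * |f_m|^2) conv |h_m|^2.
    o_stack: per-layer fluorescence; f_stack: per-layer excitation fields;
    h_stack: per-layer coherent PSFs at the emission wavelength."""
    I = np.zeros(o_stack[0].shape)
    for o_m, f_m, h_m in zip(o_stack, f_stack, h_stack):
        excited = o_m * np.abs(f_m) ** 2      # speckle-excited emission
        ipsf = np.abs(h_m) ** 2               # incoherent PSF = |coherent PSF|^2
        I += np.real(np.fft.ifft2(np.fft.fft2(excited) * np.fft.fft2(ipsf)))
    return I
```

With a delta-function PSF this reduces to the sum of the excited layers, which makes the incoherent summation over depth easy to verify.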

2.2.2. Inverse problem for 3D fluorescence imaging

The fluorescence inverse problem takes as input the raw fluorescence intensity measurements, If,l(r), the registered scanning trajectory, rl, and the 3D estimates from the coherent model, in order to estimate the sample’s 3D SR fluorescence distribution and aberrations at the emission wavelength. We also refine the speckle field estimate using the fluorescence measurements.

Based on the incoherent forward model, our 3D SR fluorescence inverse problem is:

$$\underset{o_1,\ldots,o_M,\,p_c,\,h_f}{\text{minimize}} \quad e_f(o_1,\ldots,o_M,p_c,h_f) = \sum_{l} e_{f,l}(o_1,\ldots,o_M,p_c,h_f),$$
$$\text{where} \quad e_{f,l}(o_1,\ldots,o_M,p_c,h_f) = \sum_{\mathbf{r}} \left| I_{f,l}(\mathbf{r}) - \sum_{m=1}^{M} \left[ o_m(\mathbf{r})\, |f_{l,m}(\mathbf{r})|^2 \right] * \left| h_{f,z_{m,l}}(\mathbf{r}) \right|^2 \right|^2,$$
where ef is the cost function. Similar to the coherent inverse problem, we adopt a sequential gradient descent algorithm for estimation of each unknown variable. The detailed derivation of gradients and algorithm implementation are summarized in Appendix A and B, respectively.

3. Experimental results

Figure 1 shows the experimental setup. A green laser beam (BeamQ, 532 nm, 200 mW) is collimated through a single lens and illuminates the layered Scotch tape element, creating a speckle pattern at the sample. The number of layers of Scotch tape sets the degree of scattering; we use 16 layers here. The layered Scotch tape and the sample are mounted on a 3-axis closed-loop piezo-stage (Thorlabs, MAX311D) and a 1-axis open-loop piezo-stage (Thorlabs, NFL5DP20), respectively, to enable lateral speckle scanning and axial sample scanning. The separation between the tape and the sample is approximately 1 mm, which is the minimal distance we can achieve for high-angle and high-power illumination without physically touching the sample. The transmitted diffracted and fluorescent light from the sample then travels through the subsequent 4f system formed by the objective lens (Nikon, CFI Achro 20×, NA=0.4) and a tube lens. The coherent and fluorescent light have different wavelengths and are optically separated by a dichroic mirror (Thorlabs, DMLP550R), after which the fluorescence is further spectrally filtered before being imaged onto Sensor-F (PCO.edge 5.5). The coherent light is ND-filtered and then split by a beam-splitter onto two sensors (FLIR, BFS-U3-200S6M-C). Sensor-C1 is in focus, while Sensor-C2 is defocused by 3 mm, enabling efficient phase retrieval across a broad swath of spatial frequencies, according to the phase transfer function [69].

Successful reconstruction relies on appropriate choices for the scanning range and step size [38]. Generally, the translation step size should be 2-3× smaller than the targeted resolution and the total translation range should be larger than the diffraction-limited spot size of the original system. Our system has detection NA of 0.4 and targeted resolution of 500 nm, so a 36 × 36 Cartesian scanning path with a step size of 180 nm is appropriate for 2D SR reconstruction. For coherent imaging, since there is zero axial bandwidth in the coherent TF (Fig. 2(a)), the sample’s complete diffraction information is projected axially and encoded in the measurement. This enables SR reconstruction of the sample’s 3D quantitative phase from just the translating speckle. Incoherent imaging, however, has optical sectioning due to its torus-shaped TF (Fig. 2(b)); hence, fluorescent light that is outside the DOF of the objective will have weak contrast. In order to reconstruct 3D fluorescence with high fidelity, we add axial scanning to our acquisition scheme [58].
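These rules of thumb can be checked directly against the quoted numbers (a simple script with this section’s values; units in microns):

```python
# Scanning-parameter rules of thumb, using the values quoted in the text
wavelength_ex = 0.532               # um, excitation wavelength
na_det = 0.4                        # detection NA
target_resolution = 0.5             # um, targeted lateral resolution

step = 0.18                         # um, lateral speckle-translation step
n_steps = 36                        # steps per axis of the Cartesian scan
scan_range = n_steps * step         # total translation range, 6.48 um
spot_size = wavelength_ex / na_det  # diffraction-limited spot, 1.33 um

print(step <= target_resolution / 2)   # step is 2-3x below target resolution
print(scan_range > spot_size)          # range exceeds the diffraction spot
```

Both conditions hold for the 36 × 36 scan with a 180 nm step, consistent with the stated design rules.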

A direct combination of lateral xy-scanning of the speckle and axial z-scanning of the sample would result in $36\times36\times N_z$ measurements for both channels, where $N_z$ is the number of axial scan positions. Fortunately, there is a high degree of redundancy in this data. As previously stated, the 3D coherent information does not require axial scanning, and the speckle pattern measured from the coherent channel is used to initialize the fluorescent reconstruction. Thus, only minor refinements are needed for faithful fluorescent reconstruction.

To save acquisition time, we use an interleaved scanning scheme, alternating between axial sample scanning and lateral speckle scanning (Fig. 1). We laterally scan the speckle pattern through 36 × 36 xy positions, incrementing the z position after each patch of 12 × 12 positions. The 36 × 36 Cartesian speckle scanning path is thus divided into 9 blocks of 12 × 12 sub-scanning paths, each associated with one z-scan position. This means the distance from the incident speckle field to the sample is

$$\Delta s_l = (n-1)\,s, \quad \text{for } l = 12^2(n-1)+1, \ldots, 12^2 n, \quad \text{where } n = 1,\ldots,9,$$
where $s$ is the axial step size and $n$ is the index of the z planes. We set the fifth z-scan position at the middle of the sample. The total scanning range is roughly the thickness of the sample, and the step size is at least 2× smaller than the Nyquist-limited axial resolution of the fluorescence microscope. This interleaved measurement scheme enables high-quality coherent and fluorescent 3D SR reconstructions.
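The mapping from the (1-indexed) acquisition index $l$ to the axial offset $\Delta s_l$ described above can be written compactly (hypothetical helper names):

```python
def z_plane(l, block=12 * 12):
    """z-scan plane index n (1..9) for acquisition l (1-indexed): each block
    of 12 x 12 lateral speckle positions shares one axial sample position."""
    return (l - 1) // block + 1

def axial_offset(l, s):
    # Delta_s_l = (n - 1) * s, with s the axial step size
    return (z_plane(l) - 1) * s
```

For the 36 × 36 = 1296 acquisitions this yields 9 blocks of 144, with `z_plane(1) == 1` and `z_plane(1296) == 9`.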

3.1. 3D super-resolution demonstration

With a 0.4 NA objective, our system’s native lateral resolution is 1.33 μm for coherent imaging and 760 nm for fluorescence (Table 1). The intrinsic DOF is infinite for coherent imaging and 7.3 μm for fluorescence imaging. In order to characterize the resolution capability of our method, we begin by imaging a sample with features below both diffraction limits: a mono-layer of fluorescent polystyrene microspheres with 700 nm diameter. We use a z-scan step size of 1 μm across an 8 μm range, fully covering the thickness of the sample. 15 axial layers are assigned to the transmittance function, separated by 1.7 μm based on Nyquist sampling of the expected axial resolution for our 3D reconstruction, resulting in an overall reconstructed axial range that spans the sum of the axial scanning range and the two axial ranges of the effective 3D PSF.
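These native-resolution figures follow from standard diffraction criteria; a quick numeric check (our own sketch; the DOF prefactor depends on the criterion used, so the result only approximates the quoted 7.3 μm):

```python
# Native diffraction-limited performance of the 0.4 NA system, using
# standard criteria: lambda/NA (coherent), lambda/(2*NA) (incoherent),
# and ~2*lambda/NA^2 for the incoherent DOF.  Units: microns.
lam_ex, lam_em, na = 0.532, 0.605, 0.4

res_coherent = lam_ex / na        # ~1.33 um lateral, coherent channel
res_fluor = lam_em / (2 * na)     # ~0.76 um lateral, fluorescence channel
dof_fluor = 2 * lam_em / na ** 2  # ~7.6 um, close to the quoted 7.3 um

print(res_coherent, res_fluor, dof_fluor)
```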

Figure 4 shows that our 3D reconstructions (400×400×15 voxels with voxel size of 0.096×0.096×1.7 μm³) clearly resolve the sub-diffraction individual microspheres and demonstrate better sectioning ability in both coherent and fluorescent channels compared to standard widefield imaging (without deconvolution). In the reconstruction, the average lateral peak-to-peak distance of these microspheres is around 670 nm, which is smaller than the nominal size of each microsphere. This is likely due to vertically staggered stacking of the microspheres. Given that our lateral resolution is at least as fine as 670 nm, we do break the lateral diffraction limit for both coherent and fluorescent channels, and the coherent channel achieves >2× lateral resolution improvement. Axially, we demonstrate 6 μm resolution for both channels, which is beyond the axial diffraction limit in each case. The coherent channel improves the axial resolution from no sectioning ability to 6 μm. Given a lateral resolution of 670 nm in the coherent channel, we can deduce the illumination NA of this speckle to be >0.4, which suggests the speckle intensity grain size is smaller than 670 nm.


Fig. 4 3D multimodal (fluorescence and phase) SIM reconstruction compared to widefield fluorescence and coherent intensity images for 700 nm fluorescent microspheres. Resolution beyond the system’s diffraction limit is achieved in both the (a) coherent and (b) fluorescent arms.


3.2. 3D large-FOV multimodal demonstration

Next we use the same setup to demonstrate 3D multimodal imaging over our full sensor area (FOV ∼314 μm × 500 μm). As shown previously, our method achieves ∼0.6×0.6×6 μm³ resolution for both fluorescence and phase imaging over an axial range of ∼24 μm. This corresponds to ∼14 Mega-voxels of information. Our experiments are only a prototype; the technique is scalable to the Gigavoxel range with a higher-throughput objective and higher illumination NA.
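The ∼14 Mega-voxel figure is consistent with Nyquist sampling of the stated volume at the stated resolution, as this small check (our own arithmetic) shows:

```python
# Nyquist-sampled voxel count over the reconstructed volume:
# two samples per resolution element along each axis
fov = (314.0, 500.0, 24.0)   # um, reconstructed volume
res = (0.6, 0.6, 6.0)        # um, achieved resolution

voxels = 1.0
for extent, r in zip(fov, res):
    voxels *= extent / (r / 2.0)

print(voxels / 1e6)          # ~14 (Mega-voxels)
```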

Figure 5 shows the full-sensor 3D quantitative phase and fluorescence reconstructions (5200×3280×15 voxels with voxel size of 0.096×0.096×1.7 μm³) of a multi-size sample (mixed 2 μm and 4 μm fluorescent and 3 μm non-fluorescent polystyrene microspheres). We adopt the same z-scan step size and number of slices as in Fig. 4. Zoom-ins on 2 regions of interest (ROIs) display 4 axial layers each. The arrows highlight 2 μm fluorescent microspheres, which defocus more quickly than the larger ones. The locations of the fluorescent microspheres match well between the two channels. However, there are some locations in the fluorescence reconstruction where 4 μm microspheres collapse because the immersion medium dissolves the beads over time.

Finally, we demonstrate our technique on human colorectal adenocarcinoma (HT-29) cells fluorescently tagged with AlexaFluor phalloidin (5200×3280×18 voxels with voxel size of 0.096×0.096×1.7 μm³), which labels F-actin filaments (sample preparation details in Appendix C). We use a z-scan step size of 1.6 μm across a 12.8 μm range and reconstruct 19 axial layers, separated by 1.7 μm. Figure 6 shows the full-sensor 3D quantitative phase and fluorescence reconstructions, with zoom-ins on 2 ROIs. The sample’s morphological features, as visualized with quantitative phase, match well with the F-actin visualization from the fluorescent channel. This is expected, since F-actin filaments are generally known to encapsulate the cell body.

4. Discussion

Unlike traditional 3D SIM or 3D quantitative phase methods, which use expensive spatial light modulators (SLMs) [70, 71] or galvanometer/MEMS mirrors [57, 72, 73], our technique is relatively simple and inexpensive. Layered Scotch tape efficiently creates speckle patterns with $\mathrm{NA}_{illum}>0.4$, which is hard to achieve with traditional patterning approaches to high-content imaging (e.g. lenslet arrays or grating masks [29–35]). Furthermore, the random structured illumination conveniently multiplexes both phase and fluorescence information into the system’s aperture, enabling us to achieve multimodal 3D SR.

One limitation of our technique is that the fluorescent reconstruction relies on the recovered 3D speckle from the coherent imaging channel, so mismatch between the two channels can result in artifacts that degrade resolution. Indeed, the SR gain we achieve experimentally in the fluorescent channel does not match that achieved in the coherent channel. We attribute this mainly to mismatch in axial alignment between the coherent and fluorescent cameras, since the long DOF of the objective made it difficult to axially align the cameras to within the axial resolution limit of the high-resolution speckle pattern. In addition, our 3D coherent reconstruction suffers from coherent noise due to system instabilities during the acquisition process. Specifically, 3D phase information is encoded into the speckle-like (high dynamic range) features within the measurements, which are affected by Poisson noise. These factors reduce performance in both the 3D phase and fluorescence reconstructions.

Another limitation is the relatively long acquisition time: 1200 translations of the Scotch tape take 180 seconds (without hardware optimization). The number of acquisitions could potentially be reduced with further investigation of the redundancy in the data, which would also reduce the computational processing time for the reconstruction; this currently takes ∼6 hours on an NVIDIA TITAN Xp GPU in MATLAB for each 40×40 μm² patch. Cloud computing could also parallelize the reconstruction by patches.


Fig. 5 Reconstructed 3D multimodal (fluorescence and phase) large-FOV for mixed 2 μm, 4μm fluorescent and 3 μm non-fluorescent polystyrene microspheres. Zoom-ins for two ROIs show fluorescence and phase at different depths.



Fig. 6 Reconstructed 3D multimodal (fluorescence and phase) large-FOV imaging for HT-29 cells (See Visualization 1 and Visualization 2). Zoom-ins for two ROIs show fluorescence and phase at different depths. The blue arrows in two ROIs indicate two-layer cell clusters that come in and out of focus. The orange arrows indicate intracellular components, including higher-phase-contrast lipid vesicles at z = −5.1 μm, nucleolus at z = 0, as well as the cell nucleus and cell-cell membrane contacts.


5. Conclusion

We have presented a 3D SIM multimodal (phase and fluorescence) technique using Scotch tape as the patterning element. The Scotch tape efficiently generates high-resolution 3D speckle patterns over a large volume, which multiplexes 3D super-resolution phase and fluorescence information into our low-NA imaging system. A computational optimization algorithm based on 3D coherent and incoherent imaging models is developed to both solve the inverse problem and self-calibrate the unknown 3D random speckle illumination and the system’s aberrations. The result is 3D sub-diffraction fluorescence reconstruction and 3D sub-diffraction phase reconstruction with >2× lateral resolution enhancement. The method is potentially scalable for Gigavoxel imaging.

Appendix A Gradient derivation

A.1. Vectorial notation

In order to derive the gradients used to solve the multivariate optimization problems in Eqs. (4) and (6), it is convenient to represent our 3D coherent and fluorescent models in linear-algebra vectorial notation in the following sections.

According to Eqs. (1) and (2), we can re-express the multi-slice scattering model in vectorial form as

$$\mathbf{f}_{l,1} = \mathbf{H}_{\Delta s_l,\lambda_{ex}} \mathbf{Q} \mathbf{S}_l \mathbf{p}_c,$$
$$\mathbf{g}_{l,m} = \mathrm{diag}(\mathbf{f}_{l,m})\, \mathbf{t}_m, \quad m=1,\ldots,M,$$
$$\mathbf{f}_{l,m+1} = \mathbf{H}_{\Delta z_m,\lambda_{ex}}\, \mathbf{g}_{l,m}, \quad m=1,\ldots,M-1,$$
$$\mathbf{G}_l = \mathbf{H}_{\Delta z_{M,l},\lambda_{ex}}\, \mathbf{g}_{l,M},$$
where the boldface symbols are the vectorial representations of the corresponding 2D variables of the original model, $\mathbf{Q}$ is the cropping operator, $\mathbf{S}_l$ is the shifting operator that shifts the speckle pattern by $\mathbf{r}_l$, and the defocus convolution operation is expressed as
$$\mathbf{H}_{z,\lambda} = \mathbf{F}^{-1}\, \mathrm{diag}(\tilde{\mathbf{h}}_{z,\lambda})\, \mathbf{F},$$
where $\mathbf{F}$ and $\mathbf{F}^{-1}$ are the Fourier and inverse Fourier transform operators, respectively, and $\tilde{\mathbf{h}}_{z,\lambda}$ is the vectorized coherent TF for propagation distance $z$ and wavelength $\lambda$. With these definitions, we rewrite the coherent and fluorescence intensities as
$$\mathbf{I}_{c,l}^{z} = \left| \mathbf{H}_c \mathbf{H}_{z,\lambda_{ex}} \mathbf{G}_l \right|^2, \qquad \mathbf{I}_{f,l} = \sum_{m=1}^{M} \mathbf{K}_{z_{m,l}}\, \mathrm{diag}(|\mathbf{f}_{l,m}|^2)\, \mathbf{o}_m,$$
where $\mathbf{H}_c$ is also a convolution operation of the form in Eq. (9), with the TF vector, $\tilde{\mathbf{h}}_{z,\lambda}$, replaced by the pupil vector $\tilde{\mathbf{h}}_c$, and
$$\mathbf{K}_{z_{m,l}} = \mathbf{F}^{-1}\, \mathrm{diag}\!\left( \mathbf{F} \left| \mathbf{F}^{-1}\, \mathrm{diag}(\tilde{\mathbf{h}}_{z_{m,l},\lambda_{em}})\, \tilde{\mathbf{h}}_f \right|^2 \right) \mathbf{F}$$
is the convolution operation with the incoherent TF at defocus $z_{m,l}$.

Next we use this vectorial model to represent the coherent and fluorescent cost functions for a single intensity measurement as

$$e_{c,l}^{z}(\mathbf{t}_1,\ldots,\mathbf{t}_M,\mathbf{p}_c,\tilde{\mathbf{h}}_c) = \mathbf{e}_{c,l}^{z\,T}\, \mathbf{e}_{c,l}^{z} = \left\| \sqrt{\mathbf{I}_{c,l}^{z}} - \left| \mathbf{H}_c \mathbf{H}_{z,\lambda_{ex}} \mathbf{G}_l \right| \right\|_2^2,$$
$$e_{f,l}(\mathbf{o}_1,\ldots,\mathbf{o}_M,\mathbf{p}_c,\tilde{\mathbf{h}}_f) = \mathbf{e}_{f,l}^{T}\, \mathbf{e}_{f,l} = \left\| \mathbf{I}_{f,l} - \sum_{m=1}^{M} \mathbf{K}_{z_{m,l}}\, \mathrm{diag}(|\mathbf{f}_{l,m}|^2)\, \mathbf{o}_m \right\|_2^2,$$
where $\mathbf{e}_{c,l}^{z} = \sqrt{\mathbf{I}_{c,l}^{z}} - \left| \mathbf{H}_c \mathbf{H}_{z,\lambda_{ex}} \mathbf{G}_l \right|$ and $\mathbf{e}_{f,l} = \mathbf{I}_{f,l} - \sum_{m=1}^{M} \mathbf{K}_{z_{m,l}}\, \mathrm{diag}(|\mathbf{f}_{l,m}|^2)\, \mathbf{o}_m$ are the coherent and fluorescent cost vectors, respectively.

A.2. Gradient derivation

The following derivation is based on CR calculus and is similar to the derivation introduced by our previous work [38, 67].

A.2.1. Gradient derivation for 3D coherent imaging

To optimize Eq. (4) for $\mathbf{t}_1,\ldots,\mathbf{t}_M$, $\mathbf{p}_c$, and $\tilde{\mathbf{h}}_c$, we take the derivative of the coherent cost function with respect to each. We first express the gradients of the transmittance function vectors, $\mathbf{t}_1,\ldots,\mathbf{t}_M$, as

$$\nabla_{\mathbf{t}_m} e_{c,l}^{z} = \left( \frac{\partial e_{c,l}^{z}}{\partial \mathbf{g}_{l,m}} \frac{\partial \mathbf{g}_{l,m}}{\partial \mathbf{t}_m} \right)^{\dagger} = \mathrm{diag}(\overline{\mathbf{f}_{l,m}}) \left( \frac{\partial e_{c,l}^{z}}{\partial \mathbf{g}_{l,m}} \right)^{\dagger} = \mathrm{diag}(\overline{\mathbf{f}_{l,m}})\, \mathbf{v}_{l,m},$$
where
$$\mathbf{v}_{l,M} = \left( \frac{\partial e_{c,l}^{z}}{\partial \mathbf{g}_{l,M}} \right)^{\dagger} = -\mathbf{H}_{\Delta z_{M,l},\lambda_{ex}}^{\dagger} \mathbf{H}_{z,\lambda_{ex}}^{\dagger} \mathbf{H}_c^{\dagger}\, \mathrm{diag}\!\left( \frac{\mathbf{H}_c \mathbf{H}_{z,\lambda_{ex}} \mathbf{G}_l}{\left| \mathbf{H}_c \mathbf{H}_{z,\lambda_{ex}} \mathbf{G}_l \right|} \right) \mathbf{e}_{c,l}^{z},$$
$$\mathbf{v}_{l,m} = \left( \frac{\partial e_{c,l}^{z}}{\partial \mathbf{g}_{l,m+1}} \frac{\partial \mathbf{g}_{l,m+1}}{\partial \mathbf{g}_{l,m}} \right)^{\dagger} = \mathbf{H}_{\Delta z_m,\lambda_{ex}}^{\dagger}\, \mathrm{diag}(\overline{\mathbf{t}_{m+1}})\, \mathbf{v}_{l,m+1}, \quad m=1,\ldots,M-1,$$
are auxiliary vectors for the intermediate gradient steps, $\dagger$ denotes the Hermitian (conjugate-transpose) operation, and $\overline{\,\cdot\,}$ denotes complex conjugation. With these auxiliary vectors, it is relatively simple to derive the gradient of the speckle field vector, $\mathbf{p}_c$, as
$$\nabla_{\mathbf{p}_c} e_{c,l}^{z} = \left( \frac{\partial e_{c,l}^{z}}{\partial \mathbf{g}_{l,1}} \frac{\partial \mathbf{g}_{l,1}}{\partial \mathbf{p}_c} \right)^{\dagger} = \mathbf{S}_l^{\dagger} \mathbf{Q}^{\dagger} \mathbf{H}_{\Delta s_l,\lambda_{ex}}^{\dagger}\, \mathrm{diag}(\overline{\mathbf{t}_1})\, \mathbf{v}_{l,1}.$$

As for the gradient of the pupil function, h˜c, we have

h˜cec,lz=(ec,lzh˜c)=diag(FGl¯)diag(h˜z,λex¯)Fdiag(HcHz,λexGl|HcHz,λexGl|)ec,lz
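The recursion for $v_{l,m}$ is a standard backpropagation through the multi-slice model: one forward pass caching the per-slice fields, then one backward pass applying the adjoint operators. The NumPy sketch below illustrates this under simplifying assumptions (1D, no cropping/shifting operators, one shared slice spacing); it is an illustrative sketch, not the paper's implementation, and its gradients can be checked against finite differences:

```python
import numpy as np

def Hop(x, tf):
    """H x = F^{-1} diag(tf) F x."""
    return np.fft.ifft(tf * np.fft.fft(x))

def Hdag(x, tf):
    """Adjoint: (F^{-1} diag(tf) F)^dagger = F^{-1} diag(conj(tf)) F."""
    return np.fft.ifft(np.conj(tf) * np.fft.fft(x))

def coherent_grads(I_meas, p, t_slices, tf_dz, tf_z, tf_pupil):
    """Forward pass caching the per-slice fields f_{l,m}, then the backward
    recursion for v_{l,m}; returns the residual, the per-slice gradients
    grad_{t_m} = diag(conj(f_{l,m})) v_{l,m}, and the speckle-field gradient."""
    # ---- forward pass: cache f_{l,m} ----
    f = [p]
    for t in t_slices[:-1]:
        f.append(Hop(f[-1] * t, tf_dz))
    G = Hop(f[-1] * t_slices[-1], tf_dz)       # exit field G_l
    u = Hop(Hop(G, tf_z), tf_pupil)            # field at the sensor
    e = np.sqrt(I_meas) - np.abs(u)            # amplitude residual e_{c,l}^z
    # ---- backward pass: v_{l,M} = -H_dz^† H_z^† H_c^† diag(u/|u|) e ----
    v = Hdag(Hdag(Hdag(-(u / np.abs(u)) * e, tf_pupil), tf_z), tf_dz)
    grads_t = [None] * len(t_slices)
    for m in range(len(t_slices) - 1, -1, -1):
        grads_t[m] = np.conj(f[m]) * v         # grad_{t_m}
        if m > 0:                              # v_{l,m-1} = H_dz^† diag(conj t_m) v
            v = Hdag(np.conj(t_slices[m]) * v, tf_dz)
    grad_p = np.conj(t_slices[0]) * v          # shift/crop operators omitted
    return e, grads_t, grad_p
```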

A.2.2. Gradient derivation for 3D fluorescence imaging

To optimize Eq. (6) for $o_1,\ldots,o_M$, $p_c$, and $\tilde{h}_f$, we need to take the derivative of the fluorescent cost function with respect to each. First, we express the gradient with respect to the fluorescence distribution vectors of the different layers, $o_1,\ldots,o_M$, as

$$\nabla_{o_m} e_{f,l} = \left(\frac{\partial e_{f,l}}{\partial o_m}\right)^{\dagger} = -2\,\mathrm{diag}\!\left(|f_{l,m}|^2\right) K_{z_{m,l}}^{\dagger}\, e_{f,l}, \quad m = 1,\ldots,M.$$

Then, we derive the gradient with respect to the speckle field, $p_c$, as

$$\nabla_{p_c} e_{f,l} = \sum_{m=1}^{M}\left(\frac{\partial e_{f,l}}{\partial f_{l,m}}\frac{\partial f_{l,m}}{\partial p_c}\right)^{\dagger} = -2\sum_{m=1}^{M}\left(\frac{\partial f_{l,m}}{\partial p_c}\right)^{\dagger}\mathrm{diag}\!\left(f_{l,m}\right)\mathrm{diag}\!\left(o_m\right) K_{z_{m,l}}^{\dagger}\, e_{f,l},$$

where

$$\left(\frac{\partial f_{l,m}}{\partial p_c}\right)^{\dagger} = \left(\frac{\partial f_{l,m}}{\partial g_{l,m-1}}\frac{\partial g_{l,m-1}}{\partial g_{l,m-2}}\cdots\frac{\partial g_{l,2}}{\partial g_{l,1}}\frac{\partial g_{l,1}}{\partial p_c}\right)^{\dagger} = S_l^{\dagger} Q^{\dagger} H_{\Delta s_l,\lambda_{\mathrm{ex}}}^{\dagger}\left[\mathrm{diag}\!\left(\overline{t_{1}}\right) H_{\Delta z_{1},\lambda_{\mathrm{ex}}}^{\dagger}\right]\cdots\left[\mathrm{diag}\!\left(\overline{t_{m-1}}\right) H_{\Delta z_{m-1},\lambda_{\mathrm{ex}}}^{\dagger}\right].$$

As for the gradient with respect to the pupil function at the fluorescent wavelength, $\tilde{h}_f$, we can express

$$\nabla_{\tilde{h}_f} e_{f,l} = -2\sum_{m=1}^{M}\mathrm{diag}\!\left(\overline{\tilde{h}_{z_{m,l},\lambda_{\mathrm{em}}}}\right) F\,\mathrm{diag}\!\left(F^{-1}\mathrm{diag}\!\left(\tilde{h}_{z_{m,l},\lambda_{\mathrm{em}}}\right)\tilde{h}_f\right) F^{-1}\,\mathrm{diag}\!\left(\overline{F\,\mathrm{diag}\!\left(|f_{l,m}|^2\right) o_m}\right) F\, e_{f,l}.$$
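The gradient with respect to $o_m$ is the simplest to sketch, since the incoherent model is linear in $o_m$. The NumPy sketch below implements the forward model and $\nabla_{o_m} e_{f,l}$ only (1D, hypothetical inputs; the speckle-field and pupil gradients follow the same adjoint pattern):

```python
import numpy as np

def K_apply(x, otf):
    """Incoherent convolution K x = F^{-1} diag(OTF) F x (real output)."""
    return np.fft.ifft(otf * np.fft.fft(x)).real

def fluor_model_and_grads(I_meas, o_slices, speckle_int, otfs):
    """Linear fluorescence forward model and the gradients grad_{o_m}.
    speckle_int[m] holds |f_{l,m}|^2; otfs[m] is the incoherent OTF at
    defocus z_{m,l}, i.e. F |coherent PSF|^2, so the adjoint K^dagger
    applies the conjugate OTF."""
    I_model = sum(K_apply(s * o, tf)
                  for o, s, tf in zip(o_slices, speckle_int, otfs))
    e = I_meas - I_model                      # fluorescence residual e_{f,l}
    grads_o = []
    for s, tf in zip(speckle_int, otfs):
        Kde = np.fft.ifft(np.conj(tf) * np.fft.fft(e)).real   # K^dagger e
        grads_o.append(-2.0 * s * Kde)        # grad_{o_m} = -2 diag(|f|^2) K^† e
    return e, grads_o
```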

Appendix B Reconstruction algorithm

B.1. Initialization of the variables

Since we use a gradient-based algorithm to solve the problem, we must initialize each unknown variable, ideally as close as possible to the solution, based on prior knowledge.

For 3D coherent reconstruction, the targeted variables are the transmittance functions, $t_m(\mathbf{r})$, the incident speckle field, $p_c(\mathbf{r})$, and the pupil function, $\tilde{h}_c(\mathbf{u})$. We have no prior knowledge of the transmittance function or pupil function, so we set $t_m(\mathbf{r}) = 1$ for $m = 1,\ldots,M$ and set $\tilde{h}_c(\mathbf{u})$ to be a circle function with radius defined by $\mathrm{NA}_{\mathrm{det}}/\lambda_{\mathrm{ex}}$. This initializes the model with a completely transparent sample and a non-aberrated system. If the sample is mostly transparent, the amplitude of the incident speckle field is approximately the average of all the back-shifted in-focus coherent amplitudes:

$$p_c^{\mathrm{initial}}(\mathbf{r}) = \sum_{l=1}^{N_{\mathrm{img}}} \sqrt{I_{c,l,z=0}(\mathbf{r}+\mathbf{r}_l)}\Big/ N_{\mathrm{img}}.$$
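This initialization amounts to registering each in-focus amplitude image against its calibrated shift and averaging, which washes out sample structure while reinforcing the common speckle pattern. A 2D sketch, assuming integer-pixel shifts (the actual pipeline registers the speckle trajectory to sub-pixel precision):

```python
import numpy as np

def init_speckle_amplitude(I_stack, shifts):
    """Initialize the speckle amplitude |p_c| by shifting each in-focus
    coherent amplitude sqrt(I_{c,l,z=0}) back to a common frame and
    averaging over all N_img speckle positions."""
    acc = np.zeros(I_stack[0].shape)
    for I, (sy, sx) in zip(I_stack, shifts):
        acc += np.roll(np.sqrt(I), shift=(sy, sx), axis=(0, 1))
    return acc / len(I_stack)
```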

For 3D fluorescence reconstruction, the targeted variables are the sample fluorescence distribution, $o_m(\mathbf{r})$, the incident field, $p_c(\mathbf{r})$, and the pupil function at the emission wavelength, $\tilde{h}_f(\mathbf{u})$. We have no prior knowledge of the system’s aberrations, so we set $\tilde{h}_f(\mathbf{u})$ to be a circle function with radius defined by $\mathrm{NA}_{\mathrm{det}}/\lambda_{\mathrm{em}}$. For the incident speckle field, we use the estimated speckle field from the coherent reconstruction as the initialization. The key to a successful 3D fluorescence reconstruction with this dataset is initializing the sample’s 3D fluorescence distribution with the correlation-based SIM solver [53, 74–78], which gives us an approximate result to start from. We adapt the correlation-based solver to our case for a rough 3D SR fluorescence estimate. The basic idea is to use the illumination speckle intensity known from the coherent reconstruction and compute its correlation with our fluorescence measurements. This correlation is strongest where the speckle intensity lines up with the fluorescent light it excites in the measurement. Each layer of the estimated speckle intensity gates out out-of-focus fluorescent light in the measurement, so we can form a rough estimate of the 3D fluorescent sample. Mathematically, we express this correlation as

$$o_m^{\mathrm{initial}}(\mathbf{r}) = \sum_{n=1}^{9}\left\langle\left(I_{f,l}(\mathbf{r}) - \left\langle I_{f,l}(\mathbf{r})\right\rangle_{l(n)}\right)\left(|f_{m,l}(\mathbf{r})|^2 - \left\langle|f_{m,l}(\mathbf{r})|^2\right\rangle_{l(n)}\right)\right\rangle_{l(n)},$$

where $\langle\,\cdot\,\rangle_{l(n)}$ is the averaging operation over the index $l$ of the fluorescence images with the same z-scan position (at the $n$-th layer), over the index set $l(n) = \{12^2(n-1)+1, \ldots, 12^2 n\}$.

To understand why this correlation gives a good estimate of the 3D fluorescent sample, we go through a more detailed derivation, using the short-hand notation $\Delta$ for the centering operation $\Delta a_l(\mathbf{r}) = a_l(\mathbf{r}) - \langle a_l(\mathbf{r})\rangle_{l(n)}$. We then examine one component of Eq. (22):

$$\left\langle \Delta|f_{m,l}(\mathbf{r})|^2\, \Delta I_{f,l}(\mathbf{r})\right\rangle_{l(n)} = \sum_{m'=1}^{M}\int o_{m'}(\mathbf{r}')\,\left\langle \Delta|f_{m,l}(\mathbf{r})|^2\, \Delta|f_{m',l}(\mathbf{r}')|^2\right\rangle_{l(n)}\, \left|h_{f,z_{m',l}}(\mathbf{r}-\mathbf{r}')\right|^2 d\mathbf{r}'$$
$$\approx \sum_{m'=1}^{M}\int o_{m'}(\mathbf{r}')\,\left\langle \left(\Delta|f_{m,l}(\mathbf{r})|^2\right)^2\right\rangle_{l(n)}\,\delta_{m,m'}\,\delta(\mathbf{r}-\mathbf{r}')\,\left|h_{f,z_{m',l}}(\mathbf{r}-\mathbf{r}')\right|^2 d\mathbf{r}' \propto \left\langle \left(\Delta|f_{m,l}(\mathbf{r})|^2\right)^2\right\rangle_{l(n)}\, o_m(\mathbf{r}),$$

where we assume the speckle intensity is completely uncorrelated spatially in 3D, which is an approximation since the speckle has a finite grain size set by the illumination NA. Under this assumption, the correlation is essentially the 3D fluorescence distribution with an extra modulation factor (the local speckle-intensity variance). Hence, it serves well as an initialization for our 3D fluorescence distribution.
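The estimator above is just a per-pixel covariance between the measured fluorescence and the estimated speckle intensity, accumulated over the z-scan groups $l(n)$. A minimal NumPy sketch for one slice (toy 1D pixels; all names are illustrative):

```python
import numpy as np

def correlation_init(I_f, speckle_int, groups):
    """Correlation-based initial estimate of one fluorescence slice o_m.
    I_f: (L, N) raw fluorescence images; speckle_int: (L, N) estimated
    speckle intensities |f_{m,l}|^2 registered to the same frame;
    groups: lists of image indices l(n) sharing one z-scan position."""
    o = np.zeros(I_f.shape[1])
    for idx in groups:
        dI = I_f[idx] - I_f[idx].mean(axis=0)                  # Delta I_{f,l}
        dS = speckle_int[idx] - speckle_int[idx].mean(axis=0)  # Delta |f|^2
        o += (dI * dS).mean(axis=0)        # per-group covariance, summed over n
    return o
```

In the idealized single-slice, delta-PSF limit, this returns $o_m(\mathbf{r})$ scaled by the local speckle-intensity variance, consistent with the derivation above.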

B.2. Reconstruction algorithm

With all the initializations in place, Algorithm 1 and Algorithm 2 are summarized by the following pseudo-code:

Algorithm 1. 3D coherent imaging reconstruction (pseudo-code table)

Algorithm 2. 3D fluorescence imaging reconstruction (pseudo-code table)

The 3D coherent reconstruction takes about 40 iterations to reach convergence, while the 3D fluorescence reconstruction takes around 25 iterations.
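Both algorithms share the same outer structure: iterate, loop over the measurements, and take a gradient step on each unknown using the gradients of Appendix A. A generic skeleton of that loop (the dictionary interface, fixed step sizes, and all names are placeholders, not the paper's implementation):

```python
import numpy as np

def reconstruct(measurements, variables, grad_fn, step_sizes, n_iter):
    """Skeleton shared by Algorithms 1 and 2: for each iteration, loop
    over the measurements and take a gradient-descent step on every
    unknown. grad_fn returns {name: gradient} for one measurement."""
    for _ in range(n_iter):
        for meas in measurements:
            grads = grad_fn(meas, variables)
            for name, g in grads.items():
                variables[name] = variables[name] - step_sizes[name] * g
    return variables
```

The sequential pass over the measurements is in the spirit of stochastic gradient descent [68], which updates the unknowns once per measurement rather than once per full dataset.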

Appendix C Sample preparation

The sample shown in Fig. 4 is a monolayer of 700 nm diameter polystyrene microspheres (ThermoFisher, R700), prepared by placing a microsphere dilution (60 μL stock solution in 500 μL isopropyl alcohol) onto a #1.5 coverslip and allowing it to air dry. Water is subsequently placed on the coverslip to reduce the index mismatch between the microspheres and air. An adhesive spacer, followed by another #1.5 coverslip, was placed on top of the original coverslip to ensure a uniform sample layer for imaging.

The sample used in Fig. 5 is a mixture of 2 μm (ThermoFisher, F8826) and 4 μm (ThermoFisher, F8858) fluorescently tagged (λem = 605 nm) and 3 μm non-fluorescent (Sigma-Aldrich, LB30) polystyrene microspheres. We follow a similar procedure as before, except that the dilution is composed of 60 μL of stock solution of each type of microsphere in 500 μL isopropyl alcohol. Since these microspheres are larger, we use high-index oil ($n_m$ = 1.52 at λ = 532 nm) for sample immersion.

Figure 6 uses a sample of HT-29 cells grown in DMEM with 10% FBS, trypsinized with 1× trypsin, passaged twice a week into 100 mm dishes at 1/5, 1/6, and 1/8 dilutions, and stored in a 37 °C, 5% CO2 incubator. For imaging, HT-29 cells were grown on glass coverslips (12 mm diameter, No. 1 thickness; Carolina Biological Supply Co.) and fixed with 3% paraformaldehyde for 20 min. Fixed cells were blocked and permeabilized in phosphate-buffered saline (PBS; Corning Cellgro) with 5% donkey serum (D9663, Sigma-Aldrich) and 0.3% Triton X-100 (Fisher Scientific) for 30 minutes. Cells were incubated with Alexa Fluor 546 Phalloidin (A22283, ThermoFisher Scientific) for 1 hour, washed 3 times with PBS, mounted onto a second glass coverslip (24×50 mm, No. 1.5 thickness; Fisher Scientific), and immobilized with sealant (Cytoseal 60; Thermo Scientific).

Funding

STROBE: A National Science Foundation Science & Technology Center (DMR 1548924); Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative (GBMF4562); Chan Zuckerberg Biohub; Ruth L. Kirschstein National Research Service Award (F32GM129966).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. B. Mccullough, X. Ying, T. Monticello, and M. Bonnefoi, “Digital microscopy imaging and new approaches in toxicologic pathology,” Toxicol Pathol. 32, 49–58 (2004). [CrossRef]   [PubMed]  

2. M. H. Kim, Y. Park, D. Seo, Y. J. Lim, D.-I. Kim, C. W. Kim, and W. H. Kim, “Virtual microscopy as a practical alternative to conventional microscopy in pathology education,” Basic Appl. Pathol. 1, 46–48 (2008). [CrossRef]  

3. F. R. Dee, “Virtual microscopy in pathology education,” Human Pathol 40, 1112–1121 (2009). [CrossRef]  

4. R. Pepperkok and J. Ellenberg, “High-throughput fluorescence microscopy for systems biology,” Nat. Rev. Mol. Cell Biol. 7, 690–696 (2006). [CrossRef]   [PubMed]  

5. J. C. Yarrow, G. Totsukawa, G. T. Charras, and T. J. Mitchison, “Screening for cell migration inhibitors via automated microscopy reveals a Rho-kinase Inhibitor,” Chem. Biol. 12, 385–395 (2005). [CrossRef]   [PubMed]  

6. V. Laketa, J. C. Simpson, S. Bechtel, S. Wiemann, and R. Pepperkok, “High-content microscopy identifies new neurite outgrowth regulators,” Mol. Biol. Cell 18, 242–252 (2007). [CrossRef]  

7. A. Trounson, “The production and directed differentiation of human embryonic stem cells,” Endocr. Rev. 27(2), 208–219 (2006). [CrossRef]   [PubMed]  

8. U. S. Eggert, A. A. Kiger, C. Richter, Z. E. Perlman, N. Perrimon, T. J. Mitchison, and C. M. Field, “Parallel chemical genetic and genome-wide RNAi screens identify cytokinesis inhibitors and targets,” PLoS Biol. 2, e379 (2004). [CrossRef]   [PubMed]  

9. V. Starkuviene and R. Pepperkok, “The potential of high-content high-throughput microscopy in drug discovery,” Br. J. Pharmacol 152, 62–71 (2007). [CrossRef]   [PubMed]  

10. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit. II,” J. Opt. Soc. Am. 57, 932–941 (1967). [CrossRef]  

11. C. J. Schwarz, Y. Kuznetsova, and S. R. J. Brueck, “Imaging interferometric microscopy,” Opt. Lett. 28, 1424–1426 (2003). [CrossRef]   [PubMed]  

12. M. Kim, Y. Choi, C. Fang-Yen, Y. Sung, R. R. Dasari, M. S. Feld, and W. Choi, “High-speed synthetic aperture microscopy for live cell imaging,” Opt. Lett. 36, 148–150 (2011). [CrossRef]   [PubMed]  

13. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19, 780–782 (1994). [CrossRef]   [PubMed]  

14. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006). [CrossRef]   [PubMed]  

15. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nature Methods 3, 793–795 (2006). [CrossRef]   [PubMed]  

16. R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999). [CrossRef]  

17. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” Journal of Microscopy 198, 82–87 (2000). [CrossRef]   [PubMed]  

18. M. G. L. Gustafsson, “Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution,” PNAS 102, 13081–13086 (2005). [CrossRef]   [PubMed]  

19. W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” PNAS 98, 11301–11305 (2001). [CrossRef]   [PubMed]  

20. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18, 11181–11191 (2010). [CrossRef]   [PubMed]  

21. A. Greenbaum, W. Luo, B. Khademhosseinieh, T.-W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Scientific Reports 3, 1717 (2013). [CrossRef]  

22. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photon. 7, 739–745 (2013). [CrossRef]  

23. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). [CrossRef]   [PubMed]  

24. L. Tian, Z. Liu, L. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2, 904–911 (2015). [CrossRef]  

25. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015). [CrossRef]  

26. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3, 827–835 (2016). [CrossRef]  

27. R. Ling, W. Tahir, H.-Y. Lin, H. Lee, and L. Tian, “High-throughput intensity diffraction tomography with a computational microscope,” Biomed. Opt. Express 9, 2130 (2018). [CrossRef]   [PubMed]  

28. A. Pan, Y. Zhang, K. Wen, M. Zhou, J. Min, M. Lei, and B. Yao, “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26, 23119–23131 (2018). [CrossRef]   [PubMed]  

29. A. Orth and K. Crozier, “Microscopy with microlens arrays: high throughput, high resolution and light-field imaging,” Opt. Express 20, 13522–13531 (2012). [CrossRef]   [PubMed]  

30. A. Orth and K. Crozier, “Gigapixel fluorescence microscopy with a water immersion microlens array,” Opt. Express 21, 2361–2368 (2013). [CrossRef]   [PubMed]  

31. A. Orth and K. B. Crozier, “High throughput multichannel fluorescence microscopy with microlens arrays,” Opt. Express 22, 18101–18112 (2014). [CrossRef]   [PubMed]  

32. A. Orth, M. J. Tomaszewski, R. N. Ghosh, and E. Schonbrun, “Gigapixel multispectral microscopy,” Optica 2, 654–662 (2015). [CrossRef]  

33. S. Pang, C. Han, M. Kato, P. W. Sternberg, and C. Yang, “Wide and scalable field-of-view Talbot-grid-based fluorescence microscopy,” Opt. Lett. 37, 5018–5020 (2012). [CrossRef]   [PubMed]  

34. S. Pang, C. Han, J. Erath, A. Rodriguez, and C. Yang, “Wide field-of-view Talbot grid-based microscopy for multicolor fluorescence imaging,” Opt. Express 21, 14555–14565 (2013). [CrossRef]   [PubMed]  

35. S. Chowdhury, J. Chen, and J. Izatt, “Structured illumination fluorescence microscopy using Talbot self-imaging effect for high-throughput visualization,” arXiv 1801.03540 (2018).

36. K. Guo, Z. Zhang, S. Jiang, J. Liao, J. Zhong, Y. C. Eldar, and G. Zheng, “13-fold resolution gain through turbid layer via translated unknown speckle illumination,” Biomed. Opt. Express 9, 260–274 (2018). [CrossRef]   [PubMed]  

37. M. Jang, Y. Horie, A. Shibukawa, J. Brake, Y. Liu, S. M. Kamali, A. Arbabi, H. Ruan, A. Faraon, and C. Yang, “Wavefront shaping with disorder-engineered metasurfaces,” Nat. Photon. 12, 84–90 (2018). [CrossRef]  

38. L.-H. Yeh, S. Chowdhury, and L. Waller, “Computational structured illumination for high-content fluorescent and phase microscopy,” Biomed. Opt. Express 10, 1978–1998 (2019). [CrossRef]   [PubMed]  

39. Y. Park, G. Popescu, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Diffraction phase and fluorescence microscopy,” Opt. Express 14, 8263–8268 (2006). [CrossRef]   [PubMed]  

40. S. Chowdhury, W. J. Eldridge, A. Wax, and J. A. Izatt, “Structured illumination multimodal 3D-resolved quantitative phase and fluorescence sub-diffraction microscopy,” Biomed. Opt. Express 8, 2496–2518 (2017). [CrossRef]   [PubMed]  

41. S. Chowdhury, W. J. Eldridge, A. Wax, and J. A. Izatt, “Structured illumination microscopy for dualmodality 3D sub-diffraction resolution fluorescence and refractive-index reconstruction,” Biomed. Opt. Express 8, 5776–5793 (2017). [CrossRef]  

42. M. Schürmann, G. Cojoc, S. Girardo, E. Ulbricht, J. Guck, and P. Müller, “Three-dimensional correlative single-cell imaging utilizing fluorescence and refractive index tomography,” J. Biophoton. 2017, e201700145 (2017).

43. S. Shin, D. Kim, K. Kim, and Y. Park, “Super-resolution three-dimensional fluorescence and optical diffraction tomography of live cells using structured illumination generated by a digital micromirror device,” arXiv 1801.00854 (2018).

44. D. Li, L. Shao, B.-C. Chen, X. Zhang, M. Zhang, B. Moses, D. E. Milkie, J. R. Beach, J. A. Hammer, M. Pasham, T. Kirchhausen, M. A. Baird, M. W. Davidson, P. Xu, and E. Betzig, “Extended-resolution structured illumination imaging of endocytic and cytoskeletal dynamics,” Science 349, aab3500 (2015). [CrossRef]  

45. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. L. Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photon. 6, 312–315 (2012). [CrossRef]  

46. R. Ayuk, H. Giovannini, A. Jost, E. Mudry, J. Girard, T. Mangeat, N. Sandeau, R. Heintzmann, K. Wicker, K. Belkebir, and A. Sentenac, “Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm,” Opt. Lett. 38, 4723–4726 (2013). [CrossRef]   [PubMed]  

47. J. Min, J. Jang, D. Keum, S.-W. Ryu, C. Choi, K.-H. Jeong, and J. C. Ye, “Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery,” Scientific Reports 3, 2075:1–6 (2013). [CrossRef]  

48. S. Dong, P. Nanda, R. Shiradkar, K. Guo, and G. Zheng, “High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography,” Opt. Express 22, 20856–20870 (2014). [CrossRef]   [PubMed]  

49. H. Yilmaz, E. G. V. Putten, J. Bertolotti, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Speckle correlation resolution enhancement of wide-field fluorescence imaging,” Optica 2, 424–429 (2015). [CrossRef]  

50. A. Jost, E. Tolstik, P. Feldmann, K. Wicker, A. Sentenac, and R. Heintzmann, “Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction,” PLoS ONE 10, e0132174 (2015). [CrossRef]   [PubMed]  

51. A. Negash, S. Labouesse, N. Sandeau, M. Allain, H. Giovannini, J. Idier, R. Heintzmann, P. C. Chaumet, K. Belkebir, and A. Sentenac, “Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations,” J. Opt. Soc. Am. A 33, 1089–1094 (2016). [CrossRef]  

52. S. Labouesse, M. Allain, J. Idier, S. Bourguignon, A. Negash, P. Liu, and A. Sentenac, “Joint reconstruction strategy for structured illumination microscopy with unknown illuminations,” arXiv 1607.01980 (2016).

53. L.-H. Yeh, L. Tian, and L. Waller, “Structured illumination microscopy with unknown patterns and a statistical prior,” Biomed. Opt. Express 8, 695–711 (2017). [CrossRef]   [PubMed]  

54. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Optics Communications 1, 153–156 (1969). [CrossRef]  

55. V. Lauer, “New approach to optical diffraction tomography yielding a vector equation of diffraction tomography and a novel tomographic microscope,” J. Microscopy 205, 165–176 (2002). [CrossRef]  

56. M. Debailleul, B. Simon, V. Georges, O. Haeberlé, and V. Lauer, “Holographic microscopy and diffractive microtomography of transparent samples,” Meas. Sci. Technol. 19, 074009 (2008). [CrossRef]  

57. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17, 266–277 (2009). [CrossRef]   [PubMed]  

58. M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys J. 94, 4957–4970 (2008). [CrossRef]   [PubMed]  

59. J. M. Cowley and A. F. Moodie, “The scattering of electrons by atoms and crystals. I. A new theoretical approach,” Acta Crystallographica 10, 609–619 (1957). [CrossRef]  

60. A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29, 1606–1614 (2012). [CrossRef]  

61. T. M. Godden, R. Suman, M. J. Humphry, J. M. Rodenburg, and A. M. Maiden, “Ptychographic microscope for three-dimensional imaging,” Opt. Express 22, 12513–12523 (2014). [CrossRef]   [PubMed]  

62. C. J. R. Sheppard, Y. Kawata, S. Kawata, and M. Gu, “Three-dimensional transfer functions for high-aperture systems,” J. Opt. Soc. Am. A 11, 593–598 (1994). [CrossRef]  

63. M. Gu, Advanced Optical Imaging Theory (Springer, 2000). [CrossRef]  

64. M. Debailleul, V. Georges, B. Simon, R. Morin, and O. Haeberlé, “High-resolution three-dimensional tomographic diffractive microscopy of transparent inorganic and biological samples,” Opt. Lett. 34, 79–81 (2009). [CrossRef]  

65. J. W. Goodman, Introduction to Fourier Optics (Roberts & Co., 2005).

66. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33, 156–158 (2008). [CrossRef]   [PubMed]  

67. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33213–33238 (2015). [CrossRef]  

68. L. Bottou, “Large-scale machine learning with stochastic gradient descent,” International Conference on Computational Statistics pp. 177–187 (2010).

69. Z. Jingshan, R. A. Claus, J. Dauwels, L. Tian, and L. Waller, “Transport of intensity phase imaging by intensity spectrum fitting of exponentially spaced defocus planes,” Opt. Express 22, 10661–10674 (2014). [CrossRef]   [PubMed]  

70. R. Förster, H.-W. Lu-Walther, A. Jost, M. Kielhorn, K. Wicker, and R. Heintzmann, “Simple structured illumination microscope setup with high acquisition speed by using a spatial light modulator,” Opt. Express 22, 20663–20677(2014). [CrossRef]   [PubMed]  

71. S. Chowdhury, W. J. Eldridge, A. Wax, and J. Izatt, “Refractive index tomography with structured illumination,” Optica 4, 537–545 (2017). [CrossRef]  

72. D. Dan, M. Lei, B. Yao, W. Wang, M. Winterhalder, A. Zumbusch, Y. Qi, L. Xia, S. Yan, Y. Yang, P. Gao, T. Ye, and W. Zhao, “DMD-based LED-illumination Super-resolution and optical sectioning microscopy,” Scientific Reports 3, 1116 (2013). [CrossRef]   [PubMed]  

73. K. Lee, K. Kim, G. Kim, S. Shin, and Y. Park, “Time-multiplexed structured illumination using a DMD for optical diffraction tomography,” Opt. Lett. 42, 999–1002 (2017). [CrossRef]   [PubMed]  

74. T. Tanaami, S. Otsuki, N. Tomosada, Y. Kosugi, M. Shimizu, and H. Ishida, “High-speed 1-frame/ms scanning confocal microscope with a microlens and Nipkow disks,” Appl. Opt. 41, 4704–4708 (2002). [CrossRef]  

75. J. G. Walker, “Non-scanning confocal fluorescence microscopy using speckle illumination,” Opt. Commun. 189, 221–226 (2001). [CrossRef]  

76. S.-H. Jiang and J. G. Walker, “Experimental confirmation of non-scanning fluorescence confocal microscopy using speckle illumination,” Opt. Commun. 238, 1–12 (2004). [CrossRef]  

77. J. García, Z. Zalevsky, and D. Fixler, “Synthetic aperture superresolution by speckle pattern projection,” Opt. Express 13, 6075–6078 (2005). [CrossRef]  

78. R. Heintzmann and P. A. Benedetti, “High-resolution image reconstruction in fluorescence microscopy with patterned excitation,” Appl. Opt. 45, 5037–5045 (2006). [CrossRef]   [PubMed]  

Supplementary Material (2)

Visualization 1: Through-focus visualization for the 3D phase reconstruction of HT-29 cells
Visualization 2: Through-focus visualization for the 3D fluorescence reconstruction of HT-29 cells from speckle illumination


Figures (6)

Fig. 1 3D multimodal structured illumination microscopy (SIM) with laterally translating Scotch tape as the patterning element. The coherent arm (Sensor-C1 and Sensor-C2) simultaneously captures images with different defocus at the laser illumination wavelength (λex = 532 nm), used for both 3D phase retrieval and speckle trajectory calibration. The incoherent (fluorescence) arm (Sensor-F) captures low-resolution raw fluorescence acquisitions at the emission wavelength (λem = 605 nm) for 3D fluorescence super-resolution reconstruction. OBJ: objective, DM: dichroic mirror, SF: spectral filter, ND-F: neutral-density filter.

Fig. 2 3D coherent and incoherent transfer function (TF) analysis of the SIM imaging process. The 3D (a) coherent and (b) incoherent TFs of the detection system are auto-correlated with the 3D Fourier support of the (c) illumination speckle field and (d) illumination intensity, respectively, resulting in the effective Fourier support of 3D (e) coherent and (f) incoherent SIM. In (e) and (f), we display the decomposition of the auto-correlation in two steps: ① tracing the illumination support in one orientation and ② replicating this trace in the azimuthal direction.

Fig. 3 3D multi-slice model: (a) coherent and (b) incoherent imaging models for the interaction between the sample and the speckle field as light propagates through the sample.

Fig. 4 3D multimodal (fluorescence and phase) SIM reconstruction compared to widefield fluorescence and coherent intensity images for 700 nm fluorescent microspheres. Resolution beyond the system’s diffraction limit is achieved in both the (a) coherent and (b) fluorescent arms.

Fig. 5 Reconstructed 3D multimodal (fluorescence and phase) large-FOV imaging for mixed 2 μm and 4 μm fluorescent and 3 μm non-fluorescent polystyrene microspheres. Zoom-ins for two ROIs show fluorescence and phase at different depths.

Fig. 6 Reconstructed 3D multimodal (fluorescence and phase) large-FOV imaging for HT-29 cells (see Visualization 1 and Visualization 2). Zoom-ins for two ROIs show fluorescence and phase at different depths. The blue arrows in the two ROIs indicate two-layer cell clusters that come in and out of focus. The orange arrows indicate intracellular components, including higher-phase-contrast lipid vesicles at z = −5.1 μm, a nucleolus at z = 0, as well as the cell nucleus and cell-cell membrane contacts.

Tables (3)

Table 1. Summary of spatial frequency bandwidths
Algorithm 1. 3D coherent imaging reconstruction
Algorithm 2. 3D fluorescence imaging reconstruction
