
Light-sheet microscopy: a tutorial

Open Access

Abstract

This paper is intended to give a comprehensive review of light-sheet (LS) microscopy from an optics perspective. As such, emphasis is placed on the advantages that LS microscope configurations present, given the degree of freedom gained by uncoupling the excitation and detection arms. The new imaging properties are first highlighted in terms of optical parameters and how these have enabled several biomedical applications. Then, the basics for understanding how a LS microscope works are presented. This is followed by a tutorial on LS microscope designs, each working at a different resolution and intended for different applications. Then, based on a numerical Fourier analysis and given the multiple possibilities for generating the LS in the microscope (using Gaussian, Bessel, and Airy beams in the linear and nonlinear regimes), a systematic comparison of their optical performance is presented. Finally, based on advances in optics and photonics, the novel optical implementations possible in a LS microscope are highlighted.

© 2018 Optical Society of America

1. Introduction

The visualization and quantification of biological processes in samples such as living organisms, tissues, and cells requires microscopy methods that are gentle with the sample while providing fast, 3D information with high spatial and temporal resolutions over large fields of view (FOVs).

To do so, widefield fluorescence microscopy (WFM) and laser scanning confocal fluorescence microscopy (LSCM), have long been the methods of choice to image biological samples, enabling extraction of structural and functional quantitative information from biological samples.

In WFM the whole volume of the sample is illuminated and the generated fluorescence is collected by an objective lens and projected onto a 2D array of light detectors (such as a charge-coupled device [CCD] or complementary metal-oxide-semiconductor [CMOS] camera). This process quickly yields a 2D fluorescence image of the sample. However, such an image contains out-of-focus light, which strongly reduces its contrast and severely hampers any optical sectioning capability. In contrast, in LSCM, a single point is sequentially illuminated while a single detector sequentially collects the emitted light. An image is formed once the illumination point has been scanned through the sample plane, and a confocal aperture filters out the out-of-focus information. This configuration allows the generation of optical sections from a 3D sample.

WFM works in both episcopic and diascopic directions (usually denoted epi-fluorescence or dia-fluorescence, referring to a signal collected in the backward or forward direction, respectively), while LSCM works only in an epi-fluorescence configuration. One important drawback of WFM and LSCM is that the excitation light passes through the specimen and excites fluorescence all along the optical path. This produces fluorescence from regions above and below the focal plane, thus wasting valuable fluorophores and fluorescence photons. Furthermore, many endogenous fluorescent and nonfluorescent organic components within the sample are also excited. This is particularly critical when imaging living samples and specimens over long periods of time, as continuous irradiation causes unnecessary photodamage and phototoxicity. As brilliantly argued by Stelzer in Ref. [1], life on Earth is adapted to the solar flux, which is less than 1.4 kW/m². This suggests that, when using a microscope for the in vivo observation of biological samples, the irradiance on the specimen should not exceed 1 nW/μm² = 100 mW/cm². Therefore, in recent decades, efforts have focused on developing imaging alternatives to visualize faster and larger sample volumes, with higher spatio-temporal resolutions over longer periods of time, while keeping the sample under low irradiation levels.

Light-sheet fluorescence microscopy (LSFM) has proven to be the technique of choice to accomplish such challenging objectives [2]. In essence, LSFM consists of illuminating the sample with a thin laminar laser beam, usually referred to as a light sheet (LS), restricted to the portion of the sample being imaged. Then, using a WFM detection scheme, the fluorescence resulting from the illuminated plane is collected along a direction perpendicular to the illumination axis, as shown in Fig. 1. Furthermore, as the LS illuminates only a thin laminar volume of the sample, the image projected onto the 2D detector array will correspond to a clean optical section of the sample with no out-of-focus light. Such an image is equivalent to that obtained with LSCM but without the need for a confocal aperture.


Figure 1. Basic schematic of a LSFM.


However, a LSFM is more than just a clever high-resolution optical sectioning imaging device. In a LSFM, fluorophores lying outside the small LS volume will not be illuminated and, therefore, will not suffer photobleaching. This is also true for other endogenous organic molecules within the sample that will not degrade as a result of unnecessary irradiation. Similarly, because the detection scheme is based on WFM, the emitted fluorescent light is captured very efficiently. This highly efficient excitation and collection scheme ensures that the excited fluorophores are exposed to a small light dose, which results in minimal photobleaching. As a result, the whole sample will be even less affected by phototoxicity, increasing its viability.

In most cases, samples are mounted as shown in Fig. 1, suspended in front of the detection objective. To do so, several approaches have been proposed, such as hydrogel embedding, hooks, or polymers with good optical qualities [3]. This mounting scheme offers three main advantages. First, samples are accessible for illumination from both sides, which can provide more uniform illumination. Moreover, samples can be imaged from different perspectives, allowing full reconstruction of the sample by “multiview fusion” algorithms. This can be achieved either sequentially, by rotation of the capillary, or simultaneously, by dual (or quadruple) side detection. Finally, water dipping objectives can be used, reducing the overall spherical aberration. To hold these objectives and keep a sealed pool for the sample, custom-made sample holders need to be fabricated, which may include temperature control or medium exchange. Upon scanning of the sample, a 3D image can easily be rendered using an adequate image-processing tool. Notice that in a LSFM, contrary to most imaging systems, the excitation and collection paths are “uncoupled.”

The LS can be generated in two fundamentally different ways. The first is based on focusing the excitation light using a cylindrical lens. This modality is commonly referred to as selective-plane illumination microscopy (SPIM). The second is based on quickly scanning a focused beam across the focal plane of the detection lens, forming in this way a virtual LS. This technique is normally referred to as digitally scanned laser light-sheet microscopy (DSLM). In both cases, the distance over which the beam thickness remains within a factor of √2 of its waist (see Section 3.1a), usually related to the Rayleigh range, will define the FOV of the generated image. Similarly, the axial resolution of a LSFM is related, in a first approximation, to the thickness of the LS. While thinner sheets lead to better axial resolution, they are also accompanied by a more pronounced non-uniform intensity distribution across the FOV, and vice versa.

In addition to the optical setup, a LSFM needs a contrast generation mechanism to generate images. This is usually based on fluorescence. Therefore, LSFM not only benefits from developments in the photonics field but also from those in the biochemistry fields, namely, the production of new fluorophores, fluorescent markers, and genetically expressed fluorescent proteins that are of paramount importance for retrieving structural or functional information in biological studies. In addition, and as with a confocal microscope, these fluorophores can also be excited in the nonlinear regime, inheriting most of the advantages of nonlinear microscopy. Furthermore, because of the implicit optical sectioning capability of the LSFM, other contrast mechanisms are also well suited for generating a high-resolution 3D image of the sample. These include the use of elastic and inelastic scattered light. In the case of elastically scattered light, access is gained to histology-like images that provide structural information about the sample. In the case of inelastic scattering, Raman imaging has also been used in a LS configuration, providing optical section images enriched with biochemical information. Importantly, in these two cases, the sample can be observed as is, i.e., without having to add fluorophores to the sample or modify it genetically.

As aforementioned, in contrast to other well-established point-scanning imaging techniques such as LSCM, LSFM intrinsically uses modern recording cameras (CCDs and sCMOS). Therefore, as with WFM, LSFM can massively parallelize the process of data acquisition. In addition, the data can be recorded at speeds above 1000 frames per second and/or with large dynamic ranges having thousands of gray levels (up to 16 bits). Together with this massive parallelization of the produced data, and taking advantage of the uncoupled excitation and detection channels, interesting alternatives have been proposed to obtain even more data or to further process the generated data before it is recorded. This allows different excitation and observation channels to be used simultaneously. It also allows combining different imaging modalities and incorporating devices to spectrally resolve the data or compensate for aberrations introduced by the sample.

Beyond being a clever high-resolution optical sectioning imaging device, LSFM has also changed experimental practice. Because of the highly efficient light management intrinsic to the LSFM, many biologists interested in studying in vivo processes have not only become users but, in many cases, also developers of the technique. LSFM has already changed the way in which specimens are prepared. In fact, novel chambers have been designed and are used to keep the sample stable by providing better control of the environmental and physiological conditions required for its natural 3D growth.

Overall, the possibilities are many and give additional degrees of freedom that can be best adapted to study a particular biological problem. For instance, this highly versatile technique has allowed the study of all sorts of samples, such as cell organelles, cells, cellular spheroids, tissues, organs, embryos of different model organisms (flies, worms, zebrafish, Xenopus, etc.), plants, roots, and corals. All this has converted the LSFM into a simple yet powerful high-resolution 3D imaging technique that is especially suited for in vivo imaging [4–10] and has resulted in a completely new way of using and understanding microscopy [3,11].

1.1. Optics Perspective of LSFM

All the above reflects the wide versatility of the LSFM technique, which has been exploited to produce many different optical implementations; these are reviewed in the following subsections.

1.1a. LSFM Causes Low Photodamage and Low Phototoxicity

As mentioned above, LSFM is a highly efficient excitation and collection imaging technique: excitation occurs only in a thin laminar volume, intensities can be kept very low, and sample irradiation times are much reduced [1]. These combined effects offer a technique that allows the user to minimize both photobleaching and phototoxicity [10,12–14], while allowing for long-term imaging studies (up to several days). This is an essential characteristic for all sorts of developmental biology studies [15–18], such as organogenesis [19], cell migration [20,21], cardiac development [22–25], vascular development [26,27], neuro-development [28,29], and generally any kind of in vivo study. Drosophila melanogaster [2,30–32] and zebrafish [19,26,30,33–36] have traditionally been the most widely used samples in this field, but the technique is also well suited to many others, such as worm embryos [28,37] and other small organisms or plants [38–40].

1.1b. LSFM is an ad hoc Technique for 3D Imaging

The optical configuration of a LSFM is such that the illumination beam inside the sample naturally selects a 2D fluorescence slice, providing a 2D optical section. Upon scanning the sample, a high-resolution 3D image can be easily rendered [4,41]. Therefore, LSFM imaging techniques represent an interesting approach to fully study the third dimension in life sciences. This is a timely technique, as biomedical sciences are currently experiencing a strong need to visualize samples in 3D because biological processes occur in volumes. Thus, although 2D cell cultures in petri dishes have been widely used to study biological processes, LSFM allows the use of agarose or fluorinated ethylene propylene (FEP) tubes [17,42,43] for mounting 3D living samples. As a result, 2D models have been steadily replaced by 3D cell cultures [44–47], spheroids [48–52], organs and organoids [26,53–55], and model organisms such as Drosophila melanogaster, zebrafish, and C. elegans [2,30,33,56].

1.1c. LSFM Allows Imaging of 3D Specimens with Isotropic Resolution

In some cases, samples are simply too big to be efficiently imaged, and light absorption and scattering in biological samples degrade the quality of the LS along the propagation axis, limiting the effective FOV. By taking advantage of the open architecture of the setup, it is possible to use pairs of opposing objectives to add multiple illumination and collection channels. When two objectives are used to excite the sample, the technique is normally called multidirectional SPIM (mSPIM) [57]. Here samples are sequentially illuminated from alternating sides, and the images are then fused into a single dataset, solving the problem of image degradation along the LS propagation axis. In a different implementation, samples are rotated and imaged from different perspectives. In this case, the different angular views of the specimen can be recorded to form a tomographic-like image [41]. After applying an appropriate fusion algorithm [58], this effectively results in a 3D image having an isotropic resolution. This is particularly important for obtaining quantitative structural information from the sample. To speed up this process, different solutions, including multiple simultaneous detection pathways, have been presented. Among these, it is worth mentioning simultaneous multiview SPIM (SiMView-SPIM) [59], IsoView [60], multiview-SPIM (MuVi-SPIM) [58,61], and four-lens SPIM [21]. Some of the main applications include the tracking of nuclei and cell shape changes during embryonic development [39,61], morphological nuclei segmentation, detection of nuclear divisions and neural development [59], and the reconstruction of cell movements in the full embryo [33].

1.1d. LSFM Allows Imaging of Thick, Low Scattering Specimens

As explained above, because of the flexible geometry of the LSFM setup, large samples are easy to image. However, in these large samples, scattering starts to play an important role. In this case, specialized beams with non-diffracting propagation properties (such as Bessel and Airy beams) can be used to mitigate the effects of scattering while, at the same time, providing large FOVs [62–66].

Another strategy is to use near-infrared (NIR) radiation as, at these wavelengths, scattering is reduced. This fact has been exploited by using ultrashort pulsed lasers in LSFM to generate two-photon excited fluorescence (TPEF) from standard visible fluorescent markers [56,64]. Two-photon LSFM (2P-LSFM) has, thus, been shown to have excellent performance for in vivo cellular-resolution 3D imaging of large biological samples such as the fruit fly [67], zebrafish embryos [68,69], and multicellular spheroids [51,70]. 2P-LSFM, in combination with structured illumination, has also been tested on zebrafish samples [8,71], increasing signal contrast and imaging depth.

1.1e. LSFM Allows Imaging of Single Cells at High Resolution

High-resolution studies inherit all the advantages of LSFM mentioned above. However, such studies usually involve placing individual cells on conventional coverslips. Unfortunately, the reduced working distance and large size of current (commercial) objective lenses for high-resolution microscopy limit the choice of the numerical aperture (NA) for standard LSFM imaging to a few values ranging from 0.8 to 1.1. In such a case, the two objectives of the LSFM can be placed at 45° with respect to the coverslip, in a configuration called iSPIM [28]. This technique has been used for following neuro-development in C. elegans embryos [28,72,73] and, more recently, mouse embryos [74]. A variant uses Bessel beams in a structured illumination fashion to increase the resolution and contrast to image subcellular components and organelles in vivo [9,75,76]. Finally, higher resolutions (NA>1.1) can be obtained by using a conventional microscope with a single objective, which both creates the LS and collects the generated fluorescence. Examples are the OPM [23,77,78] and soSPIM [79–82] techniques, which have been used for observing whole cells and cell aggregates in combination with superresolution (SR) localization microscopy.

1.1f. LSFM is an Intrinsic Fast Imaging Technique

Because of the 2D nature of the LS and the fact that all the image pixels are simultaneously recorded using 2D array detectors, such as a CCD or CMOS camera, optical sections of the sample can be obtained rapidly [83]. This allows capturing fast biological processes happening in the illuminated plane of the sample. If this is combined with an axial displacement of the sample for accessing different planes inside the sample, fast volumetric imaging speeds (up to a few volumes/s) can be achieved. Some examples of applications are cell tracking and lineage tracing [59] and brain functional imaging [83,84]. In a different approach, fast volumetric imaging can be performed by axial movement of the image plane, either by moving the objective [85] or by remote focusing [86,87] in synchrony with displacement of the LS, while the specimen is kept stationary. Using this approach, it has been possible to capture the cardiac dynamics of the beating heart of a zebrafish [88] and neuronal action potentials in mouse brains in vivo [85]. Finally, by extending the depth of field (DOF) of the detection objective and moving only the LS, fast volumetric imaging (of up to 70 volumes/s) has been achieved [89–91]. This has been demonstrated for fast particle tracking and in vivo C. elegans dynamics.

1.1g. LSFM Allows Multimodal Imaging of Biological Samples

Because of the decoupled nature of the geometry in the optical paths involved, other excitation and imaging channels can be easily incorporated into a working LSFM. This has allowed combining different excitation regimes (linear and nonlinear) [64] in one single instrument. In addition, LSFM has been combined with SR techniques such as STORM [48,80–82], STED [92], and structured illumination [69,75,93]. The versatility of LSFM also allows combining it with more established techniques, such as FLIM [94–96], FRET [96], hyperspectral fluorescence [97], Raman [98–101], and OPT [102,103]. The combined advantages of LSFM are thus shared with multimodal approaches and allow extracting structural, functional, chemical, and molecular information from the sample, enabling a comprehensive interpretation of the biological process of study.

Overall, the points mentioned above confer on the LSFM technique unprecedented capabilities for imaging, in a quantitative way, a wide range of biological processes in a huge variety of samples [1,104].

In the following, we further expand on each of the points mentioned above. Section 2 starts by reviewing the different contrast mechanisms used to generate a LS image. Emphasis is placed on the calculation of the total emitted fluorescence in a LS configuration for both linear and nonlinear regimes. Other contrast mechanisms, such as elastic scattering and Raman, are also presented. In Section 3 the working principles of a LSFM are explained so that any basic user can acquire the relevant optical concepts to understand and build a LSFM. Guided by the user’s needs, three different imaging regimes are presented, described in terms of sample sizes of 5000, 500, and 50 μm. In Section 4 the typical, or most commonly used, architectures for a LSFM are shown and discussed. Then, Section 5 presents a Fourier formalism for studying the properties of LSs engineered in different ways. Here, the point spread functions (PSFs) and their corresponding optical transfer functions (OTFs) for various LS excitation schemes, in the linear and nonlinear regimes, are compared in a quantitative way. These include LSs generated with cylindrical lenses (SPIM) or by scanning a beam (DSLM) using Gaussian, Bessel, sectioned Bessel, lattice, and Airy beams. Resolution and other relevant optical parameters are presented in a formal way. Finally, in Section 6 we focus on the different possibilities that LSFM offers as a consequence of having the collection arm decoupled from the excitation arm. Here, novel wavelength filtering detection, wavefront acquisition modalities, and data processing techniques will be presented. These include the use of simultaneous multiple wavelength filters, the use of spectrometers for hyperspectral detection, the incorporation of adaptive optics (AO) components in the detection path, and the novel processing of data for achieving SR imaging in a LS configuration.

2. Contrast Mechanisms in LSFM

LSFM was originally conceived to make use of fluorescence light as a contrast mechanism, and, as such, it includes the use of nonlinear fluorescence. However, other non-fluorescent-based contrast mechanisms, such as elastic and inelastic scattering, have also been explored. Thus, in this section, we will briefly introduce the concepts of linear and nonlinear fluorescence and elastic and inelastic scattering in the context of LSFM. Here the relevant parameters that describe the efficiency of the signal used (fluorescence, nonlinear fluorescence, Raman, and elastic scattering) are derived, taking into account the LS geometry. The differences with traditional point-scanning-based imaging techniques are also highlighted.

2.1. Linear Fluorescence

Modern biology is strongly based on the observation of selected structures inside samples. These structures are usually labeled with vital dyes or express genetically encoded proteins that make them fluorescent. Fluorescence is the combination of two consecutive events, as depicted in the electronic energy level scheme shown in Fig. 2. The first one starts with the absorption of one photon by a molecule or fluorophore. The second one is the relaxation of the molecule to the ground state, emitting a photon. In the linear regime, the density of excited fluorophores (or absorbed photons) per second, N1p, depends on the probability of a photon being absorbed and on the density of excitation photons. The probability of being absorbed can be related to the wavelength (absorption bandwidth) and the material’s capability to perform one-photon absorption. These two features are characterized by the absorption cross section δ1p. In addition, the excitation rate also depends on the fluorophore density N0. Therefore, the density of excited fluorophores per second can be written as

$$N_{1p}=\delta_{1p}\,N_0\,\frac{I_\nu}{h\nu},$$
where the photon density has been written in terms of the excitation laser beam intensity Iν at the photon frequency ν as Iν/hν, where h is the Planck constant.


Figure 2. Electronic energy level scheme showing the different excitation processes: one-photon (linear) absorption and two-photon (nonlinear) absorption.


Before the relaxation to the ground state in which one photon is emitted, in most cases, different energy levels (usually vibrational levels) are involved and only one of the transitions is responsible for the emission. As a result, the emitted photons possess a longer wavelength (Stokes shift). However, the energy difference between the excitation and emitted photons is deposited in the sample. This may result in some damage (photodamage) through the production of heat. In addition, we must take into account the efficiency of the emission, determined by the quantum yield ϕ, which is defined as the fraction of excited fluorophores that returns to the ground state emitting one fluorescent photon. If the quantum yield is low, the probability of energy deposition is higher, increasing photodamage. From Eq. (1) and taking into account a laser beam with a cross section S and the quantum yield ϕ, we can obtain the number of emitted photons per second Θem for an interaction length l as

$$\Theta_{em}=\phi\,N_{1p}\,S\,l=\phi\,\delta_{1p}\,N_0\,\frac{P}{h\nu}\,l,$$
where P is the excitation power, which can also be expressed as Iν·S. Note that Eq. (2), written in terms of the excitation power, no longer depends on S. Therefore, Θem is valid at any section along the laser beam, as long as the saturation state is not reached (Θem ≪ N0). In practical terms, this means that at the focus of an objective, the generated fluorescence is more intense, but since the area is smaller, the total number of generated photons is the same as at any other section of the beam.
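As a quick order-of-magnitude check of Eq. (2), the short Python sketch below evaluates Θem for a set of purely illustrative parameters (a fluorescein-like dye at roughly micromolar concentration, 1 mW of 488 nm excitation, and a 10 μm interaction length are assumptions, not values from the text):

```python
h = 6.626e-34   # Planck constant [J s]
c = 2.998e8     # speed of light [m/s]

# Illustrative (assumed) parameters for a fluorescein-like dye
phi      = 0.9       # quantum yield
delta_1p = 3e-16     # one-photon absorption cross section [cm^2]
N0       = 6.0e14    # fluorophore density, ~1 uM solution [cm^-3]
P        = 1e-3      # excitation power [W]
lam      = 488e-9    # excitation wavelength [m]
l        = 1e-3      # interaction length: 10 um, expressed in cm

photon_rate = P / (h * c / lam)                    # excitation photons per second
theta_em = phi * delta_1p * N0 * photon_rate * l   # Eq. (2): emitted photons per second

print(f"Excitation photon rate: {photon_rate:.2e} photons/s")
print(f"Emitted fluorescence:   {theta_em:.2e} photons/s")
```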

As mentioned before, in WFM the entire specimen is illuminated and, due to a large defocused background component, no optical sectioning can be achieved. To eliminate the out-of-focus signal, a confocal aperture and a point-scanning configuration can be used, giving rise to laser scanning confocal microscopy (LSCM) [105]. To produce an optical section from the selected FOV, the emitted fluorescent signal is detected through a pinhole aperture located in an optically conjugated (or confocal) plane in front of the detector, thus eliminating the out-of-focus signal. This is a very powerful tool for biological imaging and provides high transversal and axial resolution. However, although fluorescence is only collected from the plane of interest, it is produced all along the illumination beam inside the specimen, increasing photobleaching and phototoxicity; care therefore has to be taken to minimize these effects.

LSFM combines the best of both worlds: it is a widefield technique that also allows optical sectioning. Because of its excitation geometry, as opposed to widefield and point-scanning-based microscopy, photobleaching is confined to the illumination plane and the generated signal is efficiently collected. This results in a low-light-dose imaging technique that maximizes sample viability.

2.2. Nonlinear Fluorescence

Even though LS microscopy using fluorescence has proven to be successful, it still has some drawbacks. Scattering, absorption, and sample-induced aberrations will limit, on the one hand, the maximum depth from which fluorescence from the sample can be collected and, on the other hand, the homogeneity of the excitation LS in terms of broadening, deviation from propagation in a straight path, and attenuation. All of these will be reflected in the image as an increase in optical noise, collection of out-of-focus light, and stripe artifacts induced by absorption and scattering along the illumination axis.

Nonlinear microscopy (NLM) techniques, such as TPEF, are interesting alternatives to linear fluorescence excitation [106]. TPEF normally relies on the use of NIR excitation wavelengths matching the optical window of biological samples and at the same time presenting reduced Rayleigh scattering. This allows for better penetration depths and, because of the nonlinear interaction, virtually eliminates the conversion of scattered excitation into fluorescence. It seems therefore natural to merge the benefits of LS with those of TPEF (2P-LSFM).

TPEF occurs due to the combination of two consecutive effects: two-photon absorption takes place, and this is then followed by the emission of a fluorescence photon (see Fig. 2). Two-photon absorption is a nonlinear effect associated with the presence of the third-order susceptibility, χ(3), which is present in any material. This effect can occur when the energy difference between electronic levels is similar to the added energy of two excitation photons. To observe this effect, the two photons must arrive at the same position within the material. In addition, since the process lasts for only a very short period, on the femtosecond–attosecond scale, both photons must effectively coincide in time. Consequently, if the two absorbed photons have the same wavelength, the distribution of emitted photons in a fluorescent sample using TPEF depends on the square of the intensity, the concentration, and the two-photon absorption cross section (δ2p) [107]:

$$N_{2p}=\delta_{2p}\,N_0\,\frac{I_{\nu_1}}{h\nu_1}\,\frac{I_{\nu_2}}{h\nu_2}\;\xrightarrow{\,I_{\nu_1}=I_{\nu_2}\,}\;\delta_{2p}\,N_0\left(\frac{I_\nu}{h\nu}\right)^{2},$$
where the two-photon cross section δ2p is given in GM (10⁻⁵⁰ cm⁴ s), N0 is again the fluorophore density, and the photon density (Iν/hν) provides the number of photons per unit area per unit time.

Assuming the use of pulsed lasers under the undepleted pump approximation, the probability of photon absorption per volume unit can be calculated. Then, by using a Gaussian approximation of the beam (see Appendix A), the number of emitted fluorescent photons along the propagation axis can be written as

$$F(z)=\frac{1}{2}\,\eta_1\,\delta_{2p}\,N_0\,\frac{g_p}{f\,\tau}\,\frac{1}{\pi w_0^{2}}\,\frac{1}{1+\left(\dfrac{\lambda z}{\pi w_0^{2}}\right)^{2}}\,N^{2}\,\vartheta^{2},$$
where η1 is the quantum efficiency of the emitter, and τ, f, and gp are the FWHM pulse width, the repetition frequency, and the shape factor (√(2 ln 2/π)) of the Gaussian pulse. In this case, w0 is the size of the beam waist (see Section 3.1a), λ is the wavelength of the light (c/ν) in vacuum, and ϑ is the normalized mean temporal response (1 Hz) used to keep unit consistency.

This formula demonstrates that the emitted photons are confined to the focal volume, providing the well-known optical sectioning effect [106]. Taking into account the collection efficiency of the acquisition system, Eq. (4) allows us to compute the number of photons obtained per pixel in the SPIM configuration. Considering the added benefits of nonlinear excitation, TPEF has therefore been implemented and investigated in several LS applications [51,56,63,64,67–70,108,109]. One of the most critical points, however, is the high light dose received by the sample, which could produce phototoxic effects in developing systems. For example, it has been reported that after 1 h of imaging, cell division in tumor spheroids stops, while under linear excitation it can be maintained for more than 24 h [70].
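To visualize the axial confinement expressed by Eq. (4), the short sketch below evaluates only its z-dependent (Lorentzian) factor for an assumed Gaussian beam; the wavelength and waist are illustrative values, not parameters taken from the text:

```python
import numpy as np

lam = 900e-9   # two-photon excitation wavelength [m] (assumed)
w0  = 2e-6     # beam waist [m] (assumed)

z = np.linspace(-50e-6, 50e-6, 2001)
# z-dependent factor of Eq. (4): a Lorentzian with half-width equal to
# the Rayleigh range z_R = pi*w0^2/lambda
confinement = 1.0 / (1.0 + (lam * z / (np.pi * w0**2))**2)

z_R = np.pi * w0**2 / lam
print(f"Rayleigh range z_R = {z_R*1e6:.1f} um; "
      f"F(z) falls to half its peak at z = +/- z_R, i.e. an FWHM of {2*z_R*1e6:.1f} um")
```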

2.3. Elastic and Inelastic Scattering

Tissue is made up of cells with many different compartments, having different refractive indices and with sizes at different spatial scales, ranging from sub-wavelength to tens of wavelengths [110,111]. Thus, when light propagates through optically inhomogeneous biological samples, scattering occurs and causes the light paths to be scrambled in all directions. In scattering, there are two types of light–matter interaction: elastic and inelastic. Both can be used as sources of contrast for LSFM, in which case the word “fluorescence” in LSFM no longer applies.

Elastic scattering does not involve photon absorption and, because it is an elastic process, the scattered photons have the same frequency as the incident ones [112]. In this case, the scattering function, p(θ), describes the probability of a photon scattering into a unit solid angle oriented at an angle θ relative to the photon’s original trajectory [113]. Therefore, if elastic scattering is used as a source of contrast in LSFM, both the scattering function around 90° and the NA of the detection objective will determine the amount of scattered light directed toward the camera. Optical scattering collected from unlabeled biological tissue provides valuable structural information that can originate from different biological compartments and interfaces, including tissue boundaries, cells, organelles, or macromolecular complexes. In spite of this, elastic scattering has barely been exploited as a contrast mechanism to generate LSFM images [114]; please see Section 6.1d for specific details on elastic scattering LSFM implementations.

Together with the elastic scattering, when a biological sample is irradiated with a laser beam, a small fraction of the incident photons are scattered inelastically and experience frequency shifts above and below that of the incident beam. These frequency shifts, or Raman frequencies, are independent of the exciting frequency and are characteristic of the vibrational or rotational modes of the molecular species giving rise to the scattering. In Raman scattering, the molecules absorb laser photons and reach a virtual excited state that does not have sufficient energy to induce electronic transitions. Molecules further relax via radiative emission to a stable final energy level. The absolute intensity of the Raman radiation can be obtained from the quantum transition dipole moment induced by the laser light with frequency ω0 on the sample molecules, and is proportional to the radiant power I0 of the laser as follows:

$$I_R \propto N\,I_0\,(\omega_k-\omega_0)^{4}\,(\alpha_{\rho\sigma})_k^{2},$$
where N is the number of molecules per unit volume involved in the process, ωk is the frequency of the scattered light, and αρσ are the components of the polarizability tensor. The Raman cross section is then defined as σS = IR/(I0·N), in terms of the scattered intensity per molecule, per incident intensity, for a given vibrational mode. The integrated Raman scattering cross sections are usually very small, from 10⁻³⁰ to 10⁻²⁵ cm²/molecule [115], and thus high-intensity excitation lasers or long integration times are needed. Nevertheless, Raman spectroscopy is a powerful label-free technique for monitoring biological samples that provides detailed information about the chemical composition of cells and tissues. Unfortunately, producing a Raman image can be a slow process due to the long integration times needed to detect the weak Raman signal. In this sense, the highly optically efficient LSFM is an amenable platform for Raman spectroscopy [99–101,116]. For more detail on Raman LS implementations, please see Section 6.1c.
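A rough, back-of-the-envelope sketch of why such long integration times are required follows; the excitation intensity and the cross section are assumed, mid-range values, not numbers given in the text:

```python
h, c = 6.626e-34, 2.998e8

lam   = 532e-9   # excitation wavelength [m] (assumed)
I0    = 1e3      # excitation intensity [W/cm^2] (assumed: ~10 mW over 1000 um^2)
sigma = 1e-28    # Raman cross section [cm^2/molecule] (assumed, mid-range)

photon_flux = I0 / (h * c / lam)          # photons / (cm^2 s)
rate_per_molecule = sigma * photon_flux   # Raman-scattered photons / (molecule s)

print(f"Photon flux:                {photon_flux:.2e} photons/cm^2/s")
print(f"Raman photons per molecule: {rate_per_molecule:.2e} per second")
```

With these assumed values, each molecule scatters on the order of 10⁻⁷ Raman photons per second, which illustrates why large numbers of molecules, high intensities, or long exposures are needed.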

3. Designing a LSFM

3.1. Practical Considerations

Once the different contrast mechanisms that can be used to form an image have been presented, the next step is to understand how the image in a LSFM is actually formed. To do that, recall that in a LSFM the excitation and collection branches are uncoupled. It is therefore helpful to study the excitation and collection arms of the LSFM separately, as shown in Fig. 3.


Figure 3. Schematic of the illumination and detection arm in a LSFM.


3.1a. Illumination Path

In a LSFM, a focused light beam is used to produce the excitation LS (see Fig. 3). When using a Gaussian beam, its beam waist (w0) can be related to the sectioning ability and, therefore, in a first approximation, to the axial resolution, Raxial, of the final image as

$$R_{axial}=2w_0=\frac{2\lambda}{\pi\theta}=\frac{4\lambda f}{\pi D}=\frac{2n\lambda}{\pi\,\mathrm{NA}},$$
where θ is the far-field divergence half-angle of the beam, f is the focal length of the lens, D is the beam diameter at the lens, and n is the refractive index. For a formal definition of resolution, see Section 5. Similarly, the Rayleigh range, zr, can be related to the FOV of the image and will be given by
$$\mathrm{FOV}=2z_r=\frac{2\pi w_0^{2}}{\lambda}.$$

Conveniently, the FOV as defined by the previous equation is also equal to the full width at half-maximum (FWHM) of the axial intensity distribution (along x in Fig. 3) of the Gaussian beam.

Note that, in contrast to “coaxial” imaging (i.e., wide field and confocal-like techniques), in LSFM a more isotropic volumetric resolution can be obtained, as the axial resolution is linked to the waist of the excitation beam and not to its Rayleigh range.

From the previous equations, it is also possible to see that increasing the FOV implies increasing the Rayleigh range of the excitation lens. This can be achieved using low-NA lenses. However, this will also reduce the optical sectioning capability, as the thickness (waist) of the generated beam will also increase. Note also that it is not always practical to use Gaussian beams and, in most common experimental situations, truncated or “apertured” beams are used. In such cases, the dimensions calculated above can be approximated by the Fraunhofer diffraction pattern of a circular aperture, described by Bessel functions [117]:

$$I(z)\propto\left(\frac{J_1(z)}{z}\right)^{2},$$
where J1(z) is the Bessel function of the first kind. This intensity distribution is known as the Airy pattern.

Equivalent to the definitions used for Gaussian beams, the axial resolution of an image formed with apertured beams will be related to their waist (thickness) at the focal position, Dbeam. This is given by the diameter of the Airy disk (the central lobe of the Airy pattern):

$$D_{beam}=\frac{1.22\,\lambda_{ill}}{\mathrm{NA}_{ill}}.$$

Similarly, the FOV of the image will be related to the size of the main lobe of the focused beam in the axial direction. This takes the form of a sinc2 function:

$$I(x)\propto\left(\frac{\sin(x)}{x}\right)^{2}.$$

Here, the distance between zeroes of the sinc2 function is given by

$$S_x=\frac{4\,n\,\lambda_{ill}}{\mathrm{NA}_{ill}^{2}},$$
and the FOVi will be given, to be consistent with the definition of the Rayleigh range of a focused Gaussian beam [see Eq. (7)], by the FWHM of the central lobe of the sinc2 function [Eq. (10)] as
$$\mathrm{FOV}_i=\frac{1.78\,n\,\lambda_{ill}}{\mathrm{NA}_{ill}^{2}}.$$
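As a minimal numerical sketch of the relations above, the following Python snippet evaluates the LS thickness and FOV for both the Gaussian [Eqs. (6) and (7)] and apertured [Eqs. (9) and (12)] cases; the wavelength, refractive index, and NA values are illustrative assumptions:

```python
import numpy as np

def gaussian_ls(lam_ill, NA_ill, n=1.33):
    """LS thickness (axial resolution) and FOV for a Gaussian beam, Eqs. (6)-(7)."""
    w0 = n * lam_ill / (np.pi * NA_ill)   # beam waist from Eq. (6)
    return 2 * w0, 2 * np.pi * w0**2 / lam_ill

def apertured_ls(lam_ill, NA_ill, n=1.33):
    """LS thickness and FOV for an apertured (Airy-pattern) beam, Eqs. (9) and (12)."""
    return 1.22 * lam_ill / NA_ill, 1.78 * n * lam_ill / NA_ill**2

lam_ill = 488e-9
for NA in (0.02, 0.06, 0.15):
    t_g, f_g = gaussian_ls(lam_ill, NA)
    t_a, f_a = apertured_ls(lam_ill, NA)
    print(f"NA={NA:5.2f}: Gaussian thickness={t_g*1e6:5.1f} um, FOV={f_g*1e6:7.1f} um | "
          f"apertured thickness={t_a*1e6:5.1f} um, FOV={f_a*1e6:7.1f} um")
```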

For more detail on the generation and shaping of excitation beams and other more advanced excitation modalities, please refer to Section 5.

3.1b. Detection Path

The detection path is generally composed of an objective lens, having a focal length of fobj, a tube lens (TL) with focal length of fTL, and a filter to reject the illumination wavelength. Images are collected using a 2D detector array (CCD or CMOS). The interplay among the relevant parameters of all these elements will determine both the final resolution and FOV of the system. Thus, the design of the detection path starts by defining of the functional relationships among the different components.

The magnification of the detection optical system, objective plus TL in an afocal configuration, is given by

$$M=\frac{f_{TL}}{f_{obj}}.$$

In the detection arm, the transversal resolution, RT, will be given by the Rayleigh criteria as

$$R_T=\frac{0.61\,\lambda_{em}}{\mathrm{NA}_{det}},$$
where NAdet is the NA of the detection objective and λem the emitted wavelength. Similarly, the axial resolution will be dictated by the detection objective lens. Taking into account the same criteria (FWHM) as before, this will be given by
$$R_{det\text{-}axial}=\frac{1.78\,n\,\lambda_{em}}{\mathrm{NA}_{det}^{2}}.$$

In an imaging system involving a 2D array detector (e.g., a CCD or a sCMOS camera), the properties of the sensor play an important role in defining the final imaging properties, including the FOV and the resolution. Thus, the characteristics of the imaging sensor can be used as a starting point for the design of the imaging path of the LSFM. Appendix B contains a list of the most common dimensions and other relevant parameters available in scientific cameras for microscopy.

The first important design consideration is that both the pixel size and the area of the imaging device have to be carefully adapted to the optical resolution and available FOV generated by the LS, respectively. Thus, the length of the imaged FOV in terms of the number of pixels of the detector array will then be given by

$$\mathrm{FOV}_i=\frac{\mathrm{FOV}_d}{M},$$
where FOVd is the generated FOV as seen by the detection imaging system.

In an ideal case, the FOVd should match the dimensions of the active area in the 2D array sensor:

$$\mathrm{FOV}_d=\#\mathrm{pixels}_x\cdot \mathrm{px\;size}_x,$$
where #pixelsx is the number of pixels in the x direction, and px sizex is the pixel size in the same direction. From the above, it is possible to see that the size of the imaged pixel will be given by
$$\mathrm{px\;size}_i=\frac{\mathrm{px\;size}_x}{M}.$$

Also, because of the Nyquist sampling criterion, the maximum resolving power in the transversal (xy) direction can be expressed in terms of the camera’s pixel size as

$$R_T=2\cdot \mathrm{px\;size}_i=\frac{2\cdot \mathrm{px\;size}_x}{M}.$$

Finally, it is worth noting that the intrinsic characteristics of CCD, EMCCD, and sCMOS technologies, in terms of size, sensitivity, and noise characteristics, are very relevant when defining the LSFM application. For example, applications requiring large FOV need cameras with large numbers of pixels. In such cases, sCMOS cameras seem to be more adequate. However, for applications using very dim fluorescent samples (low-light conditions), an EMCCD would provide a better signal and a more uniform dark noise [118].
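The small sketch below ties these detection-arm relations together: for an assumed sensor (6.5 μm pixels, 2048 pixels across, as for the camera used later in Section 3.2) it computes the optical resolution from Eqs. (14) and (15) and the minimum magnification that satisfies the Nyquist condition of Eq. (19). The NA values in the loop are arbitrary examples, and the numbers will not exactly match Table 1, which additionally folds in camera and stock-lens constraints:

```python
def detection_sampling(NA_det, lam_em=525e-9, n=1.33, px_size=6.5e-6, n_px=2048):
    """Detection-side resolution and the magnification needed for Nyquist sampling."""
    R_T  = 0.61 * lam_em / NA_det            # Eq. (14): transversal resolution
    R_ax = 1.78 * n * lam_em / NA_det**2     # Eq. (15): detection axial resolution
    M_nyq = 2 * px_size / R_T                # Eq. (19): smallest M giving 2 pixels per R_T
    fov_d = n_px * px_size / M_nyq           # FOV at the sample for that magnification
    return R_T, R_ax, M_nyq, fov_d

for NA in (0.06, 0.3, 0.56):
    R_T, R_ax, M, fov = detection_sampling(NA)
    print(f"NA_det={NA:4.2f}: R_T={R_T*1e6:4.2f} um, R_ax={R_ax*1e6:6.1f} um, "
          f"M_Nyquist={M:5.1f}x, FOV={fov*1e6:7.1f} um")
```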

3.2. Practical Examples of LSFM Implementation

For selecting the optical components of the LSFM, the two paths (illumination and detection) and the 2D detector array should be considered. Normally, the starting point of the design is the size of the object to be imaged, which dictates the required size of the FOV through Eq. (12). Once the FOV has been set, a constraint is imposed on the NAill of the illumination arm. Then, knowing the value of this NA, the thickness of the LS can be determined by using Eq. (9).

Following up on the previous discussion and by analyzing the “crossed” geometry of the LSFM scheme of Fig. 3, the next design step is to relate the thickness of the LS to the axial resolution of the detection system [Eq. (15)], as follows:

$$D_{beam}=R_{det\text{-}axial}.$$

By using Eqs. (9) and (15) in the last equation, it is possible to find a very useful relationship between the illumination and the detection objectives:

$$\mathrm{NA}_{det}=\sqrt{\epsilon\,\frac{1.78\,n}{1.22}\,\mathrm{NA}_{ill}},$$
where ϵ=λem/λill. This equation provides a theoretical value of the required NA for the detection and illumination objectives. However, in practice, fulfilling the equation is not always possible. In fact, the working distances, magnification, NAs, and so on, usually determine the body dimensions of commercially available objectives. Depending on the choices made, placing them together in an orthogonal configuration will not always be possible. This is even more of an issue when dealing with objectives with large NAs. Therefore, Eq. (24) must be taken only as a starting point for LSFM design. This will be further explained in the next sections.

Once the NA of the objectives used for generating the LS and for collecting the light have been set, then transversal and axial resolutions of the imaging system can be calculated using Eqs. (14) and (15), respectively.

Finally, once the transversal resolution is found, it is used to find the required magnification on the detection imaging system so that the image sampling is performed in an adequate way with the discrete detector array. For such purpose, using the information on the pixel size of the camera, Eq. (19) comes in handy.

To illustrate the proposed design pathway, from now on we will consider three examples in which the size of the object to be studied has been set to fit in three different FOV sizes of 50, 500, and 5000 μm. As a detector, for the last two examples we have chosen the Hamamatsu Flash 4.0 sCMOS camera, due to its widespread use in the LS community. Considering the data given in Appendix B for the camera (pixel size 6.5 μm, 2048 pixels lateral size) and using the design criteria presented in this section, we have calculated the important design parameters and the resulting imaging properties of the optical system for the three cases. The results are summarized in Table 1. It is worth noting that the examples shown here represent canonical designs of LSFM in which the images fulfill the Nyquist criterion. However, for applications requiring imaging under specific conditions, other optimization constraints can be chosen. For example, when imaging under low light levels or at high speeds, the use of pixel binning for recording the image is advisable.


Table 1. Relation of Components and Parameters Used to Define LSFM in Three Different FOV Regimes
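As a sanity check of this design chain, the short Python sketch below walks from a target FOV to the illumination NA [Eq. (12)], the LS thickness [Eq. (9)], the matched detection NA [Eq. (24)], the transversal resolution [Eq. (14)], and the number of pixels required for Nyquist sampling. It reproduces, to within rounding, the values quoted in the text for the three regimes (e.g., roughly 1.8 μm, 1.01 μm, and 570 nm transversal resolution, and 989 and 176 pixels for the two smaller FOVs):

```python
import numpy as np

def lsfm_design(fov, lam_ill=488e-9, lam_em=525e-9, n=1.33):
    """Canonical LSFM design chain of Section 3.2 (a sketch)."""
    NA_ill = np.sqrt(1.78 * n * lam_ill / fov)        # Eq. (12) solved for NA_ill
    D_beam = 1.22 * lam_ill / NA_ill                  # Eq. (9): LS thickness
    eps = lam_em / lam_ill
    NA_det = np.sqrt(eps * 1.78 * n / 1.22 * NA_ill)  # Eq. (24): matched detection NA
    R_T = 0.61 * lam_em / NA_det                      # Eq. (14): transversal resolution
    n_px = 2 * fov / R_T                              # Nyquist: 2 pixels per resolved element
    return NA_ill, D_beam, NA_det, R_T, n_px

for fov in (5000e-6, 500e-6, 50e-6):
    NA_ill, D_beam, NA_det, R_T, n_px = lsfm_design(fov)
    print(f"FOV={fov*1e6:6.0f} um: NA_ill={NA_ill:.3f}, LS thickness={D_beam*1e6:5.1f} um, "
          f"NA_det={NA_det:.2f}, R_T={R_T*1e6:.2f} um, pixels needed={n_px:.0f}")
```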

To start off the realistic designs with commercially available optical components, we have selected either the required FOV or the transversal resolution as the initial parameter.

In the following sections, we present the designs of three main case studies that roughly cover most of the optical setups that can be encountered in the LSFM literature. The light wavelengths used for the examples are 488 nm for illumination and 525 nm for detection. The sample is assumed to be immersed in an aqueous medium with a refractive index of n=1.33.

3.2a. Case I, FOV=5000μm

This large FOV (FOV = 5 mm) corresponds to the so-called “macro” regime [see Fig. 4(a)]. Upon inspecting Table 1, we can see that the optimal theoretical transversal resolution that could be obtained is about 1.8 μm. Applying the Nyquist criterion (at least 2 pixels are needed to correctly sample the resolution of the system), each pixel in the camera would then correspond to 0.9 μm in the sample. In this case, we would need 5562 pixels to be able to record an image spanning the 5 mm FOV. As scientific cameras have a maximum of about 2000 pixels per side (see Appendix B), a decision needs to be made: either maintain such resolution but limit the FOV, or image the full FOV at reduced resolution. For this example, we will choose the latter.


Figure 4. LSFM setups: (a) macro LS configuration, (b) cylindrical lens (CL) configuration, and (c) DSLM configuration. See the main text for a detailed description.


In terms of excitation, the NA of the illumination beam needed to generate 5 mm of FOV is 0.02. This can easily be achieved with a simple cylindrical lens, which is the most natural way to produce a LS. When using such lenses, the dimension of the FOV in the x direction (see Fig. 3) is related to the Rayleigh range of the beam. It is worth noting that, in this case, a rectangular aperture (RA) is used to uncouple the vertical and horizontal directions. This provides the system with the possibility of modifying the vertical FOV independently of the Rayleigh range. Therefore, the diffraction formulas defined before need to be corrected and correspond to a sinc2 function instead of a Bessel function [119]. Because of this, Eq. (9) becomes

$$D_{beam}=\frac{\lambda_{ill}}{\mathrm{NA}_{ill}}.$$

The height of the FOV (y direction) will be directly related to the size of the beam at the cylindrical lens. Obviously, the beam size in this direction can be modified independently by changing the height of the RA, keeping its width fixed at the required size for controlling the other two important properties, the FOV length and the LS thickness.

As in this case we have chosen to optimize the FOV by sacrificing resolution, the FOV in image space has to be adapted to the size of the active area of the detector, which also means that the pixels as seen in sample space will be larger. When using a camera with 6.5 μm pixel size and 2048 pixels (13.3 mm × 13.3 mm), a 2.6× magnification is needed to acquire 5 mm of the sample. Therefore, according to Eq. (14), the required NA of the detection objective is NAdet ≈ 0.06. As an example of implementation with stock optical elements, consider a cylindrical lens with 175 mm focal length and a 5 mm diameter input beam. Then, by following Eq. (12), this combination will result in a FOV of 4.3 mm × 5 mm. In the detection path, consider a 100 mm focal length objective with a 0.06 NA (i.e., a Nikon 2×/0.06 NA) and an achromatic doublet with a focal length of 250 mm as the TL, resulting in an effective magnification of 2.5×. Thus, the camera will collect a FOV of 5.2 mm with an optical resolution of 5.3 μm but a pixel size of 2.6 μm. A higher NAdet can be chosen, but care has to be taken as in these cases the final image resolution will be limited by the pixel size of the camera and not by the optics used.
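A minimal check of these Case I numbers, taking the 2.5× effective magnification and 0.06 detection NA stated above as given (the cylindrical-lens NA is simply estimated from its focal length and the input beam radius):

```python
lam_ill, lam_em = 488e-9, 525e-9
px_size, n_px   = 6.5e-6, 2048        # sensor assumed above (6.5 um pixels, 2048 wide)

# Excitation: 175 mm cylindrical lens with a 5 mm input beam
f_cl, D_in = 175e-3, 5e-3
NA_ill = (D_in / 2) / f_cl            # ~0.014
D_beam = lam_ill / NA_ill             # rectangular-aperture form of Eq. (9): LS thickness

# Detection: 2.5x effective magnification, NA_det = 0.06
M, NA_det = 2.5, 0.06
R_T       = 0.61 * lam_em / NA_det    # Eq. (14): ~5.3 um optical resolution
px_sample = px_size / M               # ~2.6 um pixel as seen in sample space
fov_cam   = n_px * px_sample          # ~5.3 mm captured across the chip

print(f"NA_ill={NA_ill:.3f}, LS thickness={D_beam*1e6:.0f} um")
print(f"R_T={R_T*1e6:.1f} um, pixel at sample={px_sample*1e6:.1f} um, camera FOV={fov_cam*1e3:.1f} mm")
```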

3.2b. Case II, FOV=500μm

For higher-resolution imaging conditions (i.e., FOV ≤ 500 μm), the LS is normally imaged to produce a smaller (de-magnified) LS using a microscope objective [see Fig. 4(b)]. Thus, following the above criteria, a 500 μm FOV determines a lateral resolution of 1.01 μm, as shown in Table 1. The number of pixels needed in this case is 989, which can be covered by most modern scientific cameras. Taking a pixel size of 6.5 μm, it is easy to see that a 12× magnification could be used. A practical possibility is to use a detection optical system comprising a Leica 20×/0.5 NA water immersion objective and an achromatic doublet of 200 mm focal length as a tube lens.

However, following the procedure of the first case used for the excitation path, it is impractical to start with a 500 μm collimated beam that is to be focused by a cylindrical lens. Similarly, if the RA strategy is adopted, the secondary lobes of the obtained diffraction pattern of a very small aperture will become evident, affecting the quality of the LS. To avoid these problems, an objective can be introduced in the excitation path. An additional advantage of using objective lenses for generating the LS is that, depending on the quality of the objective, chromatic aberrations may be factory corrected, which is an important advantage for multi-color implementations. In this case, the FOV height is given by

$$\mathrm{FOV}_h=D_p\,\frac{f_{ob\text{-}ill}}{f_{CL}},$$
where fob-ill is the focal length of the excitation objective, fCL is the focal length of the cylindrical lens, and Dp is the diameter of the beam at the entrance pupil of the system. With the new degree of freedom, the constraints to the beam diameter and focal length of the cylindrical lens can be reduced. A reasonable value for fob-ill could be 40 mm. This value fixes the beam diameter to 3.8 mm and fCL to 308 mm. The chosen objective has to provide an entrance pupil larger than 3.8 mm. As an example, the Nikon 5×0.12NA would fulfill this requirement. Here it is worth mentioning that air objectives are not optimal, although permissible in some cases. One example could be the case of generating a large LS, where the required excitation NA is very low. In such a case, the aberrations induced at the air–glass–water interfaces can be neglected [18].

It is worth noting that the aperture approach mentioned before could also be used to reduce the fCL, because in this case we can access a plane close to the conjugated object plane and back focal planes (BFPs) of the detection system. This will reduce the diffraction artifacts. If we wish to limit the FOV height, the aperture should be placed at the beam entrance plane located at a distance equal to fCL before the cylindrical lens.

Another common alternative for generating the LS is by scanning the laser light beam in the so-called DSLM mode (see Section 5.4). In this approach [see Fig. 4(c)], the input beam is limited by the size of the clear aperture of the galvanometric mirrors (GMs). This fact will determine the magnification (Msc) of the telescope as

$$M_{SC}=\frac{2\,\mathrm{NA}_{ill}\,f_{obj\text{-}ill}}{D_p}.$$

From the FOVy, the scanning amplitude of the GM can be determined by

$$\tan(2\theta_{GM})=\frac{\mathrm{FOV}_y\,M_{SC}}{2\,f_{obj\text{-}ill}}.$$

In this case, as we have added another lens for the telescope, we have another degree of freedom in the design. As an example, using the same 40 mm objective and a 2 mm clear aperture of the GM, this gives an Msc of 1.92× and a scan amplitude of 12 mrad. In terms of drive voltage, taking a conversion factor of 1 V/deg, the maximum voltage applied to the GM would be 688 mV.
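The two relations above can be checked with a few lines of Python using the Case II values just quoted; the 1 V/deg conversion factor is the assumption stated in the text:

```python
import numpy as np

f_obj_ill = 40e-3    # illumination objective focal length [m]
NA_ill    = 0.048    # Case II illumination NA (consistent with the 3.8 mm beam on the 40 mm objective)
D_p       = 2e-3     # clear aperture of the galvanometric mirror [m]
FOV_y     = 500e-6   # desired FOV in the scanned direction [m]

M_sc = 2 * NA_ill * f_obj_ill / D_p                    # telescope magnification
two_theta = np.arctan(FOV_y * M_sc / (2 * f_obj_ill))  # scan amplitude from the tan(2*theta) relation
voltage = np.degrees(two_theta) * 1.0                  # assumed 1 V/deg driver conversion

print(f"M_sc = {M_sc:.2f}x, scan amplitude = {two_theta*1e3:.1f} mrad, GM drive = {voltage*1e3:.0f} mV")
```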

3.2c. Case III, FOV=50μm

The same formalism can be used when designing a high-magnification LSFM. For a FOV of 50 μm, a detection objective of 0.56 NA is needed. As the lateral resolution obtained with this objective would be 570 nm, the minimum number of pixels needed to meet the Nyquist criterion is 176 (see Table 1). Since standard scientific cameras have 128, 512, 1024, or 2048 pixels per side (see Appendix B), in this case the EMCCD camera with 512 pixels could be the right choice. If we take into account its pixel size (16 μm), we notice that a 56× imaging system would be needed. This magnification can be approached by combining an objective with fobj=3.3mm (i.e., a 60× Nikon objective) and a TL with 200 mm focal length. The actual FOV captured on the entire active area of the EMCCD will be 137 μm. This means that not all the active area of the chip will be used, and care has to be taken as regions larger than the illuminated area will result in a blurred image. However, this can easily be addressed by reading out only a region of interest (ROI) on the camera.

To generate the LS in this case, we could then use a cylindrical lens or the DSLM approach. If a cylindrical lens is used, then an objective with fob-ill=5mm and NA=0.15 (see Table 1) would be needed (a suitable commercial objective could be a Leica 40×0.6NA WI). Because of the short focal length, immersion objectives should be used in both the illumination and the detection paths. From the previous relationships [Eq. (23)], beam diameter, Dp, is then fixed to 1.52 mm and fCL=152mm.

When using the DSLM approach, a clear aperture of Dp=2mm on the GM and a telescope of Msc=0.75× would be needed [see Eq. (24)]. This would produce the beam size diameter needed to obtain the 0.15 NA, as shown in Table 1. In this case, the scanning amplitude necessary would be 3.8 mrad and 220 mV voltage would need to be applied to the GM.

As presented above, all the parameters involved in beam engineering, image formation optics, and sensor geometry in a LSFM are strongly correlated. Using these relationships, a LSFM can be designed with the best expected performance. However, due to the limitations of relying on stock optical components, not all system specifications can be fulfilled as expected in the preliminary design. Therefore, what needs to be traded off should be decided in a task-based manner. For example, in the macro LS case, the system as originally designed can cover a smaller FOV than required but with very high resolution. Such a system can be more suitable for looking at very small and specific structures in a large sample. But the same system can fail when the change in morphology of the whole sample needs to be recorded very quickly. In this case, a system covering the full required FOV will be more suitable.

4. LSFM Configuration and Implementation

In the previous section, we presented a comprehensive methodology for designing a LSFM covering most of the imaging scales required in biological microscopy. So far, we have focused on the simplest LS scheme, made of one illumination and one detection optical path. However, there are multiple architectures and different ways to build a LSFM, from custom-made bench-top implementations to the most recent commercial systems. In the following, we present some of the most relevant LS architectures that have been released so far.

For many years, the lack of a commercial system promoted widespread custom LS solutions with different approaches in terms of number of detection ports, illumination patterns, sample chambers and sample mounting, control software and image-processing solutions, and so on. Such a situation was maintained for a few years, but now LS technology is maturing and companies are releasing user-friendly LSFMs that take advantage of the advances on all the fronts mentioned. Thus, there are currently many commercial options, each incorporating different capabilities (see Table 2). Some are embedded in a regular microscope body, while others are assembled on a breadboard. Interestingly, each configuration offers pros and cons, with no universally accepted system, as all of them are tailored for specific needs.


Table 2. Representative LSFM, Commercially Available

One of the main differences among the several LS implementations lies in the relative orientation between the plane formed by the two objective lenses and the plane of the optical table [see Figs. 5(a) and 5(b)]. These can be parallel, as in SPIM [2], or perpendicular, as in iSPIM, diSPIM [28,72], and ultramicroscopes [120]. The orientation depends on the sample under analysis and the sample mounting approach. The parallel configuration allows for easy positioning of the sample from the top of the (normally water immersion) chamber. A perpendicular configuration allows coupling the LS illumination onto a standard microscope body, taking advantage of all the microscope capabilities. In addition, this configuration is more convenient if traditional cell culture mounting on glass slides or coverslips is required. Therefore, this has been the choice of a few companies (see Table 2).


Figure 5. Examples of four different LSFM architectures. (a) Multiple objectives: mSPIM has double-sided illumination and single-sided detection, with all lenses in a plane parallel to the table. The sample (thinner cylinder in front of the objectives) can be positioned from the top using a capillary (thick cylinder, entering the scene from the top). (b) Double-sided illumination and double-sided detection with only two objective lenses in a 45° configuration in diSPIM, both lenses in a plane orthogonal to the table. (c) A single objective lens and a micro-machined mirror are used in a soSPIM microscope to both excite and detect emission from fluorophores. (d) Two en face objectives and an AFM tip mirror are used in RLSM; the excitation objective is not shown. The LS is represented in blue and the detection cone in green.


In all cases, images suffer from shadowing artifacts, produced by absorption of the excitation light, which are visible on the side of the sample farther from the laser. For this reason, an advanced illumination configuration based on pivoting the LS has been introduced to minimize the shadowing effect. In addition, for thick samples, image quality is strongly degraded by scattering. mSPIM [57] has therefore been proposed to reduce both absorption and scattering artifacts. In mSPIM, the LS is rapidly pivoted about the detection axis and sequentially directed onto the sample from two opposing directions, providing an evenly illuminated focal plane.

A further step in complexity is the use of a fixed set of four lenses (two for illumination and two for detection) and two cameras, eliminating the time-consuming rotation of specimens required by SPIM. In systems such as MuVi-SPIM [61] and SIMView [59], dedicated hardware and software for data acquisition and image fusion manage the data in real time. However, at least a single 90° rotation is still required to increase the axial resolution.

Because of the spatial constraints imposed by two orthogonal objectives, other alternatives have been proposed that use a conventional microscope frame. This allows imaging of samples prepared with many standard techniques, e.g., microscope slides and coverslips, tissue-culture dishes, or multi-well plates. Recently, new configurations have been developed in which the same high-NA lens is used to both illuminate and image the specimen. This is the case for soSPIM [79–81], depicted in Fig. 5(c). This configuration requires micro-mirrored cavities combined with a laser-beam steering unit installed on a standard inverted microscope. Illumination and detection are performed through the same objective, allowing 3D single-molecule-based SR imaging of whole cells or cell aggregates; see Section 6.3b for more details. OPM is another LSFM technique that uses a single high-NA microscope objective for both fluorescence illumination and detection [77]. In OPM, the excitation light is focused onto a short line at the edge of the back aperture of the microscope objective in order to produce a sheet of illumination at a 60° angle to the conventional optical axis. The fluorescence emitted by the sample is collected back through the same objective. Two additional microscope objectives are inserted into the fluorescence collection path so that the focal plane of the microscope is tilted by 30°, making it co-planar with the illumination sheet. Both the illumination and detection focal planes are swept simultaneously and remotely through the sample volume. The imaged volume is determined by the axial position of the objective lens of the secondary microscope, which is controlled by a piezo-electric objective actuator, enabling high-speed volumetric imaging [23]. Different variants of the OPM setup have appeared. One example is swept confocally aligned planar excitation (SCAPE) microscopy [87]. It uses confocal descanning and image-rotation optics that map a moving plane onto a stationary high-speed camera, permitting completely translationless 3D imaging of intact samples at rates exceeding 20 volumes per second. In this case, the only moving component is an oscillating polygonal scanning mirror mounted on a galvanometer motor, which sweeps an oblique LS back and forth across the sample while the descanned detection plane remains stationary. However, because of the oblique illumination and collection geometry, the full NA of the objective is not used.

Another configuration that allows easy integration on a commercial microscope body uses two opposing objective lenses, one for illumination and one for detection. The reflected light-sheet microscopy (RLSM) technique replaces the condenser of an inverted microscope with a vertically mounted high-NA water-immersion objective [82]. This objective focuses an elliptical laser beam to form a diffraction-limited sheet of light with an FWHM > 0.5 μm. A small mirror fabricated on a tipless atomic force microscopy (AFM) cantilever reflects the LS by 90° and projects it horizontally onto the sample. The fluorescence is collected by a second high-NA objective, enabling sub-micrometer optical sectioning with high sensitivity and temporal resolution [see Fig. 5(d)]. Vertical scanning is achieved by moving the sample or the cantilever with a piezo stage. Based on a similar concept, there are also commercial solutions that convert a confocal microscope into a LSFM with minimal modifications and without compromising confocal functionality (see Table 2). In both of these cases, owing to the upright geometry of the illumination and detection objectives, standard glass-bottom dishes can be used both to grow and to image cells, thereby simplifying experimental procedures.

4.1. Sample Mounting

Because of its original architecture, one of the early challenges of LS microscopy was how to mount the sample in the confined space between the illumination and detection objectives. At the same time, samples had to be positioned, scanned, and sometimes rotated freely, all without affecting their physiological functions and while maintaining proper temperature and pH control. In early applications, such as developmental biology studies, large-FOV objectives were used to image through chambers containing the live sample in its own culture medium. Later on, to minimize aberrations, chambers were designed so that water-immersion objectives could be attached to the chamber and be in contact with the liquid medium. To keep the sample hydrated and to allow the flux of nutrients toward the living sample inside the chamber, samples are embedded in a transparent porous substrate. Depending on the sample and the application, this substrate can be made of agarose gels, molten phytagel [43], or collagen fibrous microenvironments [121], among others. More recently, FEP (fluorinated ethylene propylene) has proven useful, since its refractive index is close to that of water, helping to minimize aberrations. In addition, it is quite inexpensive and can be fabricated as a capillary or as foils. For small samples, the use of FEP foils has allowed high-NA detection objectives (with short working distances) to be used in a 45° inverted configuration for the visualization of delicate samples, such as mammalian embryos [74]. Finally, there are applications that require mounting the sample on classical flat substrates (such as microscope cover slides or culture Petri dishes). In such cases, novel LSFM schemes based on a single lens [79,81] or on 45° upright configurations [28,73] can be used.

To sum up, LSFM is currently such a flexible technique that many sample-mounting solutions can be implemented, depending on the type of sample to be imaged.

4.2. Acquisition of the Optical Sections

In the following subsections, we briefly describe the principal methods used for recording optical sections of the sample, with the ultimate goal of obtaining a 3D image representation of the sample.

4.2a. Mechanical Scanning of the Sample

In a conventional LSFM, the LS and the detection objective are static. Therefore, independently of how the LS is created, the sample has to be moved along the detection axis and across the illumination plane. In this way, an image can be recorded for every position of the sample. Because of the intuitive nature of this process, early LSFM configurations (such as orthogonal-plane fluorescence optical sectioning [OPFOS] [122] and the oblique illumination confocal microscope or confocal theta microscope [123]) relied on this mechanical sample displacement for the generation of 3D images. This procedure has evolved, and current implementations also include sample rotation [2] [see Fig. 6(a)]. The mechanical movement is typically accomplished using a DC, stepper, or piezo-electric motor. Although this solution is technically simple and potentially cheap, mechanical scanning produces sample vibrations that may perturb sensitive biological specimens and induce imaging artifacts. In addition, such translations may be too slow to track some fast dynamic processes over the whole sample volume. This has also triggered interest in developing new sample-scanning techniques [42].


Figure 6. Examples of four different sample scanning approaches in LSFM systems. (a) Standard sample scanning by mechanically translating the sample across the LS plane. (b) Opto-mechanical solution in which the sample remains static: a galvo mirror scans the LS over the sample in coordination with a piezo stage that refocuses the detection objective (published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License; further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI). (c) The depth of field of the detection objective is extended using a cubic phase mask, allowing all planes of the sample to be in focus simultaneously; during acquisition a galvo mirror scans the LS across the sample, illuminating different planes. (d) Samples flow inside a FEP capillary across the LS plane in the SPIM-Fluid system, which allows automated imaging of hundreds of samples, providing high-throughput capabilities. Adapted from [2,42,84,89], respectively. Panel (a) from Huisken et al., Science 305, 1007–1009 (2004) [2]; reprinted with permission from AAAS.


4.2b. Opto-Mechanical Scanning of the Imaging System

There are situations in which it is preferable not to move the sample and to keep it totally static throughout the experiment. In LSFM this is possible if the illumination and collection arms of the microscope can be moved together, keeping their relative distance fixed. The objective-coupled planar illumination (OCPI) microscopy technique [85] was thus devised to scan the LS through the specimen while keeping the position of the LS fixed relative to the focal plane of the detection objective [Fig. 5(b)]. This configuration allowed a 601 × 701 pixel (425 μm × 500 μm) region of a mouse brain to be imaged with a 5 μm plane separation, acquired in 2 s at a speed of 20 planes/s. OCPI, although effective, requires moving a large and heavy portion of the imaging system. Other solutions have been devised based on independently scanning the LS using a galvo mirror. However, in order to obtain meaningful images, the detection objective needs to be continuously refocused onto the illuminated plane. A straightforward approach is mechanical refocusing, mounting the detection objective onto an axially movable motorized stage, as shown in Fig. 6(b).

There are several techniques that use this principle, such as iSPIM/DiSPIM [28,72], lattice LS [75], and SIMView [59]. With these configurations, imaging speeds of up to 40 planes per second have been demonstrated, which are enough to track the development of invertebrate model organisms or to visualize calcium dynamics over the whole zebrafish brain.

4.2c. All-Optical Sample Scanning

Other applications, such as single-particle tracking or imaging heart beats in zebrafish, require even faster sample scanning rates. The bottleneck of the previously described systems is the limited speed of the refocusing motor. Recently, two systems have overcome this limitation with an all-optical approach. Using an electrically tunable lens (ETL) between the detection objective and the camera, it is possible to modify the focal position of the detection arm [86]. Then, by synchronizing the ETL with the position of the LS (which is moved using a conventional galvo scanner), extremely fast volumetric scanning speeds have been achieved without moving either the sample or the objective. As an example, volumetric images of a beating zebrafish heart have been reported, consisting of 17 planes at 510 frames per second, equivalent to 30 volumes per second. Following a different approach, decoupled illumination detection LSFM (DID-LSFM) [89] combines a wavefront coding (WFC) technique with LSFM to extend the DOF of the detection objective [see Fig. 6(c)]. This system incorporates a cubic phase mask in the output pupil of the detection objective. Because of the extended DOF, the LS can be freely scanned, illuminating the different planes that will eventually form a volume. In this case, the imaging speed is limited only by the camera integration time or by the signal-to-noise ratio (SNR). Using the DID-LSFM approach, 3D LS imaging speeds of up to 73 volumes/s have been achieved to track the 3D trajectories of 0.2 μm microspheres freely floating in phosphate-buffered saline; a cropped volume of 22 planes was imaged with the camera operating at 1600 planes/s. More detail can be found in Section 6.2c.

4.2d. Fluidics Approach

Previously, we have described LS systems designed for the observation of single gel-embedded samples. However, for large-scale experimental applications and high-throughput screens, this way of mounting samples is impractical and time-consuming. Recently, advanced systems have been presented that combine LS imaging with microfluidics in a flow-cytometry-like approach. In these, a flow sheath is used to hydrodynamically focus the particles into the central part of a square capillary to achieve uniform laminar flow. The particles flow orthogonally through the LS plane, in front of the water-dipping imaging objective lens, and then exit sideways, while a suction tube collects the waste liquid. An inverted fluorescence microscope is positioned to capture a focused image of the LS-illuminated section, allowing images to be recorded continuously as each sample travels through this plane [124,125]. Such systems have been used for visualizing phytoplankton in water-quality control studies. Other variations of the system have also been proposed to visualize intracellular organelles [126]. Recently, a system capable of operating LSFM for 3D high-throughput imaging of 3D cell cultures and vertebrate model organisms was reported [42]. To provide automated sample loading, the microscope incorporates a FEP tube that crosses the immersion chamber along its diagonal, transporting the samples inside a stream of maintaining buffer solution. To acquire the optical sections, samples are pushed back and forth across the LS with a motorized syringe [Fig. 6(d)] and then flushed away to image the next sample. Thus, samples can be maintained in a controlled environment, such as microfluidic bioreactors or multi-well plates with different drug dosages under test, being only temporarily loaded for imaging. Efforts to miniaturize this fluidics approach have also been presented: using femtosecond laser micromachining, an integrated LSFM on a chip containing an optofluidic device was created [127]. Such a system was able to produce a continuous flow of samples, making 3D high-throughput imaging of multicellular spheroids possible.

4.3. Basic Image Processing in LSFM

Because of its reduced light dosage and, hence, lower phototoxicity and photobleaching, LSFM has become the technique of choice for imaging dynamic processes. Since the recorded data represent not only 3D, multi-color, and multiview features, but also time sequences, it is especially challenging to extract and illustrate all the complex dynamics hidden in these data. For instance, the complexity and sheer amount of information generated from inside living embryos call for new tools for data segmentation, visualization, and navigation. However, to properly analyze such datasets, image quality has to be improved, either during acquisition or through post-processing, to remove artifacts coming from stripe patterns or to increase signal contrast and penetration depth. This first layer of image analysis typically marks only the beginning of the post-experiment analysis: powerful visualization tools and physical modeling are subsequently needed to interpret the resulting data and test mechanistic hypotheses.

In this subsection, we give an overview of some approaches for image-quality improvement that are commonly implemented in a LSFM. This is important, as it highlights the increasing role of the computational systems architecture, both for data storage and for image processing.

4.3a. Stripe Pattern Removal

Because of their high quality (especially good axial resolution, high dynamic range, and low noise), LSFM datasets generally do not require common preprocessing routines such as denoising, deconvolution, or unmixing. Rather, it is the architecture itself that imposes limits on image quality. One of the major limitations of LSFM is the creation of stripe patterns due to absorption of light at the surface of the sample along the light propagation path. These stripes create shadows that hinder the use of segmentation tools needed, for example, to track cell lineages during development. Different approaches have been proposed to overcome these artifacts. On the one hand, post-acquisition processing tools for variational stationary noise removal [128] or multiview fusion [58] allow correction of the artifacts to some extent, but require powerful workstations and long processing times. On the other hand, double-sided illumination systems (with subsequent image fusion) [57], or systems that pivot the LS illumination angle while using longer camera integration times to obtain an averaged image [129], remove the stripes and shadows during acquisition, reducing image-processing time. Another promising approach is the use of non-diffracting Bessel beams [64] or other exotic beams, as described in Section 5.
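As a concrete illustration of the post-processing route, the short Python sketch below implements a simplified frequency-domain destriping filter. It is not the variational method of Ref. [128] nor the multiview fusion of Ref. [58]; it simply assumes that the stripes run parallel to the illumination (x) axis, so that their energy concentrates in a narrow band of the spectrum around k_x = 0, which is attenuated while the region around DC is preserved. Function and parameter names are illustrative.

```python
import numpy as np

def destripe_fourier(img, band_halfwidth=3, dc_halfwidth=5, attenuation=0.1):
    """Simplified frequency-domain destriping for a single LSFM plane.
    Assumes stripes run along axis 1 (the illumination axis), so their energy
    lies in the column of the spectrum around kx = 0 (varying ky)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    mask = np.ones(F.shape)
    # attenuate the narrow vertical band of frequencies that encodes the stripes
    mask[:, cx - band_halfwidth:cx + band_halfwidth + 1] = attenuation
    # restore the low-frequency region so coarse image structure is untouched
    mask[cy - dc_halfwidth:cy + dc_halfwidth + 1,
         cx - dc_halfwidth:cx + dc_halfwidth + 1] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Example: a synthetic plane (smooth blob) with horizontal stripes added
yy, xx = np.mgrid[0:256, 0:256]
plane = np.exp(-((xx - 128)**2 + (yy - 128)**2) / (2 * 40.0**2))
plane *= 1.0 + 0.4 * np.cos(2 * np.pi * yy / 16.0)   # stripes constant along x
clean = destripe_fourier(plane)
```

Note that such a filter also attenuates legitimate image frequencies lying in the same band, which is one reason the variational and fusion-based methods cited above are preferred for quantitative work.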

4.3b. Multiview Reconstruction and Deconvolution

Sample rotation is one of the main characteristics of LSFM techniques. It allows large specimens to be observed with high resolution from different viewing angles, avoiding the deleterious effects introduced by absorption and scattering in the sample. As a result, biological features that would otherwise remain hidden can be observed. Sample rotation is especially helpful in single-sided illumination setups, where one face of the sample is poorly illuminated.

To obtain the final image from multiview recordings, the recorded images must be computationally processed and fused into a single dataset. The general goal of all of these algorithms is to merge the "most useful" information from all the datasets. This idea was first developed for tilted-view microscopy [130] and has been adapted to SPIM in several works, using one [41] or two [61] cameras in parallel. Moreover, advanced designs, such as the SiMView microscope [59], also offer open-source software to perform four-view fusion of time-lapse or single-run datasets recorded with bidirectional illumination and two cameras.

An open-source plugin, SPIM Registration [58], and its latest version, Multiview Registration [131], are available through the Fiji software package. Instead of sample features, the software uses fluorescent beads embedded in a rigid mounting medium as reference markers, enabling efficient, sample-independent registration of multiview SPIM acquisitions.

Because of the multiview fusion procedure, some sharpness is lost in the final image compared to the original data. Multiview deconvolution can substantially improve the resolution and contrast of the images, but its application has been limited by the large size of the datasets. Recently, new multiview deconvolution algorithms that increase the final image quality, such as Bayesian-based [131] and plane-wise [132] methods, have been developed. These algorithms drastically improve convergence time and can be implemented rapidly on graphics hardware.

LSFM is a simple yet powerful approach to biological problems in which 3D non-phototoxic imaging is required. Nevertheless, as we have seen in this section, starting from the simplest configuration based on a couple of lenses, it can evolve into a very sophisticated instrument with multiple illumination and imaging paths, capable of complex image-acquisition pipelines and requiring advanced digital processing of the captured images. Altogether, these additions usually lead to an improved LSFM that helps scientists obtain better and more reliable biological data.

5. Advanced Engineering of the LS

In this section, we aim to provide a detailed overview of the LS engineering process, starting from basic optical principles. We describe the process of LS generation from the simplest to the more complex cases, giving for each case insights into the implications of such LS on the imaging performance of the microscope.

5.1. Physical Considerations

In conventional LSFM, a sheet of excitation light, produced with a cylindrical lens, is projected onto the sample. The fluorescence emerging from this plane is then collected through a microscope objective aligned along the axis orthogonal to the excitation sheet. Image formation in a LSFM can thus be modeled as

$$ i(x,y,z)=o(x,y,z)\ast|h_{LS}|^{2}, $$

where i(x,y,z) is the image, o(x,y,z) is the fluorescent object, and |h_LS|² is the overall intensity PSF of the LSFM. The intensity PSF of the LSFM is the product of the illumination and detection intensity PSFs [133]:

$$ |h_{LS}|^{2}=|h_{ill}|^{2}\times|h_{det}|^{2}. $$

The propagated field after the objective can be expressed as the 3D Fourier transform of a generalized pupil, constructed by projecting the planar aperture onto a sphere of unit radius [117]. Thus, the intensity PSFs for excitation and detection can each be obtained from the coherent (amplitude) PSF, h, defined as the 3D Fourier transform of the generalized pupil function. As stated above, the generalized pupil is by definition a spherical shell cap, so the 3D Fourier transform can be rewritten as an integral over a 2D pupil, P(k_x, k_y), that is the projection of the shell onto the (k_x, k_y) plane; explicitly,

$$ h(x,y,z)=\iint P(k_x,k_y)\,e^{2\pi i (k_x x + k_y y)}\,e^{2\pi i k_z(k_x,k_y)\,z}\,dk_x\,dk_y, $$

where

$$ k_z(k_x,k_y)=\sqrt{(n/\lambda)^{2}-(k_x^{2}+k_y^{2})} $$

accounts for the spherical shape of the pupil. To quantify the imaging properties, it is usual to analyze the OTF, which describes the frequency response of the optical system. The intensity PSF and the OTF are each other's Fourier transforms, and thus contain exactly the same information. For the LSFM, the effective OTF can therefore be calculated as

$$ H(k_x,k_y,k_z)=\mathcal{F}\{|h_{LS}|^{2}\}. $$

As usual, the modulation transfer function (MTF), defined as the magnitude of the OTF, will be used to describe the frequency response in the LSFM.

As a first application of the described formalism, we present the imaging properties of a WFM, which we will later use as the basis for the LSFM detection system. Assuming an objective with NA = 1 and fluorescence emission around λ_em = 525 nm propagating in a medium with refractive index n = 1.33, we have numerically calculated the 3D PSF/MTF using Eqs. (28)–(30). For consistency, we have employed a scalar paraxial approximation that accurately describes the PSF size at the focal plane but slightly overestimates its axial dimensions [134]. This is not critical in this review, as the axial behavior of a LSFM is dominated mostly by the LS thickness. Figure 7(a) shows an axial view of the PSF, with a tightly focused intensity distribution having an in-focus diameter of 0.65 μm and a DOF of 2.8 μm. The axial view of the MTF in Fig. 7(b) shows the typical "bow-tie"-shaped frequency response of a WFM system, which is symmetric and elongated along the transversal-frequency axis but squeezed around the axial-frequency axis. In addition, the MTF reveals a zero-valued region in the neighborhood of the origin. This cone-shaped region is known as the "missing cone," as neither the pure axial frequencies nor the oblique frequencies within this cone are transferred to the 3D image [135]. This explains the lack of optical sectioning of conventional WFM when imaging 3D objects. Finally, when LS illumination is added to this WFM detection scheme, the "missing cone" problem is solved and optical sectioning is achieved, as we will show later.
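To make the formalism of Eqs. (28)–(30) concrete, the following NumPy sketch computes, under the same scalar paraxial assumptions, the 3D intensity PSF and MTF of the widefield detection arm from a circular generalized pupil. Grid sizes, sampling, and variable names are our own illustrative choices, not values taken from the paper.

```python
import numpy as np

# Detection-arm parameters used in the text
NA, lam_em, n = 1.0, 0.525, 1.33         # NA, emission wavelength (um), medium index

# Transverse spatial-frequency grid (cycles/um)
N, dx = 128, 0.05                         # samples per axis and spatial sampling (um)
k = np.fft.fftfreq(N, d=dx)
kx, ky = np.meshgrid(k, k)
kr2 = kx**2 + ky**2

# Projected 2D pupil P(kx, ky): frequencies up to NA/lambda are transmitted
pupil = (kr2 <= (NA / lam_em)**2).astype(float)

# Axial frequency kz(kx, ky) of the spherical pupil shell [Eq. (29)]
kz = np.sqrt(np.maximum((n / lam_em)**2 - kr2, 0.0))

# Amplitude PSF plane by plane [Eq. (28)]: defocus phase, then 2D inverse FFT
z = np.arange(-3.0, 3.0, 0.1)             # defocus positions (um)
psf = np.empty((z.size, N, N))
for i, zi in enumerate(z):
    h = np.fft.ifft2(pupil * np.exp(2j * np.pi * kz * zi))
    psf[i] = np.abs(np.fft.fftshift(h))**2        # intensity PSF |h_det|^2
psf /= psf.max()

# OTF = 3D Fourier transform of the intensity PSF [Eq. (30)]; MTF = |OTF|
mtf = np.abs(np.fft.fftshift(np.fft.fftn(psf)))
mtf /= mtf.max()
print("PSF volume:", psf.shape, "  MTF volume:", mtf.shape)
```

The same routine, with the circular pupil replaced by the slit or annular masks discussed below, can be reused for the excitation arm; multiplying the resulting LS intensity by |h_det|² then gives the overall PSF of Eq. (27).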


Figure 7. Basic imaging properties of a WFM, (a) PSF and (b) MTF. See the main text for a complete description. Color intensity values are normalized to their maximum; darker colors indicate higher intensities.


In the following, different LS engineering techniques will be discussed and analyzed in a way similar to that used for the widefield case, i.e., using both the intensity PSF and the MTF. This will help pinpoint the differential features among them.

5.2. Beam Profiles

The beams emitted by laser devices propagate in the fundamental transversal mode (TEM00), keeping a Gaussian-shaped intensity profile, hence the name Gaussian beams [136]. These are the standard beams used for fluorescence excitation in LSFM, as they can be tightly focused and offer optimal light efficiency compared to other sources, such as gas-discharge lamps or LEDs [137]. To be used in LS microscopy, Gaussian beams must be properly shaped to produce the required illumination pattern on the sample. This often requires spherical or cylindrical lenses to change the size of the beam waist, or other means of limiting the beam's transversal extent. In many LSFM implementations, truncation by a hard aperture, e.g., a metallic diaphragm or slit, is used for beam cleaning or, as mentioned above, for controlling the LS extent. The propagation properties of truncated Gaussian beams can be understood by calculating their far-field diffraction pattern as a function of the truncation ratio T = 2ω/D, where ω is the beam waist radius and D is the aperture diameter (see Fig. 3) [138]. As T increases, the beam is more strongly truncated and the transmitted laser power decreases; however, the increased diffraction at the aperture leads to a proportional increase of the peak irradiance at the observation plane. Furthermore, as T increases, more energy is transferred into diffraction rings as the beam's intensity profile tends asymptotically toward an Airy pattern. Thus, when T is set for a particular LS, the trade-offs among the required power efficiency, the resolution, and the maximum achievable contrast should be carefully considered; T = 1 is usually a good compromise [138]. Figure 8 shows the LS's main properties, FOV_i and D_beam (see Section 4), as functions of the NA for Gaussian beams with different truncation ratios T. The excitation wavelength is set to λ_exc = 488 nm, and the beam is assumed to propagate in a medium with a refractive index of 1.33. It is clear from the figure that increasing the truncation level beyond T = 1 has little effect on the LS properties. On the other hand, when the beam is only slightly truncated (T = 0.5), its shape approaches a perfect Gaussian profile. In this case, the effective DOF_i of the LS becomes larger, but at the expense of increasing D_beam. Although this last result may seem an obvious consequence of Eqs. (6) and (7), it is interesting to note that the curves for T = 0.5 in Fig. 8 correspond to the best Gaussian beam that can be produced with a lens of a given NA. Therefore, the use of beams with T ≳ 1 seems advantageous compared to nearly Gaussian beams (T ≲ 0.5) regarding the DOF–LS thickness trade-off. In this paper, for simplicity, we will assume that both the phase and intensity of the beam at the aperture are constant over the area of the diaphragm. This is a reasonable approximation for truncation ratios greater than or equal to 1.0, as seen in Fig. 8.
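The effect of the truncation ratio can be explored numerically with the short sketch below, which apertures a unit-power Gaussian and computes its Fraunhofer (focal-plane) pattern with a 2D FFT. This is only a qualitative illustration; the quantitative curves of Fig. 8 follow the approximation formulas of Ref. [138], and the grid and parameter values here are arbitrary.

```python
import numpy as np

def truncated_gaussian_focus(T, N=1024, D=1.0):
    """Focal-plane intensity line-out for a unit-power Gaussian beam truncated by
    a circular aperture of diameter D; T = 2*w/D, with w the 1/e^2 intensity radius."""
    x = np.linspace(-2 * D, 2 * D, N)
    dx = x[1] - x[0]
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2
    w = T * D / 2.0
    field = np.exp(-r2 / w**2)                               # Gaussian amplitude
    field /= np.sqrt(np.sum(np.abs(field)**2) * dx**2)       # normalize input power to 1
    field = field * (r2 <= (D / 2.0)**2)                     # hard circular aperture
    transmitted = np.sum(np.abs(field)**2) * dx**2           # power passing the aperture
    far = np.fft.fftshift(np.fft.fft2(field)) * dx**2        # Fraunhofer pattern (scaled FFT)
    return transmitted, np.abs(far[N // 2])**2               # central line-out of intensity

for T in (0.5, 1.0, 2.0):
    P_t, line = truncated_gaussian_focus(T)
    print(f"T = {T}: transmitted power = {P_t:.2f}, peak focal irradiance = {line.max():.3g}")
```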


Figure 8. Gaussian beams with different truncation ratios T. Beam properties (a) FOV_i and (b) D_beam, according to Eqs. (6) and (7). These curves were generated using the approximation formulas for truncated Gaussian beams reported in Ref. [138]. See the main text for more detail.


More advanced LS implementations may require beam shaping or other wavefront-engineering methods. These rely on modifying the beam, usually at the pupil of the microscope objective, by tuning the amplitude and phase of the optical wavefront. Beam shaping can be implemented with passive optical elements such as amplitude masks, phase masks, axicons, and Powell lenses [139]. Active wavefront-shaping devices, such as spatial light modulators (SLMs) and deformable mirrors (DMs), are more flexible alternatives for beam shaping in nonconventional LS implementations. Both approaches enable precise engineering of the LS that can be exploited to create custom LSFMs adapted to different sample requirements, resulting in increased performance at different levels: higher resolution, larger FOV, better optical sectioning, etc.

Next, we will analyze some of the more common implementations, starting with the use of a cylindrical lens and then moving on to digitally scanned LSs based on Gaussian and more advanced beams (such as Bessel and Airy beams). All of these will also be presented in the context of linear and nonlinear excitation. Unless otherwise specified, the excitation objective has NA = 0.3, whereas the detection objective has NA = 1. These are realistic values for some widely used commercial objectives in LSFM implementations. In addition, the excitation and detection wavelengths assumed for all calculations are those typically employed for the enhanced green fluorescent protein (eGFP) [140,141] (see Table 3 for more detail).


Table 3. Simulation Parameters for the Different Figures of LS Engineering Approaches

5.3. LS Generation Using a Cylindrical Lens

5.3a. Linear Regime

The simplest way to generate a LS is to coherently transform a round incident laser beam into a highly elliptical one. This can be done with a cylindrical lens or other cylindrical optics (see Section 3). From the point of view of diffraction theory, a simple yet realistic model for elliptical beam formation uses the pupil of a spherical lens with an amplitude mask shaped as a rectangular slit, as shown in Fig. 9(a). Propagating the field diffracted by the slit to the focal plane of the lens results in an intensity distribution that resembles a LS. The slit geometry at the pupil plane determines the final LS geometry: its extent along the y axis determines the height of the LS, whereas its extent Δk_z along the z (frequency) axis sets both the LS thickness and its length. Based on the pupil transmittance of Fig. 9(a) and using Eq. (27), it is possible to calculate the 3D LS intensity distribution and the overall PSF/MTF of the LSFM.
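The slit-pupil model of Fig. 9(a) can be evaluated with the same plane-by-plane propagation used above for the detection PSF, now with the propagation axis along x and the pupil defined in the (k_y, k_z) plane. The sketch below is a minimal illustration; the slit half-widths and grid sampling are arbitrary choices, not the values used for Fig. 10.

```python
import numpy as np

lam_exc, n = 0.488, 1.33                     # excitation wavelength (um), medium index
N, d = 256, 0.2                              # grid samples and spatial sampling (um)
k = np.fft.fftfreq(N, d=d)
ky, kz = np.meshgrid(k, k)

# Rectangular (slit-shaped) excitation pupil: narrow along ky, wide along kz
dky, dkz = 0.02, 0.30                        # illustrative half-widths (cycles/um)
pupil_slit = ((np.abs(ky) <= dky) & (np.abs(kz) <= dkz)).astype(float)

# Propagate plane by plane along the illumination axis x to get I_LS(x, z, y)
kx = np.sqrt(np.maximum((n / lam_exc)**2 - ky**2 - kz**2, 0.0))
x = np.arange(-30.0, 30.0, 1.0)              # positions along the propagation axis (um)
I_LS = np.empty((x.size, N, N))
for i, xi in enumerate(x):
    h = np.fft.ifft2(pupil_slit * np.exp(2j * np.pi * kx * xi))
    I_LS[i] = np.abs(np.fft.fftshift(h))**2
I_LS /= I_LS.max()

# Axial (z) profile of the LS at the centre of the FOV (x = 0, y = 0)
z_profile = I_LS[x.size // 2, :, N // 2]
print("LS thickness, FWHM estimate (um):", d * np.sum(z_profile > 0.5 * z_profile.max()))
```

Multiplying `I_LS` (suitably resampled) by the detection PSF |h_det|² computed earlier gives the overall PSF of Eq. (27), whose Fourier transform yields MTFs analogous to those of Fig. 10.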


Figure 9. Examples of pupil amplitude masks for the generation of LSs based on (a) cylindrical lenses, and scanned (b) Gaussian and (c) Bessel beams. The white corresponds to a 100% transmission and the red dashed lines indicate k for maximum NA available in the excitation objective. For details on the parameters see Table 3 in the main text.


The results of such calculations are shown in Fig. 10. The slit width Δk_z has been chosen to generate a LS whose thickness approximately matches the DOF of a detection objective with NA = 1.0 [Fig. 7(a)]. As shown in Fig. 10(b), the axial profile of the generated LS presents a diffraction structure alongside the main core sheet, typical of the diffraction pattern produced by a rectangular slit [119]. However, this structure is strongly attenuated in the effective PSF of the microscope, as a result of the multiplication operation in the definition of the overall PSF [Eq. (27)].


Figure 10. Example of a LS generated using a cylindrical lens. (a) Detection objective PSF, (b) LS intensity profile at the center of the FOV, (c) overall MTF, and (d) axial profile of the MTF (ky=0). Color intensity values are normalized to their maximum; darker colors indicate higher intensities.


The confinement of the laser illumination to the DOF of the detection lens is the origin of the optical sectioning capability of a LSFM. This concept is better explained by analyzing the overall MTF of the LSFM, shown in Fig. 10(c). The shape of the MTF follows the bow-tie cross section of a widefield microscope, as in Fig. 7. In this sense, the transversal MTF profile is similar to that of WFM, so the same resolution and contrast are expected for LSFM. Importantly, however, in the axial direction of the MTF the missing cone is no longer present. This indicates that the axial-frequency components of 3D objects are transmitted by the LSFM. In other words, by changing from widefield to LS illumination, intrinsic optical sectioning is gained. In fact, the triangle function shown in Fig. 10(d), representing the axial MTF of the full system, closely follows the Fourier transform of the LS alone, i.e., of the LS generated with a slit-shaped pupil transmittance. Notice that the optical transfer functions calculated here for the LSFM also apply to the well-known confocal theta fluorescence microscopies [142]. Other alternatives for generating LSs with cylindrical optics have been reported. These are based on the use of: (i) Powell lenses to transform Gaussian beams into top-hat beams, increasing the resolution while avoiding sidelobes [143]; (ii) rectangular slits to produce non-diffracting light carpets with reduced sidelobes [144]; and (iii) a tilted cylindrical lens to produce 1D Airy beams [145].

5.3b. Nonlinear Regime

From the preceding discussion, it is clear that LSFM is superior to conventional WFM methods, as it provides full 3D sectioning capabilities. A further improvement can be expected by enabling deep-imaging capabilities when NIR femtosecond-pulsed lasers are used. The use of TPEF (see Section 2.2) in the context of LSFM (2P-LSFM) was first demonstrated by Palero et al. [56].

Producing a 2P LS with a cylindrical lens is nearly identical to the linear case. The same optical setup can be employed, although given the change of wavelength and the quadratic intensity dependence of the absorption, some scaling of the LS dimensions is expected (see Section 2.2). To test the capabilities of 2P-LSFM and compare its performance with the previous case, a calculation identical to the one above is presented in Fig. 11.


Figure 11. Example of a 2P-LSFM generated using a cylindrical lens. (a) Detection objective intensity PSF, (b) LS intensity profile at the center of the FOV, (c) overall MTF, and (d) MTF axial profile (ky=0) for 2P-LSFM (orange) and LS (blue). Color intensity values are normalized to their maximum; darker colors indicate higher intensities.


For comparison purposes, the optical properties of the lenses, the pupil mask, and the detection wavelength were kept invariant, whereas the excitation wavelength was set to the peak of the two-photon absorption cross section, σ_2p, of eGFP (see Table 3). In addition, two-photon excitation was implemented in practice by taking the square of the numerically calculated excitation intensity distribution. Compared to the linear case, the 2P-LS excitation profile of Fig. 11(b) shows a moderately thicker core intensity distribution. In fact, as can be seen in Figs. 11(a) and 11(b), the DOF of the detection objective appears smaller than the 2P-LS thickness, which does not fulfill our design criterion of matching these two quantities. This does not mean that a 2P-LSFM designed in this way performs poorly; it can reflect, for example, a wish to cover a larger FOV without sacrificing lateral resolution. On the other hand, the residual side lobes are greatly attenuated thanks to the nonlinear nature of TPEF, as seen in Fig. 11(b). Figure 11(c) further illustrates the differences between the linear and nonlinear approaches. First, for 2P-LSFM, the axial extension of the MTF is shorter than in the linear case. This indicates a decrease in resolution, which is expected because of the PSF upscaling produced by nearly doubling the excitation wavelength. The actual resolution decrease can be assessed from the MTF profiles of Fig. 11(d), where a reduction of about 10% in the cut-off frequency is observed for 2P-LSFM compared to the linear case. Second, as shown in the same figure, the attenuation is less pronounced for 2P-LSFM in the lower-frequency region (the first ~30% of the axial frequency range). In practice, this results in increased image contrast at those frequencies. However, the MTF decays faster than in the linear case at higher frequencies, which therefore suffer stronger attenuation.
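In the simulations, the only change needed to move from the linear to the two-photon case is therefore to propagate the excitation at the longer (2P) wavelength and square the resulting intensity. A trivial helper, with our own naming, is:

```python
import numpy as np

def two_photon_excitation(I_exc):
    """Effective two-photon excitation profile obtained from a one-photon intensity
    distribution (computed at the 2P excitation wavelength): square and renormalize."""
    I2 = I_exc**2
    return I2 / I2.max()
```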

In principle, the contrast and resolution throughout the entire range of axial frequencies could be recovered by increasing the NA of the excitation beam (i.e., Δk_z), at the cost of reducing the FOV covered by the LS. In addition, when cylindrical lenses are used for 2P-LSFM, other limitations appear. These are related to the reduced intensity of the nonlinear excitation beam, as the energy is distributed over the large area covered by the LS. This results in a drastic reduction of the fluorescence excitation efficiency and usually leads to a poor SNR of the detected fluorescence. Although there is room for some improvement, such as using fluorophores with larger σ_2p, increasing the excitation power, or using shorter pulses to increase the peak power, the DSLM approach has been presented as a better alternative for efficient two-photon fluorescence excitation, as discussed below.

5.4. LS Generation Using Scanned Beams (DSLM)

The fundamental idea of digitally scanned laser light-sheet fluorescence microscopy (DSLM) is to generate a virtual LS by scanning a focused beam, perpendicular to the detection axis, along a single direction [146]. The optical design of the system is chosen so that an angular scan produced at the entrance pupil is transformed into a set of parallel, vertically displaced beams at the sample plane. Then, by integrating the camera signal over at least one scanning period, a homogeneous LS is formed as the incoherent summation of the fluorescence generated at the different scan positions. Here, the intensity profile of the focused beam determines some important imaging properties of the system, namely the width of the FOV and the axial resolution of the LSFM (see Section 3.1a).

There are several advantages of this implementation over conventional LS generation with a cylindrical lens [33,147]: (i) the full power of the excitation light is concentrated into a single line, providing better illumination efficiency and thus lower exposure times; (ii) each line in the specimen is illuminated with the same intensity, generating a more homogeneous LS; and (iii) the height of the LS can easily be controlled through the scanning amplitude. There are also some disadvantages of DSLM compared to cylindrical-lens LSFM. Apart from the intrinsically higher instantaneous intensities, which may lead to increased photodamage, a DSLM setup is generally more complex, as it needs at least one laser scanning system.

5.4a. DSLM Using Gaussian Beams, Linear Regime

The most basic implementation of DSLM relies on focused Gaussian beams (with different truncation ratios). These have become the gold standard against which more complex beam shapes are compared [64,65]. Figure 12(a) shows an example of an xz section of a Gaussian beam designed for a DSLM microscope with a 1.0 NA detection objective. The intensity profiles of Fig. 12 were generated plane by plane using Eq. (28), changing the defocusing term e^{2πi k_x x} for each plane. The amplitude mask was selected as in Fig. 9(b). For a focused truncated Gaussian beam, diffraction dictates that at the observation plane there is a main intensity core surrounded by a ring structure. Because the pupil amplitude mask results from the propagation of a beam through a uniformly illuminated circular aperture, the intensity distribution is well described by an Airy pattern, and the size of the central core follows that of an Airy disk.


Figure 12. Examples of beam profiles for DSLM LS generation. (a) Gaussian and (b) Bessel generating the same FOV width of around 60 μm. (c) Bessel beam generating twice the original FOV width of (a). The square root of the beam intensity is shown for better visualization. Color intensity values are normalized to their maximum; darker colors indicate higher intensities.


In accordance with the Fourier projection-slice theorem, the axial intensity distribution is given by the Fourier transform of the projection of the generalized pupil onto the propagation axis [117]. For a clear circular aperture, this projection is a rectangular function, and therefore the beam's axial intensity profile follows a sinc² function. This explains the intensity distribution in Fig. 12(a) and, in particular, the zeros along the z = 0 line. The first zeros are usually taken as the upper limit of the usable FOV width (see Section 3.1a). As mentioned before, the intensity distribution after uniform illumination of a circular aperture is well described by an Airy pattern, as shown in Fig. 13(a). In DSLM, the LS is then formed by scanning the focused beam along the y axis. This is expressed as the convolution of the beam's intensity distribution |h_beam(x,y,z)|² with the scanning function, described in this case by a unit-area rectangle function Π of length S, resulting in the following expression for the intensity of the scanned beam:

$$ I_{SB}(x,y,z)=\frac{1}{S}\,\Pi\!\left(\frac{y}{S}\right)\ast|h_{beam}(x,y,z)|^{2}. $$
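Numerically, this scan integral is simply a one-dimensional moving average of the static beam intensity along the scan (y) axis. A minimal sketch follows; the array layout and names are ours, and `I_beam` stands for any beam volume computed, e.g., with the propagation routine shown earlier but using the circular pupil of Fig. 9(b).

```python
import numpy as np

def scanned_sheet(I_beam, S, dy, axis=-1):
    """Virtual-LS intensity of the scan equation above: convolution of the static
    beam intensity with a unit-area rectangle of length S along the scan axis.
    I_beam : static beam intensity, with the scan coordinate along `axis`
    S, dy  : scan amplitude and sampling step along that axis (same units)"""
    n = max(int(round(S / dy)), 1)
    kernel = np.ones(n) / n                      # discrete unit-area rect, (1/S)*Pi(y/S)
    smooth = lambda line: np.convolve(line, kernel, mode="same")
    return np.apply_along_axis(smooth, axis, I_beam)

# Example: for a volume indexed as (x, z, y), as in the earlier propagation sketch,
# a 30 um scan amplitude sampled every 0.2 um would be
# I_SB = scanned_sheet(I_beam, S=30.0, dy=0.2, axis=2)
```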


Figure 13. Example of LS generated for DSLM using Gaussian beams. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for DSLM (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.


To avoid edge effects at the extremes of the LS, it is convenient to produce a LS with S ≫ D_beam, where D_beam is the Airy diameter of the focused laser beam [see Eq. (9)]. Figure 13(b) shows a section of I_SB at x = 0. Because of the radial symmetry of the Gaussian beam profile in the zy plane and of the convolution operation, the side features of the resulting scanned beam appear less sharp than those of the LS generated with a cylindrical lens [shown in Fig. 10(b)]. This translates into a decrease in the contrast of some of the axial frequencies transmitted by the microscope, as made evident by the MTFs shown in Figs. 13(c) and 13(d): the MTF values lie below the reference LSFM profile over the whole frequency range. Notice, however, that the cut-off frequency remains the same as in the cylindrical-lens case, so the resolution is not compromised.

Further improvements have been reported by combining DSLM with structured illumination (SI), with the aim of mitigating the blurring effects of out-of-focus scattered light [8]. In this approach, the sheet is modulated to create sinusoidal patterns over the sample. This can be done by modulating the scanning function so that the average intensity follows the required excitation pattern. Then, by digital post-processing of the acquired images, the fluorescence generated by scattered excitation light can be rejected, resulting in enhanced optical sectioning and increased contrast. One disadvantage of this approach is that several images are required to obtain the contrast improvement, increasing both the acquisition time and the sample irradiation time. See Section 6.3c for more detail on DSLM-SI microscopy.
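In simulation, the sinusoidal excitation pattern can be mimicked by replacing the flat (rect) scanning kernel of the previous sketch with a set of phase-shifted, sinusoidally weighted kernels, one per SI phase image. This is only our schematic reading of the approach of Ref. [8], not its actual implementation.

```python
import numpy as np

def si_scan_kernels(S, dy, period, n_phases=3):
    """Phase-shifted, sinusoidally modulated scan kernels for DSLM-SI.
    Each kernel replaces the flat rect of the plain DSLM scan and produces one of
    `n_phases` structured-illumination patterns with the given spatial period."""
    y = np.arange(-S / 2.0, S / 2.0, dy)
    kernels = []
    for p in range(n_phases):
        w = 1.0 + np.cos(2 * np.pi * y / period + 2 * np.pi * p / n_phases)
        kernels.append(w / w.sum())              # keep unit total weight per kernel
    return kernels
```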

To sum up, in terms of contrast, DSLM slightly underperforms LSs generated with a cylindrical lens. However, the DSLM approach provides higher excitation power and better spatial uniformity without compromising resolution. In addition, thanks to the incoherent nature of the LS generation, DSLM, although affected by absorption-induced artifacts just as LSFM is, is less susceptible to scattering artifacts [148]. Finally, DSLM presents other advantages that improve both contrast and resolution for nonlinear excitation, as explained below.

5.4b. DSLM Using Gaussian Beams, Nonlinear Regime

In contrast to 2P-LSFM, DSLM is a more suitable platform for TPEF imaging. This is because the available intensity in DSLM is much higher than in the static elliptical-beam 2P-LSFM, and there is therefore no need to increase the average excitation power [64]. For a detailed analysis of fluorescence generation in 2P-DSLM, see Section 2.2. Figure 14 shows the imaging properties of a simulated 2P-DSLM system with the same pupil amplitude mask as calculated before for conventional DSLM. In this case, the excitation wavelength has been set to the peak σ_2p of eGFP. The similarity of the results between 2P-LSFM (Fig. 11) and 2P-DSLM (Fig. 14) is remarkable: (i) a thicker core intensity distribution with almost no residual diffraction structure [see Figs. 14(a) and 14(b)], and (ii) an MTF [see Figs. 14(c) and 14(d)] with a small decrease in axial resolution and increased contrast in the low-frequency regime. A key advantage of 2P-DSLM is the attenuation of low-intensity features beyond the FOV (i.e., at the ends of the focused beam along the propagation direction); in conventional DSLM in the linear regime, these lobes may produce unwanted background fluorescence (see Fig. 12) [64]. Thanks to this feature, a higher NA can be used to improve the axial resolution, at the cost of a concomitantly reduced FOV width. Interestingly, the same axial-confinement advantage can be exploited to incoherently concatenate excitation beams along the x direction, either by placing two counterpropagating nonlinear excitation beams side by side [67] or by rapidly scanning the excitation beam along the x axis [149]. Another interesting approach for keeping the FOV large enough is to maintain a low excitation-beam NA and compensate the intensity decrease by using higher-energy pulses from the femtosecond laser source [109].


Figure 14. Example of 2P-DSLM generation using Gaussian beams. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for 2P-DSLM (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.


5.4c. DSLM Using Bessel Beams, Linear Regime

Another improvement implemented to allow larger FOVs, while alleviating the deleterious effects of scattering on scanned-sheet microscopy, is the use of Bessel beams (BBs) [148,150]. BBs are a set of non-diffracting beams with a ringed cross-sectional profile that can be described mathematically using the formalism of Bessel functions. These beams propagate without suffering the effects of diffraction, meaning that their central core does not spread upon propagation, as it would for other beam types [148]. In the generalized pupil of a spherical lens, a BB can also be described as an infinitesimally thin ring of wavevectors representing a set of plane waves propagating on a cone. This arrangement of plane waves gives the beam the capability of passing through small obstructions and recovering its original intensity profile beyond the obstruction [151].

Figure 9(c) shows the typical amplitude pupil mask employed to produce BBs with a microscope objective. The transmission ring is limited by the maximum NA available at the excitation objective (represented by the red dotted line in Fig. 9). Figures 12(b) and 12(c) show xz sections of BBs simulated using annular amplitude masks such as that in Fig. 9(c). The pupil masks were calculated to generate beams with the maximum possible resolution (for a given excitation objective) and with a FOV equal to [Fig. 12(b)] or twice [Fig. 12(c)] that of the Gaussian case. The width of the central core is the same for the two BBs and was kept constant in the simulations (Table 3). This is one of the remarkable potential advantages of BBs for LS applications: the size of the central core and the FOV width can be changed independently. Thus, BBs can generally be employed to increase the FOV of a LSFM without affecting the resolution. However, enlarging the FOV in this way has a drawback: it comes at the expense of increasing the size of the ring structure around the central core, as is evident from Figs. 12(b) and 12(c) [63,64]. Figure 15 shows the results of the PSF/MTF simulation for the BB with twice the FOV width of the Gaussian beam. The ring structure is evident in Fig. 15(a). This translates into a broad LS with decreased contrast, as shown in Fig. 15(b). Although the use of a detection lens with a short DOF may help mitigate the effects of a thick LS [see Fig. 15(c)], the MTF [Fig. 15(d)] clearly shows poor performance in terms of contrast in the axial direction.
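The annular masks used for Figs. 12(b) and 12(c) can be parameterized by the outer radius of the ring (which, for a fixed maximum NA, fixes the core size) and by its radial thickness (thinner rings yield longer beams, i.e., larger FOVs). A minimal sketch, with illustrative numbers only:

```python
import numpy as np

def bessel_annulus(N, dk, k_outer, ring_width):
    """Annular pupil amplitude mask for Bessel-beam DSLM [cf. Fig. 9(c)].
    k_outer    : outer ring radius in cycles/um (at most NA_exc / lambda_exc)
    ring_width : radial thickness of the ring; thinner rings give longer beams"""
    k = (np.arange(N) - N // 2) * dk               # centred frequency grid (cycles/um)
    ky, kz = np.meshgrid(k, k)
    kr = np.hypot(ky, kz)
    # note: this grid is centred; apply np.fft.ifftshift before reusing the mask
    # in the FFT-based plane-by-plane propagator shown earlier
    return ((kr <= k_outer) & (kr >= k_outer - ring_width)).astype(float)

# Same core size but two ring thicknesses, in the spirit of Figs. 12(b) and 12(c)
mask_fov1 = bessel_annulus(256, 0.02, k_outer=0.3, ring_width=0.06)
mask_fov2 = bessel_annulus(256, 0.02, k_outer=0.3, ring_width=0.03)
```

Feeding such masks into the propagation routine shown earlier reproduces, qualitatively, the extended, ring-decorated beams of Fig. 12.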


Figure 15. Example of LS generation for DSLM with Bessel beams. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for DSLM with Bessel beams (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum, darker colors indicate higher intensities. See the description in the main text.


To circumvent these problems and recover contrast and resolution, BBs can be combined with confocal-line detection [51,152,153] (see Section 6.2a), structured illumination [63], or deconvolution approaches [65,154].

5.4d. DSLM Using Bessel Beams, Nonlinear Regime

A different approach for the efficient use of BBs is to incorporate all the aforementioned capabilities of the nonlinear regime. In principle, this has important advantages over the previous examples for imaging thick biological tissues: (i) the self-reconstruction property of BBs enhances the quality of the LS, as it reduces ghost-image artifacts [148], and (ii) the combined effect of using self-healing beams and NIR wavelengths increases the penetration of the LS, reducing beam spreading due to scattering [51]. Figure 16 shows a simulation of the imaging properties of 2P-DSLM with BBs, keeping the same pupil amplitude mask as in Fig. 15 but using the wavelength employed before for TPEF. In Figs. 16(a) and 16(b), it is evident that the nonlinear effect helps reduce the fluorescence generated by the ring structure [compare with Figs. 15(a) and 15(b)]. Figures 16(c) and 16(d) show that, thanks to the nonlinear effect, any remaining structure has a very weak effect on the overall MTF. For instance, as shown in Fig. 16(d), the MTF profile for 2P-DSLM with Bessel beams does not depart much from the reference MTF in the low-to-medium frequency range and actually extends beyond its frequency cut-off. This is a remarkable result considering that this BB provides twice the FOV of the reference LSFM yet gives better axial resolution. DSLM with BBs has been successfully used for imaging a variety of biological samples, from cells [63] and small invertebrates [64] to tissue-mimicking constructs [70]. This modality can be further combined with confocal line detection, which may provide an additional increase in contrast and optical sectioning [121].


Figure 16. Example of LS generation for 2P-DSLM with Bessel beams. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for 2P-DSLM with Bessel beams (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.


In addition to conventional Gaussian and Bessel beams, other, more complex beam shapes have been used to provide LSFM with enhanced capabilities: better optical sectioning and resolution (axial and transversal), even larger FOVs, and reduced sensitivity to sample scattering. Below is a brief description of the most relevant developments that use advanced beam shaping to generate the planar illumination characteristic of LSFM.

5.4e. Sectioned Bessel Beams

The excitation patterns described in the remainder of this section are all generated by modifying the original annular angular spectrum of a BB in order to control the beam propagation at the sample plane, using the basic principles described in Section 3.1. Sectioned Bessel beams (SBBs) have been proposed to mitigate the deleterious effects of the ring structure of conventional BBs while keeping their intrinsic advantages for LSFM [155]. Ideally, SBBs promise imaging with a larger FOV without compromising image resolution and contrast. In addition, they are suitable for confocal line detection [152]. Furthermore, SBBs inherit the self-reconstruction capabilities of BBs, which makes them well suited to minimizing scattering effects. The use of SBBs has been successfully demonstrated for imaging large samples with moderate scattering, in particular non-specifically labeled multicellular tumor spheroids (MCTSs).

SBBs are generated by modifying the angular spectrum of standard BBs. This is done by blocking two opposite sections of the beam's angular spectrum along the direction perpendicular to the coordinate in which the beam is scanned. The spectrum of a SBB therefore consists of two opposed sections of the ring, each with an angular extension given by the span angle β, as shown in Fig. 17(a).
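A sectioned-Bessel pupil can be obtained from the annulus above by keeping only two opposite angular sectors of span β. In the sketch below we centre the transmitted sectors on the k_z axis, so that (consistent with the behavior described next) increasing the DOF broadens the beam mainly along y rather than along the detection axis; this orientation convention and the parameter values are our assumptions, not taken from Ref. [155].

```python
import numpy as np

def sectioned_bessel_mask(N, dk, k_outer, ring_width, beta_deg=80.0):
    """Pupil mask for a sectioned Bessel beam [cf. Fig. 17(a)]: a Bessel annulus in
    which only two opposite angular sectors of span beta_deg are transmitted.
    The transmitted sectors are centred on the kz axis (assumption, see text)."""
    k = (np.arange(N) - N // 2) * dk              # centred frequency grid (cycles/um)
    ky, kz = np.meshgrid(k, k)
    kr = np.hypot(ky, kz)
    ring = (kr <= k_outer) & (kr >= k_outer - ring_width)
    theta = np.abs(np.arctan2(ky, kz))            # polar angle from the +kz axis, in [0, pi]
    half = np.deg2rad(beta_deg) / 2.0
    sectors = (theta <= half) | (theta >= np.pi - half)
    return (ring & sectors).astype(float)

mask_sbb = sectioned_bessel_mask(256, 0.02, k_outer=0.3, ring_width=0.03, beta_deg=80.0)
```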


Figure 17. Examples of pupil amplitude masks for the generation of Bessel-like beams for scanned DSLM, based on (a) sectioned Bessel beams, and (b) optical lattices. The white color corresponds to a transmissivity T=1.0, the red dotted lines indicate the maximum NA available in the excitation objective, and the green lines the limits of the base Bessel ring.


Compared to standard BBs, SBBs have the advantage of further decoupling the LS thickness from the DOF. This is because, as the DOF is increased, the SBB cross section grows mostly in the direction perpendicular to the detection path, keeping the volume intersecting the detection objective's DOF almost unchanged. As a result, SBBs outperform BBs in terms of image contrast and optical sectioning when used in widefield detection mode. However, when SBBs are used in confocal line mode (see Section 6.2a), the light efficiency decreases by up to a factor of 5 compared to BBs [155].

Figure 18 shows a simulated example of the imaging properties of a DSLM using SBBs. For consistency, the pupil mask was based on the BB ring used for Fig. 17, and thus the FOV was kept as in Fig. 12(c), i.e., twice that of a standard DSLM. The angle β was set to 80°, close to the optimum value reported for good optical sectioning in confocal detection mode [155] (see Table 3). Figures 18(a) and 18(b) show that, although the SBB effectively redirects most of the laser power from the ring structure to the volume crossing the PSF of the detection objective, there is still some residual radial structure around the maxima that reduces the contrast of the scanned beam. This is seen more clearly in Figs. 18(c) and 18(d), where the overall MTF shows a moderate enhancement of the contrast in the mid-frequency range but, at the same time, a reduced cut-off frequency as compared to the BB case; see Fig. 15(d).

Figure 18. Example of LS generation for DSLM with SBBs. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for 1P-DSLM with SBB (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.

5.4f. DSLM with Optical Lattices

Optical lattices were devised to generate thin LSs that are better suited for high-resolution subcellular imaging than standard Gaussian beams. They have evolved from previous attempts at structuring the LS illumination using single and multiple BBs [9,63]. In fact, an optical lattice can also be considered a coherent BB array, in which all BBs interfere with each other [156]. For LSFM applications, optical lattices exploit the concept of parallelization, where a single laser beam is spread into several beamlets confined to the imaging plane. This is done such that optical lattices not only improve the axial resolution, but also induce much less phototoxicity than a single focused beam, for the same total power delivered to the sample [75].

Optical lattices can be designed to optimize either the confinement of the excitation to the central plane, i.e., a high-speed mode that optimizes optical sectioning (see Fig. 19), or the resolution in the axial direction, i.e., a structured illumination microscopy (SIM) mode (see Fig. 20). In the high-speed mode, the lattice pattern is rapidly swept to generate a set of LSs with a homogeneous transversal intensity profile, just as in a standard DSLM. In the SIM mode, several images are captured per imaged plane, one for each of an equivalent number of sub-period lattice displacements. Then a SIM reconstruction algorithm is applied to the data to obtain improved spatial resolution.

Following is a straightforward comparison between the two lattice modalities. This is done by simulating the imaging properties of the lattice LSFM systems that result from modifying the base ring pupil previously used for generating the BBs [Fig. 12(c)]. The lattices are created by superimposing an opaque mask containing a set of slit apertures over the BB ring, as shown in Fig. 17(b). The properties of the stripes were selected to generate either a square lattice, for LSs with optimized optical sectioning, or a hexagonal lattice, for LSs with optimized resolution in SIM mode. For more detail on the pupil designs of the examples, see Table 3.
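
As a minimal numerical illustration of this construction (NumPy sketch with purely illustrative parameters, not the actual masks of Table 3), the lattice pupil can be built by multiplying the Bessel annulus by a comb of narrow slits; the excitation profile at the focal plane then follows from a Fourier transform of the pupil, and the scanned (dithered) LS is emulated by averaging the resulting intensity along the scan direction y.

```python
import numpy as np

n, na_max = 512, 0.5
k = np.linspace(-na_max, na_max, n)
ky, kz = np.meshgrid(k, k)                    # pupil coordinates
kr = np.hypot(ky, kz)

annulus = (np.abs(kr - 0.35) < 0.025).astype(float)      # base Bessel ring

# opaque mask with a set of narrow slit apertures (square-lattice example)
slit_period, slit_width = 0.12, 0.02
slits = np.abs(((ky + na_max) % slit_period) - slit_period / 2) < slit_width / 2
pupil = annulus * slits

# focal-plane excitation intensity (scalar, paraxial Fourier-optics sketch)
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
intensity = np.abs(field) ** 2

# dithering the lattice along y is emulated by averaging the intensity over y,
# which gives the axial (z) profile of the time-averaged light sheet
sheet_profile = intensity.mean(axis=1)
print("LS axial FWHM (pixels):", int(np.sum(sheet_profile > sheet_profile.max() / 2)))
```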

Figure 19. Example of LS generation for DSLM with optical lattices, optimized for optical sectioning. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for DSLM with optical lattices (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.

Figure 20. Example of LS generation for DSLM with optical lattices, optimized for structured illumination. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for DSLM with optical lattices (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.

The imaging properties of the lattice for LSFM with optimized optical sectioning (fast mode) are shown in Fig. 19. The intensity profile at the origin of the square bounded lattice in Fig. 19(a) shows that, as opposed to other BB variants, the optical lattice intensity distribution can be controlled so that it is fully confined to planes perpendicular to the detection axis. This results in a set of high-contrast parallel LSs when the lattice beam is scanned along y, as shown in Fig. 19(b). The optical lattice was designed in such a way that the two main lobes on either side of the central core are attenuated by the zeros of the detection PSF when the overall PSF is formed. This results in the overall MTF shown in Fig. 19(c). The MTF axial profile [Fig. 19(d)] extends beyond the reference MTF cut-off frequency and shows a secondary peak in the mid-frequency range. Actually, this MTF profile is similar to the one obtained for 2P-DSLM with BBs [Fig. 16(d)]. This can be understood intuitively by considering that in both cases the secondary lobes are attenuated, either by a nonlinear process (in the 2P-DSLM with BB case) or by matching with the detection PSF (in the lattice case). Finally, it is worth noting that we did not find a lattice that perfectly matches the detection PSF, but rather the optimal one given the current design constraints. Please note that, for a similar detection objective, Chen et al. [75] reported examples of perfect matching but employed twice the excitation NA used here.

Figure 20 shows an example of LS generation for DSLM with optical lattices optimized for the high-resolution SIM mode. In this case, the central distribution is fractured into many parallel intensity distributions [Fig. 20(a)] that become a set of high-contrast parallel LSs upon scanning [Fig. 20(b)]. Crossing with the detection PSF preserves the LLS structure, which is revealed as an overall axially extended MTF with a couple of side copies around the central distribution [Fig. 20(c)]. The MTF central profile [Fig. 20(d)] shows a secondary intensity peak around the reference MTF cut-off frequency, similar to the MTF profiles employed to illustrate how SIM increases the resolution in widefield microscopy. Nevertheless, to appreciate the full potential of lattice light-sheets (LLSs) in the SIM mode for increasing the axial resolution and recovering the contrast, a full SIM reconstruction/deconvolution is required [75,93], but this is outside the scope of this review.

The main advantage of LLS excitation is the low photobleaching and phototoxicity produced when obtaining high-resolution 4D images of small biological samples that fit in a FOV of a few tens of micrometers. This is thanks to the reduced peak intensity of the broader-area lattice as compared to focused confocal excitation beams; see Figs. 20(a) and 19(a). A LLS offers significantly better excitation light confinement than a Bessel LS of the same length and thickness, and a higher axial resolution than both Gaussian and Bessel LSs [156]. One exception, though, is 2P-DSLM with Bessel beams, which slightly outperforms the LLS in both metrics; compare Figs. 16 and 19. A similar finding was reported by Welf et al. [121]. LLSs have been successfully demonstrated for fast high-resolution imaging of mitosis and of cell locomotion, and of the development of early-stage embryos of small invertebrates [75]. One important limitation of the lattice technology is the need for non-standard sample mounting and the use of just a few quasi-discrete wavevectors of the ring at the generalized pupil, which may considerably reduce the self-healing properties. Therefore, a LLS can be very sensitive to any perturbation produced by the sample, which may prevent the application of this technology to imaging thick samples.

5.4g. DSLM Using Airy Beams

Airy beams are non-diffracting fields that, as opposed to BBs, do not result from conical superposition [157,158]. The most distinctive feature of Airy beams is that, while preserving their transversal field profile upon propagation, they follow a curved trajectory, which is the origin of the term “accelerating beams” [159]. Another remarkable property is their self-healing, used here in the same sense as for BBs. This characteristic is what makes these beams more tolerant to partly absorbing or moderately turbid samples. In LSFM, all these properties can be used to generate extended thin LSs that provide optimal fluorescence excitation of the sample and isotropic resolution [65].

Airy LSs (ALSs) are most efficiently generated in the scanning mode (DSLM) with a cubic phase mask (CPM) produced on a SLM. Alternatively, they can also be generated directly with a tilted plano–convex cylindrical lens [145] or a 1D CPM. Figure 21 shows an example of a DSLM LS generated with Airy beams. Figure 21(a) shows the typical asymmetric L-shaped intensity profile of Airy beams. Here, the CPM was calculated to cover twice the FOV of the Gaussian DSLM example. Because of the exotic nature of the 3D PSF of Airy beams, the definition of the DOF is not as clear as in the previous examples. In this case, it has been shown that the FOV width is proportional to 6α, where α is the strength parameter of the CPM [65]; see Table 3. Besides, it is usual to use square-shaped pupils to generate Airy beams to ensure the separability of the intensity PSF in Cartesian coordinates [160]. This simplifies the MTF analysis and speeds up the deconvolution process. In the present example, the CPM is limited by a circular pupil for an easy and fair comparison with the other examples and, thus, the resulting intensity distribution is no longer separable in Cartesian coordinates. As a consequence, a faint residual radial structure and a slight radial deformation of the lateral excitation PSF are observed in Fig. 21(a). Nevertheless, this does not considerably impact the contrast of the scanned LS, as shown in Fig. 21(b), where the main LS core can be seen together with a set of well-contrasted secondary sheets. Multiplication with the detection PSF preserves the main features and the periodicity of the LS intensity distribution. Thus, the overall MTF of Fig. 21(c) presents an extended support that points to an increased axial resolution. Although this is actually the case, as evidenced by the curves in Fig. 21(d), where the MTF profile for an ALS extends beyond the blue reference curve, the MTF profile also reveals diminished contrast in the medium-frequency range as compared to the model case. Deconvolution is, thus, required to recover the contrast and the full optical sectioning. Here it is important to note that the use of ALSs may preclude the use of higher-NA (shorter-DOF) detection objectives, because the lobe structure must remain in focus.
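
A minimal NumPy sketch of this kind of calculation is given below (the phase-mask strength α, grid size, and pupil normalization are illustrative and do not reproduce the exact example of Fig. 21): a cubic phase is applied over a circular pupil and the focal-plane excitation pattern is obtained by a scalar Fourier transform.

```python
import numpy as np

n, alpha = 512, 7.0                       # alpha: cubic-phase strength (illustrative)
u = np.linspace(-1, 1, n)
uu, vv = np.meshgrid(u, u)                # normalized pupil coordinates
aperture = (uu**2 + vv**2) <= 1.0         # circular pupil, as in the example above

cpm = np.exp(1j * 2 * np.pi * alpha * (uu**3 + vv**3))    # cubic phase mask
pupil = aperture * cpm

# focal-plane excitation PSF (scalar Fourier-optics sketch)
psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))) ** 2
psf /= psf.max()

# the asymmetric, L-shaped Airy lobe structure appears along the two pupil axes
print("peak position (row, col):", np.unravel_index(psf.argmax(), psf.shape))
```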

Figure 21. Example of LS generation for DSLM with Airy beams. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for DSLM with Airy beams (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.

ALSs provide up to tenfold larger FOVs than those obtained with a Gaussian beam, while keeping high contrast and resolution. Also, in contrast to BBs, scanned ALSs do not require confocal line detection to remove the secondary ring structure. However, the characteristic lobe structure in the axial detection direction requires the use of deconvolution to recover both the high axial resolution and the full optical sectioning capabilities. Also, just like optical lattices, ALSs excel at confining the excitation to the dimensions perpendicular to the detection path and at reducing peak intensities, minimizing phototoxicity. Thus, ALSs have been successfully applied to image medium-sized samples, from small cell spheroids to sections of small vertebrates [65] and mouse neural tissue [161].

As we have shown in this section, a LS can be engineered in many ways to push forward the capabilities of the microscope. Although this is usually done at the expense of increasing the complexity of the optical setup, or of introducing the need for digital post-processing, the benefits in terms of the quality and reliability of the obtained biological data warrant the added effort. Nevertheless, as it is usually difficult to quantify the overall benefit of such advanced approaches for particular biological experiments, a sample-based comparative analysis is recommended, such as the one presented in Andilla et al. [70] for tissue-mimic samples.

6. Advanced Optics and Photonics Technologies in LSFM

In this section, our aim is to describe the most advanced optics and photonics technologies that have been incorporated into LSFM. These normally modify the standard way of detecting the optical signals in the LSFM. Most of the methods described in this section involve multispectral detection, hyperspectral analysis, AO or wavefront engineering, or advanced image-processing tools. These provide LS with enhanced capabilities such as better spatio-temporal resolution, increased contrast, endogenous label-free imaging, and intrinsic specificity.

6.1. Advanced Wavelength Filtering Modalities

6.1a. Simultaneous Multicolor Detection

In LSFM, as in any other fluorescence microscope, the specific signal emitted by the feature of interest is efficiently collected by combining laser excitation and filtering of the outgoing light. However, detection of multiple colors is an important requirement for biological applications where multiple structures are labeled with different fluorophores. For example, dual-color imaging is an important tool for many studies in which it is important to know whether two proteins co-localize in a given structure [162], or to detect intracellular environmental changes by measuring the ratio between the intensities of different emission wavelengths of a given fluorescent biosensor [163]. Simultaneous acquisition of two colors can be performed by using a dichroic mirror to separate the signal onto two synchronized detectors [88], or by using commercial solutions for simultaneous acquisition on the same camera. This approach has been successfully implemented in a LSFM for fast imaging of the zebrafish heartbeat, in order to see the red blood cells in the context of the heart walls that produce their movement [88].

In the nonlinear regime, and in contrast to linear multicolor excitation, the use of two synchronized lasers can generate multicolor excitation. In this case, three different excitation spectra can be generated, centered at three different wavelengths [164]. Mahou et al. [68] demonstrated the simultaneous acquisition of three colors using two spatially and temporally synchronized pulse trains. In particular, each of the two laser beams generates its own TPEF signal, in the blue (TPEF of laser 1) and in the red range (TPEF of laser 2). In addition, due to wavelength mixing, the spatiotemporal overlap of the two beams provides an additional TPEF band in the green–yellow range; see Fig. 22(a). This method has been reported for simultaneous three-color imaging of fly embryos and multicolor imaging of brainbow-labeled tissues.
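
As a hedged numerical illustration of the mixing term (the wavelengths below are generic and are not those used in Ref. [68]): since the mixed process absorbs one photon from each beam, it is equivalent to degenerate two-photon excitation at an effective wavelength satisfying

$$\frac{2}{\lambda_{\mathrm{mix}}}=\frac{1}{\lambda_1}+\frac{1}{\lambda_2}\;\;\Rightarrow\;\;\lambda_{\mathrm{mix}}=\frac{2\lambda_1\lambda_2}{\lambda_1+\lambda_2},$$

so that, for example, λ1 = 850 nm and λ2 = 1100 nm give λmix ≈ 959 nm, i.e., an excitation band lying between the two single-beam TPEF bands.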

Figure 22. Optical setups for (a) multicolor 2P-LSFM, which combines bidirectional multicolor two-photon excitation and multispectral detection on a single camera [68], and (b) hyperspectral LS allowing multiple use of fluorescent markers (adapted with permission from Jahr et al., Nat. Commun. 6, 7990 [2015] [97]; published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI).

When the full spectrum of the fluorescence emission is required, a complete hyperspectral acquisition can be implemented in a LSFM [97]. This takes advantage of the DSLM configuration, which allows the excitation of a single line in the sample volume, as shown in Fig. 22(b). A hyperspectral camera, using a diffraction grating to split the wavelengths across one dimension and a de-scan mirror to project the line onto the same position on the detector, acquires the spectrum of every point of the excited line simultaneously. Then, by moving the line across the sample plane, a hyperspectral 2D image is obtained. The volumetric data are then acquired in the standard way, resulting in an enriched multidimensional image. With this configuration, it is possible to acquire spectral information of large developing samples, such as zebrafish and Drosophila embryos, enabling unmixing of the signals coming from the different fluorescent labels and from the intrinsic fluorophores. This method is potentially useful for overcoming the three-color limit in LSFM, enabling a larger number of fluorescent labels to be incorporated into the biological samples.
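
In terms of data handling, the acquisition reduces to stacking one dispersed-line frame per scan position. The following is a minimal NumPy sketch under assumed conventions (frame layout, axis order, and sizes are illustrative, not the format of Ref. [97]).

```python
import numpy as np

def assemble_hyperspectral_cube(frames):
    """Stack dispersed-line frames into a (y, x, wavelength) cube.

    `frames` is an iterable of camera frames, one per position of the scanned
    excitation line; each frame is assumed to have shape (n_x, n_lambda), with
    the spatial coordinate along the line on one axis and the dispersed
    spectrum on the other (illustrative layout).
    """
    return np.stack(list(frames), axis=0)      # -> (n_y, n_x, n_lambda)

# toy example: 200 scan positions, 512-pixel line, 128 spectral channels
frames = (np.random.poisson(50.0, size=(512, 128)).astype(float) for _ in range(200))
cube = assemble_hyperspectral_cube(frames)
print(cube.shape)                              # (200, 512, 128)
spectrum = cube[100, 256, :]                   # emission spectrum of one voxel
```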

6.1b. Temporal Gated Detection (FLIM)

The temporal response of the fluorescence of a certain dye or fluorescent molecule adds further capabilities for discriminating between labeling species. As happens with other microscopy techniques [95,165], LSFM can benefit from implementing fluorescence lifetime imaging (FLIM) in the frequency domain [94]. Figure 23 shows the original optical setup reported for combining LSFM and FLIM. As shown in Fig. 23, an intensified gated CCD is required for the 2D acquisition of the fluorescence lifetimes. The gating of the intensifier is synchronized with the megahertz-range modulation of the excitation intensity, and the demodulation of the detected light depends on the lifetime of the emitter. The natural confinement of the excitation light in LSFM avoids undesired out-of-plane illumination and, therefore, the associated bleaching that could limit FLIM detection [94]; this limitation can become dramatic when monitoring living processes. Using a similar approach [96], the application of both FLIM and Förster resonance energy transfer (FRET) techniques in a single microscope [166] shows that monitoring cell apoptosis in a 3D sample is possible by combining FRET, FLIM, and LSFM.
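
For reference, the standard frequency-domain estimators can be written compactly. The sketch below (NumPy, illustrative; it assumes a homodyne acquisition with equally spaced gate phases and a zero-lifetime reference calibration already applied, and is not the processing pipeline of Ref. [94]) extracts the per-pixel phase and modulation lifetimes.

```python
import numpy as np

def fd_flim_lifetimes(images, f_mod):
    """Phase and modulation lifetimes from gated frequency-domain FLIM data.

    `images`: array (n_phases, ny, nx), one image per equally spaced gate
    phase over one modulation period; `f_mod`: modulation frequency in Hz.
    Assumes a zero-lifetime reference calibration has already been applied.
    """
    images = np.asarray(images, dtype=float)
    n = images.shape[0]
    phases = 2 * np.pi * np.arange(n) / n
    dc = images.mean(axis=0)
    ac_cos = np.tensordot(np.cos(phases), images, axes=(0, 0)) * 2 / n
    ac_sin = np.tensordot(np.sin(phases), images, axes=(0, 0)) * 2 / n

    phi = np.arctan2(ac_sin, ac_cos)                      # phase shift per pixel
    m = np.hypot(ac_sin, ac_cos) / dc                     # modulation depth per pixel
    omega = 2 * np.pi * f_mod
    tau_phi = np.tan(phi) / omega                         # phase lifetime
    tau_mod = np.sqrt(np.clip(1 / m**2 - 1, 0, None)) / omega   # modulation lifetime
    return tau_phi, tau_mod
```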

Figure 23. SPIM-FLIM optical setup [94]. See the main text for more details.

FRET can also be used in combination with LSFM without FLIM detection, as shown in Ref. [167]. There, a standard ratiometric FRET readout is used to monitor the calcium dynamics of transgenic Arabidopsis plants, taking advantage of the LSFM planar sectioning.

6.1c. Hyperspectral Detection (Raman)

LSFM is normally based on fluorescence, which usually requires labeled samples. However, other contrast mechanisms can be used (see Section 2.3). In fact, Raman imaging produces contrast by means of the inelastic scattering interaction of light with matter and, as such, can produce label-free images. However, due to the very low efficiency of the spontaneously emitted Raman signal (about 1 in 10⁸ incident photons is Raman-shifted), the long image acquisition time (normally from a few seconds to several hours) makes it impractical for studies requiring large image mappings of the sample. This is because, in a point-detection scheme, the total time for acquiring an image is the acquisition time multiplied by the number of points needed to produce it. Because of its geometry, LS illumination provides instantaneous optical sectioning of large FOVs while, at the same time, the signal collection is performed in a very efficient way. These two characteristics result in an enhanced system for exciting and then collecting the weakly emitted Raman photons, and in a highly promising scheme for obtaining fast full-2D images using the spontaneous Raman signal as a contrast mechanism [99,116]. Using a custom-made spectrograph in a LS configuration, the possibility of performing optically sectioned Raman imaging on 20 μm polystyrene beads was demonstrated [116]. In this case, the 50 μm × 150 μm images were taken at a single vibrational Raman frequency, and the acquisition time was 2 min each. Similarly, discrete images at different vibrational Raman bands were acquired on zebrafish over a 2.3 mm × 2.3 mm FOV [99] in about 10 s each. More recently, a DSLM configuration was used to perform widefield spontaneous Raman imaging. This was combined with the use of an interferometric tunable filter, thus obtaining a high-resolution 3D spectrally resolved Raman image [100]; see Fig. 24(a). In this case, the time required to obtain a fully spectrally resolved Raman image is the same as that required to obtain a single point in a conventional Raman microscope. Furthermore, the feasibility of the technique for imaging solid 3D samples was demonstrated by using a composition of polystyrene beads and lipid droplets immersed in agar. Additionally, the potential of the technique for studying living biological samples was also demonstrated using spontaneous Raman imaging in the CH stretching region (2600–3200 cm⁻¹).

Figure 24. Raman LS optical setups. (a) A CW laser forms the LS on the sample plane using a galvo mirror (DSLM). A tunable filter (TF) scans the spectrum with a cut-on edge that depends on θ, the tilt angle of the filter. Inset: spectral knife edge (KE) trace extraction from a 2D image stack at a region of interest I(x,y), with N as the total number of images, each taken at a different filter tilt angle [100]. (b) Fourier transform-based Raman LS. The LS illumination is produced with a high-power laser and a cylindrical lens. The spectra are recovered using a Fourier-transform imaging spectrometer (red path). See the main text for a detailed description [101].

Recently, another LSFM Raman implementation, combined with a novel Fourier transform (FT)-based spectral imaging method, was presented [101]. In this technique, the full 2D image obtained with the LSFM is split into the two arms of a Michelson interferometer and recombined on the CCD camera; see Fig. 24(b). By acquiring images while varying the optical path difference between the two arms of the interferometer, the full spectral Raman information of the sample can be retrieved for every pixel. With this setup, hyperspectral Raman data volumes of polymer micro-beads and a zebrafish embryo were demonstrated at high spatial and spectral resolution.
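
Conceptually, the spectral recovery is ordinary Fourier-transform spectroscopy applied per pixel: the intensity recorded as a function of the optical path difference (OPD) is an interferogram whose Fourier transform gives the spectrum versus wavenumber. A minimal NumPy sketch under these assumptions (apodization and phase correction omitted; not the actual pipeline of Ref. [101]):

```python
import numpy as np

def spectra_from_interferograms(stack, opd_step_cm):
    """Per-pixel spectra from a Fourier-transform imaging stack.

    `stack`: array (n_opd, ny, nx), one image per optical path difference step;
    `opd_step_cm`: OPD increment between frames, in centimeters.
    Returns (wavenumbers in cm^-1, spectra of shape (n_opd//2 + 1, ny, nx)).
    """
    stack = np.asarray(stack, dtype=float)
    interferograms = stack - stack.mean(axis=0)            # remove the DC term
    spectra = np.abs(np.fft.rfft(interferograms, axis=0))  # magnitude spectrum
    wavenumbers = np.fft.rfftfreq(stack.shape[0], d=opd_step_cm)
    return wavenumbers, spectra
```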

6.1d. Non-Filtered Detection (Elastic Scattering)

Most implementations of LSFM [1] rely on sample labeling with fluorescent reporters for imaging the targeted biological structures. Fluorescence excitation in LSFM is mediated by a laser LS that passes through the sample, which scatters part of the laser light toward the detection camera. This light is usually blocked and discarded before arriving at the camera by using a specific filter. However, this scattered light carries information about the structure and composition of the sample. When the scattering is inelastic, the Raman spectrum can give information about the chemical composition of the 3D sample, as described in Section 6.1c. On the other hand, when the scattering is elastic, it gives information about the structure of the sample at different spatial scales, as discussed in Section 2.3, and with a minimal light dose deposited onto the sample. LS elastic scattering microscopy has been implemented and demonstrated for plant-root phenotyping in transparent soils and for quantitative long-term monitoring of lettuce roots [114]. Nevertheless, the contrast and the resolution of the obtained images were seen to be highly dependent on the turbidity and refractive index heterogeneity of both the sample and the embedding substrate. Adding polarization-sensitive detection to the optical system may help increase the SNR and the contrast of features in samples with increasing turbidity [168]. One of the main issues when coherent light is used for LS generation is the formation of speckle patterns that degrade the LS images by introducing spurious intensity modulations at the structures of interest [169]. By additionally introducing wavelength and spatial modulation into the light source, contrast from the elastically scattered light can be better retrieved despite speckle formation. For example, one alternative to mitigate speckle formation is to use light sources of reduced temporal coherence for LS generation. Supercontinuum laser sources and light-emitting diodes (LEDs) are good examples of such low-coherence light sources, and they have been used to obtain speckle-reduced LS images of biological samples such as zebrafish and C. elegans [170]. Another reported strategy is to use a custom phase mask to generate a multibeam LS without inter-beam coherence, illuminating the FOV of the microscope from diverse incident angles [171]. This helps to decrease the spatial coherence of the effective illuminating LS and, thus, to reduce the speckle in the captured elastic-scattering optical section. The use of elastic scattering in LSFM is still not as popular as its fluorescence-based equivalent, mostly because of its lack of specificity. However, if specificity and selectivity are not issues in the imaging process, elastically scattered light can be used to generate optically sectioned images very similar to those obtained from histological cuts, and it is highly advisable for in vivo applications that require fast, very low-light-dose imaging, or for those in which it is not straightforward to incorporate a fluorescent marker or to genetically modify the specimen.

6.2. Advanced Acquisition Modalities

6.2a. Confocal Line Scanning

As in many imaging techniques, out-of-focus light results in an unwanted blurred background. This background is additive and reduces the SNR of the acquired image. In confocal point-scanning microscopy, using a descanned configuration and placing a pinhole at the confocal point suppresses most of the out-of-plane light. In the case of LSFM, because of the LS configuration, confocality in the imaged plane is ensured. However, some strategies have used a confocal-like configuration in the DSLM modality, resulting in an enhanced SNR. This is done by selectively activating three or four lines of the camera pixels in synchrony with the laser scan. While the active lines of pixels produce the DSLM image, the inactive pixels reject any out-of-focus contribution in the perpendicular, z, direction. Different implementations to achieve confocality in a DSLM modality are possible and are described as follows.

In one such implementation, a confocal slit can be placed in the descanned imaging plane of the detection arm using a pair of synchronized GMs. The first GM scans the excitation line through the sample. The second one descans the emitted signal to keep the light coming from the desired region on the confocal slit. A third synchronized GM then re-scans the confocally filtered signal onto the acquisition camera to build the image line by line [172]. This has been employed to image the Purkinje cells in the whole cerebellum of a mouse. Another approach is to use the rolling shutter available in modern sCMOS cameras. This acquisition modality allows selecting the active region of the detector as a function of time. Synchronizing the active region with the size and position of the scanned laser line creates a virtual slit that moves together with the emitted light [152], as shown in Fig. 25(a). Confocal line acquisition has also been demonstrated in combination with Bessel beams in linear fluorescence, providing an image quality similar to that of a conventional DSLM while extending the FOV and reducing the scattering effects on the excitation beam [155]. Figure 25 shows a comparison of DSLM with Gaussian beams [Figs. 25(b) and 25(e)], with Bessel beams [Figs. 25(c) and 25(f)], and with Bessel beams using a rolling shutter [Figs. 25(d) and 25(g)]. All the images were obtained in the eye of a zebrafish expressing a nuclear fluorescent marker. As can be seen, the use of Bessel beams increases the axial resolution of the image, and the confocal line strategy reduces the out-of-plane light in the acquisition.
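
The virtual-slit idea can also be emulated in post-processing, which is useful for prototyping before committing to a synchronized rolling-shutter acquisition. The following NumPy sketch (illustrative data layout and slit width; a real sCMOS implements the slit during the exposure itself) keeps only a narrow band of rows around the excitation line for each scan position.

```python
import numpy as np

def confocal_line_image(frames, line_rows, slit_half_width=2):
    """Emulate confocal-line (virtual slit) detection from a DSLM scan.

    `frames`: array (n_scan, ny, nx), one camera frame per position of the
    scanned excitation line; `line_rows`: row index of the excitation line in
    each frame. Only a band of +/- `slit_half_width` rows around the line is
    kept, which rejects out-of-focus light along the slit direction.
    """
    frames = np.asarray(frames, dtype=float)
    n_scan, ny, nx = frames.shape
    image = np.zeros((ny, nx))
    for frame, row in zip(frames, line_rows):
        lo, hi = max(0, row - slit_half_width), min(ny, row + slit_half_width + 1)
        image[lo:hi] += frame[lo:hi]
    return image
```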

Figure 25. Confocal line scanning DSLM. (a) The principle of confocal line scanning using a DSLM and a sCMOS camera. XY and XZ maximum intensity projections of 3D stacks of a 4 dpf zebrafish expressing H2B-GFP are shown for the (b), (e) Gaussian; (c), (f) Bessel; and (d), (g) Bessel confocal line cases. See the main text for more detail.

Some sCMOS cameras are built with two separate detector halves. Such an architecture allows parallelizing the acquisition into two separate readout arms, doubling the acquisition speed of the system [173].

6.2b. Adaptive Optics

AO has been used extensively in microscopy in several imaging techniques [174]. The main idea is to use an active optical element (such as a DM, a SLM, or simply a movable standard optical element) to compensate for the aberrations induced by the optical system or by the sample itself [50]. The use of AO for fluorescence microscopy has been demonstrated in point-scanning confocal [175], TPEF [176,177], and localization microscopy [178,179], among others.

It is worth noting that the excitation and emission paths can be affected by different types of optical aberration and should then be corrected independently [70]. In the case of LSFM, this fact is more evident, as the excitation and emission paths are completely independent. The LS can be distorted by the sample and has to be corrected to maintain the axial resolution of the microscope [180,181], while, on the detection side, the aberrations introduced by the sample and the sample holder reduce the contrast and quality of the acquisition. To compensate for these aberrations, they can be directly measured using a guide star [176]. Once measured, the aberrations can be compensated for by using a DM [50]. This aberration compensation will effectively work within a certain volume around the guide star known as the isoplanatic patch. In samples in which the refractive index varies such that the isoplanatic patch approximation is not valid, multiple corrections may be required. Another option is to model the induced aberrations and compensate for them by direct inversion of the calculated aberration on the DM [182]. Figure 26(a) shows the implementation of AO in the detection path of a LSFM, where the sample-induced aberrations were compensated for by using a DM and a wavefront sensor [50].
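
A common way to express the measured aberration and the corresponding DM command is a least-squares projection onto a few low-order Zernike modes. The sketch below (NumPy, illustrative; it ignores the DM influence functions and any wavefront-sensor specifics) fits defocus, astigmatism, and coma to a measured wavefront map and returns the sign-reversed correction.

```python
import numpy as np

def low_order_zernikes(rho, theta):
    """A few low-order Zernike modes (defocus, astigmatism, coma) on the unit pupil."""
    return np.stack([
        np.sqrt(3) * (2 * rho**2 - 1),                         # defocus
        np.sqrt(6) * rho**2 * np.sin(2 * theta),               # oblique astigmatism
        np.sqrt(6) * rho**2 * np.cos(2 * theta),               # vertical astigmatism
        np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.sin(theta),   # vertical coma
        np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta),   # horizontal coma
    ], axis=-1)

def fit_correction(wavefront, mask, rho, theta):
    """Least-squares Zernike coefficients of `wavefront` inside the pupil `mask`,
    and the sign-reversed phase map to command on the DM (sketch only)."""
    Z = low_order_zernikes(rho[mask], theta[mask])     # (n_pixels, n_modes)
    coeffs, *_ = np.linalg.lstsq(Z, wavefront[mask], rcond=None)
    correction = np.zeros_like(wavefront)
    correction[mask] = -(Z @ coeffs)
    return coeffs, correction
```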

Figure 26. AO for LSFM. (a) The optical setup for correcting the aberrations using AO in the detection path of a LSFM (enclosed in the green square). The main components are: the deformable mirror (DM) that is the active element that can modify the wavefront, and the Hartmann–Shack wavefront sensor (HSWF) that measures the amount of wavefront aberration that should be compensated. (b) Maximum intensity projection of a 3D stack of 100 images (z spacing 1 mm) of a large MCTS expressing a fluorescent nuclear protein, without (w/o AO) and with AO (AO). Scale bar: 50 μm. Adapted with permission from Jorand et al., PLoS ONE 7, e35795 (2012) [50]. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.

For example, when applying AO LS to MCTS samples, it has been shown that the images obtained using AO are much brighter and have higher contrast than images acquired without AO; see Fig. 26(b). Interestingly, the AO correction works even in the deeper layers (200 μm) of the MCTS. This improvement allows segmenting nuclei deep inside the MCTS, helping to retrieve more and better quantitative biological information from the samples.

6.2c. Wavefront Coding

As was briefly mentioned in Section 4.2c, WFC can be implemented in a LSFM to extend the DOF of the detection lens, enabling fast volumetric imaging [89]. As only the LS is scanned, the sample is not moved or disturbed in any way, helping to preserve its viability. In WFC approaches, a phase mask is used to produce an elongated axial PSF. Figure 27(b) shows how the original WFM detection PSF is transformed from its diffraction-limited shape (upper left) to a typical elongated PSF having an extended DOF (bottom left). The phase mask most used for WFC has a two-dimensional cubic profile similar to the one used for Airy beam excitation (see Section 5.4g).

Figure 27. LSFM with an extended DOF. (a) Zoom-in of the optical setup for WFC-LSFM [see also Fig. 6(c)] at the sample space. Some positions of the LS in the axial scanning, from Zo to Zn, are shown in green; the detection PSF is shown in red. (b) Comparison of the effective PSFs for three LS axial positions for WFM (top) and WFC-LSFM (bottom). (c) Comparison of the images obtained with standard LSFM (left) and WFC-LSFM (right). Images are maximum intensity projections of a 3D stack of the fluorescent pharynx of a C. elegans [89]. Insets show different cuts of the data at the marked dotted lines. Scale bar 50 μm.

Because of the use of the phase mask, the obtained images present distortions. These, however, can be restored by digital deconvolution [160]. Figure 27(c) shows images of a C. elegans pharynx captured with both a standard LSFM (left) and the WFC-LSFM (right) after deconvolution. The many advantages of WFC-LSFM have been demonstrated for obtaining high-resolution volumetric images of fast dynamics in living samples, 3D particle tracking at unprecedented speeds (>70 volumes/s) [89], and calcium functional imaging of zebrafish embryos [91]. Another approach to extend the DOF uses a block of material with a different refractive index between the objective and the sample. This introduces a large spherical aberration that effectively extends the DOF [183]. Although this configuration can be used for fast volumetric imaging, the technique is not as versatile as WFC, as it does not allow precise control over the DOF. One of the limitations of extended-DOF methods is their lower noise-related resolution limit when compared to standard detection schemes. This can be compensated for by increasing the laser excitation power or, in the case of WFC, by finely tuning the mask parameters to obtain the best trade-off between DOF and noise sensitivity.
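
A minimal example of such a restoration is a Wiener inverse filter built from the extended-DOF detection PSF. The NumPy sketch below is illustrative only (a single assumed SNR value acts as regularization); practical WFC-LSFM pipelines typically use calibrated PSFs and iterative deconvolution.

```python
import numpy as np

def wiener_deconvolve(image, psf, snr=100.0):
    """Minimal Wiener deconvolution of a wavefront-coded image.

    `psf` is the extended-DOF detection PSF (same shape as `image`, centered);
    `snr` is an assumed signal-to-noise ratio that regularizes the inversion.
    """
    otf = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))
    filt = np.conj(otf) / (np.abs(otf) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))
```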

6.3. Single Particle and Superresolution Modalities

6.3a. Single Particle Dynamics and Fluorescence Correlation Spectroscopy

In the field of single-molecule studies in biological samples, LSFM has enabled new ways to perform experiments. In single particle tracking (SPT), in which the fluorescence emitted by a single particle is very dim, the intrinsic optical sectioning given by the LS excitation is an important advantage for achieving the SNR required for tracking. Other high-resolution sectioning methods, such as TIRF [184] or HILO [185–187], have been used for these purposes, but they restrict the studies to the plane of the sample in close proximity to the coverslip. Thanks to its uncoupled design, the LS approach can maintain good imaging conditions at arbitrary planes deep inside a 3D sample. These capabilities have been exploited for SPT in order to measure the diffusion properties of tagged proteins inside a living organism [188] and for dynamic 3D studies of RNA mobility in living Chironomus tentans salivary gland cell nuclei [189,190].

In the same context, molecular dynamics have been studied using LS illumination and fluorescence correlation spectroscopy (FCS) [47,191]. As for SPT applications, the orthogonal illumination used in LSs produces the ideal imaging conditions, in terms of signal contrast and SNR, required for measuring the mobility and reaction parameters of molecules inside living cells by FCS measurements in 3D samples [191,192]. Interactions between proteins can also be studied by extending the FCS methodology to fluorescence cross-correlation spectroscopy using two colors acquired simultaneously [193].
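
At the analysis level, FCS reduces to computing the normalized autocorrelation of the fluorescence intensity trace and fitting it with a diffusion model. A minimal NumPy sketch of the correlation step is given below (linear lags only; real FCS software uses multi-tau correlators and model fitting, which are omitted here).

```python
import numpy as np

def fcs_autocorrelation(trace, max_lag):
    """Normalized fluorescence autocorrelation G(tau) of an intensity trace.

    G(tau) = <dF(t) dF(t+tau)> / <F>^2, computed for lags 1..max_lag (in units
    of the sampling interval). Fitting G(tau) with a diffusion model (not shown)
    yields concentrations and diffusion times.
    """
    f = np.asarray(trace, dtype=float)
    df = f - f.mean()
    g = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        g[lag - 1] = np.mean(df[:-lag] * df[lag:]) / f.mean() ** 2
    return g
```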

6.3b. Localization and Nanoscopy Techniques

Localization microscopy is a family of methods for SR that uses samples sparsely labeled with photoactivatable fluorescent proteins. SR is achieved by precisely determining the positions of the separated single molecules by fitting the detected signals to a model function. Most localization microscopy methods exploit this same principle, but use either different optics or fluorophores with different photo-physical properties; examples are PALM, STORM, and PAINT [194]. The structural resolution in the final images is determined by the density of molecule positions as well as by the localization accuracy [195]. As with SPT methods, the challenge in standard 2D localization microscopy is mostly to reduce the out-of-focus fluorescence, typically by using the TIRF and HILO methods. For extending localization-based SR to 3D, LS methods have been gaining importance in the past few years. The first demonstration of localization-based SR in a standard LS configuration was presented by Zanacchi et al. [48]. Later, the concept was extended to more convenient sample-mounting implementations for SR imaging, such as lattice LSFM [196]. An alternative that avoids the mechanical constraints of using two high-NA orthogonal objectives and further guarantees sample stability is to build the LS on top of a conventional microscope frame [82,197]. Some of these methods are quite sophisticated and use unconventional elements, such as an AFM cantilever, to bring the LS close to the sample. Recently, new configurations have exploited the idea of using the same high-NA lens for both illumination and fluorescence detection. Particularly important is the emergence of soSPIM methods [81]. In this configuration, microfabricated mirrored cavities are used to confine the biological sample and to deflect the LS onto the selected region of the specimen. This method has been applied to 3D single-molecule-based SR imaging of whole cells or cell aggregates. This approach is very convenient and relatively easy to implement, and provides background-free 3D SR imaging deep inside biological samples.
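
The core localization step is a least-squares fit of a PSF model to each isolated spot. The sketch below (NumPy/SciPy, illustrative: a symmetric 2D Gaussian as PSF model, no uncertainty estimation or quality filtering) returns the sub-pixel position of a single-molecule spot in a small region of interest.

```python
import numpy as np
from scipy.optimize import curve_fit

def _gauss2d(coords, x0, y0, sigma, amplitude, offset):
    x, y = coords
    return (offset + amplitude *
            np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))).ravel()

def localize_spot(roi):
    """Sub-pixel position of an isolated single-molecule spot in a small ROI.

    Least-squares fit of a symmetric 2D Gaussian (a common approximation of
    the in-focus PSF).
    """
    ny, nx = roi.shape
    y, x = np.mgrid[0:ny, 0:nx]
    p0 = (nx / 2, ny / 2, 1.5, roi.max() - roi.min(), roi.min())
    popt, _ = curve_fit(_gauss2d, (x, y), roi.ravel(), p0=p0)
    return popt[0], popt[1]          # (x0, y0) in pixels
```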

Stimulated emission depletion (STED) is a SR microscopy approach based on scanning two superimposed light sources, one for fluorescence excitation and another, focused into a “doughnut” shape, for fluorescence depletion, which together allow a superresolved spot on the sample to be imaged [198]. Following the same principle, a superresolved LS can be generated. To extend the STED principle to LSFM, the depletion beam is shaped into a double sheet (TEM01 profile) that wraps around the excitation LS at the focus of the objective [92]. In this manner, the fluorescence emission in the axial direction can be confined to a plane thinner than the diffraction-limited LS, enabling up to a 1.5-fold resolution improvement in the axial direction. Recently, a technique based on reversible saturable optical fluorescence transitions (RESOLFT) was reported for LS, more fully exploiting the STED-type SR capabilities and achieving a 5–12-fold axial resolution improvement [199]. Basically, the use of RESOLFT fluorescent proteins and STED illumination enables temporarily switching off the fluorophores located above and below the focal plane for each scanning step, producing images with improved axial resolution. Compared to STED-LS, this method offers axial SR at reduced excitation power, and thus the typical low-photobleaching properties of standard LSFM are preserved. The advantages of RESOLFT-LS have been applied to in vivo imaging of the HeLa cell cytoskeleton and of HIV-1 assembly sites with minimal photodamage.

6.3c. Structured Illumination LS

A different method to achieve a lateral resolution exceeding the classical diffraction limit in a WFM is the use of spatially structured illumination (SI). In this method, the spatially structured excitation light encodes the high-resolution information of the object in the form of moiré fringes [200]. Normally, several images taken while varying the illumination pattern parameters are required to demodulate the intensity and extract the high-resolution information. Using SI methods, the resolution can be pushed up to twice the diffraction-limited value of the imaging lens. In addition, if SI is applied in multiple directions within the same FOV, an isotropic PSF can be obtained. Apart from the resolution enhancement, SI techniques can also be used for background rejection and contrast enhancement of the captured widefield images [201].

When combined with LSFM, SI comes in handy for both contrast [8] and resolution [202] improvement of the captured images. Contrast degradation in LSFM is related mostly to light scattering in large samples or to increased background in densely labeled samples. In the first two demonstrations of SI for LSFM, SIM [8] and HiLo [203], the SI was produced by intensity modulation of the laser light with an acousto-optic modulator in a DSLM configuration. The key difference between the two is the method of demodulation. Whereas SIM demodulation is based on a phase-stepping algorithm that requires at least three grid-illumination images, HiLo demodulation uses only two images sequentially acquired with uniform and structured sheet illumination; an optically sectioned image is then synthesized by fusing high and low spatial frequency information from the two images. Using HiLo microscopy, hippocampal pyramidal neurons targeted with eGFP have been imaged in an optically cleared whole mouse brain. The two methods mentioned above were devised mainly for contrast enhancement; other SI approaches have been proposed to increase the resolution instead. For example, ultrathin planar illumination produced by scanning arrays of Bessel beams can be used for SR SI microscopy [9]. This has been demonstrated for recording rapid 3D dynamics beyond the diffraction limit in thick or densely fluorescent living specimens. In the same direction, SR improvement in LSFM was clearly illustrated, at least in one dimension, by Chen et al., who reported LLSs as an alternative for performing 3D SI imaging of biological samples [75]; see also Section 5.4f.
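
For the phase-stepping (SIM) route, the classic square-law demodulation gives an optically sectioned image directly from three grid images shifted by one third of the grid period. The following NumPy sketch illustrates that step only (it is not the complete pipeline of Ref. [8], which also handles scan synchronization and stripe artifacts).

```python
import numpy as np

def sim_sectioned(i1, i2, i3):
    """Optically sectioned image from three structured-illumination images.

    i1, i2, i3 were acquired with the illumination grid shifted by 0, 1/3, and
    2/3 of its period; the square-law combination below recovers the amplitude
    of the in-focus modulated component, while (i1 + i2 + i3) / 3 gives the
    conventional widefield image.
    """
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2) * np.sqrt(2) / 3
```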

In practical terms, SI for resolution improvement requires the generation of at least two LSs that add up coherently at the sample plane, creating the structured light field required for encoding the high-frequency information of the object. Figure 28(a) shows a structured LS generated by shaping the laser beam to create two LSs propagating symmetrically toward the FOV center. This can be used to achieve SR along one direction but, if an isotropic resolution enhancement is needed, a more sophisticated approach involving multiple excitation objectives has to be implemented. In the triSPIM approach, three identical objectives are used, positioned at the corners of a cube following a dual iSPIM architecture [204]. This allows the implementation of different SI LS variants, including standard and interfering lattice LSs. For lateral isotropic resolution, two objectives generate the structured LS along the three required directions, while the remaining one acts as a detection lens. By performing this SI methodology on the three possible sample planes, one per detection objective, in combination with the multiview fusion technique of diSPIM, a nearly isotropic resolution would be obtained. Theoretically, in its best configuration, triSPIM can achieve down to 120 nm lateral and 190 nm axial resolution, although an experimental demonstration of these limits has yet to be reported. Recently, another three-objective approach has been proposed: coherent SI LSFM [202]. In this case, the arrangement of the objectives is coplanar, as opposed to the triSPIM, where the three objectives are orthogonal to each other. Here the structured LS is generated by two counterpropagating LSs produced by two opposing objectives. The flexibility given by the two excitation objectives is also exploited for rotating the structure and thus achieving isotropic lateral SR. Figure 28(b) shows how the coherent structure is generated and also how this structure can be rotated at the sample plane. Note that in coherent SI LSFM, the fringes are generated along the axis perpendicular to the beam propagation direction, as opposed to the single-objective LS SI, where the fringes are generated along the axis bisecting the angle between the LSs; see Fig. 28(a). Coherent SI LSFM offers lateral SR at the sub-100-nm level, which can be extended to the axial direction if sample rotation and image fusion are used. This method has been used for SR imaging of fluorescently labeled subcellular compartments within live yeast. Table 4 summarizes the presented LS SR implementations, along with their main capabilities and limitations.

Figure 28. LSFM with structured illumination. (a) Example of LS fringe structure generation by interference of two sheets, delimited by yellow and blue boundaries, which cross each other at the center of the FOV. (b) Example of multi-objective structured LS generation [202]. In this case, two complementarily tilted counterpropagating LSs generate a structured illumination pattern that can be rotated for isotropic SR. See text for detailed information.

In conclusion, a LSFM is a very versatile instrument: many of the most advanced microscopy techniques can readily be extrapolated and adapted to its decoupled illumination–detection architecture. Particularly for techniques based on image processing, LS provides an intrinsic boost in image quality and a SNR surplus that permits an estimation of the biophysical parameters of interest with an optimum photon budget.

Table 4. Summary of the LS Superresolution Methods

7. Conclusions

The intention of this paper has been to give an optics perspective of the LSFM field. Thus, we started by providing a qualitative description of what LSFM is and how it works, emphasizing the new degrees of freedom gained by uncoupling the excitation and detection arms. Then, we showed that, thanks to these new degrees of freedom, many of the typical constraints present in other imaging techniques can be minimized or overcome. We then highlighted the new enabling properties and their impact on biomedical research. These include: low photodamage and phototoxicity, fast 3D imaging, isotropic resolution, large FOVs, the possibility of imaging large samples (cleared or not), compatibility with different excitation regimes (linear and nonlinear), and combination with other imaging techniques (including SR). This exemplifies the range of biomedical applications that can be addressed with LS and its capability for opening up new avenues of biomedical research.

Sections 2 and 3 have been written to be accessible to any scientist wishing to have a first optical approach to the technique. In these sections, we give a glimpse of the different contrast mechanisms and how they are exploited for LSFM purposes. In addition, we set out the optical conditions necessary to understand how a LSFM is optically designed. This can also be taken as the starting point for those wanting to design a basic LSFM. For that, and taking into account the rapid expansion of LS techniques and the myriad of applications, the types of LSFM have been divided into three different cases in an effort to cover most imaging situations. Again, from an optics point of view, these take into account the possible size of the sample and, therefore, the FOV to be imaged. These correspond to objects larger than (i) a few millimeters, for ultrastructural studies of tissues and organs, (ii) several hundreds of micrometers, with resolutions up to the cellular level, and (iii) several tens of micrometers, aiming to obtain information on subcellular components.

Once the main basic optical concepts have been presented, Section 4 highlights the different practical architectures commonly implemented in LSFM. This emphasizes the versatility of implementation of LSFM and how it can be adapted to the study of many biological samples and conditions. This section also covers basic image-processing approaches.

Considering the currently reported techniques for producing a LS, in Section 5 we presented a formal Fourier analysis in which several optical parameters are systematically evaluated and quantified. Our analysis starts by calculating the PSF of a LS generated by a cylindrical lens. This is used as a gold standard and reference. Then we calculate the PSF and MTF of several DSLM configurations in which Gaussian, Bessel (standard, truncated, or forming a lattice), and Airy beams are used. Our analysis also includes the linear and nonlinear regimes. The performance of all the cases is evaluated in terms of the calculated MTF and compared with the cylindrical lens case, with an indication of their pros and cons.

Finally, in Section 6 we presented the different technical advances reported in LSFM that have been motivated by the extra degrees of freedom offered. As a result of having the collection and excitation arms decoupled, novel technologies, detection schemes, and image-processing strategies have been highlighted. Altogether, these new implementations have taken imaging to new levels, offering a new way to understand microscopy while providing, in a natural way, multispectral optically sectioned detection, faster volumetric imaging, and SR with minimal photodamage and phototoxicity, to mention just a few.

Here it is important to mention that LSFM is still an emerging technique, and many efforts continue to be made to get the best out of a LSFM. Current optimization efforts have focused on the design of an algorithm able to address, individually, every possible control of the illumination and detection arms, their focal positions, and the LS propagation properties [181]. This will lead toward a highly optimized imaging instrument. Furthermore, other efforts are now directed at actively controlling the light dose so that it is delivered only to the relevant region of the specimen [104,206,207]. All of this will result in a highly optimized light exposure of the sample. Finally, recent demonstrations take advantage of the natural scattering properties of tissues to generate LSs directly inside the biological sample [208]. All of these, together with the possibilities given by the new degrees of freedom and the multiple configurations, will make LSFM a powerful imaging technique with endless imaging possibilities.

Appendix A

The following appendix develops the integration needed to obtain Eq. (6) of the main text from Eq. (3). This provides details of the assumptions introduced and allows the use of intermediate expressions for numerical calculations in the case of non-analytic distributions.

As described in Eq. (3) of the main text, the probability of a two-photon absorption process is given by

$$p = \delta_{2p} N_0 \frac{I_{\nu_1}}{h\nu_1}\frac{I_{\nu_2}}{h\nu_2} \;\xrightarrow{\;I_{\nu_1}=I_{\nu_2}\;}\; \delta_{2p} N_0 \left(\frac{I}{h\nu}\right)^2.$$

Assuming that the number of available photons is much larger than the number of absorbers, the photon density (Iν/hν) can be written as the product of a constant total number of photons N, a transverse spatial distribution depending on the 3D coordinates, and, in the case of a pulsed laser, a temporal distribution, as follows:

$$\frac{I(\rho,\phi,z,t)}{h\nu} = N\,\epsilon(\rho,\phi,z)\,\vartheta(t).$$

Therefore, the probability of photon absorption per unit volume and time is

$$p = \frac{d^4N}{dx\,dy\,dz\,dt} = \frac{d^4N}{\rho\,d\rho\,d\phi\,dz\,dt} = \delta_{2p} N_0\,\epsilon^2(\rho,\phi,z)\,N^2\vartheta^2(t).$$

If we assume a pure Gaussian beam, the intensity distribution is described as follows:

$$\epsilon(\rho,\phi,z) = \epsilon(\rho,z) = \frac{1}{2\pi\sigma^2(z)}\, e^{-\frac{1}{2}\frac{\rho^2}{\sigma^2(z)}},$$
where σ is related to the beam waist w:
$$\sigma(z) = \frac{w(z)}{2} = \frac{w_0}{2}\sqrt{1+\left(\frac{z}{\pi w_0^2/\lambda}\right)^2}.$$

Finally, if we assume a Gaussian approximation for the temporal pulse shape, the pulse train can be described as

$$\vartheta(t)=\vartheta_0\sum_{m=0}^{n} e^{-\frac{1}{2}\frac{(t-m/f)^2}{\sigma_t^2}}.$$

The mean of the temporal response is set equal to 1 (per second) because the total number of photons is already represented by N. Therefore, the mean value is the integral of ϑ divided by the period (or, equivalently, multiplied by the frequency f):

$$\langle\vartheta\rangle = 1 = f\int_{-\infty}^{+\infty}\vartheta_0\, e^{-\frac{1}{2}\frac{t^2}{\sigma_t^2}}\,dt = f\vartheta_0\sigma_t\sqrt{2\pi} = f\vartheta_0\tau\frac{\sqrt{\pi}}{2\sqrt{\ln 2}} \;\;\Rightarrow\;\; \vartheta_0 = \frac{2\sqrt{\ln 2}}{\sqrt{\pi}}\frac{1}{f\tau}\langle\vartheta\rangle,$$
where τ is the FWHM (pulse width) of the Gaussian function. If we want to integrate the square of the temporal response, we get
$$\langle\vartheta^2\rangle = f\int_{-\infty}^{+\infty}\vartheta_0^2\, e^{-\frac{1}{2}\frac{2t^2}{\sigma_t^2}}\,dt = \vartheta_0^2 f\sqrt{\pi}\,\sigma_t = \vartheta_0^2 f\sqrt{\pi}\frac{\tau}{2\sqrt{2\ln 2}} = \sqrt{\frac{2\ln 2}{\pi}}\frac{1}{f\tau}\langle\vartheta\rangle^2 = \frac{g_p}{f\tau}\langle\vartheta\rangle^2.$$

Substituting the mean square of the temporal response, the photon absorption in a volume element becomes

$$d^3N = \delta_{2p} N_0 \frac{g_p}{f\tau}\, N^2\langle\vartheta\rangle^2\, \epsilon^2(\rho,z)\,\rho\,d\rho\,d\phi\,dz.$$

Finally, we can integrate the squared Gaussian transverse distribution ϵ over the transverse coordinates, which gives the variation of the photon absorption along the propagation axis:

$$\frac{dN}{dz} = \delta_{2p} N_0 \frac{g_p}{f\tau} N^2\langle\vartheta\rangle^2 \frac{1}{8\pi^2\sigma^2(z)}\Big[\phi\Big]_{\phi_1=0}^{\phi_2=2\pi}\Big[-e^{-\frac{\rho^2}{\sigma^2(z)}}\Big]_{\rho_1=0}^{\rho_2=\infty} = \delta_{2p} N_0 \frac{g_p}{f\tau} N^2\langle\vartheta\rangle^2\frac{1}{4\pi\sigma^2(z)} = \delta_{2p} N_0 \frac{g_p}{f\tau}\frac{1}{\pi w_0^2}\frac{1}{1+\left(z/z_R\right)^2} N^2\langle\vartheta\rangle^2.$$

The TPEF emitted by such a beam takes into account the quantum efficiency of the emitter, η, and the fact that two absorbed photons are needed to obtain one emitted photon [107]:

$$F(z) = \frac{1}{2}\eta\,\delta_{2p} N_0 \frac{g_p}{f\tau}\frac{1}{\pi w_0^2}\frac{1}{1+\left(\frac{\lambda z}{\pi w_0^2}\right)^2} N^2\langle\vartheta\rangle^2.$$

To conclude, the total number of emitted photons can be calculated by integrating along the whole propagation axis, obtaining the same result as Ref. [107]:

$$F_{\mathrm{total}} = \frac{1}{2}\eta\,\delta_{2p} N_0 \frac{g_p}{f\tau} N^2\langle\vartheta\rangle^2\int_{-\infty}^{+\infty}\frac{1}{\pi w_0^2}\frac{dz}{1+\left(\frac{\lambda z}{\pi w_0^2}\right)^2} = \frac{1}{2}\eta\,\delta_{2p} N_0 \frac{g_p}{f\tau} N^2\langle\vartheta\rangle^2\frac{1}{\lambda}\left[\arctan\left(\frac{\lambda z}{\pi w_0^2}\right)\right]_{-\infty}^{+\infty} = \frac{1}{2}\eta\,\delta_{2p} N_0 \frac{g_p}{f\tau}\frac{\pi}{\lambda} N^2\langle\vartheta\rangle^2.$$
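
The two key quantitative steps of this appendix, the pulse factor g_p = √(2 ln 2/π) and the axial integral that yields the factor π/λ, can be checked numerically. A minimal NumPy/SciPy sketch (dimensionless units, illustrative only):

```python
import numpy as np
from scipy.integrate import quad

# Work in units where the repetition rate f and the pulse FWHM tau are both 1.
f = tau = 1.0
sigma_t = tau / (2 * np.sqrt(2 * np.log(2)))         # FWHM -> standard deviation
theta0 = 1.0 / (f * sigma_t * np.sqrt(2 * np.pi))    # enforces <theta> = 1

# <theta^2> * f * tau should equal g_p = sqrt(2 ln2 / pi) ~ 0.664
mean_sq = f * quad(lambda t: (theta0 * np.exp(-t**2 / (2 * sigma_t**2)))**2,
                   -20 * sigma_t, 20 * sigma_t)[0]
print(mean_sq * f * tau, np.sqrt(2 * np.log(2) / np.pi))

# Axial integral of the squared Gaussian-beam profile: with u = lambda*z/(pi*w0^2),
# the integral over z reduces to (1/lambda) * int du / (1 + u^2) = pi / lambda.
print(quad(lambda u: 1.0 / (1.0 + u**2), -np.inf, np.inf)[0])   # ~ pi
```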

Appendix B

Table Containing the Characteristics of Some Representative Cameras used for Fluorescence Imaging from Different Manufacturers.

| Name | Type of Sensor | Number of Pixels | Pixel Size | QE | Speed (FPS) | Shutter |
| --- | --- | --- | --- | --- | --- | --- |
| Photometrics HQ2 | CCD | 1392 × 1040 | 6.45 μm | 0.62 | 11 | Global |
| Hamamatsu Flash 4.0 | sCMOS | 2048 × 2048 | 6.5 μm | 0.8 | 100 | Rolling |
| pco.edge 5.5 | sCMOS | 2560 × 2160 | 6.5 μm | 0.6 | 100 | G&R |
| Andor Zyla | sCMOS | 2560 × 2160 | 6.5 μm | 0.58 | 100 | Global |
| Andor iXon3 885 | EMCCD | 1004 × 1002 | 8 μm | 0.65 | 31 | Global |
| Photometrics Evolve 512 Delta | EMCCD | 512 × 512 | 16 μm | 0.98 | 62.5 | Global |
| Hamamatsu Orca ImagEM-1K | EMCCD | 1024 × 1024 | 13 μm | 0.9 | 39.5 | Global |

Funding

The authors acknowledge financial support from the Spanish Ministerio de Economía y Competitividad (MINECO) through the “Severo Ochoa” program for Centres of Excellence in R&D (SEV-2015-0522) (BIO2014-59614-JIN, RYC-2015-17935, FIS2016-80455-R [AEI/FEDER, UE]); Fundación Cellex; Generalitat de Catalunya through the CERCA program; Fundació la Marató de TV3 (20141730); European Union’s Horizon 2020 Framework Programme (H2020) (713140, 654148).

Acknowledgment

This research has been partially conducted at ICFO’s Super Resolution Light Microscopy and Nanoscopy Facility.

References

1. E. H. K. Stelzer, “Light-sheet fluorescence microscopy for quantitative biology,” Nat. Methods 12, 23–26 (2014). [CrossRef]  

2. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, “Optical sectioning deep inside live embryos by selective plane illumination microscopy,” Science 305, 1007–1009 (2004). [CrossRef]  

3. E. G. Reynaud, J. Peychl, J. Huisken, and P. Tomancak, “Guide to light-sheet microscopy for adventurous biologists,” Nat. Methods 12, 30–34 (2014). [CrossRef]  

4. P. J. Keller, F. Pampaloni, and E. H. Stelzer, “Life sciences require the third dimension,” Curr. Opin. Cell Biol. 18, 117–124 (2006). [CrossRef]  

5. E. G. Reynaud, U. Kržič, K. Greger, and E. H. K. Stelzer, “Light sheet-based fluorescence microscopy: more dimensions, more photons, and less photodamage,” HFSP J. 2, 266–275 (2008). [CrossRef]  

6. P. J. Keller, F. Pampaloni, and E. H. K. Stelzer, “Three-dimensional preparation and imaging reveal intrinsic microtubule properties,” Nat. Methods 4, 843–846 (2007). [CrossRef]  

7. P. J. Verveer, J. Swoger, F. Pampaloni, K. Greger, M. Marcello, and E. H. K. Stelzer, “High-resolution three-dimensional imaging of large specimens with light sheet-based microscopy,” Nat. Methods 4, 311–313 (2007). [CrossRef]  

8. P. J. Keller, A. D. Schmidt, A. Santella, K. Khairy, Z. Bao, J. Wittbrodt, and E. H. K. Stelzer, “Fast, high-contrast imaging of animal development with scanned light sheet-based structured-illumination microscopy,” Nat. Methods 7, 637–642 (2010). [CrossRef]  

9. L. Gao, L. Shao, C. D. Higgins, J. S. Poulton, M. Peifer, M. W. Davidson, X. Wu, B. Goldstein, and E. Betzig, “Noninvasive imaging beyond the diffraction limit of 3D dynamics in thickly fluorescent specimens,” Cell 151, 1370–1385 (2012). [CrossRef]  

10. E. H. K. Stelzer, “Light sheet based fluorescence microscopes (LSFM, SPIM, DSLM) reduce phototoxic effects by several orders of magnitude,” Mech. Dev. 126, S36 (2009). [CrossRef]  

11. R. M. Power and J. Huisken, “A guide to light-sheet fluorescence microscopy for multiscale imaging,” Nat. Methods 14, 360–373 (2017). [CrossRef]  

12. P. P. Laissue, R. A. Alghamdi, P. Tomancak, E. G. Reynaud, and H. Shroff, “Assessing phototoxicity in live fluorescence imaging,” Nat. Methods 14, 657–661 (2017).

13. F. Pampaloni, N. Ansari, P. Girard, and E. H. K. Stelzer, “Light sheet-based fluorescence microscopy (LSFM) reduces phototoxic effects and provides new means for the modern life sciences,” Adv. Microsc. Tech. II 8086, 80860Y (2011). [CrossRef]  

14. M. Jemielita, M. J. Taormina, A. DeLaurier, C. B. Kimmel, and R. Parthasarathy, “Comparing phototoxicity during the development of a zebrafish craniofacial bone using confocal and light sheet fluorescence microscopy techniques,” J. Biophoton. 6, 920–928 (2013). [CrossRef]  

15. J. Vermot, S. E. Fraser, and M. Liebling, “Fast fluorescence microscopy for imaging the dynamics of embryonic development,” HFSP J. 2, 143–155 (2008). [CrossRef]  

16. M. Weber and J. Huisken, “Light sheet microscopy for real-time developmental biology,” Curr. Opin. Genet. Dev. 21, 566–572 (2011). [CrossRef]  

17. A. Kaufmann, M. Mickoleit, M. Weber, and J. Huisken, “Multilayer mounting enables long-term imaging of zebrafish development in a light sheet microscope,” Development 139, 3242–3247 (2012). [CrossRef]  

18. J. Huisken and D. Y. R. Stainier, “Selective plane illumination microscopy techniques in developmental biology,” Development 136, 1963–1975 (2009). [CrossRef]  

19. J. Swoger, M. Muzzopappa, H. López-Schier, and J. Sharpe, “4D retrospective lineage tracing using SPIM for zebrafish organogenesis studies,” J. Biophoton. 4, 122–134 (2011). [CrossRef]  

20. T. Ichikawa, K. Nakazato, P. J. Keller, H. Kajiura-Kobayashi, E. H. K. Stelzer, A. Mochizuki, and S. Nonaka, “Live imaging of whole mouse embryos during gastrulation: migration analyses of epiblast and mesodermal cells,” PLoS ONE 8, e64506 (2013). [CrossRef]  

21. B. Schmid, G. Shah, N. Scherf, M. Weber, K. Thierbach, C. P. Campos, I. Roeder, P. Aanstad, and J. Huisken, “High-speed panoramic light-sheet microscopy reveals global endodermal cell dynamics,” Nat. Commun. 4, 2207 (2013). [CrossRef]  

22. P. J. Scherz, J. Huisken, P. Sahai-Hernandez, and D. Y. R. Stainier, “High-speed imaging of developing heart valves reveals interplay of morphogenesis and function,” Development 135, 1179–1187 (2008). [CrossRef]  

23. S. Kumar, D. Wilding, M. B. Sikkel, A. R. Lyon, K. T. MacLeod, and C. Dunsby, “High-speed 2D and 3D fluorescence microscopy of cardiac myocytes,” Opt. Express 19, 13839–13847 (2011). [CrossRef]  

24. K. Mellman, J. Huisken, C. Dinsmore, C. Hoppe, and D. Y. Stainier, “Fibrillin-2b regulates endocardial morphogenesis in zebrafish,” Dev. Biol. 372, 111–119 (2012). [CrossRef]  

25. V. Trivedi, T. V. Truong, L. A. Trinh, D. B. Holland, M. Liebling, and S. E. Fraser, “Dynamic structure and protein expression of the live embryonic heart captured by 2-photon light sheet microscopy and retrospective registration,” Biomed. Opt. Express 6, 2056–2066 (2015). [CrossRef]  

26. R. Opitz, E. Maquet, J. Huisken, F. Antonica, A. Trubiroha, G. Pottier, V. Janssens, and S. Costagliola, “Transgenic zebrafish illuminate the dynamics of thyroid morphogenesis and its relationship to cardiovascular development,” Dev. Biol. 372, 203–216 (2012). [CrossRef]  

27. B. Patra, Y.-S. Peng, C.-C. Peng, W.-H. Liao, Y.-A. Chen, K.-H. Lin, Y.-C. Tung, and C.-H. Lee, “Migration and vascular lumen formation of endothelial cells in cancer cell spheroids of various sizes,” Biomicrofluidics 8, 052109 (2014). [CrossRef]  

28. Y. Wu, A. Ghitani, R. Christensen, A. Santella, Z. Du, G. Rondeau, Z. Bao, D. Colón-Ramos, and H. Shroff, “Inverted selective plane illumination microscopy (iSPIM) enables coupled cell identity lineaging and neurodevelopmental imaging in Caenorhabditis elegans,” Proc. Natl. Acad. Sci. USA 108, 17708–17713 (2011). [CrossRef]  

29. Y. Wu, R. Christensen, D. Colon-Ramos, and H. Shroff, “Advanced optical imaging techniques for neurodevelopment,” Curr. Opin. Neurobiol. 23, 1090–1097 (2013). [CrossRef]  

30. P. J. Keller, A. D. Schmidt, J. Wittbrodt, and E. H. K. Stelzer, “Digital scanned laser light-sheet fluorescence microscopy (DSLM) of zebrafish and Drosophila embryonic development,” Cold Spring Harb. Protoc. 2011, 1235–1243 (2011). [CrossRef]  

31. N. Jahrling, K. Becker, C. Schonbauer, F. Schnorrer, and H.-U. Dodt, “Three-dimensional reconstruction and segmentation of intact Drosophila by ultramicroscopy,” Front. Syst. Neurosci. 4, 1–6 (2010). [CrossRef]  

32. E. Rebollo, K. Karkali, F. Mangione, and E. Martin-Blanco, “Live imaging in Drosophila: the optical and genetic toolkits,” Methods 68, 48–59 (2014). [CrossRef]  

33. P. J. Keller, A. D. Schmidt, J. Wittbrodt, and E. H. K. Stelzer, “Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy,” Science 322, 1065–1069 (2008). [CrossRef]  

34. J. H. Hoh, W. F. Heinz, and J. L. Werbin, “Spatial information dynamics during early zebrafish development,” Dev. Biol. 377, 126–137 (2013). [CrossRef]  

35. P. J. Keller, “In vivo imaging of zebrafish embryogenesis,” Methods 62, 268–278 (2013). [CrossRef]  

36. F. Pinto-Teixeira, M. Muzzopappa, J. Swoger, A. Mineo, J. Sharpe, and H. Lopez-Schier, “Intravital imaging of hair-cell development and regeneration in the zebrafish,” Front. Neuroanat. 7, 33 (2013). [CrossRef]  

37. R. Fickentscher, P. Struntz, and M. Weiss, “Mechanical cues in the early embryogenesis of Caenorhabditis elegans,” Biophys. J. 105, 1805–1811 (2013). [CrossRef]  

38. G. Sena, Z. Frentz, K. D. Birnbaum, and S. Leibler, “Quantitation of cellular dynamics in growing Arabidopsis roots with light sheet microscopy,” PLoS ONE 6, e21303 (2011). [CrossRef]  

39. F. Amat and P. J. Keller, “Towards comprehensive cell lineage reconstructions in complex organisms using light-sheet microscopy,” Dev. Growth Differ. 55, 563–578 (2013). [CrossRef]  

40. P. G. Sappl and M. G. Heisler, “Live-imaging of plant development: latest approaches,” Curr. Opin. Plant Biol. 16, 33–40 (2013). [CrossRef]  

41. J. Swoger, P. Verveer, K. Greger, J. Huisken, and E. H. K. Stelzer, “Multi-view image fusion improves resolution in three-dimensional microscopy,” Opt. Express 15, 8029–8042 (2007). [CrossRef]  

42. E. J. Gualda, H. Pereira, T. Vale, M. Falcão Estrada, C. Brito, and N. Moreno, “SPIM-fluid: open source light-sheet based platform for high-throughput imaging,” Biomed. Opt. Express 6, 4447–4456 (2015). [CrossRef]  

43. A. Desmaison, C. Lorenzo, J. Rouquette, B. Ducommun, and V. Lobjois, “A versatile sample holder for single plane illumination microscopy,” J. Microsc. 251, 128–132 (2013). [CrossRef]  

44. C. J. Engelbrecht, K. Greger, E. G. Reynaud, U. Kržič, J. Colombelli, and E. H. Stelzer, “Three-dimensional laser microsurgery in light-sheet based microscopy (SPIM),” Opt. Express 15, 6420–6430 (2007). [CrossRef]  

45. T. Bruns, S. Schickinger, R. Wittig, and H. Schneckenburger, “Preparation strategy and illumination of three-dimensional cell cultures in light sheet-based fluorescence microscopy,” J. Biomed. Opt. 17, 1015181 (2012). [CrossRef]  

46. F. Pampaloni, R. Kroschewski, U. Berge, A. Marmaras, P. Horvath, and E. H. K. Stelzer, “Tissue-culture light sheet fluorescence microscopy (TC-LSFM) allows long-term imaging of three-dimensional cell cultures under controlled conditions,” Integr. Biol. 6, 988–998 (2014). [CrossRef]  

47. J. Capoulade, M. Wachsmuth, L. Hufnagel, and M. Knop, “Quantitative fluorescence imaging of protein diffusion and interaction in living cells,” Nat. Biotechnol. 29, 835–839 (2011). [CrossRef]  

48. F. C. Zanacchi, Z. Lavagnino, M. P. Donnorso, A. D. Bue, L. Furia, M. Faretta, and A. Diaspro, “Live-cell 3D super-resolution imaging in thick biological samples,” Nat. Methods 8, 1047–1049 (2011). [CrossRef]  

49. C. Lorenzo, C. Frongia, R. Jorand, J. Fehrenbach, P. Weiss, A. Maandhui, G. Gay, B. Ducommun, and V. Lobjois, “Live cell division dynamics monitoring in 3D large spheroid tumor models using light sheet microscopy,” Cell Div. 6, 22 (2011). [CrossRef]  

50. R. Jorand, G. Le Corre, J. Andilla, A. Maandhui, C. Frongia, V. Lobjois, B. Ducommun, and C. Lorenzo, “Deep and clear optical imaging of thick inhomogeneous samples,” PLoS ONE 7, e35795 (2012). [CrossRef]  

51. F. O. Fahrbach, V. Gurchenkov, K. Alessandri, P. Nassoy, and A. Rohrbach, “Light-sheet microscopy in thick media using scanned Bessel beams and two-photon fluorescence excitation,” Opt. Express 21, 13824–13839 (2013). [CrossRef]  

52. F. Pampaloni, N. Ansari, and E. H. K. Stelzer, “High-resolution deep imaging of live cellular spheroids with light-sheet-based fluorescence microscopy,” Cell Tissue Res. 352, 161–177 (2013). [CrossRef]  

53. N. Jaehrling, K. Becker, and H.-U. Dodt, “3D-reconstruction of blood vessels by ultramicroscopy,” Organogenesis 5, 227–230 (2009). [CrossRef]  

54. A. Ertürk, K. Becker, N. Jährling, C. P. Mauch, C. D. Hojer, J. G. Egen, F. Hellal, F. Bradke, M. Sheng, and H.-U. Dodt, “Three-dimensional imaging of solvent-cleared organs using 3DISCO,” Nat. Protoc. 7, 1983–1995 (2012). [CrossRef]  

55. A. Ertürk and F. Bradke, “High-resolution imaging of entire organs by 3-dimensional imaging of solvent cleared organs (3DISCO),” Exp. Neurol. 242, 57–64 (2013). [CrossRef]  

56. J. Palero, S. I. C. O. Santos, D. Artigas, and P. Loza-Alvarez, “A simple scanless two-photon fluorescence microscope using selective plane illumination,” Opt. Express 18, 8491–8498 (2010). [CrossRef]  

57. J. Huisken and D. Y. R. Stainier, “Even fluorescence excitation by multidirectional selective plane illumination microscopy (mSPIM),” Opt. Lett. 32, 2608–2610 (2007). [CrossRef]  

58. S. Preibisch, S. Saalfeld, J. Schindelin, and P. Tomancak, “Software for bead-based registration of selective plane illumination microscopy data,” Nat. Methods 7, 418–419 (2010). [CrossRef]  

59. R. Tomer, K. Khairy, F. Amat, and P. J. Keller, “Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy,” Nat. Methods 9, 755–763 (2012). [CrossRef]  

60. R. K. Chhetri, F. Amat, Y. Wan, B. Höckendorf, W. C. Lemon, and P. J. Keller, “Whole-animal functional and developmental imaging with isotropic spatial resolution,” Nat. Methods 12, 1171–1178 (2015). [CrossRef]  

61. U. Krzic, S. Gunther, T. E. Saunders, S. J. Streichan, and L. Hufnagel, “Multiview light-sheet microscope for rapid in toto imaging,” Nat. Methods 9, 730–733 (2012). [CrossRef]  

62. F. O. Fahrbach and A. Rohrbach, “A line scanned light-sheet microscope with phase shaped self-reconstructing beams,” Opt. Express 18, 24229–24244 (2010). [CrossRef]  

63. T. A. Planchon, L. Gao, D. E. Milkie, M. W. Davidson, J. A. Galbraith, C. G. Galbraith, and E. Betzig, “Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination,” Nat. Methods 8, 417–423 (2011). [CrossRef]  

64. O. E. Olarte, J. Licea-Rodriguez, J. A. Palero, E. J. Gualda, D. Artigas, J. Mayer, J. Swoger, J. Sharpe, I. Rocha-Mendoza, R. Rangel-Rojo, and P. Loza-Alvarez, “Image formation by linear and nonlinear digital scanned light-sheet fluorescence microscopy with Gaussian and Bessel beam profiles,” Biomed. Opt. Express 3, 1492–1505 (2012). [CrossRef]  

65. T. Vettenburg, H. I. C. Dalgarno, J. Nylk, C. Coll-Lladó, D. E. K. Ferrier, T. Čižmár, F. J. Gunn-Moore, and K. Dholakia, “Light-sheet microscopy using an Airy beam,” Nat. Methods 11, 541–544 (2014). [CrossRef]  

66. P. Zhang, M. E. Phipps, P. M. Goodwin, and J. H. Werner, “Confocal line scanning of a Bessel beam for fast 3D imaging,” Opt. Lett. 39, 3682–3685 (2014). [CrossRef]  

67. T. V. Truong, W. Supatto, D. S. Koos, J. M. Choi, and S. E. Fraser, “Deep and fast live imaging with two-photon scanned light-sheet microscopy,” Nat. Methods 8, 757–760 (2011). [CrossRef]  

68. P. Mahou, J. Vermot, E. Beaurepaire, and W. Supatto, “Multicolor two-photon light-sheet microscopy,” Nat. Methods 11, 600–601 (2014). [CrossRef]  

69. M. Zhao, H. Zhang, Y. Li, A. Ashok, R. Liang, W. Zhou, and L. Peng, “Cellular imaging of deep organ using two-photon Bessel light-sheet nonlinear structured illumination microscopy,” Biomed. Opt. Express 5, 1296–1308 (2014). [CrossRef]  

70. J. Andilla, R. Jorand, O. E. Olarte, A. C. Dufour, M. Cazales, Y. L. E. Montagner, R. Ceolato, N. Riviere, J.-C. Olivo-Marin, P. Loza-Alvarez, and C. Lorenzo, “Imaging tissue-mimic with light sheet microscopy: a comparative guideline,” Sci. Rep. 7, 44939 (2017). [CrossRef]  

71. P. J. Keller and H.-U. Dodt, “Light sheet microscopy of living or cleared specimens,” Curr. Opin. Neurobiol. 22, 138–143 (2012). [CrossRef]  

72. Y. Wu, P. Wawrzusin, J. Senseney, R. S. Fischer, R. Christensen, A. Santella, A. G. York, P. W. Winter, C. M. Waterman, Z. Bao, D. A. Colon-Ramos, M. McAuliffe, and H. Shroff, “Spatially isotropic four-dimensional imaging with dual-view plane illumination microscopy,” Nat. Biotechnol. 31, 1032–1038 (2013). [CrossRef]  

73. A. Kumar, Y. Wu, R. Christensen, P. Chandris, W. Gandler, E. McCreedy, A. Bokinsky, D. A. Colón-Ramos, Z. Bao, M. McAuliffe, G. Rondeau, and H. Shroff, “Dual-view plane illumination microscopy for rapid and spatially isotropic imaging,” Nat. Protoc. 9, 2555–2573 (2014). [CrossRef]  

74. P. Strnad, S. Gunther, J. Reichmann, U. Krzic, B. Balazs, G. de Medeiros, N. Norlin, T. Hiiragi, L. Hufnagel, and J. Ellenberg, “Inverted light-sheet microscope for imaging mouse pre-implantation development,” Nat. Methods 13, 139–142 (2015). [CrossRef]  

75. B.-C. Chen, W. R. Legant, K. Wang, L. Shao, D. E. Milkie, M. W. Davidson, C. Janetopoulos, X. S. Wu, J. A. Hammer, Z. Liu, B. P. English, Y. Mimori-Kiyosue, D. P. Romero, A. T. Ritter, J. Lippincott-Schwartz, L. Fritz-Laylin, R. D. Mullins, D. M. Mitchell, J. N. Bembenek, A.-C. Reymann, R. Boehme, S. W. Grill, J. T. Wang, G. Seydoux, U. S. Tulu, D. P. Kiehart, and E. Betzig, “Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution,” Science 346, 1257998 (2014). [CrossRef]  

76. L. Gao, L. Shao, B.-C. Chen, and E. Betzig, “3D live fluorescence imaging of cellular dynamics using Bessel beam plane illumination microscopy,” Nat. Protoc. 9, 1083–1101 (2014). [CrossRef]  

77. C. Dunsby, “Optically sectioned imaging by oblique plane microscopy,” Opt. Express 16, 20306–20316 (2008). [CrossRef]  

78. F. Cutrale and E. Gratton, “Inclined selective plane illumination microscopy adaptor for conventional microscopes,” Microsc. Res. Tech. 75, 1461–1466 (2012). [CrossRef]  

79. E. Zagato, T. Brans, S. Verstuyft, D. van Thourhout, J. Missinne, G. van Steenberge, J. Demeester, S. De Smedt, K. Remaut, K. Neyts, and K. Braeckmans, “Microfabricated devices for single objective single plane illumination microscopy (SoSPIM),” Opt. Express 25, 1732–1745 (2017). [CrossRef]  

80. M. B. M. Meddens, S. Liu, P. S. Finnegan, T. L. Edwards, C. D. James, and K. A. Lidke, “Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution,” Biomed. Opt. Express 7, 2219–2236 (2016). [CrossRef]  

81. R. Galland, G. Grenci, A. Aravind, V. Viasnoff, V. Studer, and J.-B. Sibarita, “3D high- and super-resolution imaging using single-objective SPIM,” Nat. Methods 12, 641–644 (2015). [CrossRef]  

82. J. C. M. Gebhardt, D. M. Suter, R. Roy, Z. W. Zhao, A. R. Chapman, S. Basu, T. Maniatis, and X. S. Xie, “Single-molecule imaging of transcription factor binding to DNA in live mammalian cells,” Nat. Methods 10, 421–426 (2013). [CrossRef]  

83. M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10, 413–420 (2013). [CrossRef]  

84. T. Panier, S. A. Romano, R. Olive, T. Pietri, G. Sumbre, R. Candelier, and G. Debregeas, “Fast functional imaging of multiple brain regions in intact zebrafish larvae using selective plane illumination microscopy,” Front. Neural Circuits 7, 65 (2013). [CrossRef]  

85. T. F. Holekamp, D. Turaga, and T. E. Holy, “Fast three-dimensional fluorescence imaging of activity in neural populations by objective-coupled planar illumination microscopy,” Neuron 57, 661–672 (2008). [CrossRef]  

86. F. O. Fahrbach, F. F. Voigt, B. Schmid, F. Helmchen, and J. Huisken, “Rapid 3D light-sheet microscopy with a tunable lens,” Opt. Express 21, 21010–21026 (2013). [CrossRef]  

87. M. B. Bouchard, V. Voleti, C. S. Mendes, C. Lacefield, W. B. Grueber, R. S. Mann, R. M. Bruno, and E. M. C. Hillman, “Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms,” Nat. Photonics 9, 113–119 (2015). [CrossRef]  

88. M. Mickoleit, B. Schmid, M. Weber, F. O. Fahrbach, S. Hombach, S. Reischauer, and J. Huisken, “High-resolution reconstruction of the beating zebrafish heart,” Nat. Methods 11, 919–922 (2014). [CrossRef]  

89. O. E. Olarte, J. Andilla, D. Artigas, and P. Loza-Alvarez, “Decoupled illumination detection in light sheet microscopy for fast volumetric imaging,” Optica 2, 702–705 (2015). [CrossRef]  

90. O. E. Olarte, J. Andilla, D. Artigas, and P. Loza-Alvarez, “Decoupled illumination-detection microscopy,” Opt. Photon. News 26(12), 41 (2015), special issue on Optics in 2015.

91. S. Quirin, N. Vladimirov, C.-T. Yang, D. S. Peterka, R. Yuste, and M. Ahrens, “Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy,” Opt. Lett. 41, 855–858 (2016). [CrossRef]  

92. M. Friedrich, Q. Gan, V. Ermolayev, and G. S. Harms, “STED-SPIM: stimulated emission depletion improves sheet illumination microscopy resolution,” Biophys. J. 100, L43–L45 (2011). [CrossRef]  

93. L. Shao, P. Kner, E. H. Rego, and M. G. L. Gustafsson, “Super-resolution 3D microscopy of live whole cells using structured illumination,” Nat. Methods 8, 1044–1046 (2011). [CrossRef]  

94. K. Greger, M. J. Neetz, E. G. Reynaud, and E. H. K. Stelzer, “Three-dimensional fluorescence lifetime imaging with a single plane illumination microscope provides an improved signal to noise ratio,” Opt. Express 19, 20743–20750 (2011). [CrossRef]  

95. A. Periasamy, P. Wodnicki, X. F. Wang, S. Kwon, G. W. Gordon, and B. Herman, “Time-resolved fluorescence lifetime imaging microscopy using a picosecond pulsed tunable dye laser system,” Rev. Sci. Instrum. 67, 3722–3731 (1996). [CrossRef]  

96. P. Weber, S. Schickinger, M. Wagner, B. Angres, T. Bruns, and H. Schneckenburger, “Monitoring of apoptosis in 3D cell cultures by FRET and light sheet fluorescence microscopy,” Int. J. Mol. Sci. 16, 5375–5385 (2015). [CrossRef]  

97. W. Jahr, B. Schmid, C. Schmied, F. O. Fahrbach, and J. Huisken, “Hyperspectral light sheet microscopy,” Nat. Commun. 6, 7990 (2015). [CrossRef]  

98. Y. Oshima, H. Kajiura-Kobayashi, and S. Nonaka, “Multimodal light-sheet microscopy for fluorescence live imaging,” Proc. SPIE 8227, 82271H (2012).

99. Y. Oshima, H. Sato, H. Kajiura-Kobayashi, T. Kimura, K. Naruse, and S. Nonaka, “Light sheet-excited spontaneous Raman imaging of a living fish by optical sectioning in a wide field Raman microscope,” Opt. Express 20, 16195–16204 (2012). [CrossRef]  

100. I. Rocha-Mendoza, J. Licea-Rodriguez, M. Marro, O. E. Olarte, M. Plata-Sanchez, and P. Loza-Alvarez, “Rapid spontaneous Raman light sheet microscopy using cw-lasers and tunable filters,” Biomed. Opt. Express 6, 3449–3461 (2015). [CrossRef]  

101. W. Müller, M. Kielhorn, M. Schmitt, J. Popp, and R. Heintzmann, “Light sheet Raman micro-spectroscopy,” Optica 3, 452–457 (2016). [CrossRef]  

102. J. Mayer, A. Robert-Moreno, R. Danuser, J. V. Stein, J. Sharpe, and J. Swoger, “OPTiSPIM: integrating optical projection tomography in light sheet microscopy extends specimen characterization to nonfluorescent contrasts,” Opt. Lett. 39, 1053–1056 (2014). [CrossRef]  

103. E. J. Gualda, T. Vale, P. Almada, J. A. Feijó, G. G. Martins, and N. Moreno, “OpenSpinMicroscopy: an open-source integrated microscopy platform,” Nat. Methods 10, 599–600 (2013). [CrossRef]  

104. N. Scherf and J. Huisken, “The smart and gentle microscope,” Nat. Biotechnol. 33, 815–818 (2015). [CrossRef]  

105. W. J. Alford, R. D. VanderNeut, and V. J. Zaleckas, “Laser scanning microscopy,” Proc. IEEE 70, 641–651 (1982). [CrossRef]  

106. W. Denk, J. Strickler, and W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990). [CrossRef]  

107. C. Xu and W. W. Webb, “Measurement of two-photon excitation cross sections of molecular fluorophores with data from 690 to 1050 nm,” J. Opt. Soc. Am. B 13, 481–491 (1996). [CrossRef]  

108. Z. Lavagnino, F. C. Zanacchi, E. Ronzitti, and A. Diaspro, “Two-photon excitation selective plane illumination microscopy (2PE-SPIM) of highly scattering samples: characterization and application,” Opt. Express 21, 5998–6008 (2013). [CrossRef]  

109. A. Maruyama, Y. Oshima, H. Kajiura-Kobayashi, S. Nonaka, T. Imamura, and K. Naruse, “Wide field intravital imaging by two-photon-excitation digital-scanned light-sheet microscopy (2p-DSLM) with a high-pulse energy laser,” Biomed. Opt. Express 5, 3311–3325 (2014). [CrossRef]  

110. N. N. Boustany, S. A. Boppart, and V. Backman, “Microscopic imaging and spectroscopy with scattered light,” Annu. Rev. Biomed. Eng. 12, 285–314 (2010). [CrossRef]  

111. S. L. Jacques, “Optical properties of biological tissues: a review,” Phys. Med. Biol. 58, R37–R61 (2013). [CrossRef]  

112. M. H. Niemz, Laser-Tissue Interactions (Springer, 2002).

113. J. R. Lorenzo, Principles of Diffuse Light Propagation: Light Propagation in Tissues with Applications in Biology and Medicine (World Scientific, 2012).

114. Z. Yang, H. Downie, E. Rozbicki, L. X. Dupuy, and M. P. MacDonald, “Light sheet tomography (LST) for in situ imaging of plant roots,” Opt. Express 21, 16239–16247 (2013). [CrossRef]  

115. K. Kneipp, H. Kneipp, I. Itzkan, R. R. Dasari, and M. S. Feld, “Ultrasensitive chemical analysis by Raman spectroscopy,” Chem. Rev. 99, 2957–2976 (1999). [CrossRef]  

116. I. Barman, K. M. Tan, and G. P. Singh, “Optical sectioning using single-plane-illumination Raman imaging,” J. Raman Spectrosc. 41, 1099–1101 (2010). [CrossRef]  

117. C. W. McCutchen, “Generalized aperture and the three-dimensional diffraction image,” J. Opt. Soc. Am. 54, 240–244 (1964). [CrossRef]  

118. T. Maruno, E. Toda, and K. Bennett, “Comparison of CMOS and EMCCD cameras for computational imaging with application to super-resolution localization microscopy,” in Imaging and Applied Optics Technical Papers (Optical Society of America, 2012), paper IM4C.3.

119. E. Hecht, Optics, 4th ed. (Pearson Addison-Wesley, 2010).

120. K. Becker, N. Jaehrling, E. R. Kramer, E. Schnorrer, and H.-U. Dodt, “Ultramicroscopy: 3D reconstruction of large microscopical specimens,” J. Biophoton. 1, 36–42 (2008). [CrossRef]  

121. E. S. Welf, M. K. Driscoll, K. M. Dean, C. Schäfer, J. Chu, M. W. Davidson, M. Z. Lin, G. Danuser, and R. Fiolka, “Quantitative multiscale cell imaging in controlled 3D microenvironments,” Dev. Cell 36, 462–475 (2016). [CrossRef]  

122. A. H. Voie, D. H. Burns, and F. A. Spelman, “Orthogonal-plane fluorescence optical sectioning: three-dimensional imaging of macroscopic biological specimens,” J. Microsc. 170, 229–236 (1993). [CrossRef]  

123. S. Lindek, R. Pick, and E. H. K. Stelzer, “Confocal theta microscope with three objective lenses,” Rev. Sci. Instrum. 65, 3367–3372 (1994). [CrossRef]  

124. J. Wu and R. K. Y. Chan, “A fast fluorescence imaging flow cytometer for phytoplankton analysis,” Opt. Express 21, 23921–23926 (2013). [CrossRef]  

125. J. Wu, J. Li, and R. K. Y. Chan, “A light sheet based high throughput 3D-imaging flow cytometer for phytoplankton analysis,” Opt. Express 21, 14474–14480 (2013). [CrossRef]  

126. R. Regmi, K. Mohan, and P. P. Mondal, “High resolution light-sheet based high-throughput imaging cytometry system enables visualization of intra-cellular organelles,” AIP Adv. 4, 097125 (2014). [CrossRef]  

127. P. Paiè, F. Bragheri, A. Bassi, and R. Osellame, “Selective plane illumination microscopy on a chip,” Lab Chip 16, 1556–1560 (2016). [CrossRef]  

128. J. Fehrenbach, P. Weiss, and C. Lorenzo, “Variational algorithms to remove stationary noise: applications to microscopy imaging,” IEEE Trans. Image Process. 21, 4420–4430 (2012). [CrossRef]  

129. K. Greger, J. Swoger, and E. H. K. Stelzer, “Basic building units and properties of a fluorescence single plane illumination microscope,” Rev. Sci. Instrum. 78, 023705 (2007). [CrossRef]  

130. P. J. Shaw, D. A. Agard, Y. Hiraoka, and J. W. Sedat, “Tilted view reconstruction in optical microscopy. Three-dimensional reconstruction of Drosophila melanogaster embryo nuclei,” Biophys. J. 55, 101–110 (1989). [CrossRef]  

131. S. Preibisch, F. Amat, E. Stamataki, M. Sarov, R. H. Singer, E. Myers, and P. Tomancak, “Efficient Bayesian-based multiview deconvolution,” Nat. Methods 11, 645–648 (2014). [CrossRef]  

132. B. Schmid and J. Huisken, “Real-time multi-view deconvolution,” Bioinformatics 31, 3398–3400 (2015). [CrossRef]  

133. C. J. Engelbrecht and E. H. Stelzer, “Resolution enhancement in a light-sheet-based microscope (SPIM),” Opt. Lett. 31, 1477–1479 (2006). [CrossRef]  

134. C. J. R. Sheppard and H. J. Matthews, “Imaging in high-aperture optical systems,” J. Opt. Soc. Am. A 4, 1354–1360 (1987). [CrossRef]  

135. N. Streibl, “Depth transfer by an imaging system,” Opt. Acta 31, 1233–1241 (1984). [CrossRef]  

136. J. Alda, “Laser and Gaussian beam propagation and transformation,” in Encyclopedia of Optical Engineering, R. G. Driggers, C. Hoffman, and R. Driggers, eds. (CRC Press, 2003), pp. 999–1013.

137. J. W. Dobrucki, “Fluorescence microscopy,” in Fluorescence Microscopy, U. Kubitscheck, ed. (Wiley-VCH Verlag, 2013), pp. 97–142.

138. H. Urey, “Spot size, depth-of-focus, and diffraction ring intensity formulas for truncated Gaussian beams,” Appl. Opt. 43, 620–625 (2004). [CrossRef]  

139. F. M. Dickey and S. C. Holswade, Laser Beam Shaping: Theory and Techniques (CRC Press, 2000).

140. G. Zhang, V. Gurtu, and S. R. Kain, “An enhanced green fluorescent protein allows sensitive detection of gene transfer in mammalian cells,” Biochem. Biophys. Res. Commun. 227, 707–711 (1996). [CrossRef]  

141. M. Drobizhev, N. S. Makarov, S. E. Tillo, T. E. Hughes, and A. Rebane, “Two-photon absorption properties of fluorescent proteins,” Nat. Methods 8, 393–399 (2011). [CrossRef]  

142. S. Lindek and E. H. K. Stelzer, “Optical transfer functions for confocal theta fluorescence microscopy,” J. Opt. Soc. Am. A 13, 479–482 (1996). [CrossRef]  

143. S. Saghafi, K. Becker, N. Jaehrling, M. Richter, E. R. Kramer, and H.-U. Dodt, “Image enhancement in ultramicroscopy by improved laser light sheets,” J. Biophoton. 3, 686–695 (2010). [CrossRef]  

144. I. Golub, B. Chebbi, and J. Golub, “Toward the optical ‘magic carpet’: reducing the divergence of a light sheet below the diffraction limit,” Opt. Lett. 40, 5121–5124 (2015). [CrossRef]  

145. Z. Yang, M. Prokopas, J. Nylk, C. Coll-Lladó, F. J. Gunn-Moore, D. E. K. Ferrier, T. Vettenburg, and K. Dholakia, “A compact Airy beam light sheet microscope with a tilted cylindrical lens,” Biomed. Opt. Express 5, 3434–3442 (2014). [CrossRef]  

146. P. J. Keller and E. H. Stelzer, “Quantitative in vivo imaging of entire embryos with digital scanned laser light sheet fluorescence microscopy,” Curr. Opin. Neurobiol. 18, 624–632 (2008). [CrossRef]  

147. P. J. Keller and E. H. K. Stelzer, “Digital scanned laser light sheet fluorescence microscopy,” Cold Spring Harb. Protoc. 2010, pdb.top78 (2010). [CrossRef]  

148. F. O. Fahrbach, P. Simon, and A. Rohrbach, “Microscopy with self-reconstructing beams,” Nat. Photonics 4, 780–785 (2010). [CrossRef]  

149. W. Zong, J. Zhao, X. Chen, Y. Lin, H. Ren, Y. Zhang, M. Fan, Z. Zhou, H. Cheng, Y. Sun, and L. Chen, “Large-field high-resolution two-photon digital scanned light-sheet microscopy,” Cell Res. 25, 254–257 (2015). [CrossRef]  

150. J. Durnin, “Exact solutions for nondiffracting beams. I. The scalar theory,” J. Opt. Soc. Am. A 4, 651–654 (1987). [CrossRef]  

151. R. P. MacDonald, S. A. Boothroyd, T. Okamoto, J. Chrostowski, and B. A. Syrett, “Interboard optical data distribution by Bessel beam shadowing,” Opt. Commun. 122, 169–177 (1996). [CrossRef]  

152. E. Baumgart and U. Kubitscheck, “Scanned light sheet microscopy with confocal slit detection,” Opt. Express 20, 21805–21814 (2012). [CrossRef]  

153. F. O. Fahrbach and A. Rohrbach, “Propagation stability of self-reconstructing Bessel beams enables contrast-enhanced imaging in thick media,” Nat. Commun. 3, 632 (2012). [CrossRef]  

154. T. Meinert, O. Tietz, K. J. Palme, and A. Rohrbach, “Separation of ballistic and diffusive fluorescence photons in confocal light-sheet microscopy of Arabidopsis roots,” Sci. Rep. 6, 30378 (2016). [CrossRef]  

155. F. O. Fahrbach, V. Gurchenkov, K. Alessandri, P. Nassoy, and A. Rohrbach, “Self-reconstructing sectioned Bessel beams offer submicron optical sectioning for large fields of view in light-sheet microscopy,” Opt. Express 21, 11425–11440 (2013). [CrossRef]  

156. L. Gao, “Optimization of the excitation light sheet in selective plane illumination microscopy,” Biomed. Opt. Express 6, 881–890 (2015). [CrossRef]  

157. G. A. Siviloglou and D. N. Christodoulides, “Accelerating finite energy Airy beams,” Opt. Lett. 32, 979–981 (2007). [CrossRef]  

158. G. A. Siviloglou, J. Broky, A. Dogariu, and D. N. Christodoulides, “Observation of accelerating Airy beams,” Phys. Rev. Lett. 99, 213901 (2007). [CrossRef]  

159. M. A. Bandres, “Accelerating beams,” Opt. Lett. 34, 3791–3793 (2009). [CrossRef]  

160. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995). [CrossRef]  

161. J. Nylk, K. McCluskey, S. Aggarwal, J. A. Tello, and K. Dholakia, “Enhancement of image quality and imaging depth with Airy light-sheet microscopy in cleared and non-cleared neural tissue,” Biomed. Opt. Express 7, 4021–4033 (2016). [CrossRef]  

162. S. Bolte and F. P. Cordelières, “A guided tour into subcellular colocalization analysis in light microscopy,” J. Microsc. 224, 213–232 (2006). [CrossRef]  

163. B. K. Grillo-Hill, B. A. Webb, and D. L. Barber, “Ratiometric imaging of pH probes,” in Methods in Cell Biology (Elsevier, 2014), Vol. 123, pp. 429–448.

164. A. Rohrbacher, O. E. Olarte, V. Villamaina, P. Loza-Alvarez, and B. Resan, “Multiphoton imaging with blue-diode-pumped SESAM-modelocked Ti:sapphire oscillator generating 5 nJ 82 fs pulses,” Opt. Express 25, 10677–10684 (2017). [CrossRef]  

165. T. W. J. Gadella, T. M. Jovin, and R. M. Clegg, “Fluorescence lifetime imaging microscopy (FLIM): spatial resolution of microstructures on the nanosecond time scale,” Biophys. Chem. 48, 221–239 (1993). [CrossRef]  

166. H. Wallrabe and A. Periasamy, “Imaging protein molecules using FRET and FLIM microscopy,” Curr. Opin. Biotechnol. 16, 19–27 (2005). [CrossRef]  

167. A. Costa, A. Candeo, L. Fieramonti, G. Valentini, and A. Bassi, “Calcium dynamics in root cells of Arabidopsis thaliana visualized with selective plane illumination microscopy,” PLoS ONE 8, e75646 (2013). [CrossRef]  

168. S. L. Reidt, D. J. O’Brien, K. Wood, and M. P. MacDonald, “Polarised light sheet tomography,” Opt. Express 24, 11239–11249 (2016). [CrossRef]  

169. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts, 2007).

170. D. Merino, O. E. Olarte, J. L. Cruz, J. Diez, Y. Barmenkov, M. Andres, P. Perez, and P. Loza-Alvarez, “Use of a supercontinuum laser source with low temporal coherence for elastic scattering light sheet microscopy,” in European Conference on Lasers and Electro-Optics (Optical Society of America, 2015), paper CL_P_7.

171. Z. Xu and T. E. Holy, “Development of low-coherence light sheet illumination microscope for fluorescence-free bioimaging,” Proc. SPIE 8129, 812908 (2011). [CrossRef]  

172. L. Silvestri, A. Bria, L. Sacconi, G. Iannello, and F. S. Pavone, “Confocal light sheet microscopy: micron-scale neuroanatomy of the entire mouse brain,” Opt. Express 20, 20582–20598 (2012). [CrossRef]  

173. Z. Yang, L. Mei, F. Xia, Q. Luo, L. Fu, and H. Gong, “Dual-slit confocal light sheet microscopy for in vivo whole-brain imaging of zebrafish,” Biomed. Opt. Express 6, 1797–1811 (2015). [CrossRef]  

174. N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14, 374–380 (2017). [CrossRef]  

175. X. Tao, B. Fernandez, O. Azucena, M. Fu, D. Garcia, Y. Zuo, D. C. Chen, and J. Kubby, “Adaptive optics confocal microscopy using direct wavefront sensing,” Opt. Lett. 36, 1062–1064 (2011). [CrossRef]  

176. R. Aviles-Espinosa, J. Andilla, R. Porcar-Guezenec, O. E. Olarte, M. Nieto, X. Levecq, D. Artigas, and P. Loza-Alvarez, “Measurement and correction of in vivo sample aberrations employing a nonlinear guide-star in two-photon excited fluorescence microscopy,” Biomed. Opt. Express 2, 3135–3149 (2011). [CrossRef]  

177. K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11, 625–628 (2014). [CrossRef]  

178. I. Izeddin, M. E. Beheiry, J. Andilla, D. Ciepielewski, X. Darzacq, and M. Dahan, “PSF shaping using adaptive optics for three-dimensional single-molecule super-resolution imaging and tracking,” Opt. Express 20, 4957–4967 (2012). [CrossRef]  

179. K. F. Tehrani, J. Xu, Y. Zhang, P. Shen, and P. Kner, “Adaptive optics stochastic optical reconstruction microscopy (AO-STORM) using a genetic algorithm,” Opt. Express 23, 13677–13692 (2015). [CrossRef]  

180. H. I. C. Dalgarno, T. Cizmar, T. Vettenburg, J. Nylk, F. J. Gunn-Moore, and K. Dholakia, “Wavefront corrected light sheet microscopy in turbid media,” Appl. Phys. Lett. 100, 191108 (2012). [CrossRef]  

181. L. A. Royer, W. C. Lemon, R. K. Chhetri, Y. Wan, M. Coleman, E. W. Myers, and P. J. Keller, “Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms,” Nat. Biotechnol. 34, 1267–1278 (2016). [CrossRef]  

182. D. Turaga and T. E. Holy, “Aberrations and their correction in light-sheet microscopy: a low-dimensional parametrization,” Biomed. Opt. Express 4, 1654–1661 (2013). [CrossRef]  

183. R. Tomer, M. Lovett-Barron, I. Kauvar, A. Andalman, V. M. Burns, S. Sankaran, L. Grosenick, M. Broxton, S. Yang, and K. Deisseroth, “SPED light sheet microscopy: fast mapping of biological system structure and function,” Cell 163, 1796–1806 (2015). [CrossRef]  

184. D. Axelrod, “Cell-substrate contacts illuminated by total internal reflection fluorescence,” J. Cell Biol. 89, 141–145 (1981). [CrossRef]  

185. M. Tokunaga, N. Imamoto, and K. Sakata-Sogawa, “Highly inclined thin illumination enables clear single-molecule imaging in cells,” Nat. Methods 5, 159–161 (2008). [CrossRef]  

186. C. A. Konopka and S. Y. Bednarek, “Variable-angle epifluorescence microscopy: a new way to look at protein dynamics in the plant cell cortex,” Plant J. 53, 186–196 (2008). [CrossRef]  

187. P. Theer, D. Dragneva, and M. Knop, “πSPIM: high NA high resolution isotropic light-sheet imaging in cell culture dishes,” Sci. Rep. 6, 32880 (2016). [CrossRef]  

188. J. G. Ritter, R. Veith, A. Veenendaal, J. P. Siebrasse, and U. Kubitscheck, “Light sheet microscopy for single molecule tracking in living tissue,” PLoS ONE 5, e11639 (2010). [CrossRef]  

189. J.-H. Spille, T. Kaminski, H.-P. Königshoven, and U. Kubitscheck, “Dynamic three-dimensional tracking of single fluorescent nanoparticles deep inside living tissue,” Opt. Express 20, 19697–19707 (2012). [CrossRef]  

190. J.-H. Spille, T. P. Kaminski, K. Scherer, J. S. Rinne, A. Heckel, and U. Kubitscheck, “Direct observation of mobility state transitions in RNA trajectories by sensitive single molecule feedback tracking,” Nucleic Acids Res. 43, e14 (2015). [CrossRef]  

191. T. Wohland, X. Shi, J. Sankaran, and E. H. K. Stelzer, “Single plane illumination fluorescence correlation spectroscopy (SPIM-FCS) probes inhomogeneous three-dimensional environments,” Opt. Express 18, 10627–10641 (2010). [CrossRef]  

192. A. P. Singh, J. W. Krieger, J. Buchholz, E. Charbon, J. Langowski, and T. Wohland, “The performance of 2D array detectors for light sheet based fluorescence correlation spectroscopy,” Opt. Express 21, 8652–8668 (2013). [CrossRef]  

193. J. W. Krieger, A. P. Singh, C. S. Garbe, T. Wohland, and J. Langowski, “Dual-color fluorescence cross-correlation spectroscopy on a single plane illumination microscope (SPIM-FCCS),” Opt. Express 22, 2358–2375 (2014). [CrossRef]  

194. P. Sengupta, S. Van Engelenburg, and J. Lippincott-Schwartz, “Visualizing cell structure and function with point-localization superresolution imaging,” Dev. Cell 23, 1092–1102 (2012). [CrossRef]  

195. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006). [CrossRef]  

196. W. R. Legant, L. Shao, J. B. Grimm, T. A. Brown, D. E. Milkie, B. B. Avants, L. D. Lavis, and E. Betzig, “High-density three-dimensional localization microscopy across large volumes,” Nat. Methods 13, 359–365 (2016). [CrossRef]  

197. Y. S. Hu, Q. Zhu, K. Elkins, K. Tse, Y. Li, J. A. J. Fitzpatrick, I. M. Verma, and H. Cang, “Light-sheet Bayesian microscopy enables deep-cell super-resolution imaging of heterochromatin in live human embryonic stem cells,” Opt. Nanosc. 2, 7 (2013). [CrossRef]  

198. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19, 780–782 (1994). [CrossRef]  

199. P. Hoyer, G. de Medeiros, B. Balázs, N. Norlin, C. Besir, J. Hanne, H.-G. Kräusslich, J. Engelhardt, S. J. Sahl, S. W. Hell, and L. Hufnagel, “Breaking the diffraction limit of light-sheet fluorescence microscopy by RESOLFT,” Proc. Natl. Acad. Sci. USA 113, 3442–3446 (2016). [CrossRef]  

200. M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198, 82–87 (2000). [CrossRef]  

201. J. Michaelson, H. Choi, P. So, and H. Huang, “Depth-resolved cellular microrheology using HiLo microscopy,” Biomed. Opt. Express 3, 1241–1255 (2012). [CrossRef]  

202. B.-J. Chang, V. D. P. Meza, and E. H. K. Stelzer, “csiLSFM combines light-sheet fluorescence microscopy and coherent structured illumination for a lateral resolution below 100 nm,” Proc. Natl. Acad. Sci. USA 114, 4869–4874 (2017). [CrossRef]  

203. J. Mertz and J. Kim, “Scanning light-sheet microscopy in the whole mouse brain with HiLo background rejection,” J. Biomed. Opt. 15, 016027 (2010). [CrossRef]  

204. J. D. Manton and E. J. Rees, “triSPIM: light sheet microscopy with isotropic super-resolution,” Opt. Lett. 41, 4170–4173 (2016). [CrossRef]  

205. H. P. Kao and A. S. Verkman, “Tracking of single fluorescent particles in three dimensions: use of cylindrical optics to encode particle position,” Biophys. J. 67, 1291–1300 (1994). [CrossRef]  

206. R. A. Hoebe, C. H. Van Oven, T. W. J. Gadella, P. B. Dhonukshe, C. J. F. Van Noorden, and E. M. M. Manders, “Controlled light-exposure microscopy reduces photobleaching and phototoxicity in fluorescence live-cell imaging,” Nat. Biotechnol. 25, 249–253 (2007). [CrossRef]  

207. C. Conrad, A. Wünsche, T. H. Tan, J. Bulkescher, F. Sieckmann, F. Verissimo, A. Edelstein, T. Walter, U. Liebel, R. Pepperkok, and J. Ellenberg, “Micropilot: automation of fluorescence microscopy-based imaging for systems biology,” Nat. Methods 8, 246–249 (2011). [CrossRef]  

208. D. Di Battista, D. Ancora, H. Zhang, K. Lemonaki, E. Marakis, E. Liapis, S. Tzortzakis, and G. Zacharakis, “Tailored light sheets through opaque cylindrical lenses,” Optica 3, 1237–1240 (2016). [CrossRef]  

Omar E. Olarte received his PhD in photonics from the Universitat Politècnica de Catalunya, Spain, in 2014. In the same year, he was awarded a "Severo Ochoa" excellence postdoctoral fellowship at ICFO, Spain, for research in high-resolution light-sheet microscopy. Since then he has been a postdoctoral researcher at ICFO's Super-Resolution Light Microscopy and Nanoscopy (SLN) lab. Dr. Olarte is a physicist and optical scientist with extensive experience in the development of advanced optical instrumentation for studying complex biological phenomena, from deep imaging of biological tissues and laser manipulation of small organisms to ultrafast calcium imaging of neuronal networks. He is currently developing fast, high-resolution light-sheet microscopy technologies for neurobiology applications.

Jordi Andilla obtained his PhD in applied physics from the University of Barcelona in 2009, working on holographic optical tweezers (HOT) for biomedical applications. In this work, he developed and characterized a complete HOT system and demonstrated its use in in situ biological experiments and single-molecule experiments such as DNA stretching. From 2009 to 2011 he worked in France as a postdoc at the company Imagine Optic, developing adaptive optics for biological microscopy, with a focus on techniques such as nonlinear microscopy, light-sheet microscopy, and super-resolution microscopy. He then moved to the Institute of Photonic Sciences (ICFO) in Castelldefels (Spain) as a research engineer to develop, maintain, and support the systems of the Super-Resolution Light Microscopy and Nanoscopy (SLN) lab facility. He currently develops super-resolution and light-sheet microscopes for biomedical applications, together with the corresponding protocols for non-specialist users. He also contributes actively to scientific outreach through projects such as the creation of a laser harp and legolish (legolish.org), as well as many talks and workshops.

Emilio J. Gualda graduated in telecommunication engineering and obtained his PhD at the Universitat Politècnica de Catalunya in 2005 on light propagation through optical fibers. He then held four postdoctoral positions (FORTH [Greece], LOUM [Spain], ICFO [Spain], IGC [Portugal]). In 2015 he returned to ICFO to develop high-throughput light-sheet-based microscopes within the framework of a Jóvenes Investigadores personal grant, and in 2016 Dr. Gualda was awarded a Ramón y Cajal fellowship. He has a strong background in several imaging techniques, including confocal, TPEF, SHG, THG, STED, STORM, FCS, FLIM, OPT, and LSFM, applied to C. elegans, ocular tissues, zebrafish, and 3D cell cultures, and a broad skill set in optics and photonics, microscope development and control, image and data processing, and sample handling. His scientific record includes 32 research papers, more than 40 contributions to national and international congresses and meetings, 20 invited talks, the co-organization of three EMBO practical courses at the IGC, and the supervision of a diploma thesis.

Pablo Loza-Alvarez received his PhD in laser physics from the University of St Andrews, Scotland, in 2000. He then held two postdoctoral positions, one in St Andrews and one at the Universitat Politècnica de Catalunya under the Ramón y Cajal programme. In 2004 he joined ICFO as a Group Leader, and in 2006 he entered the I3 ANEP-MICINN programme. He is currently the head of the Super-Resolution Light Microscopy and Nanoscopy lab at ICFO. Dr. Loza-Alvarez has extensive experience in microscopy and, by introducing novel photonics tools, has developed a number of new imaging techniques. These have been used for a wide variety of applications, ranging from the imaging of large model organisms (mesoscopic level) to the visualization of subcellular components (super-resolution). He has supervised 7 PhD students (1 ongoing) and co-supervised 3 others with partner universities. Dr. Loza-Alvarez has co-authored over 250 publications in international journals and conferences (h = 25, Google Scholar), has written 6 patents, and has been invited to give seminars at conferences, schools, and colloquia.



Figures (28)

Figure 1. Basic schematic of a LSFM.
Figure 2. Electronic energy level scheme showing the different excitation processes: one-photon (linear) absorption and two-photon (nonlinear) absorption.
Figure 3. Schematic of the illumination and detection arm in a LSFM.
Figure 4. LSFM setups: (a) macro LS configuration, (b) cylindrical lens (CL) configuration, and (c) DSLM configuration. See the main text for a detailed description.
Figure 5. Examples of four different LSFM architectures. (a) Multiple objectives: mSPIM has double side illumination and single side detection, all lenses in a plane parallel to the table. Sample (thinner cylinder in front of the objectives) can be positioned from the top, using a capillary (thick cylinder, entering the scene from the top). (b) Double side illumination and double side detection with only two objective lenses on a 45° configuration in diSPIM, both lenses in a plane orthogonal to the table. (c) A single objective lens and a micro-machined mirror is used in a soSPIM microscope to both excite and detect emission from fluorophores. (d) Two en face objectives and an AFM tip mirror are used in RSLM; excitation objective not shown. The LS is represented in blue and the detection cone in green.
Figure 6. Examples of four different sample scanning approaches on LSFM systems. (a) Standard sample scanning by mechanically translating the sample across the LS plane. (b) Opto-mechanic solution where the sample remains static. A galvo mirror scans the LS over the sample in coordination with a piezo stage that refocuses the detection objective. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI. (c) The detection objective depth of field is extended using a cubic phase mask to extend its depth of field, allowing all the planes of the sample to be in focus simultaneously. During acquisition a galvo mirror scans the sample, illuminating different planes. (d) Samples flow inside a FEP capillary across the LS plane on the SPIM-Fluid system. The system allows imaging of, in an automated way, hundreds of samples providing high-throughput capabilities. Adapted from [2,42,84,89], respectively. Figure (a) from Huisken et al., Science 305, 1007–1009 (2004) [2]. Reprinted with permission from AAAS.
Figure 7. Basic imaging properties of a WFM, (a) PSF and (b) MTF. See the main text for a complete description. Color intensity values are normalized to their maximum; darker colors indicate higher intensities.
Figure 8. Gaussian beams with different truncation ratios T. Beam properties (a) FOVi and (b) Dbeam, according to Eqs. (6) and (7). These curves were generated using the approximation formulas for truncated Gaussian beams reported in Ref. [138]. See the main text for more detail.
Figure 9. Examples of pupil amplitude masks for the generation of LSs based on (a) cylindrical lenses, and scanned (b) Gaussian and (c) Bessel beams. The white corresponds to a 100% transmission and the red dashed lines indicate k for maximum NA available in the excitation objective. For details on the parameters see Table 3 in the main text.
Figure 10. Example of a LS generated using a cylindrical lens. (a) Detection objective PSF, (b) LS intensity profile at the center of the FOV, (c) overall MTF, and (d) axial profile of the MTF (ky=0). Color intensity values are normalized to their maximum; darker colors indicate higher intensities.
Figure 11. Example of a 2P-LSFM generated using a cylindrical lens. (a) Detection objective intensity PSF, (b) LS intensity profile at the center of the FOV, (c) overall MTF, and (d) MTF axial profile (ky=0) for 2P-LSFM (orange) and LS (blue). Color intensity values are normalized to their maximum; darker colors indicate higher intensities.
Figure 12. Examples of beam profiles for DSLM LS generation. (a) Gaussian and (b) Bessel generating the same FOV width of around 60 μm. (c) Bessel beam generating twice the original FOV width of (a). The square root of the beam intensity is shown for better visualization. Color intensity values are normalized to their maximum; darker colors indicate higher intensities.
Figure 13. Example of LS generated for DSLM using Gaussian beams. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for DSLM (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.
Figure 14. Example of 2P-DSLM generation using Gaussian beams. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for 2P-DSLM (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.
Figure 15. Example of LS generation for DSLM with Bessel beams. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for DSLM with Bessel beams (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum, darker colors indicate higher intensities. See the description in the main text.
Figure 16. Example of LS generation for 2P-DSLM with Bessel beams. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for 2P-DSLM with Bessel beams (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.
Figure 17. Examples of pupil amplitude masks for the generation of Bessel-like beams for scanned DSLM, based on (a) sectioned Bessel beams, and (b) optical lattices. The white color corresponds to a transmissivity T=1.0, the red dotted lines indicate the maximum NA available in the excitation objective, and the green lines the limits of the base Bessel ring.
Figure 18. Example of LS generation for DSLM with SBBs. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for 1P-DSLM with SBB (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.
Figure 19. Example of LS generation for DSLM with optical lattices, optimized for optical sectioning. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for DSLM with optical lattices (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.
Figure 20. Example of LS generation for DSLM with optical lattices, optimized for structured illumination. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for DSLM with optical lattices (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.
Figure 21. Example of LS generation for DSLM with Airy beams. The zy sections of (a) the excitation beam profile and (b) the LS (scanned-beam) intensity profile, at the center of the FOV. Overall (c) MTF and (d) MTF axial profile (ky=0) for DSLM with Airy beams (orange) and LSFM (blue). Color maps for intensity values are normalized to the maximum; darker colors indicate higher intensities. See the description in the main text.
Figure 22. Optical setups for (a) multicolor 2P-LSFM, which combines bidirectional multicolor two-photon excitation and multispectral detection on a single camera [68], and (b) hyperspectral LS allowing multiple use of fluorescent markers (adapted with permission from Jahr et al., Nat. Commun. 6, 7990 [2015] [97]), published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI).
Figure 23. SPIM-FLIM optical setup [94]. See the main text for more details.
Figure 24. Raman LS optical setups. (a) A CW laser forms the LS on the sample plane using a galvo mirror (DSLM). A tunable filter (TF) scans the spectrum with an edge depending on θ, the tilt angle of the filter. Inset: spectral knife edge (KE) trace extraction from a 2D image stack at a region of interest I(x,y) with N as the total number of images, each taken at a different filter tilt angle [100]. (b) Fourier transform-based Raman LS. The illumination LS is done with a high-power laser and a cylindrical lens. The spectra are recovered using a Fourier-transform imaging spectrometer (red path). See the main text for a detailed description [101].
Figure 25. Confocal line scanning DSLM. (a) The principle of confocal line scanning by using a DSLM and a sCMOS camera. XY and XZ maximum projection intensity of 3D stacks of a 4 dpf zebrafish expressing H2B-GFP are shown for the (b), (e) Gaussian; (c), (f) Bessel; and (d), (g) Bessel confocal line cases. See the main text for more detail.
Figure 26. AO for LSFM. (a) The optical setup for correcting the aberrations using AO in the detection path of a LSFM (enclosed in the green square). The main components are: the deformable mirror (DM) that is the active element that can modify the wavefront, and the Hartmann–Shack wavefront sensor (HSWF) that measures the amount of wavefront aberration that should be compensated. (b) Maximum intensity projection of a 3D stack of 100 images (z spacing 1 mm) of a large MCTS expressing a fluorescent nuclear protein, without (w/o AO) and with AO (AO). Scale bar: 50 μm. Adapted with permission from Jorand et al., PLoS ONE 7, e35795 (2012) [50]. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.
Figure 27. LSFM with an extended DOF. (a) Zoom-in of the optical setup for WFC-LSFM [see also Fig. 6(c)] at the sample space. Some positions of the LS in the axial scanning, from Zo to Zn, are shown in green; the detection PSF is shown in red. (b) Comparison of the effective PSFs for three LS axial positions for WFM (top) and WFC-LSFM (bottom). (c) Comparison of the images obtained with standard LSFM (left) and WFC-LSFM (right). Images are maximum intensity projections of a 3D stack of the fluorescent pharynx of a C. elegans [89]. Insets show different cuts of the data at the marked dotted lines. Scale bar 50 μm.
Figure 28. LSFM with structured illumination. (a) Example of LS fringe structure generation by interference of two sheets, delimited by yellow and blue boundaries, which cross each other at the center of the FOV. (b) Example of multi-objective structured LS generation [202]. In this case, two complementarily tilted counterpropagating LSs generate a structured illumination pattern that can be rotated for isotropic SR. See text for detailed information.

Tables (4)


Table 1. Relation of Components and Parameters Used to Define LSFM in Three Different FOV Regimes


Table 2. Representative LSFM, Commercially Available


Table 3. Simulation Parameters for the Different Figures of LS Engineering Approaches


Table 4. Summary of the LS Superresolution Methods
