
Opposed-view dark-field digital holographic microscopy

Open Access

Abstract

Scattering and absorption are among the major problems in imaging the internal layers of a biological specimen. Because the specimen is structurally inhomogeneous, the structures lying above a given internal structure of interest are distributed differently from those lying below it; as a result, the scattered light from the object that falls within the angular aperture of the microscope objective is intercepted differently in each imaging view. Therefore, different spatial frequencies of the scattered light can be acquired from the two (top and bottom) views. We have arranged an opposed-view dark-field digital holographic microscope (DHM) to collect the scattered light concurrently from both views, with the aim of increasing the contrast of internal structures and improving the signal-to-noise ratio. Implementing a DHM system makes it possible to apply digital refocusing and to obtain multilayer images from each side without a depth scan of the object. The method is explained and results are presented exemplarily for a Drosophila embryo.

© 2014 Optical Society of America

1. Introduction

Imaging internal layers of biological specimens is a demanding field of research [1]. A variety of techniques has been developed in this regard, such as optical sectioning methods like confocal microscopy [2]. In parallel, scan-free methods like digital holographic microscopy (DHM) have been introduced to extract the object wave-front, yielding multilayer images of a specimen via digital refocusing as well as quantitative phase (topographic) images [3, 4]. One of the major issues in imaging the internal structures of biological specimens is absorption and scattering, which distort the information coming from the object. In some studies, researchers assumed that the first-order Born approximation is valid for the scattering process inside the specimen, i.e. the medium is approximated as a weak scatterer and multiple scattering is neglected [5–7]. However, in the majority of cases, neither does the specimen satisfy the weak-scattering approximation nor can the scatterer distribution be predicted well enough to create a corresponding model. In fluorescence microscopy, techniques have been developed to reduce this effect, based on non-linear light-matter interaction such as two-photon microscopy [8], or on confining the excitation to a selected internal layer using light-sheet microscopy [9, 10].

In many applications, staining the sample might not be feasible. Hence, label-free 3D microscopy methods have also been developed based on auto-fluorescence or scattering properties, such as optical projection tomography [11] and methods based on Raman scattering [12]. Phase tomographic approaches based on digital holographic microscopy have also been developed to extract 3D phase information of the specimen [13, 14]. The dark-field technique has been shown to be promising for improving the image contrast of internal layers in combination with optical sectioning techniques [15] and digital holographic refocusing [16]. As a wide-field, scan-free method, DHM has proven promising for monitoring live biological processes [17] and for 3D tracking of particles at video rates [18].

Despite all these developments, label-free wide-field microscopy still suffers from unwanted scattering and absorption by out-of-focus structures, which are inhomogeneously distributed inside a biological object, especially in sub-millimeter-sized organisms. This inhomogeneity arises from the distribution of different cell types within the object [19]. It has been shown that even at the cellular level the refractive index is highly inhomogeneous [14]. We take advantage of this inhomogeneity and of the fact that the structural profile of the layers above and below a structure of interest differs, which results in a different frequency band-pass for each imaging view through the angular aperture of the imaging objective. In this regard, we have arranged an experiment to extract the object wave-front simultaneously from the top and bottom views, without a depth scan of the sample, by employing an opposed-view dark-field digital holographic microscope (DHM). Digital refocusing makes the image acquisition faster and prevents distortion of the sample by scanning movements. The methodology is explained in this paper and results are presented exemplarily for a Drosophila embryo.

2. Method

In the proposed method, dark-field illumination has been implemented in the setup using bright/dark-field objectives. The analytical description of image formation in bright-field holography has been thoroughly demonstrated by other researchers [20–22]. In this section, we describe the frequency cut-off for a bright/dark-field opposed-view imaging system. A schematic of such an objective is shown in Fig. 1 (left). The objective can operate simultaneously in bright- and dark-field modes. The angle α is the angular aperture of the objective, which defines its numerical aperture (NA = sin α). θ and δθ represent the angle of the dark-field illumination and the corresponding covered angular range, respectively. Figure 1 (right) shows the frequency coverage in the [ky, kz] plane, assuming that the object experiences incident illuminating beams in all directions within both the bright- and dark-field angular range.

Fig. 1 Left: A schematic of a bright/dark-field objective. Right: The frequency band-pass in the [ky, kz] plane for a 20x Nikon bright/dark-field objective with NA = 0.45. Each blue and red curve represents the frequency band-pass corresponding to a specific incident-beam angle in the bright- and dark-field mode, respectively. The dashed red curve shows the frequency band transferred by the dark-field illumination angle of θ = 90°, for which the transmitted and reflected frequencies overlap.

The angles shown in the diagram are set to the characteristics of the objective used in the experiment (20x Nikon bright/dark-field objective, NA = 0.45). In this case, the object is illuminated from the top and the bottom, so that both the reflected and the transmitted light are collected by the objective of the top view. For each incident illuminating beam with a specific angle, the frequency band-pass of the imaging system is confined to a curve, limited by the NA of the imaging system. The blue curves in the diagram represent the frequencies corresponding to the bright-field mode, in which the frequencies bounded to the centered section are transferred in transmission and the ones lying inside the top area are transferred in reflection. The red curves represent the frequency band-pass obtained with dark-field illumination. The diagram shows that, although the dark-field frequency range in the transverse direction is limited by the NA of the objective, it is shifted toward higher frequencies in both transverse directions, which removes the background and increases the contrast of finer structures. As in the bright-field mode, the lower and upper frequency regions correspond to the transmission and reflection modes, respectively. To obtain the frequency coverage in three-dimensional frequency space, the diagram has to be rotated about the kz axis. The optimum of the dark-field cut-off frequency in the axial direction is limited by the dark-field illumination parameters (θ and δθ). In the extreme case, for a dark-field illumination angle of θ = 90°, the reflected and transmitted frequencies overlap (dashed red curve), leading to the maximum shift in the lateral component of the wave vector.
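
As a rough numerical illustration of this cut-off argument (not part of the original analysis), the maximum lateral spatial frequency transferred in transmission can be estimated from the illumination angle and the collection NA using the simple Ewald-sphere relation f_lateral,max = (sin θ_illum + NA)/λ. The following minimal Python sketch evaluates it for the parameters quoted above (NA = 0.45, λ = 660 nm); the formula and values are a simplified estimate, not taken from the paper.

```python
import numpy as np

def max_lateral_frequency(na, theta_illum_deg, wavelength_um):
    """Upper bound of the lateral spatial frequency (cycles/um) transferred for a
    plane-wave illumination at angle theta_illum and a collection aperture of
    numerical aperture `na` (simple Ewald-sphere estimate, transmission case)."""
    return (np.sin(np.radians(theta_illum_deg)) + na) / wavelength_um

wavelength_um = 0.660   # 660 nm laser used in the experiment
na = 0.45               # 20x Nikon bright/dark-field objective

# Bright-field: steepest illumination confined to the collection cone (theta <= asin(NA)).
bf = max_lateral_frequency(na, np.degrees(np.arcsin(na)), wavelength_um)
# Dark-field extreme case: grazing illumination at theta = 90 deg (dashed red curve).
df = max_lateral_frequency(na, 90.0, wavelength_um)

print(f"bright-field lateral cut-off ~ {bf:.2f} cycles/um")
print(f"dark-field  lateral cut-off ~ {df:.2f} cycles/um")
```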

Figure 2(a) shows the frequency band-pass in the case of opposed-view bright/dark-field imaging, with both reflection and transmission modes for each imaging (top and bottom) view. However, in the case of thick (sub-millimeter-sized) objects, other limiting factors have to be considered that distort the information transferred from a given internal layer, e.g. inhomogeneously distributed structures in the intervening layers, which scatter or absorb the light coming from a given structure of interest. Figure 2(b) shows a simplified schematic of such an object. It is assumed that the structure of interest at the center has a symmetric shape when viewed from the top and the bottom.

Fig. 2 (a) The spatial frequency distribution in the case of opposed-view bright/dark-field imaging, having both reflection and transmission modes for each imaging view. (b) A simplified schematic of a sub-millimeter-sized biological object. A diagram of the frequency vectors, emanating from the structure at the center, is mapped on the object in the axial direction. The green and blue arrows show the vectors directed towards the top and bottom views, respectively. Due to the inhomogeneous distribution of structures in subsequent layers, different frequencies are collectable in each imaging view. The bottom diagram represents the combination of non-intercepted frequencies collected from both views. This is similar to the case of not having the scatterers in subsequent layers.

To better explain this condition, a diagram of the frequency vectors emanating from the structure is mapped onto the object in the axial direction. The green and blue arrows show the vectors directed towards the top and bottom views, respectively. In each imaging view, some of the frequencies are intercepted by other structures in the intervening layers through absorption or scattering. Due to the inhomogeneous distribution of these structures, different frequencies are collectable in each imaging view. The bottom diagram in Fig. 2(b) represents the combination of non-intercepted frequencies collected from both views, which resembles the case of not having the scatterers in the intervening layers at all. Therefore, depending on the structural distribution of the object in the lower layers, some frequencies that are missing in the top view can be collected by the bottom view and vice versa.
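
Conceptually, the opposed-view acquisition amounts to taking the union of the spatial frequencies that survive interception in each view; the actual fusion used in this work operates on reconstructed intensities (Section 4), so the following is only a toy illustration of the frequency argument, with hypothetical random binary masks standing in for the collectable frequencies of each view.

```python
import numpy as np

# Hypothetical binary masks of collectable frequencies (1 = reaches the objective,
# 0 = intercepted by structures in the intervening layers).
rng = np.random.default_rng(0)
top_view    = rng.random((64, 64)) > 0.4
bottom_view = rng.random((64, 64)) > 0.4

# Opposed-view combination: a frequency is recovered if either view collects it.
combined = top_view | bottom_view

for name, mask in [("top", top_view), ("bottom", bottom_view), ("combined", combined)]:
    print(f"{name:9s}: {mask.mean():.0%} of frequencies collected")
```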

3. Experiment

We have arranged an opposed-view dark-field DHM to capture holograms concurrently from two opposing views (top and bottom). The hologram obtained from each view is analyzed separately, and the intensity images reconstructed for a given object layer from the two views are combined to create the opposed-view image. This procedure is repeated for each layer inside the specimen by means of digital refocusing.
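
The paper does not state which propagation kernel is used for the digital refocusing; the angular-spectrum method is a common choice in DHM and is sketched below with hypothetical sampling parameters (object-space pixel size, propagation distance). It is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_size, dz):
    """Propagate a complex object wave by a distance dz (same units as
    wavelength and pixel_size) using the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Propagation kernel; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example with a hypothetical reconstructed field and sampling (units in um):
field = np.ones((512, 512), dtype=complex)   # placeholder for a reconstructed object wave
refocused = angular_spectrum_propagate(field, wavelength=0.66, pixel_size=0.35, dz=10.0)
intensity = np.abs(refocused) ** 2
```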

Figure 3 shows a schematic of the opposed-view dark-field DHM. The system is a symmetric combination of two off-axis dark-field DHMs. The light source is a 660 nm diode laser with a coherence length of more than 1 m. Two Nikon bright/dark-field microscope objectives with 20x magnification and NA = 0.45 have been placed face to face for imaging from both views. The spacing between the objectives has been adjusted so that their dark-field illumination spots overlap. The objectives can image in both reflection and transmission modes, and since the specimen is illuminated through both objectives simultaneously, the transmitted and the reflected light from the structures both contribute to the image formation. The cameras (SVCam-ECO CCD; SVS-VISTEK) have a resolution of 2448 × 2050 pixels and a frame rate of 10 fps.

Fig. 3 The schematic of an opposed-view dark-field digital holographic microscope.

To avoid a mismatch in focusing between the opposed views, the sample holder was designed to use the same type of glass slide as both the substrate and the coverslip. The 200 µm gap between the slides was filled with sunflower oil, in which the Drosophila embryo was immersed. Figure 4 shows a photo of the actual setup, arranged on a vertical stand in the lab. The setup can operate in the dark-field (DF) and bright-field (BF) modes simultaneously for future applications. Two cameras have been installed in each view to concurrently record the BF and DF images, which are separated by a dichroic mirror installed in the imaging path of each view. The blue and red lines indicate the bright- and dark-field beam paths, respectively. The dashed lines in each imaging mode indicate the reference beam path.

Fig. 4 (a) A picture of the actual opposed-view DHM setup in the lab, arranged on a vertical stand. The setup can operate in the dark-field (DF) and bright-field (BF) modes simultaneously. The blue and red beam paths indicate the bright- and dark-field paths, respectively. The dashed lines in each mode indicate the reference beam.

Due to the scattering of the coherent light, the image suffers from strong speckle noise. To minimize this noise, the specimen has been illuminated with various speckle-fields and the final image has been obtained by averaging over these fields. A dark-field reconstructed intensity image of a Drosophila embryo for a single speckle-field illumination is shown in Fig. 5(a). In this approach, which has been explained previously for a single-view dark-field DHM [16], a diffuser has been inserted in each illumination path and rotated by a stepper motor to produce the speckle-field illuminations (Fig. 3). A hole in the middle of each diffuser allows the central portion of the beam to pass, which is used as the reference wave. After recording the required number of successive digital holograms with different speckle-fields, each hologram is analyzed separately to reconstruct the object wave and propagate it to different image planes via digital refocusing. The final reconstructed image for each image plane is then obtained by averaging over the speckle-fields. The images shown in this paper are obtained using 200 successive speckle-fields, e.g. Fig. 5(b). A typical bright-field image of this embryo, taken with a short-coherence laser at 405 nm, is shown in Fig. 5(c).
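
A minimal sketch of this averaging step is given below, assuming the angular-spectrum refocusing routine from the earlier sketch is available; reconstruct_object_wave is a hypothetical placeholder for the off-axis hologram demodulation, which the paper does not detail.

```python
import numpy as np

def reconstruct_object_wave(hologram):
    """Hypothetical placeholder for the off-axis demodulation step (selecting the
    +1 diffraction order in the Fourier domain), not specified in the paper."""
    return hologram.astype(complex)  # stand-in only

def averaged_intensity(holograms, wavelength, pixel_size, dz):
    """Average the refocused intensity of one layer over a set of speckle-field holograms."""
    acc = None
    for h in holograms:
        field = reconstruct_object_wave(h)
        # angular_spectrum_propagate() is the routine from the refocusing sketch above.
        refocused = angular_spectrum_propagate(field, wavelength, pixel_size, dz)
        i = np.abs(refocused) ** 2
        acc = i if acc is None else acc + i
    return acc / len(holograms)

# e.g. 200 successive speckle-field holograms for one view and one layer:
# layer_image = averaged_intensity(holograms, wavelength=0.66, pixel_size=0.35, dz=10.0)
```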

Fig. 5 (a) The reconstructed dark-field intensity image of a Drosophila embryo taken with one speckle-field illumination. (b) The reconstructed intensity after averaging over 200 successive speckle-field illuminations. (c) The bright-field image of the same Drosophila embryo, taken with a short-coherence source at a wavelength of 405 nm. The scale bar in (b) is 25 µm.

4. Results and discussion

To combine the information obtained from the opposed views, the counterpart images of each specific object layer, obtained from the two views, first have to be selected, and the image combination is then applied to each corresponding pair. To remove any non-uniformity in the intensity distribution, which might be due to absorption or scattering in some upper/lower portions of the specimen (Fig. 6(a)), the intensity of the image has been locally enhanced (Fig. 6(b)), as suggested in Refs. [23, 24]. Afterwards, a number of element-wise self-multiplications have been applied to each image matrix to obtain the desired contrast between the structures and the background caused by scattering from the other layers (Fig. 6(c)).
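
A minimal sketch of this pre-processing is given below. It uses CLAHE (contrast-limited adaptive histogram equalization) only as a stand-in for the mosaicking-based local enhancement of Refs. [23, 24], which the paper does not reproduce in detail, and the number of self-multiplications is a hypothetical example value.

```python
import numpy as np
from skimage import exposure

def preprocess(intensity, n_self_mult=2):
    """Locally enhance the intensity and raise the contrast of structures against
    the scattering background by element-wise self-multiplication."""
    img = intensity / intensity.max()            # normalize to [0, 1]
    img = exposure.equalize_adapthist(img)       # local enhancement (stand-in for Refs. [23, 24])
    for _ in range(n_self_mult):
        img = img * img                          # element-wise self-multiplication
    return img / img.max()
```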

Fig. 6 (a) A raw reconstructed dark-field intensity image for a given object plane of a Drosophila embryo, averaged over speckle-fields. (b) The locally enhanced version of the image. (c) The contrast-enhanced image using element-wise self-multiplications of the image matrix. The scale bar in (a) is 25 µm.

An intensity image of an arbitrary internal layer and its opposed-view counterpart are shown in Figs. 7(a) and 7(b), respectively. We have investigated two different approaches to fuse the opposed-view images: pixel-based and tile-based. In the pixel-based approach, a region of 10 × 10 pixels centered on each pixel has been selected. The standard deviation (Std) of the intensity distribution over the selected region has been calculated and compared with the corresponding value for the same pixel in the opposed-view image. The pixel value with the larger Std has then been assigned to the corresponding pixel position in the final image. This process has been performed for every pixel of both images to derive the final fused image, which is shown in Fig. 7(c) for the given layer. In the tile-based approach, the images have been divided into small patches, e.g. 25 × 25 pixels, and the Std of the intensity has been calculated for each tile pair from the opposed-view images. The tile with the larger Std is placed in the corresponding region of the final fused image. The image obtained with this approach is shown in Fig. 7(d). The sub-figures in Fig. 7(e) show magnified images of the regions marked by numbers in Fig. 7(a), obtained from the top and bottom views. For better visibility, some structures that are present in one view but missing in the other have been marked with arrows in Fig. 7(e). The magnified fused sub-images of the same regions are shown in Fig. 7(f). The structures present in either view are visible together in the fused images, without a reduction in image quality. From the fused sub-images it can be seen that the results of the two approaches are almost identical, as long as the tile dimension is small enough. The advantage of the tile-based approach, however, is its significantly shorter calculation time compared to the pixel-based one, which matters when a large number of layers has to be processed.
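
A minimal Python sketch of the two fusion rules is given below, assuming the top- and bottom-view intensity images of a layer are already registered to the same grid; the window and tile sizes follow the values quoted in the text, and the implementation details are illustrative rather than the authors' code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, size=10):
    """Standard deviation of the intensity in a size x size window around each pixel."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fuse_pixel_based(top, bottom, size=10):
    """Keep, per pixel, the value from the view with the larger local Std."""
    return np.where(local_std(top, size) >= local_std(bottom, size), top, bottom)

def fuse_tile_based(top, bottom, tile=25):
    """Keep, per tile, the patch from the view with the larger Std."""
    fused = bottom.copy()
    ny, nx = top.shape
    for y in range(0, ny, tile):
        for x in range(0, nx, tile):
            sl = (slice(y, y + tile), slice(x, x + tile))
            if top[sl].std() >= bottom[sl].std():
                fused[sl] = top[sl]
    return fused
```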

Fig. 7 A reconstructed intensity image of (a) the top and (b) the bottom view for a given layer of a Drosophila embryo. (c, d) The fused images obtained using the pixel-based and the tile-based approach, respectively. (e) The magnified top- and bottom-view images of the regions marked with numbers in (a). The arrows mark some of the structures visible in one view while missing in the other. (f) The magnified images of the same regions as in (a) after the image fusion process. The scale bar in (a) is 25 µm.

It should be noted that in the conventional tile-based combination approach, sharp borders normally appear after the mosaicking process (Fig. 8(a)). To avoid this and to obtain the smooth image shown in Fig. 8(b), each patch has been selected such that it shares an overlapping area of 1/4 of the tile size with its neighbors [23, 24]. The overlapping area of each tile has then been weighted using the sigmoid function S(t) = [1 + exp(−at)]⁻¹, where the parameter a defines the slope of the function. In each patch, the left (top) and right (bottom) overlapping areas are weighted with SL(t) = S(t) and SR(t) = S(−t), respectively (Fig. 8(c)). Afterwards, each weighted tile has been added to a zero-valued matrix of the size of the original image to create the enhanced image. Figure 8(c-1) shows two neighboring tiles stitched together conventionally, while Fig. 8(c-2) shows the result after weighting the overlapping areas of the right and left tiles with SL and SR, respectively. It can be seen that the intensity transition from one patch to the other is smoother in Fig. 8(c-2). The final opposed-view images obtained for each object layer have been stacked together to produce a multimedia file (Media 1), presenting the digital refocusing through the internal structures of the Drosophila embryo (Fig. 8(d)).
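
A minimal one-dimensional sketch of this weighting scheme is given below, assuming an overlap of one quarter of the tile width and a complementary pair of sigmoid ramps so that the contributions of two neighboring tiles sum to one across the overlap; the slope parameter a and the tile sizes are hypothetical example values.

```python
import numpy as np

def sigmoid(t, a=1.0):
    return 1.0 / (1.0 + np.exp(-a * t))

def blend_tiles_1d(left_tile, right_tile, overlap, a=0.5):
    """Blend two neighboring 1-D tiles over their overlapping region using
    complementary sigmoid weights S_L(t) = S(t) and S_R(t) = S(-t) = 1 - S(t)."""
    t = np.linspace(-overlap / 2, overlap / 2, overlap)
    w_up, w_down = sigmoid(t, a), sigmoid(-t, a)   # rising ramp for the incoming tile, falling for the outgoing one
    out = np.zeros(len(left_tile) + len(right_tile) - overlap)
    out[:len(left_tile)] += left_tile * np.concatenate(
        [np.ones(len(left_tile) - overlap), w_down])      # taper the right edge of the left tile
    out[len(left_tile) - overlap:] += right_tile * np.concatenate(
        [w_up, np.ones(len(right_tile) - overlap)])       # ramp up the left edge of the right tile
    return out

# Example: two 25-sample tiles with a 6-sample overlap (about 1/4 of the tile size):
tile_a = np.full(25, 1.0)
tile_b = np.full(25, 2.0)
smooth = blend_tiles_1d(tile_a, tile_b, overlap=6)
```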

Fig. 8 (a) A portion of the final fused image obtained using conventional mosaicking and (b) by taking overlapping patches and applying the weighting process. (c) The sigmoid functions, SL and SR, used for weighting the left (top) and right (bottom) overlapping sections of each tile; (c.1) an example of conventional mosaicking and (c.2) the result of applying the sigmoid weighting function. (d) The stack of images obtained from the opposed-view fusion for different layers, leading to a digital refocusing multimedia file (Media 1) of the internal structure of a Drosophila embryo.

For comparison, confocal images of a texture (from an optical cleaning tissue) are shown in Fig. 9 along with the images obtained through opposed-view dark-field imaging. The images show two layers of the same tissue separated by 20 µm. The confocal images were taken with a 50x objective with an NA of 0.8 at a wavelength of 850 nm, and the opposed-view images were taken with the 20x dark-field objectives with an NA of 0.45. As our system is a label-free method, non-fluorescent confocal microscopy was chosen for the comparison.

Fig. 9 Confocal images of a texture (from an optical cleaning tissue), along with the images obtained through opposed-view dark-field imaging, for two layers of the same tissue separated by 20 µm. The confocal image was taken with a 50x objective with an NA of 0.8 at a wavelength of 850 nm and the opposed-view image was taken with a 20x dark-field objective with an NA of 0.45. The scale bar is 30 µm.

5. Conclusions

In this work we have shown that imaging through opposed-view channels, including transmission and reflection modes, reveals more structures of an internal layer when imaging sub-millimeter-sized organisms. By employing a DHM system, the multilayer imaging can be performed without disturbing the specimen by scanning or any other kind of movement. The image combination approach using overlapping tiles and a weighting procedure proved effective in removing the sharp borders that otherwise appear in tile-based image combination and in providing a smooth image, while retaining the speed advantage of tile-based processing. The proposed setup can readily be employed to capture the movement of a specimen in a biological medium by utilizing a high-speed camera. Currently, we are working on recording a live biological process, such as the development of a Drosophila embryo, by additionally employing the bright-field mode to measure the phase of the object wavefront and to detect the change in the optical thickness of the specimen during its development.

Acknowledgment

The authors would like to acknowledge the support of the DFG-Deutsche Forschungsgemeinschaft (German Research Foundation) under grant No. OS 111/33-1. The authors also gratefully thank Prof. Dr. Anette Preiss from the Institut für Genetik at Universität Hohenheim in Stuttgart, Germany, for providing the Drosophila embryos, and Dr. Daniel Claus and Florian Mauch from the Institut für Technische Optik for valuable discussions on image processing and for providing the confocal images, respectively.

References and links

1. A. Abbott, "Cell culture: Biology's new dimension," Nature 424(6951), 870–872 (2003).

2. J. A. Conchello and J. W. Lichtman, "Optical sectioning microscopy," Nat. Methods 2(12), 920–931 (2005).

3. M. Kim, "Principles and techniques of digital holographic microscopy," SPIE Rev. 1, 018005 (2010).

4. J. Rosen and G. Brooker, "Non-scanning motionless fluorescence three-dimensional holographic microscopy," Nat. Photonics 2(3), 190–195 (2008).

5. N. Streibl, "Three-dimensional imaging by a microscope," J. Opt. Soc. Am. A 2(2), 121–127 (1985).

6. M. Gu, Principles of Three Dimensional Imaging in Confocal Microscopes (World Scientific, Singapore, 1996).

7. J. Lim, H. Ding, M. Mir, R. Zhu, K. Tangella, and G. Popescu, "Born approximation model for light scattering by red blood cells," Biomed. Opt. Express 2(10), 2784–2791 (2011).

8. F. Helmchen and W. Denk, "Deep tissue two-photon microscopy," Nat. Methods 2(12), 932–940 (2005).

9. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, "Optical sectioning deep inside live embryos by selective plane illumination microscopy," Science 305(5686), 1007–1009 (2004).

10. F. O. Fahrbach and A. Rohrbach, "Propagation stability of self-reconstructing Bessel beams enables contrast-enhanced imaging in thick media," Nat. Commun. 3, 632 (2012).

11. J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, "Optical projection tomography as a tool for 3D microscopy and gene expression studies," Science 296(5567), 541–545 (2002).

12. Y. Ozeki, Y. Kitagawa, K. Sumimura, N. Nishizawa, W. Umemura, S. Kajiyama, K. Fukui, and K. Itoh, "Stimulated Raman scattering microscope with shot noise limited sensitivity using subharmonically synchronized laser pulses," Opt. Express 18(13), 13708–13719 (2010).

13. F. Charrière, A. Marian, F. Montfort, J. Kuehn, T. Colomb, E. Cuche, P. Marquet, and C. Depeursinge, "Cell refractive index tomography by digital holographic microscopy," Opt. Lett. 31(2), 178–180 (2006).

14. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, "Tomographic phase microscopy," Nat. Methods 4(9), 717–719 (2007).

15. C. W. Lee, M. J. Chen, J. Y. Cheng, and P. K. Wei, "Morphological studies of living cells using gold nanoparticles and dark-field optical section microscopy," J. Biomed. Opt. 14(3), 034016 (2009).

16. A. Faridian, G. Pedrini, and W. Osten, "High-contrast multilayer imaging of biological organisms through dark-field digital refocusing," J. Biomed. Opt. 18(8), 086009 (2013).

17. B. Kemper, A. Bauwens, A. Vollmer, S. Ketelhut, P. Langehanenberg, J. Müthing, H. Karch, and G. von Bally, "Label-free quantitative cell division monitoring of endothelial cells by digital holographic microscopy," J. Biomed. Opt. 15(3), 036009 (2010).

18. J. Hahn, S. Lim, K. Choi, R. Horisaki, and D. J. Brady, "Video-rate compressive holographic microscopic tomography," Opt. Express 19(8), 7289–7298 (2011).

19. D. Oron, D. Yelin, E. Tal, S. Raz, R. Fachima, and Y. Silberberg, "Depth-resolved structural imaging by third-harmonic generation microscopy," J. Struct. Biol. 147(1), 3–11 (2004).

20. R. Chmelík, "Three-dimensional scalar imaging in high-aperture low-coherence interference and holographic microscopes," J. Mod. Opt. 53(18), 2673–2689 (2006).

21. S. S. Kou and C. J. R. Sheppard, "Imaging in digital holographic microscopy," Opt. Express 15(21), 13640–13648 (2007).

22. V. Lauer, "New approach to optical diffraction tomography yielding a vector equation of diffraction tomography and a novel tomographic microscope," J. Microsc. 205(2), 165–176 (2002).

23. D. Fedorov, B. Sumengen, and B. S. Manjunath, "Mosaicking based framework for local enhancement of bio imagery," in Workshop on Multiscale Biological Imaging, Data Mining & Informatics (Santa Barbara, CA, USA, 2006).

24. P. J. Burt and E. H. Adelson, "A multiresolution spline with application to image mosaics," ACM Trans. Graphics 2(4), 217–236 (1983).

Supplementary Material (1)

Media 1: MOV (4605 KB)     
