
Single-frame 3D lensless microscopic imaging via deep learning


Abstract

Since the pollen of different species varies in shape and size, visualizing the 3-dimensional structure of a pollen grain can aid in its characterization. Lensless sensing is useful for reducing both optics footprint and cost, while the capability to image pollen grains in 3-dimensions using such a technique could be truly disruptive in the palynology, bioaerosol sensing, and ecology sectors. Here, we show the ability to employ deep learning to generate 3-dimensional images of pollen grains using a series of 2-dimensional images created from 2-dimensional scattering patterns. Using a microscope to obtain 3D Z-stack images of a pollen grain and a 520 nm laser to obtain scattering patterns from the pollen, a single scattering pattern per 3D image was obtained for each position of the pollen grain within the laser beam. In order to create a neural network that transforms a single scattering pattern into the different 2D images of the Z-stack, additional Z-axis information must be added to the scattering pattern. This information was therefore encoded into the scattering pattern image channels, such that the scattering pattern occupied the red channel, and a value indicating the position in the Z-axis occupied the green and blue channels. Following neural network training, 3D images were formed from the collated generated 2D images. The volumes of the pollen grains were generated with a mean accuracy of ∼84%. The development of airborne-pollen sensors based on this technique could enable the collection of rich data that would be invaluable to scientists for understanding mechanisms of pollen production, climate change, and effects on wider public health.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Pollen can lead to allergic rhinitis, with individuals being susceptible to different types of pollen, such as tree, grass or weed [1,2]. In addition, the morphology of pollen can also play an important role as an indicator of the environment and climate [3,4], being affected by ecological conditions [5], including soil fertility [6], moisture [7,8] and temperature [9]. Since pollen can vary in shape and size [10], a sensor that could determine the 3D structure of a pollen grain at a specific location, in real time, would not only aid in identification of the species (owing to the extra degrees of information), but would also be useful for understanding the environment in which the grains were produced or have existed. Such sensor capability would also aid in identifying the species that causes an individual the most severe hay fever symptoms, allowing them to take steps to mitigate exposure at times and locations where that specific species is most prevalent.

Although pollen grain sensors do exist, these optical counters generally only output a value for the number of particulates per unit volume sampled (broken down into specific size ranges) and species identification is limited [11], or they are relatively large [12] and potentially unsuitable for mass distribution. Whilst more accurate methods of species determination exist via collection using traps [13], such techniques require the collected samples to be examined in a laboratory [14,15], which causes an unavoidable time-lag between pollen levels in the environment and species-specific pollen count data, while 3D imaging can require laboratory microscopes that can be bulky and expensive [16–18]. Therefore, a sensor that can image and identify pollen in 3D in real time in the outdoor environment would be beneficial both to individuals and to the scientific community. Furthermore, making such sensors as small and low-cost as possible would allow wider distribution and therefore higher geographic resolution to be obtained. Small-footprint optical sensors can be achieved by minimising the number and complexity of elements, i.e., by making them lensless through removal of the imaging lenses [19]. In the method we describe here, light is scattered from pollen directly onto a CMOS camera sensor, and the resulting scattering pattern (recorded as a computer image file) contains, encoded within it, details of the pollen grain's size and shape [20–23].

Since most cameras only record the intensity of the scattered light, information regarding the phase is lost, meaning an exact description of the object’s size and shape is not directly possible. Therefore, creating an inverse function to transform a scattering pattern into an image of the object is challenging. Phase retrieval algorithms offer a solution through object oversampling, via ensuring that the object has no intensity outside a certain region [24,25], or via collection of multiple scattering patterns (e.g., ptychography) [26]. These algorithms can be time-consuming, and so alternative, more efficient deep learning methods have recently been explored [27,28].

Deep learning has gained great interest in the past few years, owing to its application in rapid object detection [29], such as for classification of sport videos [30] or, related to this field of work, pollen species identification in microscopy [31]. Deep learning has also been used for rapid image-to-image translation [32], such as for transforming an image of a sketch into a photograph [33], and, relevant to palynology, for transforming a visible light microscope image of pollen into a scanning electron microscope image [34]. In the domain of phase retrieval, deep learning has been used for image reconstruction using multiple scattering patterns [35] and ptychography [36]. Deep learning has also been used for phase retrieval in digital holography, such as using the interference between two coherent beams to obtain holograms that can be converted into highly accurate reconstructed depth profiles using physics-generalization-enhanced neural networks [37]. Other applications of deep learning methods for volumetric imaging include improving light-sheet fluorescence microscopy [38], in which the image reconstruction was achieved at 100 times the speed of a traditional method, hence proving invaluable for real-time imaging of biological behaviour. Our work aims to extend deep learning methods to lensless sensing, to enable the potential for small-footprint devices.

Although 2D image generation of pollen grains from their three-wavelength scattering patterns has been accomplished using deep learning [39], since a pollen grain’s size and shape vary in 3D depending on its species, age and hydration level, being able to produce 3D images of these particulates would provide additional information that could be highly beneficial in determining such parameters. Here, scattering patterns were obtained by illuminating pollen grains of different species with a laser beam, and the data were then used to train a deep learning neural network to generate 2D images that could be collated to form 3D images of the pollen grains. The trained neural network was then tested on scattering patterns it had not previously seen. The method involves replicating a single scattering pattern into multiple scattering patterns corresponding to different Z-axis depths (by encoding depth into the green and blue channels via different intensity values), which are then fed into the neural network to generate multiple 2D images, corresponding to different Z-axis depths of a 3D image.

2. Method

2.1 Sample fabrication

Antirrhinum majus pollen grains were obtained from garden flowers, Narcissus pollen grains were collected from the University of Southampton grounds, and Populus deltoides pollen grains were purchased from Sigma Aldrich. For ease of location under the microscope, the pollen grains were dispersed onto different regions of 25 mm × 75 mm × 1 mm fused silica glass slides. The size of the pollen grains ranged from ∼ 10 µm to ∼ 50 µm.

2.2 Scattering pattern collection setup

The beam of light from a laser diode operating at 520 nm and < 1 mW output power was focussed by a Nikon Eclipse microscope with a 50× objective lens onto individual pollen grains (see Fig. 1(a)). During the experiment, back-reflection optical microscope images of the pollen grains were simultaneously recorded via the 50× objective lens and a color camera (1280 × 1024 pixels, Thorlabs Inc., DCC1645C), where the live view from this camera allowed accurate alignment of the laser beam focus with respect to the pollen grain for scattering pattern collection (see Fig. 1(b) for an example of a scattering pattern). The pollen-covered glass slide was mounted on a motorised 3-axis translation stage, which enabled accurate positioning of the pollen grains within the focus of the laser beam. In order to capture the forward scattered laser light from the pollen, a color camera sensor (1280 × 1024 pixels, Thorlabs Inc., DCC1645C) was positioned ∼1 mm behind the substrate and operated with a 5 ms integration time. Whilst higher resolution scattering patterns could have been obtained by placing the camera sensor at a larger distance, the 1 mm distance was chosen so that, given the size of the sensor, high frequency information (i.e., from large angles) could be collected on a single camera sensor in a single frame. The white light used to obtain optical microscope images was turned off during scattering pattern collection. The microscope slide was translated by ± 5 µm in the X, Y and Z directions about a central position. Scattering patterns were recorded at each of these positions, giving a total of 27 different scattering patterns for each pollen grain.


Fig. 1. (a) Diagram showing the experimental setup, which includes a laser beam that was focussed onto pollen grains present on a glass slide. The light scattered in the forward direction from the pollen grains was collected by a camera sensor placed ∼1 mm away from the glass slide. The pollen grains were imaged via back-reflection of white light onto a separate camera sensor. (b) Antirrhinum majus experimental scattering pattern and (c) corresponding maximum intensity projection microscope image.


2.3 Imaging

Rather than using the 50× objective that was employed for the scattering part of this experiment and for pollen alignment, we employed direct wide-field imaging using a 100× objective for collection of the Z-stack images that would be used to create the 3D profile. Here, the 100× objective offered a narrower depth of field than the 50× objective, thus improving depth resolution when imaging the pollen grains at multiple heights. Such a setup was chosen for ease of collection, but it is potentially feasible that our neural network approach could be extended to other imaging methods, such as a confocal microscope, to collect the Z-stack images for the training data set. The pollen sample substrate was translated along the Z-axis using a 3-axis motorized stage to obtain a Z-stack of microscope images (with 35 steps of 2 µm in the Z direction). Figure 1(c) shows an example of a maximum intensity projection of a Z-stack of images taken in the XY plane, formed by taking the brightest pixel along the Z-axis to form a single 2D image, for an Antirrhinum majus pollen grain [40].
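For illustration only (this is a minimal sketch, not the processing code used in this work), a maximum intensity projection can be formed in Matlab from a saved Z-stack as follows, where the file names are hypothetical and the frames are assumed to be RGB images from the color camera:

first = rgb2gray(imread('slice_01.png'));                 % hypothetical file name
zstack = zeros([size(first), 35], 'like', first);         % 35 slices, 2 um apart
zstack(:,:,1) = first;
for k = 2:35
    zstack(:,:,k) = rgb2gray(imread(sprintf('slice_%02d.png', k)));
end
mip = max(zstack, [], 3);    % brightest pixel along Z gives the projection
imshow(mip)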

Whilst the maximum intensity projection images in Fig. 2 show defocusing, the 2D slices used for training were thresholded (via contrast enhancement and binarizing to produce black and white images in which the in-focus information was white and the out-of-focus information was black) to ensure that only in-focus structures were present for reconstructing the 3D structure (since the most intense pixels corresponded to the in-focus information). The 2D slices were collated into a 3D array, which was resized so that each 3D pixel (voxel) in the volume had an equal edge length of ∼0.5 µm, forming a 128 × 128 × 128 voxel volume with 8-bit grayscale values. A 3D blur function (smooth3 in Matlab) with a width of 5 pixels was applied to the 3D array to aid in interpolation between the expanded Z-stack layers. Each pollen grain was then artificially translated within its 3D volume to form 27 different volumes of data, each corresponding to a position at which an experimental scattering pattern was obtained (i.e., combinations of −5, 0 and +5 µm in X, Y and Z).
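A minimal Matlab sketch of this volume preparation is given below, assuming the Z-stack has been loaded into a grayscale array zstack; the contrast enhancement, threshold and shift values are illustrative rather than the exact values used here:

bw = false(size(zstack));
for k = 1:size(zstack, 3)
    slice = imadjust(zstack(:,:,k));      % contrast enhancement (illustrative)
    bw(:,:,k) = imbinarize(slice);        % keep only the bright, in-focus structure
end
vol = imresize3(uint8(bw) * 255, [128 128 128]);   % resample to ~0.5 um voxels, 8-bit values
vol = smooth3(double(vol), 'box', 5);              % 3D blur to interpolate between Z-stack layers
shifted = circshift(vol, [10, -10, 0]);            % example ~5 um (10 voxel) translation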


Fig. 2. Examples of (a) Antirrhinum majus, (b) Populus deltoides and (c) Narcissus pollen grains, showing the experimental scattering pattern (top) with corresponding maximum intensity projection image (middle) and 3D image of pollen in 128 × 128 × 128-pixel volume (bottom). Here, each XYZ pixel in the 3D volume is ∼0.5 µm in length.


Figure 2 gives three examples of cropped scattering patterns, maximum intensity projection microscope images and the corresponding 3D plots of the pollen grains, as used in the training of the neural network. The images show the variety of shapes and sizes of the pollen, and in turn, the variations in the scattering patterns.

2.4 Data organization

In order to use a 2D image generation neural network for the task of creating 3D images, the data first had to be collated into a suitable format. Each Z-stack image slice was associated with a scattering pattern recorded from the corresponding position. However, this meant that the same, single scattering pattern would be associated with every slice in the Z-stack (even though the Z-stack microscope images vary significantly with Z position). If no additional information were supplied, the neural network would struggle to learn the relationship between the scattering pattern and the Z-stack image structure, which varies along the Z-axis. Therefore, additional information is needed to indicate to the neural network the Z-axis position associated with each scattering pattern within the Z-stack. This information was encoded by replicating each scattering pattern 128 times (since there are 128 associated Z-stack images) and reformatting the copies, such that the scattering pattern (formed from green light) occupies the red channel, and a value representing the position in the Z-axis occupies the green and blue channels (a value of 0–255, in steps of 2 for each slice), as demonstrated in Fig. 3. All of the scattering patterns were cropped and resized to 128 × 128 × 3 pixels. The associated single-channel 128 × 128 pixel Z-stack image was also transformed into 3 channels, such that the same 8-bit greyscale image occupied each RGB channel, also giving a 128 × 128 × 3 pixel image. Each pollen grain, in each position, therefore had 128 image pairs to use as training data (scattering pattern and corresponding image slice from the reformatted Z-stack).
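The channel encoding can be sketched in Matlab as follows (sp and zslices are assumed variable names holding the cropped 128 × 128 scattering pattern and the 128 resized Z-stack slices; this is illustrative rather than the published code):

inputs  = zeros(128, 128, 3, 128, 'uint8');
targets = zeros(128, 128, 3, 128, 'uint8');
for z = 1:128
    zValue = uint8(2 * (z - 1));                          % Z position encoded as 0-254 in steps of 2
    inputs(:,:,1,z) = sp;                                 % red channel: the scattering pattern
    inputs(:,:,2,z) = zValue;                             % green channel: Z-axis value
    inputs(:,:,3,z) = zValue;                             % blue channel: Z-axis value
    targets(:,:,:,z) = repmat(zslices(:,:,z), [1 1 3]);   % greyscale slice copied into all channels
end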


Fig. 3. Examples of the experimental scattering pattern with a color gradient in the green and blue channels (GB), indicating different heights for Z-axis information related to the Z-stack images from the corresponding 3D image of a Populus deltoides pollen grain.


2.5 Neural network training and implementation

The deep learning neural network was trained and tested in Matlab, using the Deep Learning Toolbox, with code openly available from GitHub (https://github.com/matlab-deep-learning/pix2pix.git). All training and testing were carried out using an Intel Core i7-6700 CPU @ 3.40 GHz desktop computer with 64 GB RAM and an NVIDIA GeForce RTX 2080 Ti graphics processing unit (GPU) with 11 GB GDDR6 RAM. The image generation neural network utilizes the pix2pix model [41], which is based on a U-Net architecture [42], and is displayed as a simplified diagram in Fig. 4. As shown in the figure, the input to the neural network was a 2D scattering pattern and the output was a 2D image. The architecture of the generator has a contracting path (left) and an expanding path (right), with the downscaling and upscaling operations represented by the green and orange arrows, respectively. The contracting path consists of convolutional layers and the expansive path consists of transposed convolutional layers. The rectangular boxes (yellow for contraction and blue for expansion) represent the multi-channel feature maps of each layer, where the number of channels of each feature map is indicated in text beneath each box. Skip connections, which concatenate the feature maps from the contracting path with the corresponding feature maps from the expanding path, are represented by the black dotted-line arrows. Concatenation allows the neural network to utilise information from the layers in the contracting path in the layers of the expansive path. The output of the generator is passed to the discriminator to provide feedback on the image generation accuracy.
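To illustrate the generator structure (and only as a heavily simplified sketch with far fewer layers than the pix2pix generator in the repository), a U-Net-style contracting/expanding path with one skip connection could be assembled in the Deep Learning Toolbox as follows:

layers = [
    imageInputLayer([128 128 3], 'Name', 'in', 'Normalization', 'none')
    convolution2dLayer(4, 64, 'Stride', 2, 'Padding', 'same', 'Name', 'down1')      % 64 x 64 x 64
    reluLayer('Name', 'down1_relu')
    convolution2dLayer(4, 128, 'Stride', 2, 'Padding', 'same', 'Name', 'down2')     % 32 x 32 x 128
    reluLayer('Name', 'down2_relu')
    transposedConv2dLayer(4, 64, 'Stride', 2, 'Cropping', 'same', 'Name', 'up1')    % 64 x 64 x 64
    reluLayer('Name', 'up1_relu')
    concatenationLayer(3, 2, 'Name', 'skip1')                                       % concatenate with down1
    transposedConv2dLayer(4, 3, 'Stride', 2, 'Cropping', 'same', 'Name', 'up2')     % 128 x 128 x 3
    tanhLayer('Name', 'out')];
lgraph = layerGraph(layers);
lgraph = connectLayers(lgraph, 'down1_relu', 'skip1/in2');   % skip connection from the contracting path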


Fig. 4. Simplified diagram of the neural network.


Overall, 8 sets of scattering patterns and Z-stack microscope images were used for training the image generation neural network. These include 3 sets of data for Antirrhinum majus, 2 for Populus deltoides, 2 for Narcissus, and a set of data recorded with the laser beam but without any pollen grains present. This gave a total of 27648 image pairs that were fed into the neural network for training, with each pair containing a scattering pattern and a Z-stack image slice. The neural network training lasted for a total of 10 epochs, with a learning rate of 0.0002, a minibatch size of 2 and the adaptive moment estimation (ADAM) optimizer. The L1 loss (least absolute deviations) between the generated images and the actual experimental images was minimized, with smaller values corresponding to higher accuracy image generation. During testing, the trained neural network took ∼0.04 seconds to generate an image.
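For illustration only, the L1 term of a pix2pix-style generator loss can be written as follows, where generated and target hold a generated image and its experimental counterpart as double arrays scaled to [0, 1], and the weighting of 100 is the value suggested in the original pix2pix work [41] rather than a value taken from this paper:

l1Loss = mean(abs(generated - target), 'all');      % least absolute deviations
generatorLoss = adversarialLoss + 100 * l1Loss;     % adversarial term plus weighted L1 term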

3. Results and discussion

Following training, datasets from 2 pollen grains not used in training were used for testing the neural network. The 2D scattering patterns, with Z information encoded in the green and blue channels (as per the training), were fed into the neural network to produce predictions of the 2D image that would be expected at each Z slice. These 2D image predictions were then compiled into a Z-stack and processed to produce a 3D image. The 3D images generated by the neural network from scattering patterns previously unseen by the neural network are shown in Fig. 5, with image pixels of zero intensity corresponding to empty 3D space in the plots. Here, test data from (a) Populus deltoides and (b) Antirrhinum majus pollen grains are shown, with the actual 3D image (1st row) and generated 3D image (2nd row) presented using azimuth = -37.5° and vertical elevation = 30° (1st column), and the corresponding X-Y plane (2nd column) and X-Z plane (3rd column) for each 3D image in (a) and (b). Although there is some inaccurate generation towards the edge of the volume in (a) at X = 20 and Y = 40, it is evident that the spheroidal shape of Populus deltoides has been generated successfully, while the elongated shape of Antirrhinum majus has also been generated successfully.
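This slice-by-slice generation and stacking can be sketched as follows, where generateImage stands in for the trained pix2pix generator (it is not the repository's actual function name) and sp is the cropped 128 × 128 test scattering pattern:

volume = zeros(128, 128, 128, 'uint8');
for z = 1:128
    zValue = uint8(2 * (z - 1));                                              % Z encoding as in training
    netInput = cat(3, sp, repmat(zValue, 128, 128), repmat(zValue, 128, 128));
    prediction = generateImage(netInput);                                     % 128 x 128 x 3 generated image
    volume(:,:,z) = prediction(:,:,1);                                        % channels are identical; keep one
end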


Fig. 5. The capability of the image neural network for (a) Populus deltoides and (b) Antirrhinum majus. In each case a comparison between actual experiment images (1st row) and generated images (2nd row) of the pollen grains is provided. From left to right, the columns represent a 3D view (1st column), X-Y plane (2nd column) and X-Z plane (3rd column). Here, 1 pixel is ∼0.5 µm in length. The maximum sizes of the pollen grains in X, Y and Z are indicated by the dashed lines and arrows.


The volume of each pollen grain for both the actual (Z-stack of binarized slices created in the same way as the neural network input training data) and the generated 3D images was calculated by summing the pixels of all slices for each grain and then converting pixels to microns (1 voxel = 0.14 µm³). For the actual pollen grain images, the pollen grain volume was calculated to be (a) 5629 µm³ and (b) 4662 µm³, while the volume of the pollen grains in the generated images was calculated to be (a) 4458 µm³ and (b) 3799 µm³, meaning the volumes were generated with a mean accuracy of ∼84%. These volume metrics, as well as the accuracy of the generated images compared with the actual experimental images, are detailed in Table 1. The table includes the absolute difference in the distance between the two most widely separated points in the X-axis, Y-axis and Z-axis (indicated by the green, red and blue dashed lines in Fig. 5, respectively) of the actual and generated pollen grain images shown in Fig. 5.
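As a minimal sketch of this volume calculation (binaryVolume is an assumed logical 128 × 128 × 128 array):

voxelVolume = 0.14;                              % um^3 per voxel, as quoted above
grainVolume = nnz(binaryVolume) * voxelVolume;   % count occupied voxels and convert to um^3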


Table 1. Accuracy of the generated images shown in Fig. 5

The table shows that the largest difference was 4 pixels (∼2 µm). In addition, the mean Structural Similarity Index Measure (SSIM), which combines three components, namely luminance, contrast and structure [43], was calculated for all generated slices, where the closer the value is to 1, the greater the accuracy of the image generation. The ground truth for the SSIM of each pollen grain was the grain’s actual experimentally obtained images, binarized using the same method employed for creating the input data used in training the neural network. The SSIM values are (a) 0.9358 ± 0.0748 and (b) 0.9301 ± 0.0979. Errors likely occur due to the resolution and accuracy of the experimentally measured 3D structures used in training. The number of pollen grains included in the training data also affects the accuracy of generation, and so adding more data could provide the neural network with a greater understanding of the scattering dynamics, further improving the accuracy of the predicted 3D shape.
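A minimal sketch of this slice-wise SSIM calculation, assuming generatedVolume and referenceVolume are 128 × 128 × 128 arrays of the same numeric class, with the reference formed from the binarized experimental slices:

ssimValues = zeros(1, 128);
for z = 1:128
    ssimValues(z) = ssim(generatedVolume(:,:,z), referenceVolume(:,:,z));
end
meanSSIM = mean(ssimValues);     % reported as mean +/- standard deviation
stdSSIM  = std(ssimValues);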

To understand the neural network image generation, we cropped the scattering patterns using a circular mask centred on the scattering pattern, such that any intensity outside a certain radius was set to zero. The results are shown in Fig. 6 for various radii: (i) 20, (ii) 37 and (iii) 61 pixels for (a) Populus deltoides, and (i) 20, (ii) 38 and (iii) 51 pixels for (b) Antirrhinum majus. The total volume of each generated pollen grain (calculated by summing all pixels) and an L1 comparison (the sum of the absolute difference between the volume generated from each masked scattering pattern and the volume generated from the unmasked scattering pattern) are displayed by the blue line and orange line, respectively. The results show that the more the scattering pattern is restricted, the less accurate the image generation and the lower the resolution. Up to a 32-pixel mask radius, the scattering pattern signal is fairly uniform and hence there is little change in the volume of the generated image. However, beyond a 32-pixel mask radius, more variation and structure is visible in the scattering pattern within the circle. Indeed, a low L1 at a 41-pixel mask radius for both (a) and (b) corresponds to Bragg scattering angles and thus a scattering resolution of ∼2.4 µm. This size potentially indicates the minimum resolution of the object reconstruction. After this point, the L1 comparison likely increases due to the contribution of the signal contrast at the hard boundary of the circular mask, as such structures were not present in the training. However, as the circle radius increases and extends beyond the scattering pattern image (128 × 128 pixels), this contribution decreases again.
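The circular masking can be sketched as follows (netInput is an assumed 128 × 128 × 3 network input with the scattering pattern in the red channel; the 41-pixel radius is one of the values discussed above):

radius = 41;                                            % mask radius in pixels
[xg, yg] = meshgrid(1:128, 1:128);
mask = (xg - 64.5).^2 + (yg - 64.5).^2 <= radius^2;     % circle centred on the pattern
maskedInput = netInput;
maskedInput(:,:,1) = uint8(mask) .* netInput(:,:,1);    % zero intensity outside the circle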


Fig. 6. Generated 3D images of pollen using masked scattering patterns and corresponding graphs of total volume and L1 comparison (sum of the absolute difference between the volume of the unmasked generation and the volume of the masked generation) for (a) Populus deltoides and (b) Antirrhinum majus, where (i-iii) are volumes for different mask radii, indicated in the graphs.


Increasing the signal-to-noise ratio at higher angles and increasing the resolution of the scattering patterns would increase the amount of information in the captured scattering patterns and could improve the accuracy of the 3D image generation. To acquire enough training data to allow for interpolation of more accurate 3D structures, a significant amount of varied data would need to be obtained. In addition, it would be advisable to use both small and large particulates, and a variety of shapes and materials. This could be very time consuming and would require a heavily automated data collection setup. Training in a virtual environment (often referred to as an AI gym [44]) could aid in speeding up this process, as it would bypass using physical stages and samples, hence reducing training time and allowing for a much more varied training data set. Furthermore, employing physics-enhanced neural networks [45] could enable the creation of a model that could describe a wide variety of objects and thus negate the need for training on a large variety of data. Improving the accuracy of the experimentally collected 2D slices, and thus the 3D volumetric images, could allow a more accurate neural network to be developed. This could also be achieved using deep learning methods, as demonstrated by Bai et al. [46], in which deep learning enabled optically-sectioned imaging to be achieved with less data, and also enabled greater imaging depth at much faster processing speeds, potentially allowing real-time 3D imaging [47].

4. Conclusion

In conclusion, deep learning has been used to demonstrate the ability to generate 3D images of pollen grains from their scattering patterns, where each volume was generated using a single scattering pattern. More specifically, we were able to generate images of Populus deltoides and Antirrhinum majus pollen grains with similar size and shape to the actual experimental images, with the generated volumes having a mean accuracy of ∼84%. Future work should focus on imaging pollen grains in 3D at higher resolution, which could entail obtaining the 3D structure using alternative methods of microscopy, such as structured illumination microscopy, allowing both greater resolution and image generation accuracy. Additional work should also look at applying the technique to scattering patterns obtained from in-flight pollen. The ability to apply such a technique to airborne monitoring of pollen, as well as other bioaerosols, would be invaluable for remote environmental sensing in real time.

Funding

Engineering and Physical Sciences Research Council (EP/N03368X/1, EP/T026197/1).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Ref. [48].

References

1. R. N. McInnes, D. Hemming, P. Burgess, D. Lyndsay, N. J. Osborne, C. A. Skjøth, S. Thomas, and S. Vardoulakis, “Mapping allergenic pollen vegetation in UK to study environmental exposure and human health,” Sci. Total Environ. 599-600, 483–499 (2017). [CrossRef]  

2. V. V. Rodinkova, “Airborne pollen spectrum and hay fever type prevalence in Vinnitsa, central Ukraine,” Acta Agrobot 68(4), 383–389 (2015). [CrossRef]  

3. M. Smith, J. Emberlin, and A. Kress, “Examining high magnitude grass pollen episodes at Worcester, United Kingdom, using back-trajectory analysis,” Aerobiologia 21(2), 85–94 (2005). [CrossRef]  

4. R. M. Newnham, T. H. Sparks, C. A. Skjøth, K. Head, B. Adams-Groom, and M. Smith, “Pollen season and climate: Is the timing of birch pollen release in the UK approaching its limit?” Int. J. Biometeorol. 57(3), 391–400 (2013). [CrossRef]  

5. M. J. Vonhof and L. D. Harder, “Size-number trade-offs and pollen production by papilionaceous legumes,” American Journal of Botany 82(2), 230–238 (1995). [CrossRef]  

6. T.-C. Lau and A. G. Stephenson, “Effects of soil nitrogen on pollen production, pollen grain size, and pollen performance in Cucurbita pepo (Cucurbitaceae),” Am. J. Botany 80(7), 763–768 (1993). [CrossRef]  

7. M. J. Ejsmond, D. Wrońska-Pilarek, A. Ejsmond, D. Dragosz-Kluska, M. Karpińska-Kołaczek, P. Kołaczek, and J. Kozłowski, “Does climate affect pollen morphology? Optimal size and shape of pollen grains under various desiccation intensity,” Ecosphere 2(10), art117 (2011). [CrossRef]  

8. H. Fatmi, S. Mâalem, B. Harsa, A. Dekak, and H. Chenchouni, “Pollen morphological variability correlates with a large-scale gradient of aridity,” Web Ecol. 20(1), 19–32 (2020). [CrossRef]  

9. E. Pacini, M. Guarnieri, and M. Nepi, “Pollen carbohydrates and water content during development, presentation, and dispersal: a short review,” Protoplasma 228(1-3), 73–77 (2006). [CrossRef]  

10. H. Halbritter, “Preparing Living Pollen Material for Scanning Electron Microscopy Using 2, 2-Dimethoxypropane (DMP) and Critical-Point Drying,” Biotech. Histochem. 73(3), 137–143 (1998). [CrossRef]  

11. J. A. Huffman, A. E. Perring, N. J. Savage, B. Clot, B. Crouzy, F. Tummon, O. Shoshanim, B. Damit, J. Schneider, V. Sivaprakasam, M. A. Zawadowicz, I. Crawford, M. Gallagher, D. Topping, D. C. Doughty, S. C. Hill, and Y. Pan, “Real-time sensing of bioaerosols: Review and current perspectives,” Aerosol Sci. Technol. 54(5), 465–495 (2020). [CrossRef]  

12. I. Šaulienė, L. Šukienė, G. Daunys, G. Valiulis, L. Vaitkevičius, P. Matavulj, S. Brdar, M. Panic, B. Sikoparija, B. Clot, B. Crouzy, and M. Sofiev, “Automatic pollen recognition with the Rapid-E particle counter: the first-level procedure, experience and next steps,” Atmos. Meas. Tech. 12(6), 3435–3452 (2019). [CrossRef]  

13. E. Levetin, C. A. Rogers, and S. A. Hall, “Comparison of pollen sampling with a Burkard Spore Trap and a Tauber Trap in a warm temperate climate,” Grana 39(6), 294–302 (2000). [CrossRef]  

14. N. J. Osborne, I. Alcock, B. W. Wheeler, S. Hajat, C. Sarran, Y. Clewlow, R. N. McInnes, D. Hemming, M. White, S. Vardoulakis, and L. E. Fleming, “Pollen exposure and hospitalization due to asthma exacerbations: daily time series in a European city,” Int. J. Biometeorol. 61(10), 1837–1848 (2017). [CrossRef]  

15. C. H. Pashley, J. Satchwell, and R. E. Edwards, “Ragweed pollen: is climate change creating a new aeroallergen problem in the UK?” Clin. Exp. Allergy 45(7), 1262–1265 (2015). [CrossRef]  

16. Q. Li, J. Gluch, P. Krüger, M. Gall, C. Neinhuis, and E. Zschech, “Pollen structure visualization using high-resolution laboratory-based hard X-ray tomography,” Biochem. Biophys. Res. Commun. 479(2), 272–276 (2016). [CrossRef]  

17. A. Egner, V. Andresen, and S. W. Hell, “Comparison of the axial resolution of practical Nipkow-disk confocal fluorescence microscopy with that of multifocal multiphoton microscopy: theory and experiment,” J. Microsc. (Oxford, U.K.) 206(1), 24–32 (2002). [CrossRef]  

18. W. Shen, L. Ma, X. Zhang, X. Li, Y. Zhao, Y. Jing, Y. Feng, X. Tan, F. Sun, and J. Lin, “Three-dimensional reconstruction of Picea wilsonii Mast. pollen grains using automated electron microscopy,” Sci. China Life Sci. 63(2), 171–179 (2020). [CrossRef]  

19. J. A. Grant-Jacob, Y. Xie, B. S. Mackay, M. Praeger, M. D. T. McDonnell, D. J. Heath, M. Loxham, R. W. Eason, and B. Mills, “Particle and salinity sensing for the marine environment via deep learning using a Raspberry Pi,” Environ. Res. Commun. 1(3), 035001 (2019). [CrossRef]  

20. C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (John Wiley & Sons, 2008).

21. B. Mills, C. F. Chau, E. T. F. Rogers, J. Grant-Jacob, S. L. Stebbings, M. Praeger, A. M. de Paula, C. A. Froud, R. T. Chapman, T. J. Butcher, J. J. Baumberg, W. S. Brocklesby, and J. G. Frey, “Direct measurement of the complex refractive index in the extreme ultraviolet spectral region using diffraction from a nanosphere array,” Appl. Phys. Lett. 93(23), 231103 (2008). [CrossRef]  

22. W. J. Wiscombe, “Improved Mie scattering algorithms,” Appl. Opt. 19(9), 1505–1509 (1980). [CrossRef]  

23. J. A. Grant-Jacob, B. S. Mackay, J. A. G. Baker, D. J. Heath, Y. Xie, M. Loxham, R. W. Eason, and B. Mills, “Real-time particle pollution sensing using machine learning,” Opt. Express 26(21), 27237–27246 (2018). [CrossRef]  

24. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

25. R. L. Sandberg, A. Paul, D. A. Raymondson, S. Hädrich, D. M. Gaudiosi, J. Holtsnider, R. I. Tobey, O. Cohen, M. M. Murnane, H. C. Kapteyn, C. Song, J. Miao, Y. Liu, and F. Salmassi, “Lensless Diffractive Imaging Using Tabletop Coherent High-Harmonic Soft-X-Ray Beams,” Phys. Rev. Lett. 99(9), 098103 (2007). [CrossRef]  

26. K. Giewekemeyer, P. Thibault, S. Kalbfleisch, A. Beerlink, C. M. Kewish, M. Dierolf, F. Pfeiffer, and T. Salditt, “Quantitative biological imaging by ptychographic x-ray diffraction microscopy,” Proc. Natl. Acad. Sci. U.S.A. 107(2), 529–534 (2010). [CrossRef]  

27. G. Zhang, T. Guan, Z. Shen, X. Wang, T. Hu, D. Wang, Y. He, and N. Xie, “Fast phase retrieval in off-axis digital holographic microscopy through deep learning,” Opt. Express 26(15), 19388–19405 (2018). [CrossRef]  

28. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019). [CrossRef]  

29. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

30. A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-Scale Video Classification with Convolutional Neural Networks,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 1725–1732.

31. R. Gallardo-Caballero, C. J. García-Orellana, A. García-Manso, H. M. González-Velasco, R. Tormo-Molina, and M. Macías-Macías, “Precise Pollen Grain Detection in Bright Field Microscopy Using Deep Learning Techniques,” Sensors 19(16), 3583 (2019). [CrossRef]  

32. J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks,” in 2017 IEEE International Conference on Computer Vision (ICCV) (IEEE, 2017), pp. 2242–2251.

33. Y.-C. Liu, W.-C. Chiu, S.-D. Wang, and Y.-C. F. Wang, “Domain-Adaptive generative adversarial networks for sketch-to-photo inversion,” in 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP) (IEEE, 2017), pp. 1–6.

34. J. A. Grant-Jacob, B. S. Mackay, J. A. G. Baker, Y. Xie, D. J. Heath, M. Loxham, R. W. Eason, and B. Mills, “A neural lens for super-resolution biological imaging,” J. Phys. Commun. 3(6), 065004 (2019). [CrossRef]  

35. Y. Sun, Z. Xia, and U. S. Kamilov, “Efficient and accurate inversion of multiple scattering with deep learning,” Opt. Express 26(11), 14678 (2018). [CrossRef]  

36. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach for Fourier ptychography microscopy,” Opt. Express 26(20), 26470–26484 (2018). [CrossRef]  

37. C. Bai, T. Peng, J. Min, R. Li, Y. Zhou, and B. Yao, “Dual-wavelength in-line digital holography with untrained deep neural networks,” Photon. Res. 9(12), 2501–2510 (2021). [CrossRef]  

38. C. Bai, C. Liu, X. Yu, T. Peng, J. Min, S. Yan, D. Dan, and B. Yao, “Imaging Enhancement of Light-Sheet Fluorescence Microscopy via Deep Learning,” IEEE Photon. Technol. Lett. 31(22), 1803–1806 (2019). [CrossRef]  

39. J. A. Grant-Jacob, M. Praeger, M. Loxham, R. W. Eason, and B. Mills, “Lensless imaging of pollen grains at three-wavelengths using deep learning,” Environ. Res. Commun. 2(7), 075005 (2020). [CrossRef]  

40. D. D. Cody, “AAPM/RSNA Physics Tutorial for Residents: Topics in CT,” RadioGraphics 22(5), 1255–1268 (2002). [CrossRef]  

41. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2017), pp. 5967–5976.

42. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

43. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

44. M. Praeger, Y. Xie, J. A. Grant-Jacob, R. W. Eason, and B. Mills, “Playing optical tweezers with deep reinforcement learning: in virtual, physical and augmented environments,” Mach. Learn.: Sci. Technol. 2(3), 035024 (2021). [CrossRef]  

45. M. Kellman, E. Bostan, N. Repina, and L. Waller, “Physics-based Learned Design: Optimized Coded-Illumination for Quantitative Phase Imaging,” IEEE Trans. Comput. Imaging 5(3), 344–353 (2019). [CrossRef]  

46. C. Bai, J. Qian, S. Dang, T. Peng, J. Min, M. Lei, D. Dan, and B. Yao, “Full-color optically-sectioned imaging by wide-field microscopy via deep-learning,” Biomed. Opt. Express 11(5), 2619–2632 (2020). [CrossRef]  

47. X. Zhang, Y. Chen, K. Ning, C. Zhou, Y. Han, H. Gong, and J. Yuan, “Deep learning optical-sectioning method,” Opt. Express 26(23), 30762–30772 (2018). [CrossRef]  

48. J. A. Grant-Jacob, M. Praeger, R. W. Eason, and B. Mills, “Dataset to support the publication ‘Single-frame 3D lensless imaging of pollen grains via deep learning’,” University of Southampton (2022), https://doi.org/10.5258/SOTON/D2113.
