Abstract
Snapshot channeled imaging spectropolarimetry (SCISP) is a promising optical technique that acquires spectral and polarization images without scanning, in a single exposure. Because a Fourier transform is used to reconstruct the information, SCISP has inherent limitations such as channel crosstalk, degraded resolution and accuracy, and complex phase calibration. To overcome these drawbacks, a nonlinear technique based on neural networks (NNs) is introduced to replace Fourier reconstruction. Abundant spectral and polarization datasets were built using specially designed generators. The established NNs effectively learn the forward conversion procedure by minimizing a loss function, subsequently enabling a stable output containing spectral, polarization, and spatial information. The utility and reliability of the proposed technique are confirmed by experiments, which show that high spectral and polarization accuracy is maintained.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Imaging spectropolarimetry (ISP) is a powerful tool that can simultaneously acquire spectral, polarization, and spatial information [1–8]. ISP provides an important visual extension and has been applied in remote sensing, biology, food analysis, and other fields. Traditional ISP systems rely on multiple shots, filter arrays, or scanning of a specific domain, such as the spatial domain in channeled spectropolarimetry (CSP). First described by K. Nordsieck and K. Oka [9], CSP modulates the incident Stokes parameters onto carrier frequencies and encodes the polarization state onto the output spectrum. CSP retains the advantages of high throughput and multiplexing, which benefit it in several respects. Recently, some CSP systems have been developed to eliminate the scanning requirement, for example by combining CSP with Fourier transform imaging spectropolarimetry [10,11] or with compressive imaging [12,13].
For CSP-based systems, the mainstream demodulation is the Fourier reconstruction (FR) method [9–13], which applies the Fourier transform to recover the Stokes parameters by separating them into channels according to their carrier frequencies. However, FR has inherent limitations such as channel crosstalk, complex phase calibration, and noise sensitivity. In more detail, crosstalk between channels degrades the reconstruction accuracy, and the truncated windows used to extract channels impose a bandwidth limitation that cuts off high-frequency details and degrades the spectral resolution. Moreover, phase calibration requires a wide-field, broadband, and large-aperture reference setup, which introduces considerable inconvenience, such as the need for periodic calibration and precision degradation.
Much effort has been devoted to overcoming the above-mentioned drawbacks of FR. Lee et al. proposed a compressed method built on an optimized mathematical model, in which a cost function is introduced to solve for the Stokes parameters. The compressed method reduces the need for truncated windows and effectively mitigates artifacts such as crosstalk and high-frequency loss [12]. Ren et al. proposed a compressive-sensing-based linear model, in which the Fourier transform and spatial filters are no longer required; as a result, channel crosstalk and resolution limitations are effectively eliminated [13]. Meanwhile, iterative reconstruction algorithms were introduced to replace FR [14]. The iterative method processes non-uniformly spaced samples without interpolation, mitigates noise effects, and recovers the ground-truth Stokes parameters more faithfully. However, all the aforementioned methods still require a phase calibration procedure. Mu et al. presented a reconstruction routine that incorporates the phase as an updated parameter in the iterative procedure, eliminating the restrictions of FR and phase calibration [15]. However, this algorithm is limited to only three Stokes parameters, and a sliding kernel is required to enforce the assumption of a slowly varying input.
Neural networks (NNs) are well known for their efficiency in processing data and identifying statistical significance in pattern recognition, owing to their nonlinear modeling capacity and self-adaptive weights. Recently, NNs have been used to determine input-output response relationships in the optics community [16–18]. For example, NNs can empirically calibrate various sensors to overcome systematic errors [19–21]. NNs can also solve forward and inverse problems of imaging systems [22–26], as well as perform hyperspectral reflectance cube reconstruction or classification [27–30]. Recently, Li et al. proposed an NN-based channel filtering framework for spectral-temporal hybrid CSP, which predicts filters with wide bandwidth and anti-crosstalk features and effectively enhances the spectral resolution and reconstruction accuracy [31,32].
The snapshot channeled imaging spectropolarimeter (SCISP) is a computational technique that captures spectral, polarization, and spatial information in a snapshot [Fig. 1(a)], avoiding the need for spatial or temporal scanning. Nevertheless, recovering the spectra and Stokes parameters usually requires several forward and inverse fast Fourier transforms (FFTs) together with a phase-correction procedure. Furthermore, these algorithms must be carefully applied to every measured pixel, which severely degrades the processing efficiency. In this paper, a neural-network (NN)-based reconstruction method is proposed, in which all the aforementioned steps are realized in a single forward pass, without any iterations or inverse conversions. The proposed method is validated by experiments with high efficiency and accuracy. Although it may not yet be fully competitive with the traditional method in some respects, it has great potential for further improvement and optimization.
2. SCISP and FFT reconstruction
2.1 SCISP system configuration
Figure 1(a) illustrates the schematic of SCISP, which consists of a 1:1 afocal telescope, a micro-lens array (MLA), two high-order birefringent crystal retarders R1 and R2, and a birefringent polarization interferometer (BPI). The thickness ratio of R1 and R2 is ${d_1}/{d_2} = 3$, and their fast axes are oriented at 45° and 90° relative to the y-axis, respectively. The BPI contains two linear polarizers ($\textrm{LP}_1$ and $\textrm{LP}_2$), two Nomarski prisms ($\textrm{NP}_1$ and $\textrm{NP}_2$), and a half-wave plate (HWP). The BPI separates the incident light into two paths that interfere on the CCD. By rotating the BPI about the $z$-axis by an angle $\delta$ with respect to the $y$-axis, a special distribution of optical path difference (OPD) on the CCD can be created:
2.2 Fourier reconstruction method
The Stokes vector of the emergent light from SCISP can be described as,
Filtering channels ${C_0}$, ${C_1}$, and ${C_2}$ at each pixel, followed by an FFT, as shown in Fig. 2(a), enables demodulation of the spectrally dependent Stokes parameters,
The phases ${\varphi _1}$ and ${\varphi _2}$ are usually determined by a reference-beam calibration technique, in which SCISP measures a broadband $22.5^\circ$ linearly polarized light over its entire field of view, producing a channeled interferogram with channels ${C_{i,\textrm{reference}}}$, $i = 0,1,2$. Then the Stokes parameters of an unknown sample are
One can see that phase calibration needs a dedicated setup to produce the reference beam, which introduces considerable uncertainty into the precision requirements. Meanwhile, it is hard to build a wide-field, broad-waveband reference setup, and every time the system is moved to a new environment, a new calibration procedure is required. In the FR method, truncated windows are employed to filter the channels, which imposes a bandwidth limitation in the OPD domain. Therefore, the spectral resolution is lower than the native spectral resolution of the spectrometer [Fig. 2(c)], and the reconstruction error is larger. Furthermore, the truncated windows cannot fix the crosstalk between channels, which degrades the reconstruction accuracy. In particular, for lasers or other monochromatic light with long coherence length, the crosstalk is more severe, as shown in Fig. 2(d), and the Stokes parameters cannot be recovered in the traditional way. To solve the above problems, an NN-based reconstruction methodology is proposed, together with two experimental setups for generating spectral and polarization training data.
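As a concrete illustration of the channel-filtering step discussed above, the FR demodulation of a single channel can be sketched in NumPy. The synthetic interferogram, carrier position, and window width below are illustrative stand-ins, not SCISP's calibrated values:

```python
import numpy as np

def fr_demodulate(interferogram, carrier, half_width):
    """Illustrative Fourier-reconstruction step: isolate one channel of a
    channeled interferogram with a truncated (rectangular) window, then
    inverse-FFT to obtain the demodulated channel signal."""
    spectrum = np.fft.fft(interferogram)      # OPD domain -> frequency domain
    n = len(spectrum)
    window = np.zeros(n)
    window[max(carrier - half_width, 0):carrier + half_width + 1] = 1.0
    channel = spectrum * window               # truncated window: the bandwidth limit
    return np.fft.ifft(channel)               # complex demodulated channel

# Synthetic example: a baseband (S0-like) term plus one carrier-modulated term.
n = 512
x = np.arange(n)
s0, s1 = 1.0, 0.4
carrier = 60
signal = s0 + s1 * np.cos(2 * np.pi * carrier * x / n)
c0 = fr_demodulate(signal, 0, 20)        # C0 channel, amplitude ~ s0
c1 = fr_demodulate(signal, carrier, 20)  # C1 channel, amplitude ~ s1 / 2
```

Any spectral content falling outside the rectangular window is discarded, which is exactly the high-frequency loss and resolution penalty described above; content from a neighboring channel leaking inside the window is the crosstalk.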
3. Experimental configuration and NN architecture
3.1 Spectral sources for reconstruction
We acquire the spectral training data through a generator based on a digital micromirror device (DMD), as shown in Fig. 3. Light from a xenon arc lamp propagates through L1 and is focused onto a slit before transmitting through L2 and a polarization grating (PG). The PG diffracts the light with high efficiency into +1st- and −1st-order beams, which are chromatically dispersed. The +1st-order beam is imaged directly onto the DMD, while the −1st-order beam is redirected to the DMD by a highly reflective mirror. By loading 8-bit grayscale images onto the DMD, the micromirror array's pattern is encoded; therefore, various spectra can be selectively reflected into the integrating sphere, as shown in Fig. 4. After the arbitrary spectrum is homogenized inside the integrating sphere, the output light fills SCISP's field of view, and the ground-truth spectra are measured simultaneously by an Avantes (AvaSpec-ULS 2048) spectrometer. The unique spectra reflected by the DMD fall into four types: monochromatic, dichromatic, trichromatic, and random. To enrich the dataset, six lasers of different wavelengths were employed to illuminate the integrating sphere one by one or in different combinations. In total, 2012 unique spectra were collected by SCISP and the AvaSpec, producing 2012 spectral training pairs consisting of SCISP's raw images and the AvaSpec's label spectra.
3.2 Polarization sources for reconstruction
For the polarization training set, we employed different lasers (488, 532, 552, 561, 633, and 637 nm) to illuminate the integrating sphere in turn, so that the output light's original polarization is eliminated by the sphere. The light is then modulated by a linear polarizer (LP) and an achromatic quarter-wave plate (AQP) into a specific polarization state determined by the angles of the LP and AQP. The theoretical Stokes parameters of the modulated light can be calculated by the Mueller matrices:
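Using the standard Mueller matrices of an ideal linear polarizer and an ideal quarter-wave retarder (the generic form of the calculation; sign conventions for circular polarization vary between texts), the modulated state can be computed as in this NumPy sketch:

```python
import numpy as np

def rot(theta):
    """Mueller rotation matrix for an element rotated by theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0, 0, 1.0]])

def linear_polarizer(theta):
    """Ideal linear polarizer with its transmission axis at theta."""
    m = 0.5 * np.array([[1, 1, 0, 0],
                        [1, 1, 0, 0],
                        [0, 0, 0, 0],
                        [0, 0, 0, 0.0]])
    return rot(theta) @ m @ rot(-theta)

def quarter_wave_plate(theta):
    """Ideal quarter-wave retarder with its fast axis at theta."""
    m = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, -1, 0.0]])
    return rot(theta) @ m @ rot(-theta)

# Unpolarized light from the sphere -> LP at 0 deg -> AQP at 45 deg:
s_in = np.array([1.0, 0.0, 0.0, 0.0])
s_out = quarter_wave_plate(np.pi / 4) @ linear_polarizer(0.0) @ s_in
# The normalized output is fully circular: S1/S0 = S2/S0 = 0, |S3/S0| = 1.
```

Sweeping the two angles in this way enumerates the polarization states attainable by the LP+AQP pair, which is how the label set spans the Poincaré sphere.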
3.3 Training data preparation
To prepare the training set, the raw images were first divided by a measured flat field to remove illumination nonuniformity, and dark frames were subtracted to remove dark noise. Applying a registration procedure to each raw image (13×18 sub-images) yields a 3D interference $({x,y,\textrm{OPD}})$ cube, and the mean intensity was removed from the interferogram at every $({x,y})$ pixel. The spectral training labels measured by the AvaSpec were also processed: first, the dark spectra were subtracted; second, the AvaSpec spectra were filtered to match SCISP's spectral resolution. SCISP's spectral resolution can be calculated by
where $\textrm{OPD} = 40.7\;\mu\textrm{m}$, yielding a spectral resolution of $122.6\;\textrm{cm}^{-1}$. To match the resolutions, the AvaSpec spectrum $I(\lambda)$ was first resampled linearly in wavenumber, producing a new spectrum $I^{\prime}({\sigma_n})$. By mirroring the interpolated spectrum to negative wavenumbers, a double-sided spectrum ${I_\textrm{m}}(\sigma)$ was created. Applying an inverse Fourier transform to the mirrored spectrum creates a new interferogram, which was then apodized using a rectangular function with a full width of $2\,\textrm{OPD} = 81.4\;\mu\textrm{m}$. After a forward FFT, the AvaSpec spectra have a spectral resolution matched to that of SCISP. This procedure makes the NN-based reconstruction methodology meaningful and justifiable; that is, the spectral resolution reconstructed by the NNs cannot outstrip SCISP's physical properties. Following the above process, the interferogram cubes and Avantes data are ready for NN training.

3.4 NN training parameters and real-time implementation
Using the processed interferograms as training input and the AvaSpec spectra and Polsnap polarization values as labels, an array of 192 fully connected feedforward networks was built for training. For each network, the training interferograms were extracted along the corresponding yellow line within the 3D interference $({x,y,\textrm{OPD}})$ cube, whose tilt angle matches the lines of constant phase in the interferograms, as shown in Fig. 5(a). This ensures that each network is responsible for only a one-pixel-wide slice of the image, whose interferograms share an identical phase, and all 192 networks combine to reconstruct the spatial information and improve uniformity. We also tried employing only one NN for the whole imaging reconstruction, and the results are discussed in Section 4.2.
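The slice-extraction step can be sketched as follows; the tilt parameter `shift_per_row` and the wrap-around indexing are simplifying assumptions for illustration, since the real tilt is fixed by the instrument's fringe orientation:

```python
import numpy as np

def extract_tilted_slices(cube, n_slices, shift_per_row):
    """Gather one-pixel-wide, tilted slices from a (rows, cols, opd) cube so
    that each slice follows a line of constant fringe phase. `shift_per_row`
    (columns advanced per row, an assumed parameter) sets the tilt."""
    rows, cols, _ = cube.shape
    slices = []
    for j in range(n_slices):
        line = []
        for i in range(rows):
            col = int(round(j + i * shift_per_row)) % cols  # wrap only for the sketch
            line.append(cube[i, col, :])
        slices.append(np.stack(line))  # (rows, opd): the interferograms for one NN
    return slices
```

Each returned slice collects the interferograms sharing one phase, i.e., the training matrix for one network of the array.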
For each network's topology, as shown in Fig. 5(b), the interferograms serve as the input with 234 nodes, each node corresponding to a specific sample site on the interference fringe. Similarly, for the output layer with 137 nodes, the first 134 nodes match the spectrum label, and the last three nodes are the polarization Stokes labels ${S_1}/{S_0}$, ${S_2}/{S_0}$, and ${S_3}/{S_0}$. To determine the architecture of the hidden layers, the Keras Tuner tool from the TensorFlow 2.2 library was used to scan the parameters. To find the architecture producing the minimum root mean square error (RMSE), the number of hidden layers was scanned from 1 to 4, while the number of nodes per layer was scanned from 32 to 512. The best numbers of nodes for the hidden layers are 448, 416, 320, and 96, respectively. It is worth noting that each network of the array is responsible for a one-pixel-wide slice of the 3D interference $({x,y,\textrm{OPD}})$ cube, which consists of 180 interferograms from a raw image, and its corresponding labels are one AvaSpec spectrum vector and three Stokes values. This 'many-to-one' strategy is due to the functionality of the Avantes fiber spectrometer and the Polsnap polarimeter, which each give only one spectrum or one set of Stokes parameters by integrating over their entire field of view.
To train the proposed NNs in a fair and convincing setting, a shuffle was first applied to the dataset to ensure that the images used for training and testing were randomly selected. Our dataset consists of 7080 images, with 5664 images in the training set and 1416 images in the validation set; the validation set did not participate in the training process. The multilayer fully connected model can be represented by ${H_L} = H(I)$, which consists of layers ${H_l}$, $l = 0, \ldots, L - 1$, as
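The composition ${H_L} = H(I)$ amounts to a chain of affine maps with nonlinear activations. A minimal NumPy sketch follows, with randomly initialized weights standing in for the trained parameters; the layer widths are those reported above, while ReLU hidden activations and a linear output layer are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
widths = [234, 448, 416, 320, 96, 137]  # input, four tuned hidden layers, output

# Random (He-style) initialization stands in for the trained weights.
layers = [(rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in),
           np.zeros(n_out))
          for n_in, n_out in zip(widths[:-1], widths[1:])]

def forward(i_vec):
    """H_L = H(I): chain the fully connected layers; ReLU on hidden layers
    (assumed), linear output holding 134 spectral + 3 Stokes nodes."""
    h = i_vec
    for k, (w, b) in enumerate(layers):
        h = h @ w + b
        if k < len(layers) - 1:      # nonlinearity on hidden layers only
            h = np.maximum(h, 0.0)
    return h

out = forward(rng.standard_normal(234))
spectrum, stokes = out[:134], out[134:]  # split the 137 output nodes
```

A single forward pass like this replaces the forward/inverse FFTs and phase correction of FR, which is the efficiency argument made in the introduction.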
Each model was trained for 50 epochs, and early stopping was employed to prevent the NNs from overfitting. The training methodology is the same for all 192 networks unless otherwise specified. Once the first network was established, the training procedure was repeated for the adjacent NN; we initialized each NN with the parameters of the previous one to achieve reconstruction results with better continuity. The training was carried out on an Intel Xeon Platinum 8276 CPU for about 48 hours using the TensorFlow 2.2 library.
4. Results
4.1 Reconstruction accuracy
After training, to evaluate the spectral reconstruction performance, the NNs and the FR method were applied to the post-processed interferograms of identical spectra contained in the validation dataset, and the results were compared with the AvaSpec spectra. Figure 6 shows the comparison among the NNs, FR, and the AvaSpec values for representative monochromatic, dichromatic, trichromatic, random, and laser spectra. All reconstructed spectra were interpolated onto the same wavelength axis for direct comparison. In Fig. 6(a-e), the NNs' results fit the ground truth well, while FR broadens the measured spectra, giving relatively lower accuracy and lower spectral resolution. Figure 6(f) shows the full width at half maximum (FWHM) of the reconstructed spectra; the NNs give a smaller FWHM than FR. For example, the NNs' FWHM values are 18.75, 8.09, 10.22, 12.40, 9.91, and 13.17 nm for the 450, 488, 532, 552, 561, and 637 nm lasers, respectively. This indicates that the NNs preserve the high-resolution characteristic of SCISP, while FR gives lower spectral resolution due to the bandwidth limitation imposed by the truncated windows.
For quantitative evaluation, two metrics were used: the root mean square error (RMSE) and the goodness of fit coefficient (GFC). The RMSE is calculated by,
The GFC value is defined as
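Assuming the conventional definitions of the two metrics (the root of the mean squared deviation for RMSE, and the normalized inner product of the reference and reconstructed spectra for GFC), both can be computed as:

```python
import numpy as np

def rmse(truth, pred):
    """Root mean square error between reference and reconstructed vectors."""
    truth, pred = np.asarray(truth, float), np.asarray(pred, float)
    return np.sqrt(np.mean((truth - pred) ** 2))

def gfc(truth, pred):
    """Goodness of Fit Coefficient: normalized inner product of the two
    spectra; a value of 1 indicates a perfect spectral-shape match."""
    truth, pred = np.asarray(truth, float), np.asarray(pred, float)
    return np.abs(truth @ pred) / (np.linalg.norm(truth) * np.linalg.norm(pred))

# GFC is insensitive to overall scale, so it scores spectral shape only:
ref = np.array([0.1, 0.8, 0.3])
shape_score = gfc(ref, 2.0 * ref)  # ~1.0 despite the factor-of-two rescale
```

The complementary behavior of the two metrics is the reason both are reported: RMSE penalizes amplitude errors, while GFC isolates shape fidelity.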
As mentioned before, due to the severe crosstalk in the interferogram of narrow-band input light, the FR method cannot recover the Stokes parameters of lasers or other monochromatic light. In contrast, the proposed NNs can accomplish this task by leveraging a sufficient dataset. To highlight this ability as well as the reconstruction accuracy, the NNs were applied to the six lasers' post-processed interferograms in the validation dataset. Compared with the Polsnap (ground-truth) polarization values, the predicted results for the representative 532 nm laser are shown in Fig. 7(a). The RMSE between the NNs' data and the Polsnap values is 0.1266, 0.1206, and 0.0367 for the normalized Stokes parameters ${S_1}/{S_0}$, ${S_2}/{S_0}$, and ${S_3}/{S_0}$, respectively. The RMSE can be calculated by
Here, ${S_{i = 1,2,3}}$ denotes the latter three Stokes parameters. Figure 7(b) shows the RMSE for the other lasers contained in the polarization validation dataset that were not trained on, where most values are below 0.12.
Furthermore, we employed the NNs and FR to reconstruct the polarization of a linear polarizer (LP) illuminated by broadband halogen light, while the transmission axis of the LP was rotated from $0^\circ$ to $180^\circ$ in steps of $1^\circ$. Figures 7(c-d) show the comparison: the NNs agree better with the theoretical values, with RMSEs of 0.0367 and 0.0484 for ${S_1}/{S_0}$ and ${S_2}/{S_0}$, respectively, while FR introduces some deviation, with RMSEs of 0.1266 and 0.1206.
4.2 Image reconstruction experiments
To evaluate the NNs' spatial, spectral, and polarization imaging reconstruction, the NNs were employed to reconstruct an untrained scene outside the dataset. As shown in Fig. 8(a), red and blue cards and a linear polarizer (LP) were pasted on a white board, and a halogen light was used for illumination. The reconstructed spectra of points A, B, and C are shown in Fig. 8(b). Figure 8(c) shows the spectral images recovered by the FR method. When only one NN was used to reconstruct the two-dimensional spectral image across the sensor's full field of view, as shown in Fig. 8(d), the spectral images show severe spatial banding artifacts and even incorrect signals, which originate from the phase discontinuity in the interference fringes produced by the system. When the proposed NN array was employed, as shown in Fig. 8(e), the uniformity of the spectral slices improved. Nevertheless, compared with FR's results, some residual spatial artifacts and nonuniformity are still present, which can be attributed to the slight differences between adjacent networks. As shown in Fig. 8(f), the 12 color spectral slices were rendered based on the International Commission on Illumination (CIE) 1931 observer, and the two cards show different contrast in the 514 nm and 643 nm slices.
For polarization imaging, Fig. 8(g) shows the linear Stokes parameters reconstructed by FR, while Fig. 8(h) shows those reconstructed by the NNs; both show the polarization contrast between the ${S_1}$ and ${S_2}$ Stokes parameters. Moreover, to highlight the NNs' ability to recover polarimetric Stokes vectors under monochromatic illumination, we imaged an untrained scene consisting of a glass window, three linear polarizers, and two circular polarizers. As shown in Fig. 8(i), the polarizers were randomly oriented and illuminated by the output of an integrating sphere with a 532 nm laser source. The full-Stokes parameter images were recovered, showing good lateral uniformity and high discrimination ability.
5. Discussion and conclusion
To summarize, the experimental results demonstrate the following advantages over the FR method. First, the NNs effectively improve the spectral resolution and accuracy without increasing the number of operations. Second, the NNs are able to recover the Stokes parameters of narrow-waveband input, such as lasers or other monochromatic light, with low RMSE. Third, the phase calibration is embedded into the NN training process; for the FR method, by contrast, phase calibration needs several forward and inverse FFTs, which must be implemented for each interferogram, increasing the computational burden. Nevertheless, these advantages require many training pairs, which come at the expense of the two additional dataset generators and the training procedures, both experimentally time-consuming. The proposed NN method is inferior to FR regarding imaging uniformity, which can be attributed to two causes. First, the weight differences between adjacent networks give rise to slight intensity differences. Second, owing to the functionality of the Avantes fiber spectrometer and the Polsnap polarimeter, there is only one pair of spectral and polarization values for the many interferograms contained in one raw image, and this 'many-to-one' strategy introduces some limitations. In future work, a potential solution is to combine transfer learning with more training samples. We will also try more sophisticated networks, such as convolutional neural networks (CNNs), to suppress the imaging nonuniformity. Still, our method removes some of the post-processing burden and opens a new possibility for real-time implementation.
In conclusion, we have proposed an NN-based framework to directly reconstruct the spectral, polarization, and spatial information of CSP imaging systems. Specifically, leveraging the specially designed training-data generators, a dataset was built and NNs were established to process many parallel interferograms while simultaneously accounting for phase calibration, without any iterations or inverse conversions. The validation experiments demonstrated that the NN approach outperforms FR in both spectral and polarization reconstruction accuracy: the averaged spectral RMSE is 0.13, and the averaged laser polarization RMSE is 0.1266, 0.1206, and 0.0367 for ${S_1}/{S_0}$, ${S_2}/{S_0}$, and ${S_3}/{S_0}$, respectively. Furthermore, the NNs can improve the spectral resolution by bypassing the windowing used in FR to extract channels, thereby avoiding high-frequency loss, which is significant for real-time display and will open new prospects for many applications.
Funding
National Natural Science Foundation of China (62175050).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
1. L. Gao and L. V. Wang, “A review of snapshot multidimensional optical imaging: measuring photon tags in parallel,” Phys. Rep. 616, 1–37 (2016). [CrossRef]
2. Z. Yang, T. Albrow-Owen, W. Cai, and T. Hasan, “Miniaturization of optical spectrometers,” Science 371(6536), 1350–1355 (2021). [CrossRef]
3. Z. Lin, T. Dadalyan, S. Villers, T. Galstian, and W. Shi, “Chip-scale full-Stokes spectropolarimeter in silicon photonic circuits,” Photonics Res. 8(6), 864–874 (2020). [CrossRef]
4. W. Groner, J. W. Winkelman, A. G. Harris, C. Ince, G. J. Bouma, K. Messmer, and R. G. Nadeau, “Orthogonal polarization spectral imaging: a new method for study of the microcirculation,” Nat. Med. 5(10), 1209–1212 (1999). [CrossRef]
5. J. Tian, M. Lancry, S. H. Yoo, E. Garcia-Caurel, R. Ossikovski, and B. Poumellec, “Study of femtosecond laser-induced circular optical properties in silica by Mueller matrix spectropolarimetry,” Opt. Lett. 42(20), 4103–4106 (2017). [CrossRef]
6. D. Kim, Y. Seo, Y. Yoon, V. Dembele, J. W. Yoon, K. J. Lee, and R. Magnusson, “Robust snapshot interferometric spectropolarimetry,” Opt. Lett. 41(10), 2318–2321 (2016). [CrossRef]
7. J. Li, J. Zhu, and H. Wu, “Compact static Fourier transform imaging spectropolarimeter based on channeled polarimeter,” Opt. Lett. 35(22), 3784–3786 (2010). [CrossRef]
8. T. Somekawa, K. Oka, and M. Fujita, “Channeled spectropolarimetry using a coherent white-light continuum,” Opt. Lett. 35(22), 3811–3813 (2010). [CrossRef]
9. K. Oka and T. Kato, “Spectroscopic polarimetry with a channeled spectrum,” Opt. Lett. 24(21), 1475–1477 (1999). [CrossRef]
10. X. Lv, Y. Li, S. Zhu, X. Guo, J. Zhang, J. Lin, and P. Jin, “Snapshot spectral polarimetric light field imaging using a single detector,” Opt. Lett. 45(23), 6522–6525 (2020). [CrossRef]
11. Q. Li, F. Lu, X. Wang, and C. Zhu, “Low crosstalk polarization-difference channeled imaging spectropolarimeter using double-Wollaston prism,” Opt. Express 27(8), 11734–11747 (2019). [CrossRef]
12. D. J. Lee, C. F. LaCasse, and J. M. Craven, “Compressed channeled spectropolarimetry,” Opt. Express 25(25), 32041–32063 (2017). [CrossRef]
13. W. Ren, C. Fu, D. Wu, Y. Xie, and G. R. Arce, “Channeled compressive imaging spectropolarimeter,” Opt. Express 27(3), 2197–2211 (2019). [CrossRef]
14. D. J. Lee, C. F. LaCasse, and J. M. Craven, “Channeled spectropolarimetry using iterative reconstruction,” Proc. SPIE 9853, 98530V (2016). [CrossRef]
15. F. Han, T. Mu, A. Tuniyazi, D. Bao, H. Gong, Q. Li, and C. Zhang, “Iterative reconstruction for snapshot intensity-modulated linear imaging spectropolarimetry without Fourier transform and phase calibration,” Opt. Lasers Eng. 134, 106286 (2020). [CrossRef]
16. D. Luo and M. Kudenov, “Neural network calibration of a snapshot birefringent Fourier transform spectrometer with periodic phase errors,” Opt. Express 24(10), 11266–11281 (2016). [CrossRef]
17. L. Shi, B. Li, C. Kim, P. Kellnhofer, and W. Matusik, “Towards real-time photorealistic 3D holography with deep neural networks,” Nature 591(7849), 234–239 (2021). [CrossRef]
18. I. S. Osborne, “Imaging out-of-sight objects,” Science 367(6481), 996–997 (2020). [CrossRef]
19. A. Szenicer, D. F. Fouhey, A. M. Jaramillo, P. J. Wright, R. Thomas, R. Galvez, M. Jin, and M. C. M. Cheung, “A deep learning virtual instrument for monitoring extreme UV solar spectral irradiance,” Sci. Adv. 5(10), eaaw6548 (2019). [CrossRef]
20. H. Tan, Y. Zhou, Q. Tao, J. Rosen, and S. Dijken, “Bioinspired multisensory neural network with crossmodal integration and recognition,” Nat Commun 12(1), 1–9 (2021). [CrossRef]
21. W. Lu, C. Chen, J. Wang, R. Leach, C. Zhang, X. Liu, Z. Lei, W. Yang, and X. Jiang, “Characterization of the displacement response in chromatic confocal microscopy with a hybrid radial basis function network,” Opt. Express 27(16), 22737–22752 (2019). [CrossRef]
22. L. Chen, M. Schär, K. W. Y. Chan, J. Huang, Z. Wei, H. Lu, Q. Qin, R. G. Weiss, P. C. M. V. Zijl, and J. Xu, “In vivo imaging of phosphocreatine with artificial neural networks,” Nat Commun 11, 1–10 (2020). [CrossRef]
23. N. Wagner, F. Beuttenmueller, N. Norlin, J. Gierten, J. C. Boffi, J. Wittbrodt, M. Weigert, L. Hufnagel, R. Prevedel, and A. Kreshuk, “Deep learning-enhanced light-field imaging with continuous validation,” Nat. Methods 18(5), 557–563 (2021). [CrossRef]
24. E. Nehme, L. Weiss, T. Michaeli, and Y. Shechtman, “Deep-STORM: super-resolution single-molecule microscopy by deep learning,” Optica 5(4), 458–464 (2018). [CrossRef]
25. Y. Ma, X. Feng, and L. Gao, “Deep-learning-based image reconstruction for compressed ultrafast photography,” Opt. Lett. 45(16), 4400–4403 (2020). [CrossRef]
26. D. Gedalin, Y. Oiknine, and A. Stern, “DeepCubeNet: reconstruction of spectrally compressive sensed hyperspectral images with deep neural networks,” Opt. Express 27(24), 35811–35822 (2019). [CrossRef]
27. S. Zheng, Y. Liu, Z. Meng, M. Qiao, Z. Tong, X. Yang, S. Han, and X. Yuan, “Deep plug-and-play priors for spectral snapshot compressive imaging,” Photonics Res. 9(2), B18–B29 (2021). [CrossRef]
28. L. Giambagli, L. Buffoni, T. Carletti, W. Nocentini, and D. Fanelli, “Machine learning in spectral domain,” Nat Commun 12(1), 1330 (2021). [CrossRef]
29. J. Li, D. Mengu, N. T. Yardimci, Y. Luo, X. Li, M. Veli, Y. Rivenson, M. Jarrahi, and A. Ozcan, “Spectrally encoded single-pixel machine vision using diffractive networks,” Sci. Adv. 7(13), eabd7690 (2021). [CrossRef]
30. X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018). [CrossRef]
31. Q. Li, J. Song, A. S. Alenin, and J. Scott Tyo, “Spectral–temporal channeled spectropolarimetry using deep-learning-based adaptive filtering,” Opt. Lett. 46(17), 4394–4397 (2021). [CrossRef]
32. Q. Li, A. S. Alenin, and J. Scott Tyo, “Spectral–temporal hybrid modulation for channeled spectropolarimetry,” Appl. Opt. 59(30), 9359–9367 (2020). [CrossRef]