Optica Publishing Group

Central wavelength estimation in spectral imaging behind a diffuser via deep learning

Open Access

Abstract

Multispectral imaging through scattering media is an important practical issue in the field of sensing. The light from a scattering medium is expected to carry information about the spectral properties of the medium, as well as geometrical information. Because spatial and spectral information of the object is encoded in speckle images, the information about the structure and spectrum of the object behind the scattering medium can be estimated from those images. Here we propose a deep learning-based strategy that can estimate the central wavelength from speckle images captured with a monochrome camera. When objects behind scattering media are illuminated with narrowband light having different spectra with different spectral peaks, deep learning of speckle images acquired at different central wavelengths can extend the spectral region to reconstruct images and estimate the central wavelengths of the illumination light. The proposed method achieves central wavelength estimation in 1 nm steps for objects whose central wavelength varies in a range of 100 nm. Because our method can achieve image reconstruction and central wavelength estimation in a single shot using a monochrome camera, this technique will pave the way for multispectral imaging through scattering media.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

Today, imaging technology is in high demand for surveillance cameras and for in-vehicle cameras in self-driving cars. However, scattering media such as fog or biological tissue diffuse the transmitted light, which makes it difficult to image objects behind them. When an object is captured through a scattering medium, a scattered image called a speckle image is acquired [1]. The speckle image contains information about the object encoded by the scattering medium. Various methods have been proposed for imaging through scattering media, such as the point spread function [2], speckle correlation [3], and the transmission matrix [4]. Recently, deep learning-based methods have been proposed for versatile imaging through scattering media [5–7]. In deep learning-based imaging through scattering media, pairs of images before and after scattering are used to train a neural network, enabling reconstruction of the object from speckle images [8,9]. Deep learning-based methods require prior training data, but once trained, the model can reconstruct objects from unknown speckle images [10]. The versatility of reconstruction can also be improved by including speckle images captured under different conditions in the training data [11–13].

Obtaining not only the structure of an object behind a scattering medium but also its spectral information is an important issue for imaging through scattering media in practical scenes. Color imaging and multispectral imaging through scattering media have also been proposed [14–21]. The spectral memory effect, or chromato-axial memory effect, which describes the variation of speckles along the spectral dimension, has been reported and verified in imaging through scattering media [22–25]. Spectral information of the object is encoded in speckle images captured by a monochrome camera [14]. Therefore, the structure and wavelength of an object behind a scattering medium can be estimated from speckle images captured by a monochrome camera using deep learning.

Here we propose a deep learning-based strategy that enables estimation of the central wavelength from speckle images captured with a monochrome camera. In the experiment, objects are illuminated with narrowband light having different spectral peaks, obtained by passing light from a white light-emitting diode (LED) through a bandpass filter that transmits only a spectrum with an arbitrary central wavelength. A training dataset including speckle images with different central wavelengths is created to extend the spectral region. The proposed method can estimate the central wavelength of an object with an error of less than a few nanometers using a model trained at 10 nm wavelength intervals, even when the central wavelength is varied in 1 nm steps.


Fig. 1. Experimental setup for capturing speckle images at different central wavelengths. L1, objective lens; L2, lens; LCTF, liquid crystal tunable filter; P, polarizer; SLM, spatial light modulator; D, diffuser.


2. METHOD

A. Experimental Setup

The optical setup used in this experiment is shown in Fig. 1. A white LED (Thorlabs, SOLIS-3C) was used as the light source. The light emitted from the white LED was collimated by an objective lens (L1) and a lens (L2) and was transmitted through a liquid crystal tunable filter (LCTF; Cambridge Research & Instrumentation, VIS-07-HC-20). The LCTF is a bandpass filter whose central wavelength can be set arbitrarily with a full width at half-maximum (FWHM) of 7 nm. The light transmitted through the LCTF contained only a part of the spectrum and illuminated a spatial light modulator (SLM; Holoeye, LC2012) with ${1024} \times {768}\;{\rm pixels}$ and a pixel pitch of 36 µm. Two polarizers (P1 and P2) were set in a cross-Nicol configuration so that the SLM functioned as an amplitude object. The light modulated by the SLM passed through a holographic diffuser (Edmund, 47996) with a diffusion angle of 5°. The diffused light was captured as a speckle image by an industrial monochrome charge-coupled device (CCD) camera (The Imaging Source, DMK23U445) with ${1280} \times {960}\;{\rm pixels}$ and a pixel pitch of 3.75 µm with a bit depth of 8 bits. The frame rate of the camera was 15 fps and the exposure time was 0.25 s. The distance between the SLM and the diffuser was set to 200 mm, and the distance between the diffuser and the camera was set to 30 mm. An iris with a diameter of 5 mm was placed behind the diffuser to control the speckle size. We used the MNIST database as input images for the SLM [26]. The object size on the SLM was set to ${40} \times {40}\;{\rm pixels}$, and a monochrome CCD camera captured speckle images with ${256} \times {256}\;{\rm pixels}$. The central wavelength of the LCTF was varied between 530 and 630 nm, considering the spectral intensity of the light source, the transmittance of the LCTF, and the spectral sensitivity of the camera.
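As a rough consistency check (our own back-of-envelope estimate, not stated in the paper), the mean speckle grain size at the sensor set by the 5 mm iris can be computed from the stated geometry using the familiar far-field scaling for an aperture of diameter D observed at distance z:

```python
# Back-of-envelope speckle-grain estimate for the setup above.
# Assumption (ours): mean speckle size at the sensor ~ lambda * z / D.
lam = 580e-9   # mid-band wavelength of the 530-630 nm range [m]
z = 30e-3      # diffuser-to-camera distance [m]
D = 5e-3       # iris diameter [m]

speckle = lam * z / D  # mean speckle grain size at the sensor [m]
print(f"speckle grain ~ {speckle * 1e6:.2f} um")  # ~3.5 um
```

The result is comparable to the 3.75 µm pixel pitch of the camera, which is consistent with the iris being used to control the speckle size so that individual grains are resolved.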

B. Deep Learning Architecture

In this study, deep learning models for image reconstruction and for central wavelength estimation were created. Both models rely on the spectral memory effect, or chromato-axial memory effect. A schematic illustration of image reconstruction and central wavelength estimation from speckle images captured by the monochrome camera is shown in Fig. 2(a). Speckle images with ${256} \times {256}\;{\rm pixels}$ captured by the monochrome camera were input to both models. The two models were constructed using Keras and run on an NVIDIA GeForce RTX 3060 GPU (CPU: Intel Core i7-11700; RAM: 16 GB).

The deep learning model for image reconstruction is shown in Fig. 2(b). This model is a convolutional neural network based on the U-Net structure [27]. The input is a speckle image with ${256} \times {256}\;{\rm pixels}$ captured by the monochrome camera, and the corresponding MNIST handwritten digit is given as the ground-truth image. The input image is encoded through ${3} \times {3}$ convolution layers and downsampling layers. The encoder and decoder are connected by a densely connected layer. The decoder repeats upsampling and convolution layers and outputs a reconstructed image with ${256} \times {256}\;{\rm pixels}$. The loss function was the negative Pearson correlation coefficient (NPCC), and the optimizer was Adam [28]. The NPCC is expressed as

$${\rm NPCC} = \frac{- \sum_{i = 1}^{w} \sum_{j = 1}^{h} \left(X(i,j) - \bar X\right)\left(Y(i,j) - \bar Y\right)}{\sqrt{\sum_{i = 1}^{w} \sum_{j = 1}^{h} \left(X(i,j) - \bar X\right)^2}\,\sqrt{\sum_{i = 1}^{w} \sum_{j = 1}^{h} \left(Y(i,j) - \bar Y\right)^2}},$$
where $w$ and $h$ are the image width and height, $X$ and $Y$ are the ground-truth and reconstructed images, and $\bar X$ and $\bar Y$ are their average values. Details of the deep learning model are given in a previous study [7].
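As a minimal sketch (not the authors' code), the NPCC of Eq. (1) can be implemented directly with NumPy; a perfect reconstruction gives a loss of -1:

```python
import numpy as np

def npcc(x: np.ndarray, y: np.ndarray) -> float:
    """Negative Pearson correlation coefficient between a ground-truth
    image x and a reconstructed image y (2D arrays of equal shape)."""
    xm = x - x.mean()
    ym = y - y.mean()
    num = -np.sum(xm * ym)
    den = np.sqrt(np.sum(xm ** 2)) * np.sqrt(np.sum(ym ** 2))
    return float(num / den)

a = np.arange(16.0).reshape(4, 4)
print(npcc(a, a))   # -1.0 for a perfect reconstruction
print(npcc(a, -a))  #  1.0 for a perfectly anticorrelated one
```

In a Keras training loop this would be wrapped as a loss operating on batched tensors; the scalar version above is enough to check the sign convention, i.e., minimizing NPCC drives the correlation toward +1.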

Fig. 2. (a) Schematic of image reconstruction and central wavelength estimation from speckle images captured by the monochrome camera. (b) Deep learning model for image reconstruction. (c) Deep learning model for central wavelength estimation. The height of each block represents the size. The width of each block represents the channel depth.


The deep learning model for central wavelength estimation is shown in Fig. 2(c). This model is a residual neural network based on the encoder part of efficient residual factorized (ERF) Net [13,29]. The input is a speckle image with ${256} \times {256}\;{\rm pixels}$ captured by the monochrome camera, and the central wavelength set on the LCTF is used as the ground-truth value. The input image is passed through a ${3} \times {3}$ convolution layer, a downsampling layer repeated twice to increase the channel depth, and a non-bottleneck-1D block repeated five times. The channel depth is further increased through convolution and downsampling layers, and the non-bottleneck-1D block is repeated eight times. Finally, the estimated wavelength is output through a ${3} \times {3}$ average pooling layer and densely connected layers. The mean absolute error was used as the loss function, and the optimizer was Adam. The number of epochs was 70, and the learning rate was 0.001 for epochs 1 to 30, 0.0002 for epochs 31 to 55, and 0.0001 for epochs 56 to 70.
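The piecewise-constant learning-rate schedule above can be written as a plain function (a sketch under the assumption that epochs are 0-indexed, as in Keras' LearningRateScheduler callback):

```python
def lr_schedule(epoch: int) -> float:
    """Learning rate for a given 0-indexed epoch, implementing the paper's
    1-indexed schedule: epochs 1-30 -> 1e-3, 31-55 -> 2e-4, 56-70 -> 1e-4."""
    if epoch < 30:
        return 1e-3
    if epoch < 55:
        return 2e-4
    return 1e-4

print([lr_schedule(e) for e in (0, 29, 30, 54, 55, 69)])
```

With Keras, this function would be passed to tf.keras.callbacks.LearningRateScheduler alongside the Adam optimizer.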

C. Training Dataset and Evaluation Method

We created training datasets that included speckle images captured under different conditions to validate image reconstruction and central wavelength estimation. Details of the training datasets are shown in Table 1. Dataset (1) includes 2000 speckle images captured at a central wavelength of 580 nm set on the LCTF. Dataset (2a) includes 2000 speckle images at each of five central wavelengths at 20 nm intervals from 540 to 620 nm, for a total of 10,000 speckle images. Dataset (2b) includes 2000 speckle images at each of eleven central wavelengths at 10 nm intervals from 530 to 630 nm, for a total of 22,000 speckle images. For validation, 200 speckle images that were not used for training were used at each condition. We used the structural similarity (SSIM) index for evaluation [30]. SSIM is expressed as


Fig. 3. Image reconstruction results obtained with model (1), trained on the dataset including speckle images captured at a single central wavelength, and model (2a), trained on the dataset including speckle images captured at several different central wavelengths. Speckle images captured by the monochrome camera are pseudo-colored for clarity. The reconstruction results at the central wavelengths included in the training data are indicated by green squares.


$${\rm SSIM}({x,y}) = \frac{({2{\mu _x}{\mu _y} + {C_1}})({2{\sigma _{xy}} + {C_2}})}{({\mu _x^2 + \mu _y^2 + {C_1}})({\sigma _x^2 + \sigma _y^2 + {C_2}})},$$
where ${\mu _x}$ and ${\mu _y}$ are the mean values of the ground-truth image and the reconstructed image, ${\sigma _x}$ and ${\sigma _y}$ are the standard deviations, ${\sigma _{\textit{xy}}}$ is the covariance, and ${C_1}$ and ${C_2}$ are the normalization constants.

Table 1. Outline of Training Datasets Containing Different Central Wavelengths
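The wavelength grids of Table 1 imply the dataset sizes quoted above; a small sketch (dataset names and counts from the table, code structure our own):

```python
# Central-wavelength grids for each training set in Table 1 (nm).
DATASETS = {
    "1":  [580],                       # single central wavelength
    "2a": list(range(540, 621, 20)),   # 540-620 nm, 20 nm steps (5 wavelengths)
    "2b": list(range(530, 631, 10)),   # 530-630 nm, 10 nm steps (11 wavelengths)
}

def dataset_size(name: str, images_per_wavelength: int = 2000) -> int:
    """Total number of speckle images in a training set."""
    return len(DATASETS[name]) * images_per_wavelength

print(dataset_size("1"), dataset_size("2a"), dataset_size("2b"))
# 2000 10000 22000, matching Table 1
```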

3. RESULTS

The training datasets shown in Table 1 were used to train the deep learning model for image reconstruction and for central wavelength estimation.

A. Spectral Imaging Through Scattering Media with a Monochrome Camera

First, we conducted spectral imaging through scattering media with the monochrome camera using the deep learning model for image reconstruction, as shown in Fig. 2(b). In validation, speckle images were captured by varying the central wavelength at 10 nm intervals between 530 and 630 nm.

Trained model (1) was created using dataset (1), which includes speckle images captured at a single central wavelength; trained model (2a) was created using dataset (2a), which includes speckle images with different central wavelengths at 20 nm intervals from 540 to 620 nm. Figure 3 shows the reconstruction results obtained with models (1) and (2a) at different wavelengths. With model (1), reconstruction was successful at 570, 580, and 590 nm, near the central wavelength included in the training data. However, when the central wavelength differed by more than 20 nm from 580 nm, the reconstructed image became significantly distorted. Since the speckle images change as the central wavelength changes, reconstruction becomes more difficult as the wavelength moves away from the central wavelength included in the training data. In contrast, with model (2a), reconstruction was successful over a wavelength region of 100 nm, including wavelengths not included in the training data (i.e., 530, 550, 570, 590, 610, and 630 nm). Since the speckle image changes continuously as the central wavelength changes, the untrained wavelength region was interpolated by training on speckle images with different central wavelengths at appropriate intervals.

Figure 4 shows the average SSIM of 200 reconstructed images at different central wavelengths for models (1) and (2a). The SSIM of model (1) was highest at 580 nm and gradually decreased away from it because of the spectral memory effect, i.e., the variation of speckles along the spectral dimension. As the wavelength moves outside the range of the spectral memory effect, the reconstructed images become distorted and the mean SSIM decreases. To extend the spectral region for image reconstruction, we reconstructed images using model (2a). The SSIM of model (2a) was high over the entire wavelength region. These results show that images can be reconstructed at different central wavelengths of the illumination light, even with a monochrome camera, by training on speckle images in different wavelength regions with deep learning.

Image reconstruction and wavelength estimation were also verified in real time between 540 and 620 nm. Trained models for image reconstruction and wavelength estimation were created in advance, and both tasks were performed in real time from speckle images captured by the monochrome camera for objects whose central wavelengths were varied. The camera was operated at 15 fps with an exposure time of 0.25 s. Real-time image reconstruction and wavelength estimation were achieved even for objects whose spectra changed continuously by 10 nm every 0.5 s. The delay between a change in the object image or spectrum and its reflection in the reconstructed image was approximately 0.6 s. This is mostly due to the exposure time; the deep learning computation for image reconstruction and color restoration itself was fast. The wavelength estimation results were converted to color based on the CIE 1931 color matching functions to improve visibility. (See Visualization 1 and Visualization 2 for more details.)


Fig. 4. SSIM evaluation of reconstructed images obtained with models (1) and (2a). The horizontal axis denotes the central wavelength of the light source. The vertical axis denotes the average SSIM value of 200 reconstructed images at each central wavelength. The error bars denote standard deviations.



Fig. 5. Central wavelength estimation using speckle images at trained wavelengths. The results obtained with model (2a) are shown in (a), and those obtained with model (2b) in (b). The horizontal axis denotes the central wavelength of the displayed image. The vertical axis denotes the estimated central wavelength. The estimation results for 200 validation speckle images at each central wavelength are shown. The red line denotes the theoretical value.


B. Central Wavelength Estimation from Speckle Images Captured with a Monochrome Camera

We conducted central wavelength estimation from speckle images captured by the monochrome camera using the deep learning model shown in Fig. 2(c). Trained models (2a) and (2b) were created using datasets (2a) and (2b) in Table 1.

Figure 5 shows the wavelength estimation results for 200 speckle images at trained wavelengths obtained with models (2a) and (2b). The results obtained with model (2a) are shown in Fig. 5(a). In this validation, speckle images with different central wavelengths at 20 nm intervals between 540 and 620 nm were used. Central wavelength estimation was achieved at 540, 560, 580, 600, and 620 nm. At 530 and 630 nm, outside the trained wavelength range, the estimation accuracy decreased.

The results obtained with model (2b) are shown in Fig. 5(b). In this validation, speckle images with different central wavelengths at 10 nm intervals between 530 and 630 nm were used. Central wavelength estimation was achieved when the central wavelengths of the validation data were among the trained wavelengths.

For a more detailed investigation, speckle images captured with different central wavelengths at 1 nm intervals from 570 to 590 nm were used to evaluate the resolution of central wavelength estimation. Note that speckle images at untrained central wavelengths were included in the validation. The results from 570 to 590 nm at 1 nm intervals are shown in Fig. 6: those obtained with model (2a) in Fig. 6(a) and those obtained with model (2b) in Fig. 6(b). Model (2b) estimated the central wavelength with smaller standard deviations than model (2a). The narrower the interval of wavelengths included in the training data, the better the accuracy and resolution of central wavelength estimation. As mentioned in Section 3.A, the speckle image changes continuously as the central wavelength changes within the spectral memory effect. Therefore, by training on speckle images with different central wavelengths at appropriate intervals, central wavelength estimation can be achieved even in untrained wavelength regions. The estimated distributions are shown in more detail in the histograms of Figs. S1 and S2 in Supplement 1.


Fig. 6. Central wavelength estimation using speckle images at untrained wavelengths. Speckle images for validation were captured at 1 nm increments from 570 to 590 nm. The results obtained with model (2a) are shown in (a), and those obtained with model (2b) in (b). The horizontal axis denotes the central wavelength of the displayed image. The vertical axis denotes the mean value of the central wavelength estimation over 200 validation speckle images at each central wavelength. The red line denotes the theoretical value. The error bars denote standard deviations.


4. DISCUSSION

Central wavelength estimation in 1 nm steps was demonstrated for objects within the spectral band from 570 to 590 nm. We also verified central wavelength estimation from 530 to 550 nm and from 610 to 630 nm, and wavelength estimation in 1 nm increments was similarly achieved. Moreover, the same experiment was conducted from 450 to 550 nm. Even when the central wavelength varied from 450 to 550 nm, image reconstruction and wavelength estimation were achieved in all wavelength regions by training on a dataset including speckle images with different central wavelengths at appropriate intervals. Therefore, this method is effective over most of the visible range.

In this experiment, an LCTF with a FWHM of 7 nm was used throughout. We conducted the same experiment using a bandpass filter with a FWHM of 10 nm and found similar trends. A more detailed investigation of the relationship between bandwidth and reconstruction accuracy is a subject for future work.

When speckle images generated by the unfiltered white LED were input into model (2a) in Section 3.A, reconstruction was not possible. Because the training data contained only narrowband speckle images, broadband speckle images such as those from white light could not be reconstructed; likewise, since the proposed method estimates a single central wavelength, no meaningful wavelength could be assigned to them. Spectrum estimation applicable to broadband light is a future issue.

5. SUMMARY

We demonstrated the feasibility of central wavelength estimation from speckle images captured by a monochrome camera using deep learning. The proposed method can estimate the central wavelength of the illumination light in 1 nm steps with an error of less than a few nanometers. Because the proposed method achieves image reconstruction and central wavelength estimation in a single shot using a monochrome camera, it is expected to be applied to spectral imaging devices capable of both tasks. Furthermore, if wavelength estimation of an object can be achieved from speckle images, single-shot multispectral imaging using a monochrome camera is expected to lower system cost. Future studies will address spectral estimation of objects with broadband or multiband spectra rather than single-wavelength estimation.

Funding

Japan Society for the Promotion of Science (21H01849).

Acknowledgment

The authors thank H. Arimoto of the National Institute of Advanced Industrial Science and Technology, AIST, for the use of the LCTF.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).

2. E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. 6, 33558 (2016).

3. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).

4. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104, 100601 (2010).

5. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).

6. Q. Li, J. Zhao, Y. Zhang, X. Lai, Z. Chen, and J. Pu, “Imaging reconstruction through strongly scattering media by using convolutional neural networks,” Opt. Commun. 477, 126341 (2020).

7. T. Tsukada and W. Watanabe, “Investigation of image plane for image reconstruction of objects through diffusers via deep learning,” J. Biomed. Opt. 27, 056001 (2022).

8. R. Horisaki, R. Takagi, and J. Tanida, “Learning-based imaging through scattering media,” Opt. Express 24, 13738–13743 (2016).

9. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6, 921–943 (2019).

10. T. Tsukada and W. Watanabe, “Tracking moving targets with wide depth of field behind a scattering medium using deep learning,” Jpn. J. Appl. Phys. 61, 072003 (2022).

11. Y. Li, Y. Xue, and L. Tian, “Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media,” Optica 5, 1181–1190 (2018).

12. Y. Li, S. Cheng, Y. Xue, and L. Tian, “Displacement-agnostic coherent imaging through scatter with an interpretable deep neural network,” Opt. Express 29, 2244–2257 (2021).

13. S. Zhu, E. Guo, Q. Cui, L. Bai, J. Han, and D. Zheng, “Locating and imaging through scattering medium in a large depth,” Sensors 21, 90 (2021).

14. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4, 1209–1213 (2017).

15. H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016).

16. U. Kürüm, P. Wiecha, R. French, and O. Muskens, “Deep learning enabled real time speckle recognition and hyperspectral imaging using a multimode fiber array,” Opt. Express 27, 20965–20979 (2019).

17. X. Li, J. A. Greenberg, and M. E. Gehm, “Single-shot multispectral imaging through a thin scatterer,” Optica 6, 864–871 (2019).

18. L. Zhu, Y. Wu, J. Liu, T. Wu, L. Liu, and X. Shao, “Color imaging through scattering media based on phase retrieval with triple correlation,” Opt. Laser Eng. 124, 105796 (2020).

19. S. Zhu, E. Guo, J. Gu, Q. Cui, C. Zhou, L. Bai, and J. Han, “Efficient color imaging through unknown opaque scattering layers via physics-aware learning,” Opt. Express 29, 40024–40037 (2021).

20. Y. Lei, Y. Guo, M. Pu, Q. He, P. Gao, X. Li, X. Ma, and X. Luo, “Multispectral scattering imaging based on metasurface diffuser and deep learning,” Phys. Status Solidi 16, 2100469 (2022).

21. E. Guo, Y. Sun, S. Zhu, D. Zheng, C. Zuo, L. Bai, and J. Han, “Single-shot color object reconstruction through scattering medium based on neural network,” Opt. Laser Eng. 136, 106310 (2021).

22. X. Xu, X. Xie, A. Thendiyammal, H. Zhuang, J. Xie, Y. Liu, J. Zhou, and A. P. Mosk, “Imaging of objects through a thin scattering layer using a spectrally and spatially separated reference,” Opt. Express 26, 15073–15083 (2018).

23. A. G. Vesga, M. Hofer, N. K. Balla, H. B. D. Aguiar, M. Guillon, and S. Brasselet, “Focusing large spectral bandwidths through scattering media,” Opt. Express 27, 28384–28394 (2019).

24. L. Zhu, J. B. de Monvel, P. Berto, S. Brasselet, S. Gigan, and M. Guillon, “Chromato-axial memory effect through a forward-scattering slab,” Optica 7, 338–345 (2020).

25. K. Ehira, R. Horisaki, Y. Nishizaki, M. Naruse, and J. Tanida, “Spectral speckle-correlation imaging,” Appl. Opt. 60, 2388–2392 (2021).

26. Y. LeCun, C. C. Cortes, and C. J. Burges, “The MNIST database,” http://yann.lecun.com/exdb/mnist/.

27. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

28. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803–813 (2018).

29. E. Romera, J. M. Alvarez, L. M. Bergasa, and R. Arroyo, “ERFNet: efficient residual factorized ConvNet for real-time semantic segmentation,” IEEE Trans. Intell. Transp. Syst. 19, 263–272 (2017).

30. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error measurement to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).

Supplementary Material (3)

Supplement 1: Supplemental document.
Visualization 1: Image reconstruction and wavelength estimation performed in real time from speckle images captured by a monochrome camera for objects whose central wavelengths were varied.
Visualization 2: Real-time image reconstruction and wavelength estimation achieved even for objects whose spectra continuously changed by 10 nm every 0.5 s.

