
Multi-input mutual supervision network for single-pixel computational imaging

Open Access

Abstract

In this study, we propose a single-pixel computational imaging method based on a multi-input mutual supervision network (MIMSN). We input one-dimensional (1D) light intensity signals and a two-dimensional (2D) random image signal into the MIMSN, enabling the network to learn the correlation between the two signals and achieve information complementarity. The 2D signal provides spatial information to the reconstruction process, reducing the uncertainty of the reconstructed image. The mutual supervision of the reconstruction results for the two signals brings the reconstruction objective closer to the ground truth image. The 2D images generated by the MIMSN can be used as inputs for subsequent iterations, continuously merging prior information to ensure high-quality imaging at low sampling rates. The reconstruction network requires no pretraining, and the 1D signals collected by a single-pixel detector serve as labels for the network, enabling high-quality image reconstruction in unfamiliar environments. The method therefore holds significant application potential, especially in scattering environments.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Computational imaging is an advanced imaging technique that combines illumination, optics, image sensors, and post-processing algorithms. Single-pixel computational imaging establishes a relationship between an object scene and the observed image [1]. It then leverages mathematical methodologies, including the solution of inverse problems, to achieve imaging through computational reconstruction. This method transcends conventional point-to-point intensity sampling and breaks through the limitations imposed by traditional optical imaging techniques in terms of optical system complexity, power consumption, and cost, significantly enhancing functionality, performance, reliability, and maintainability. However, achieving high-quality image reconstruction in single-pixel computational imaging typically requires collecting extensive data. This process can be cumbersome and time-consuming, limiting real-time imaging capabilities. Furthermore, the practical imaging performance of single-pixel computational imaging techniques is greatly constrained by the accuracy of the forward mathematical models and the reliability of the inverse reconstruction algorithms. The unpredictability of the actual physical imaging process and the complexity of solving high-dimensional ill-posed inverse problems have become pressing bottlenecks in this field.

In recent years, with the rapid growth of data and continuous advancements in computing power, artificial intelligence, and deep learning in particular, has developed rapidly. Deep learning fits physical laws within constraints by learning from existing datasets and has been widely applied in single-pixel computational imaging [2]. It has reduced the overreliance of traditional imaging techniques on physical models and reconstruction algorithms while simultaneously improving imaging metrics, and it has found widespread application in fields such as single-pixel imaging [3–5], super-resolution microscopy [6–9], and fast 3D imaging [10–12]. Nevertheless, random noise and missing data during acquisition introduce uncertainties into the predictions generated by neural networks. Additionally, the architectural design, parameter selections, and stochastic decisions made during training can drive the network toward different local optima, further increasing the uncertainty of the reconstructed results [13]. In complex underwater applications, such as underwater navigation and survey missions, network uncertainty may lead to mission failures and equipment damage. It is therefore imperative to improve models to reduce uncertainty.

The majority of the uncertainty in neural networks stems from their inability to accurately interpret samples due to a deficiency in pertinent domain knowledge [14]. To address this issue, several computational imaging methods have been proposed that embed physical models in neural networks to achieve high-quality image reconstruction [15–17]. Wang et al. [18] added a strong physical-model constraint to the network, which enhances its generalization capacity and reconstruction accuracy. However, this image reconstruction method is grounded in deep image prior (DIP) theory [19], and DIP is highly sensitive to the initial input image: the choice of initial input can significantly influence the final result, thereby increasing the uncertainty of the reconstructed outcome. Li et al. [20] used an untrained neural network to convert the one-dimensional (1D) signals detected by the detector into a two-dimensional (2D) image, constraining the convergence of the network parameters using the difference between the light intensities estimated from the reconstructed image and the detected intensities. Unfortunately, because 1D signals lack 2D spatial pixel context, the network must make assumptions about the spatial arrangement of pixels. Using 1D signals as the sole network input without any prior information may further exacerbate the uncertainty of the reconstructed images [21].

In this work, we present a single-pixel computational imaging method based on an untrained multi-input reconstruction network embedded with a physical model, which optimizes its parameters through mutual supervision of the results generated from the two inputs. We simultaneously input 1D light intensity signals and a 2D random image signal into a multi-input mutual supervision network (MIMSN), enabling the network to learn the correlation and feature representations between the two signals. The images generated from the two signals supervise each other, and the generated images can be used as input for subsequent iterations, continually incorporating prior information. While introducing prior information from the 1D signals, we reduce the uncertainty of the neural network by constraining it with the 2D signal. Numerical simulations and experiments show that the proposed method outperforms several existing reconstruction methods and holds promise for practical engineering applications.

2. Methods and experimental setup

2.1 Methodology

The schematic diagram of the proposed single-pixel computational imaging method is illustrated in Fig. 1. The MIMSN-enhanced single-pixel computational imaging setup consists of an expanded laser source, a digital micromirror device (DMD), a single-pixel detector (SPD), an object, and a computer. The laser source operates at a wavelength of 532 nm. The DMD comprises a rectangular array of 1280 $\times$ 800 micro-mirrors with an inter-mirror spacing of 10.6 $\mu$m. Initially, the light emitted by the laser source is expanded by a beam expander. The expanded beam is then directed onto the DMD, which modulates it with a series of random speckle patterns, denoted $P_{N}(x,y)$, pre-stored in the computer. The SPD then collects the intensity signals $I_{N}(real)$ as the laser passes through the object, and $I_{N}(real)$ is transmitted to the computer for further analysis. The architecture of the MIMSN is depicted in Fig. 1. It takes the 1D signals and a 2D random signal as input and generates the corresponding images $img_{1D}$ and $img_{2D}$, respectively. The weighted sum of the differences between the intensity signals $I_{N}(img_{1D})$ and $I_{N}(img_{2D})$ and $I_{N}(real)$, together with the difference between $img_{1D}$ and $img_{2D}$, is computed as the loss function for training the parameters of the MIMSN. The reconstructed images are updated at each iteration. To further enhance the efficiency of image reconstruction, we use the 2D images generated by the MIMSN as input for subsequent iterations; providing this prior information to the network ensures high-quality final images at a low sampling rate after multiple iterative loops. This iterative process is summarized in Algorithm 1.
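For concreteness, the following minimal sketch shows how such a loss could be written in PyTorch. It is an illustration rather than the authors’ implementation: the relative weights $w_1$, $w_2$, $w_3$ and all function names are assumptions, as the text specifies only that a weighted sum of the three differences is used.

```python
# Minimal sketch of the MIMSN loss, assuming PyTorch; weights and names
# are illustrative, not taken from the paper.
import torch

def forward_model(img, patterns):
    """Estimated single-pixel intensities: I_N = sum_xy P_N(x, y) * img(x, y).

    img      -- reconstructed image, shape (H, W)
    patterns -- speckle patterns P_N, shape (N, H, W)
    """
    return (patterns * img).sum(dim=(-2, -1))  # shape (N,)

def mimsn_loss(img_1d, img_2d, patterns, i_real, w1=1.0, w2=1.0, w3=1.0):
    """Weighted sum of the three differences described in the text."""
    loss_1d = torch.mean((forward_model(img_1d, patterns) - i_real) ** 2)
    loss_2d = torch.mean((forward_model(img_2d, patterns) - i_real) ** 2)
    loss_mutual = torch.mean((img_1d - img_2d) ** 2)  # mutual supervision term
    return w1 * loss_1d + w2 * loss_2d + w3 * loss_mutual
```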

Fig. 1. Schematic of the experimental setup and the MIMSN. The laser, modulated by the DMD, traverses the object and is collected by the SPD, producing the 1D light intensity signals. Subsequently, both the 1D signals and a 2D random signal are fed into the network. To incorporate the 1D signals into the network input, a fully connected layer maps the signal to a length of 4096, followed by reshaping to meet the input dimensions of the network. The outputs of the two inputs mutually supervise each other to optimize the network’s parameters.

Algorithm 1. MIMSN Algorithm.

The MIMSN employs an encoder-decoder architecture with residual blocks as the fundamental units of the encoder. Residual connections allow the original input information to propagate directly to subsequent layers, using low-level features in the learning of high-level features and preserving the network’s learning capacity even at greater depths. The decoder consists of transposed convolutions and dual convolutional layers, which extract local information from the features and gradually restore images to their original dimensions. Skip connections between the encoder and decoder fuse low-level and high-level features, better capturing both the local and global context of objects. The 1D and 2D signals are jointly fed into the network, enabling it to comprehensively learn the correlations and feature representations between the two signal types. The results generated from the two signals mutually supervise each other, guiding the adjustment of the network parameters. The 2D signal provides spatial information to the reconstruction process, and initiating the process with a random 2D signal reduces the reliance of the reconstruction on the input signal quality. Both signals, in conjunction with the physical model, jointly constrain the reconstruction process, diminishing the network’s uncertainty. Ultimately, the average of the results generated from the two signals is used to produce the final image.
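A compact PyTorch sketch of this topology follows. The channel counts, depth, and the choice to share a single trunk between the two inputs are assumptions made for illustration; the text specifies only the structural elements (residual blocks in the encoder, transposed convolutions and dual convolutional layers in the decoder, skip connections, and the fully connected mapping of the 1D signal to length 4096 noted in the Fig. 1 caption).

```python
# Illustrative sketch of the MIMSN topology; sizes and sharing are assumptions.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block: passes low-level features directly to deeper layers."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class MIMSNSketch(nn.Module):
    def __init__(self, n_measurements, ch=32):
        super().__init__()
        # Map the 1D intensity signal to length 4096, then reshape to 64 x 64
        # (as described in the Fig. 1 caption).
        self.embed_1d = nn.Linear(n_measurements, 64 * 64)
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), ResBlock(ch))
        self.down = nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1)
        self.enc2 = ResBlock(2 * ch)
        self.up = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)  # transposed conv
        # "Dual convolutional layers" applied after the skip connection.
        self.dec = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1))

    def _trunk(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d = self.up(e2)
        d = torch.cat([d, e1], dim=1)  # skip connection fuses low/high-level features
        return torch.sigmoid(self.dec(d))

    def forward(self, sig_1d, x_2d):
        img_1d = self._trunk(self.embed_1d(sig_1d).view(-1, 1, 64, 64))
        img_2d = self._trunk(x_2d)
        return img_1d, img_2d
```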

2.2 Experimental setup

To demonstrate the effectiveness of the proposed single-pixel computational imaging method, we conducted two sets of controlled experiments to test its reconstruction performance in an unfamiliar free-space environment and in an underwater turbulent environment. The imaging objects were the letters “I”, “O”, “P”, “E”, and “N”, produced by laser cutting. First, we evaluated the performance of the MIMSN in an unfamiliar free-space environment, as illustrated in Fig. 1: the total optical intensity of the laser, modulated by the DMD, was collected by the SPD after passing through the object. We then conducted single-pixel computational imaging experiments in an underwater turbulent environment to assess the image reconstruction performance of the MIMSN; here, the total optical intensity of the laser, modulated by the DMD and reflected by the object, was collected by the SPD. We collected 250, 500, 750, 1000, 1250, and 1500 samples of intensity data for each object, corresponding to sampling rates of 6.10${\% }$, 12.21${\% }$, 18.31${\% }$, 24.41${\% }$, 30.52${\% }$, and 36.62${\% }$, respectively. We quantitatively assessed the quality of the reconstructed images using the contrast-to-noise ratio (CNR) [22], the peak signal-to-noise ratio (PSNR) [23], and resolution. Resolution was calculated using the method of Ref. [24], where a lower resolution value indicates finer resolution.
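These percentages follow directly from the measurement counts, assuming a 64 $\times$ 64 (4096-pixel) reconstruction grid as implied by the fully connected layer in the Fig. 1 caption; a quick check:

```python
# Sampling rate = number of measurements / number of reconstructed pixels,
# assuming a 64 x 64 grid (4096 pixels) as implied by the Fig. 1 caption.
n_pixels = 64 * 64
for n in (250, 500, 750, 1000, 1250, 1500):
    print(f"{n:4d} measurements -> {100 * n / n_pixels:.2f}% sampling rate")
# 250 -> 6.10%, 500 -> 12.21%, 750 -> 18.31%,
# 1000 -> 24.41%, 1250 -> 30.52%, 1500 -> 36.62%
```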

3. Results and discussion

3.1 Image quality in free space using different inputs

First, we demonstrate that concurrently inputting the two types of signals into the network reduces network uncertainty and improves the reconstruction outcome. We conducted experiments with three input configurations: 1D signal input only, 2D signal input only, and the simultaneous input of both 1D and 2D signals, using the same dataset and network architecture in each case. Figure 2 displays the resulting reconstructed images for each input type, along with their respective CNR, PSNR, and resolution values.

Fig. 2. Quantitative assessment of CNR, PSNR, and resolution for different input types in single-pixel computational imaging with 1500 measurements.

When 1D signals are used as the sole input, uncertainty is introduced into the reconstruction process, since the network must infer the pixel-wise spatial distribution in the absence of pixel-level spatial information. This absence of spatial information limits the network’s ability to learn the structural and detail features of the object, degrading reconstruction quality and accuracy. When a random 2D signal is used as the sole input, the significant drawback is the absence of object features; this deficiency produces abundant artifacts and noise in the reconstruction, and the network struggles to accurately comprehend and restore specific features from the input image. However, when both 1D and 2D signals are input simultaneously, the network learns the relationship between the two signals. The reconstruction results of the two signals supervise each other, ensuring consistency and enhancing the overall robustness of the system. This dual-signal supervision mitigates uncertainty and provides a more comprehensive information set for the final reconstruction, enabling better restoration of the details and structure of the object image.

3.2 Imaging quality at different iterations and sampling rates for different methods

In addition, we demonstrated that the MIMSN outperforms other widely used image reconstruction methods in a free-space setting using the same dataset. We compared the MIMSN with the traditional algorithm [25] and the differential algorithm [26]; both methods require a substantial number of samples for high-quality image reconstruction. As illustrated in Fig. 3, under 1500 measurements both the traditional and differential algorithms exhibit considerable background noise in their reconstructions, falling significantly short of the quality achieved by the MIMSN at only 250 measurements. We also conducted a comparative analysis between the MIMSN and the GIDC [18]. Both methods demonstrate robust background denoising capabilities. Nevertheless, the images reconstructed by the GIDC generally exhibit darker overall tones and noticeable blurring of the edges of the target objects. The GIDC is fundamentally a denoising algorithm applied to images reconstructed by traditional methods, so its reconstruction quality is contingent on the quality those methods achieve. In contrast, the abundant information provided by the mutual supervision of the two signals facilitates superior reconstruction of structure and detail in the images; this is particularly evident in the slit part of the letters “O” and “P”. The grayscale values of the slit part in the reconstructed images of the letters “O” and “P” are displayed in Fig. 4. The proposed method demonstrates a significant advantage in reducing sampling rates while improving image quality.

Fig. 3. Comparative study of the MIMSN with other image reconstruction methods at different numbers of measurements. The images were reconstructed by the traditional algorithm, the differential algorithm, the GIDC, and the MIMSN with the same collected single-pixel signals and speckle patterns. TA represents the traditional algorithm and DA represents the differential algorithm. In the upper right, the reconstructed images of the letters “O” and “P” at the slit part are shown.

Fig. 4. The gray values of the reconstructed images of the letters “O” and “P” at the slit part.

The MIMSN enhances image quality and achieves low-sampling-rate image reconstruction in unfamiliar scenes through three aspects. First, by harnessing the robust information extraction capabilities of convolutional neural networks, we can efficiently reduce the sampling rate; embedding a physical model into the network enables image reconstruction without pre-training, and this untrained approach avoids the generalization issues faced by traditional data-driven deep learning. Second, we input both the detected 1D signals and the 2D random signal into the network, enabling it to learn the correlations between the two signal types; this improves the model’s stability, and the complementary information also accelerates learning convergence. Third, we initially feed a random 2D signal into the MIMSN and subsequently loop the self-generated reconstructed images back into the MIMSN as the next 2D input signal. By continuously integrating prior information into this iterative process, the accuracy and quality of the reconstructed images are significantly improved. Together, these three aspects account for the marked improvement in reconstruction accuracy and quality that the MIMSN achieves at low sampling rates in unfamiliar environments.
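The sketch below illustrates this feedback loop, reusing the hypothetical mimsn_loss and MIMSNSketch from the earlier sketches. The iteration counts, optimizer, and learning rate are assumptions; the authoritative procedure is Algorithm 1.

```python
# Illustrative reconstruction loop: the 2D output of one round becomes the
# 2D input of the next round, continuously injecting prior information.
import torch

def reconstruct(net, patterns, intensities_real,
                n_outer=5, n_inner=200, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    x_2d = torch.rand(1, 1, 64, 64)            # start from a random 2D signal
    for _ in range(n_outer):
        for _ in range(n_inner):
            img_1d, img_2d = net(intensities_real.unsqueeze(0), x_2d)
            loss = mimsn_loss(img_1d[0, 0], img_2d[0, 0],
                              patterns, intensities_real)
            opt.zero_grad()
            loss.backward()
            opt.step()
        x_2d = img_2d.detach()                 # feed the reconstruction back
    # The text states the final image is the average of the two outputs.
    return 0.5 * (img_1d + img_2d)
```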

We then computed the CNR, PSNR, and resolution of the images reconstructed by the various methods to provide quantitative evidence of the superior performance of the MIMSN. The corresponding results are depicted in Fig. 5, where the blue, green, red, and yellow lines represent the MIMSN, the GIDC, the traditional algorithm, and the differential algorithm, respectively. The MIMSN achieves significantly higher image quality with only 250 samples than the traditional and differential algorithms achieve after 1500 measurements, demonstrating that the MIMSN is capable of high-quality image reconstruction at low sampling rates. The PSNR metric shows that both the MIMSN and the GIDC possess robust background denoising capabilities; however, in terms of CNR and resolution, the MIMSN outperforms the GIDC. The images reconstructed by the MIMSN exhibit a more pronounced contrast between signal and noise, indicating a higher level of reconstruction quality.
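For reference, the sketch below implements the standard PSNR formula and one common variant of CNR (object-to-background contrast divided by background noise). The exact CNR definition used in this work follows Ref. [22]; the variant shown here, and the object mask, are assumptions for illustration.

```python
# Hedged metric sketches: standard PSNR, and one common CNR variant.
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((img - ref) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def cnr(img, object_mask):
    """Contrast-to-noise ratio: (mean_object - mean_background) / std_background."""
    obj, bg = img[object_mask], img[~object_mask]
    return (obj.mean() - bg.mean()) / bg.std()
```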

Fig. 5. Quantitative evaluation of the CNR, PSNR, and resolution of the traditional algorithm, the differential algorithm, the GIDC, and the MIMSN at different numbers of measurements. TA represents the traditional algorithm and DA represents the differential algorithm.

3.3 Image quality in turbulent water with different methods

Furthermore, the MIMSN demonstrates its advantages in unfamiliar and challenging optical environments, as evidenced by a reflected single-pixel computational imaging experiment conducted in a turbulent water environment with 1500 measurements. In such an environment, particles suspended in the water scatter light, reducing image clarity and contrast through background noise, and the dynamic movement of these particles blurs the outlines of objects, leading to distortion. To further validate the superiority of the MIMSN, we compared the CNR, PSNR, and resolution of the images it reconstructed with those obtained using the traditional algorithm, the differential algorithm, and the GIDC. Figure 6 displays the images reconstructed by the different algorithms from the same dataset, along with their respective CNR, PSNR, and resolution metrics. Compared with the free-space environment, the propagation of light in water is affected by scattering and absorption, resulting in a noticeable decrease in the average CNR and PSNR of the images reconstructed by the traditional and differential algorithms; these images tend to be darker and blurrier. The images reconstructed by the MIMSN also show a decline in CNR and PSNR, but they remain at a relatively high level. The results indicate that the quality of the five images reconstructed by the MIMSN is significantly superior to that obtained with the traditional and differential algorithms, demonstrating that the MIMSN can consistently produce high-quality reconstructions in challenging environments. The GIDC continues to demonstrate robust denoising capabilities; nevertheless, constrained by the quality of its input images, its reconstruction quality remains inferior to that of the MIMSN, with both CNR and resolution lagging behind. Due to the scattering and refraction of light in water, all retrieved images exhibit distortions and the loss of critical details. These distortions stem primarily from optical field deformations and cannot be corrected by the MIMSN, which is designed to enhance image quality and reduce sampling rates.

Fig. 6. Quantitative evaluation of the CNR, PSNR, and resolution of the traditional algorithm, the differential algorithm, the GIDC, and the MIMSN for reflected single-pixel computational imaging with 1500 measurements in turbulent water at a distance of 4 m with a turbulence flow rate of 4.8 $\times$ $10^{4}$ L/hour.

4. Conclusion

In summary, we have introduced MIMSN-enhanced single-pixel computational imaging, which takes as inputs the 1D light intensity signals collected by the SPD and a progressively optimized 2D image signal. In the MIMSN, the images generated from the two types of signals supervise each other; ensuring that both signals remain consistent with the same object generation process enhances the system’s stability. This mutual supervision provides additional information for the reconstruction process, yielding a more effective restoration of the structure and details of the object image. The MIMSN uses the 1D signals collected by the SPD as labels for adaptive optimization and object image reconstruction, eliminating the need for pre-training and enabling image reconstruction in unfamiliar environments. The results demonstrate that the MIMSN is capable of achieving high-quality image reconstruction at low sampling rates. Overall, the MIMSN provides an excellent solution for exploration in challenging optical environments, offering a new framework for neural network-based single-pixel computational imaging.

Funding

National Key Research and Development Program of China (2022YFC2808003); Fundamental Research Funds for the Central Universities (D5000220481).

Acknowledgments

Zhipeng Geng performed the data collection and image reconstruction. Zhe Sun conceived the idea, conducted the experiments, analyzed the results and reviewed this manuscript. Yifan Chen conceived the idea, reviewed this manuscript and participated in discussions. Lu Xin and Tong Tian contributed to the experiments and results analysis. Guanghua Cheng and Xuelong Li participated in discussions, and supervised the project.

Disclosures

The authors declare no conflict of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. B. I. Erkmen and J. H. Shapiro, “Ghost imaging: from quantum to classical to computational,” Adv. Opt. Photonics 2(4), 405–450 (2010). [CrossRef]  

2. G. M. Gibson, S. D. Johnson, and M. J. Padgett, “Single-pixel imaging 12 years on: a review,” Opt. Express 28(19), 28190–28208 (2020). [CrossRef]  

3. X. Li, Y. Chen, and T. Tian, “Part-based image-loop network for single-pixel imaging,” Opt. Laser Technol. 168, 109917 (2024). [CrossRef]  

4. Z. Sun, T. Tian, and S. Oh, “Underwater ghost imaging with pseudo-bessel-ring modulation pattern,” Chin. Opt. Lett. 21(8), 081101 (2023). [CrossRef]  

5. Y. Chen, Z. Sun, C. Li, et al., “Computational ghost imaging in turbulent water based on self-supervised information extraction network,” Opt. Laser Technol. 167, 109735 (2023). [CrossRef]  

6. H. Wang, Y. Rivenson, and Y. Jin, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019). [CrossRef]  

7. E. Nehme, L. E. Weiss, T. Michaeli, et al., “Deep-storm: super-resolution single-molecule microscopy by deep learning,” Optica 5(4), 458–464 (2018). [CrossRef]  

8. W. Ouyang, A. Aristov, and M. Lelek, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36(5), 460–468 (2018). [CrossRef]  

9. Y. Rivenson, Z. Göröcs, and H. Günaydin, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

10. S. Feng, C. Zuo, and W. Yin, “Micro deep learning profilometry for high-speed 3d surface imaging,” Opt. Lasers Eng. 121, 416–427 (2019). [CrossRef]  

11. W. Luo, A. G. Schwing, and R. Urtasun, “Efficient deep learning for stereo matching,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2016), pp. 5695–5703.

12. Y. Kuznietsov, J. Stuckler, and B. Leibe, “Semi-supervised deep learning for monocular depth map prediction,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017), pp. 6647–6655.

13. K. Zou, Z. Chen, and X. Yuan, “A review of uncertainty estimation and its application in medical imaging,” arXiv, arXiv:2302.08119 (2023). [CrossRef]  

14. J. Gawlikowski, C. R. N. Tassi, and M. Ali, “A survey of uncertainty in deep neural networks,” Artif. Intell. Rev., pp. 1–77 (2023).

15. S. Laine, T. Karras, J. Lehtinen, et al., “High-quality self-supervised deep image denoising,” Adv. Neural Inf. Process. Syst. 32, 6970–6980 (2019). [CrossRef]  

16. F. Wang, H. Wang, and H. Wang, “Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27(18), 25560–25572 (2019). [CrossRef]  

17. J. Zhang, J. Shao, and J. Chen, “Pfnet: an unsupervised deep network for polarization image fusion,” Opt. Lett. 45(6), 1507–1510 (2020). [CrossRef]  

18. F. Wang, C. Wang, and M. Chen, “Far-field super-resolution ghost imaging with a deep neural network constraint,” Light: Sci. Appl. 11(1), 1 (2022). [CrossRef]  

19. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2018), pp. 9446–9454.

20. J. Li, B. Wu, and T. Liu, “Urnet: High-quality single-pixel imaging with untrained reconstruction network,” Opt. Lasers Eng. 166, 107580 (2023). [CrossRef]  

21. M. Abdar, F. Pourpanah, and S. Hussain, “A review of uncertainty quantification in deep learning: Techniques, applications and challenges,” Inf. Fusion 76, 243–297 (2021). [CrossRef]  

22. Z. Sun, F. Tuitje, and C. Spielmann, “Improving the contrast of pseudothermal ghost images based on the measured signal distribution of speckle fields,” Appl. Sci. 11(6), 2621 (2021). [CrossRef]  

23. X. Yang, Z. Yu, and L. Xu, “Underwater ghost imaging based on generative adversarial networks with high imaging quality,” Opt. Express 29(18), 28388–28405 (2021). [CrossRef]  

24. Z. Sun, F. Tuitje, and C. Spielmann, “Toward high contrast and high-resolution microscopic ghost imaging,” Opt. Express 27(23), 33652–33661 (2019). [CrossRef]  

25. R. S. Bennink, S. J. Bentley, and R. W. Boyd, “‘Two-photon’ coincidence imaging with a classical source,” Phys. Rev. Lett. 89(11), 113601 (2002). [CrossRef]

26. F. Ferri, D. Magatti, and L. Lugiato, “Differential ghost imaging,” Phys. Rev. Lett. 104(25), 253603 (2010). [CrossRef]  
