Abstract
Similar to the “lucky imaging” technique that selects the best local features over time, spatial redundancy allows for the localization of turbulence induced image distortions and the selection of the features least distorted by turbulence. A new technique to restore turbulence degraded images is proposed based on imaging with spatial redundancies. Two imaging frameworks that are candidates for implementation of the technique are the plenoptic sensor and the light field camera, which collect multiple depictions of the target through sub-aperture imaging. Preliminary studies have demonstrated the effectiveness of either device in imaging through turbulence. However, as visual distortions vary significantly from weak to strong turbulence conditions, it is unclear when and how a light field approach should be applied to enhance target recognition over distorted media. We present an in-depth study on the fundamental differences between the two devices with regard to turbulence distortion, as well as their image restoration mechanisms. Our analysis, combined with proof-of-concept experiments, shows that the turbulence resilience of light field imaging techniques depends strongly on the mechanism of mapping the light field. This universal finding serves as guidance for imaging and object recognition with light field approaches.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Light field cameras were initially developed for refocusable photography and semi-3D microscopic imaging [1,2]. The significant impact quickly influenced many applications such as lens manufacturing [3–6], optical encryption [7,8], holography [9,10], optical sensing and imaging near the diffraction limit [11–15], and wavefront sensing [16–19].
Recently, new applications for the light field camera, and modifications thereof, have been proposed to solve the enduring problems of imaging through turbulent media [20,21]. Essentially, these approaches use the spatial redundancy collected through light field images to synthesize a result using a collection of non-distorted features or pixels. In other words, they also provide selections of good results in a similar fashion to the conventional “lucky imaging” methods. Two major approaches have proved effective: the pixel-based light field camera and the view-based plenoptic sensor. The former back traces the rays which form an image point to different sub-aperture points at the entrance pupil of the camera, and differentiates regional distortions thereafter. The latter maps the angular spectra of the image formation to an image array to avoid turbulence affected regions. Both methods isolate turbulence degradation through analyzing the image formation process, where both steady patterns from the target and temporal turbulence induced distortions can be identified, respectively. In the literature, the two light field approaches are also termed as “plenoptic 1.0 camera” and “plenoptic 2.0 camera” [22], or “unfocused plenoptic camera” and “focused plenoptic camera” [23].
As there are a wide range of possible turbulence scenarios such as weak, medium or strong turbulence [24,25], Kolmogorov or non-Kolmogorov [26,27], isotropic or anisotropic [24,27], various limitations may be imposed on the light field based imaging approaches. For example, the plenoptic sensor approach requires the sub-aperture size based on the micro-lens array (MLA) be smaller than the Fried parameter for object recognition through turbulence [21]. It appears unclear whether these two major light field approaches share duality with each other (have the same fundamental limitations), or each enhances imaging results under specific turbulence regimes.
To comprehensively analyze the effectiveness of different light field imaging approaches over a turbulent channel, we have built a hybrid plenoptic sensor system which can switch between the two major configurations of light field imaging over the same view. A water-tube based turbulence generator is integrated with a programmable heater to ensure similarity in turbulence generation over repeated runs. We deduce through theoretical analysis and experiments that the two approaches handle turbulence quite differently. For example, our study finds that the point-based light field camera approach requires image filters to suppress turbulence distortion, while the view-based plenoptic sensor relies on a metric selected cell image for a clear view of the target. Our comparative study provides the most fundamental understanding to date regarding the application of light field imaging techniques to overcome turbulence effects, where turbulence distortions can be selectively suppressed through wise choice of a light field imaging device.
The rest of the work is arranged as follows: Part 2 establishes the theoretical comparison between the two light field approaches, Part 3 illustrates the experimental validations, and conclusions are drawn in Part 4.
2. Comparison of two light field approaches in imaging through turbulence
The fundamental difference between a light field camera approach and a plenoptic sensor approach lies in the use of the lenslets in the MLA. The light field camera uses a lenslet as an image point resolver, and the plenoptic sensor uses a lenslet to provide a view of the scene. Their respective functions in imaging through turbulence based on geometric optics are illustrated in Fig. 1.
In Fig. 1, we have neglected the fact that the two systems have different MLA pitch sizes, and have plotted the components equally. Turbulence distortion is simplified as wavefront perturbation based on conventions [28], and represented by the dashed red curves to the left of the objective lenses. Consequently, the gray dashed lines represent ray tracing in the absence of turbulence, and the red dashed and arrowed curves to the right of the MLA represent the turbulence perturbation results. It is evident that the same turbulence scenario manifests differing impacts when using different light field imaging platforms. In the light field camera formation, turbulence blurs adjacent pixels under each lenslet, which confuses the ray tracing mechanism. It is also worth noticing that such influence extends to all of the MLA lenslets that sample different points of the target. In the plenoptic sensor formation, as views of the same target are formed by an array of MLA lenslets in concert with a sub-aperture region of the front objective lens, turbulence distorts each view independently (and with differing severity) in each sub-aperture region. Needless to say, there are other light field recording structures in which turbulence induced image distortions are visualized differently from the two systems mentioned above. Therefore, it is paramount to understand whether there are any dominant advantages in restoring turbulence degraded images associated with an approach when selecting light field recording schemes. In other words, we are interested in finding whether turbulence correction can be optimized through the selection of a suitable light field system.
Our comparison focuses on the two representative systems shown in Fig. 1, due to their popularity, and a generalized discussion will be provided in part 4 to extend our finding to any arbitrary light field imaging system. The comparison is conducted through the framework of generalized equations describing their mechanisms in imaging through turbulence, followed by experimental validations. A few additional assumptions are necessary to make fair and practical comparisons between the two systems. First, pixel quantization on the imaging sensor is ignored. Second, the paraxial approximation is used by assuming that the target geometry is much smaller than its distance from the imaging system L. And third, the turbulence distortion is approximated as a 2D phase screen [29,30] that is distance z from the target.
Due to the use of common camera lenses (objective lenses) in both of the systems shown in Fig. 1, we express the scalar field on the ideal image plane of the camera lens in Eq. (1) and use it as a central point for comparison.
In further propagation of $U_i(s,t)$, the light field camera transforms the field point through an MLA lenslet into an image field that can be described by Eq. (2).
With turbulence’s involvement explained in the summation form of Eq. (6), conveying how a light field camera works in imaging through turbulence is straightforward: the cluster center of pixel intensities behind each MLA lenslet represents a relatively good image point, assuming that the “signal” terms prevail over the “noise” terms in a single frame, or over a fractional observation time. The mechanism is essentially the same as “lucky imaging” [31,32] with the additional capacity to identify sharp features through the fan of rays that forms each image point. Although the light field camera loses its capacity to refocus under non-trivial turbulence influence, it does serve as a turbulence suppressor in imaging tasks.
We now demonstrate the plenoptic sensor mechanism in a very similar fashion to Eq. (1). The analysis can be understood as dividing Eq. (1) into its sub-aperture contributions from the camera lens, which are imaged under individual MLA lenslets. The function of the plenoptic sensor can be expressed as
It is now wise to point out that turbulence has distinctive impacts on different light field imaging platforms. The light field camera, as limited by the “noise” term in Eq. (5), works best when the Fried parameter (transverse coherence length) of the turbulent channel is larger than or comparable to the diameter of the camera lens. The plenoptic sensor, under equal conditions as mentioned above, does not perform as well as a light field camera due to the diffraction limit. When turbulence grows to the level where the Fried parameter drops below the diameter of the camera lens, the plenoptic sensor remains effective until the dimension limit due to the sub-aperture stop of an MLA lenslet defined through $P(x,y;N_a,N_b)$ in Eq. (7) is reached. It is also worth pointing out that the relative depth of the equivalent phase screen $\gamma$ plays a geometric scaling role, with ratio $1/\gamma$. In other words, the equivalent transverse coherence length is magnified by $1/\gamma$. This leads to the finding that the light field camera enhances its suppression of turbulence when it resides closer to the target. The plenoptic sensor, on the other hand, can be claimed as effective against deep or strong turbulence, where the path aggregated Fried length is expected to be less than the camera lens diameter.
Overall, a simple criterion can be summarized regarding the effectiveness of each light field approach in suppressing turbulence distortion. We denote D as the effective diameter of the objective lens with its magnification ratio fixed at 1:1, and the aforementioned Fried parameter of the channel as $r_0$. We assume that the number of sub-apertures along a given dimension of the plenoptic sensor is $N$. The light field camera has matched f-numbers between the MLA and the objective lens. This yields:
for the light field camera, and for the plenoptic sensor. In Eqs. (8–9), $U_l^*$ and $U_p^*$ refer to image processing algorithm results (best features) over a recording period on the light field camera and plenoptic sensor, respectively, while $U_l$ and $U_p$ refer to the image processing results per frame for the two configurations. Because the Fried coherence length provides the fundamental resolution limit in imaging through turbulence, we can hypothetically denote an “adaptive” camera with its lens diameter matching the Fried parameter for the best imaging result. Such an ideal camera is represented by $U_c$ with its aperture diameter as a variable in Eqs. (8–9). To be comparable with the light field approaches using the same objective lens, we mark the maximum lens diameter for such a camera as D. When $r_0\!<\!D$, the camera image $U_c(r_0)$ with aperture size $r_0$ renders less distorted point spread functions (PSFs) than the camera image $U_c(D)$ with the maximum size $D$ under the diffraction limit, as phase distortion within the $r_0$ aperture is still coherent. The light field camera result $U_l$ performs similarly to $U_c(r_0)$, which acts like an “adaptive” camera by filtering out the “noise” term that is incoherent and favoring the “signal” term that is still coherent. We use the symbol $\sim$ to denote similar performance between $U_l$ and $U_c$. Based on the frame by frame results, the optimized result $U_l^*$ can be selected. Eq. (9) can be similarly understood for the plenoptic sensor configuration, while its frame by frame performance stays near $U_c(\frac {D}{N})$ in the turbulence scenarios of $\frac {D}{N}<r_0<D$. Intuitively, this means stationary performance over a wide range of turbulence until $r_0=\frac {D}{N}$ is reached. Such a stationary property makes a metric selection rule for $U_p^*$ feasible, as discussed in our earlier work [21].
Meanwhile, the plenoptic sensor’s resolution is diffraction limited by the aperture size $\frac {D}{N}$, which is worse than that of the light field camera (which is limited by the aperture size $r_0$ in the same region). In the case of $r_0<\frac {D}{N}$, the two hardware configurations $U_l$ and $U_p$ have both reached a hard limit where the coherence length is less than the width of an MLA lenslet. In this situation, it is uncertain whether the processing algorithms still restore turbulence degraded images. One may conclude that the criterion suggests that the plenoptic sensor has better turbulence tolerance with a stationary performance, but a worse diffraction related resolution limit.

In summary, Eqs. (4–6) describe the turbulence involvement in the image formation process of a light field camera, by splitting its influence into the “signal” term and the “noise” term. Similarly, Eq. (7) presents the analysis for a plenoptic sensor. Because turbulence distortion behaves differently in the two light field approaches, their corresponding image processing algorithms to suppress turbulence effects are fundamentally different. In either configuration, one needs to apply the matching algorithm to restore the turbulence degraded images, as discussed in this section. The simple criterion described through Eqs. (8–9) can be used to outline the two approaches’ expected performance in imaging through turbulence with a specified hardware configuration and turbulence condition.
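As a compact illustration, the regime boundaries implied by Eqs. (8–9) can be sketched in a few lines of code; the function name and regime labels are our own illustrative choices, not notation from the equations:

```python
def lightfield_regime(r0, D, N):
    """Classify the turbulence regime implied by Eqs. (8-9).

    r0 -- Fried parameter of the channel (same units as D)
    D  -- effective diameter of the objective lens
    N  -- number of sub-apertures per dimension of the plenoptic sensor
    """
    if r0 >= D:
        # weak turbulence: full-aperture imaging is already near its limit
        return "weak"
    if r0 >= D / N:
        # light field camera ~ U_c(r0); plenoptic sensor stays near U_c(D/N)
        return "intermediate"
    # r0 < D/N: both configurations reach the lenslet-width hard limit
    return "strong"

# e.g. a 5 cm objective with a 22-by-22 MLA and a 1 cm Fried parameter
print(lightfield_regime(0.01, 0.05, 22))  # -> intermediate
```

In the "intermediate" band the plenoptic sensor's performance is expected to stay stationary near $U_c(\frac{D}{N})$ while the light field camera tracks $U_c(r_0)$, matching the discussion above.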
3. Experimental comparison
A hybrid plenoptic imaging system has been designed and implemented based on the above analysis of imaging through turbulent media with a light field approach. The system makes two improvements upon the conventional plenoptic sensor. First, a commercial camera lens modified by binding its last lens-piece with a thin achromatic negative lens (to counteract the last lens-piece’s converging power) has been used to replace the objective lens. In this fashion, both the achromatic property of a commercial camera lens and a much extended effective focal length (for $f\!/\#\geq \!16$ in full aperture) are harvested. The markers on a commercial camera lens indicating the original focal distances also facilitate calculation of the modified effective focal length for precise control of the hybrid system to convert between configurations of light field imaging and plenoptic sensor imaging. Second, the long light path behind the camera lens is folded through mirrors to make the hybrid system compact and practical in use. In this manner, the compact imaging system can balance its weight distribution, and we may register its rotational axis in line with the plane of the image sensor. Such a design arrangement makes the system compatible with a gimbal for extended functions of pointing, acquisition and tracking (PAT) applications. The overall system design and implementation is shown in Fig. 2.
In Fig. 2, we show the two-deck optical design of the hybrid plenoptic sensor system in plot (a), with the upper deck holding key optical instruments such as the modified camera lens, filters, the MLA and the image sensor. The lower deck employs high reflectivity mirrors to wrap the long optical path after the objective lens within the compact system. Two sets of adjustable mirrors are used to calibrate the alignment of the light path upon leaving and re-entering the upper deck, respectively. The effective focal length for the camera lens is empirically set at 800mm as a central operating point to facilitate both plenoptic sensing and light field imaging (as shown in Fig. 1) over the same view. With the tuning of focal distance by the main camera lens, the two specific operating points for light field camera imaging and plenoptic sensor imaging shown in Fig. 1 can be achieved. Note that such simple interchangeable function is also facilitated by the fact that the intermediate image in the plenoptic sensor configuration is relatively far from the MLA (significantly larger than $F_{M\!L\!A}$). Otherwise, the spacing between the MLA and the image sensor should be adjusted to render focused cell images [33].
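The effect of binding a thin negative lens to the last lens-piece can be checked with the standard thin-lens combination formula for two lenses in contact, $1/f = 1/f_1 + 1/f_2$. The focal lengths below are illustrative values only, not those of the actual lenses used in the experiment:

```python
def combined_focal_length(f_obj, f_neg):
    """Effective focal length of two thin lenses in contact (Gaussian optics):
    1/f = 1/f1 + 1/f2."""
    return 1.0 / (1.0 / f_obj + 1.0 / f_neg)

# A 200 mm lens group paired with a -270 mm negative element (hypothetical
# values) extends the effective focal length toward the 800 mm operating point:
print(round(combined_focal_length(200.0, -270.0), 1))  # -> 771.4
```

Because the negative element nearly cancels the converging power of the last lens-piece, small changes in the commercial lens's focus markings map to large changes in the combined effective focal length, which is what makes switching between the two operating points of Fig. 1 practical.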
For turbulence generation, we employ a 1.5-meter-long water tube with embedded wire heaters at the bottom to create optical turbulence. The wire heaters are driven by an external programmable Variac transformer to ensure very similar turbulence scenarios over repeated trials. In fact, aside from small scale discrepancies, we find that the “programmed” turbulence distortion patterns over the first 60 seconds are largely repeatable per realization after sufficient reset time. The water tube is placed near the plenoptic system for efficient turbulence distortion generation (with the $\gamma$ value close to unity). The target under test is an LED array placed 10 meters from the plenoptic imaging system. Two additional mirrors are used to fold the free space imaging channel within the lab space. The targeted LED array makes alignment undemanding, whereas real-world alignment of typical targets requires aiding via a side-view camera. In acquiring images, a neutral density filter ($N=4.0$) is used to prevent saturation. The camera exposure time is also set to $0.83$ ms (1/1200 s) to capture instantaneous turbulence degraded images. A view of the experimental arrangement is shown in Fig. 3.
In Fig. 3, we show the primary optical modules of the comparison experiments with labels indicating key components. Module A is the $1^{st}$ view folding mirror for the target located on another optical table. The mirror pairs with a $2^{nd}$ view folding mirror that sits near the target to multiply the target-to-water tube distance by a factor of 3 within a limited lab space. Module B is the water tube system acting as a turbulence generator to create non-trivial channel distortion. The programmable Variac transformer (Compact Power Systems, Titan Mac-01SH) is kept off the optical table (not shown in the setup picture) to avoid vibrations. Module C is the hybrid plenoptic imaging system with individual parts explained in Fig. 2. The side-view camera is not used for the experiment because the water tube (Module B) blocks its view of the target. The alignment is indirectly achieved by tuning the adjustable mirrors inside the hybrid plenoptic imaging system.
In the comparison experiment, we set the waveform of the Variac’s output voltage as 60Hz AC that linearly increases from 10 Volts to 60 Volts in 20 seconds to sweep through increasing levels of turbulence. Sufficient reset time is given between adjacent runs to minimize differences in turbulence realizations, so that the light field camera configuration and the plenoptic sensor configuration deal with almost the same channel distortion and can be compared side by side. The corresponding views from the light field camera configuration and the plenoptic sensor configuration are shown in Fig. 4(a) and Fig. 4(b), respectively.
In Fig. 4, we have manually added the red grid lines to indicate boundaries of the cell images in both configurations. Clearly, as each cell image in a light field camera essentially records rays converging to an imaging point, the hexagonal pattern of the 7 green LEDs can be outlined by the cell arrays depicted in Fig. 4(a). In the formation of the plenoptic sensor, Fig. 4(b) presents individual views of the LEDs per sub-aperture area of the camera lens. For demonstration purposes, we only show the central parts of the images, while the actual number of cells used for the algorithms is 22-by-22 in both configurations. Visualization 1 shows the first 12 seconds, during which the heating voltage increases from 10 Volts to 40 Volts and the light field imaging approaches remain effective. At higher levels of simulated turbulence (with heating voltages higher than 40 Volts), the target is persistently unrecognizable, and neither light field approach assures convergence to good results. In other words, the system operates beyond its limits for heating voltages above 40 Volts. For this reason, results after the $12^{th}$ second are not shown in the comparisons.
To suppress turbulence distortions, the light field camera should utilize the cluster center of pixel intensities in each cell image to stabilize image performance point by point. To do this, we simply used the pixel histogram and picked the intensity with maximum frequency per MLA lenslet. The cell picked pixels are assembled and linearly interpolated to synthesize a “good” light field image. In the plenoptic sensor configuration, a metric based method is used to select the best performing cell image automatically [21]. The metric is summarized as
In order to show frame-by-frame comparisons between the two light field approaches to restore degraded images, we first adopted metric $M_s$ to process the plenoptic sensor images; the comparison with the light field camera’s filtering algorithm is shown in Fig. 5. Later we indicate the metric selected “best” cell images using $M$ with all three dimensions in a summarized comparison chart. In other words, $M$ provides better results than $M_s$ by additionally considering distortion evolution over adjacent frames in the plenoptic sensor [21].
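The light field camera's per-lenslet filtering step described above can be sketched as follows. Square cells, an 8-bit sensor, and the omission of the final interpolation step are our simplifications; the actual MLA geometry in the experiment differs:

```python
import numpy as np

def histogram_mode_filter(frame, cell_px):
    """For each MLA cell of a light field frame, pick the pixel intensity
    with the maximum histogram count (the cluster center of pixel
    intensities), yielding one 'good' value per lenslet."""
    rows, cols = frame.shape[0] // cell_px, frame.shape[1] // cell_px
    out = np.zeros((rows, cols), dtype=frame.dtype)
    for i in range(rows):
        for j in range(cols):
            cell = frame[i * cell_px:(i + 1) * cell_px,
                         j * cell_px:(j + 1) * cell_px]
            counts = np.bincount(cell.ravel(), minlength=256)
            out[i, j] = np.argmax(counts)  # modal intensity of the cell
    return out
```

The low-resolution output (one value per lenslet) would then be linearly interpolated to synthesize the corrected image, as described in the text.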
In Fig. 5 and Visualization 2, we show the image correction results through the light field camera and the plenoptic sensor under increasing levels of turbulence. During the first 6 seconds, the light field camera correction performs better than the metric $M_s$ selected results. In particular, the light field camera corrected images are less diffraction limited, and clearer in revealing the shapes and patterns of the LEDs. During the latter 6 seconds, however, the performance outcome flips. The light field camera image correction becomes ineffective and faulty towards the last few seconds. The plenoptic sensor, on the other hand, still reflects major portions of the LEDs and their layout during the same period. Such an observation matches the theoretical prediction of Eqs. (8–9), where the plenoptic sensor produces better turbulence corrected results once the reduced Fried parameter $r_0$ drops below the camera lens’ diameter D. For weak turbulence levels, on the other hand, the light field camera correction evidently restores a sharper and clearer view of the target.
It is also of great interest to show the frame-by-frame comparison with the algorithms turned off. Therefore, we fixed the focus of the light field camera at the exact plane of the LEDs to render light field imaging results over turbulence without invoking the correction algorithm. Such settings can also be treated as camera views, because a light field camera with fixed focal depth acts the same as a normal camera. Similarly, we turned off the cell selection on the plenoptic sensor and only used the central cell image in the results. The corresponding results with the correction turned off are shown in Fig. 6.
From Fig. 6 and Visualization 3, the light field camera image (no correction) grows unrecognizable after the $6^{th}$ second, and the plenoptic sensor image (no correction) grows unrecognizable after the $11^{th}$ second. In the first $2$ seconds, however, the ordinary light field camera image appears to be improved when the correction algorithm is turned off. This is because the correction algorithm essentially acts like an image filter that operates cell by cell traversing the MLA. Consequently, the resulting image is discretized by MLA cells and may deviate from the LEDs’ round shape, while the original light field camera image is not limited by such quantization. Detailed studies by Pepe [11] and D’Angelo [34] further remove resolution loss due to either diffraction or MLA quantization for a light field camera based on correlation studies among cell images. This means the image correction algorithms for light field cameras will inevitably downgrade the original image resolution as a trade-off for turbulence resilience.
As resolution loss is inevitable for both the light field camera (mainly due to the MLA discretization) and the plenoptic sensor (mainly due to the diffraction limit), we obtained the ground truth (reference images) for both configurations with the same procedure used to render Fig. 6(a) and Fig. 6(b), with turbulence removed. In this manner the correction results shown in Fig. 5 and Visualization 2 can be measured through the correlation coefficient with the reference images for the two light field approaches. The measured correlation coefficients help us understand the performance of the two image correction procedures under increasing levels of turbulence. To avoid potential bias caused by common background patterns and differences in the field of view (FOV), we use a threshold (0.2 times the maximum pixel value per image) to cut off low illumination in background patterns, and tailor the region of interest (ROI) to the centralized 7 LEDs before calculating the correlation coefficients. We also apply the same measures to the results shown in Fig. 6 and Visualization 3 to indicate turbulence degradation with the correction algorithms turned off. The metric based overall comparison is shown in Fig. 7. In addition, based on the $6^{th}$ second divide line, we apply the full metric search (Metric $M$) on the plenoptic sensor during the two $6\!-\!second$ periods to show the measure of “best” cell images based upon the 3D selection. Similar searches over processed frame sequences in the light field camera configuration are not conducted, due to the lack of a clear “guide-star” to indicate the Strehl ratio [35,36] over time.
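A minimal sketch of this thresholded correlation measure follows; the 0.2 threshold factor matches the text, while the assumption that the ROI crop has already been applied by the caller is ours:

```python
import numpy as np

def thresholded_correlation(img, ref, frac=0.2):
    """Zero out pixels below frac * (per-image maximum) to suppress
    background light, then compute the correlation coefficient of the
    image against the turbulence-free reference."""
    a = img.astype(float)
    b = ref.astype(float)
    a = np.where(a < frac * a.max(), 0.0, a)
    b = np.where(b < frac * b.max(), 0.0, b)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

An undistorted frame compared with itself scores 1.0, and scores decay toward 0 as turbulence scrambles the LED pattern, which is the behavior plotted in Fig. 7.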
In Fig. 7, we co-plot the improvement curves for both the light field camera and the plenoptic sensor results. Note that the comparison between curves is meaningful if and only if referenced on the same light field device configuration. Although we take procedures to reduce influences from background light and differences in FOV that improve the correctness of the general trends, numerical cross-comparison between the two configurations’ metrics does not reflect performance differences precisely. In the first $6$ seconds, the correlation coefficients for imaging results (with or without the correction algorithms) all fall from near 1.0 to values close to 0.9, which shows very marginal improvements granted by the correction algorithms. For this reason, we empirically labeled this period as having “normal visual distortion”. Based on the visualizations, the images only suffer from normal visual distortions where the shape and structure of the target remain recognizable. In this regime, the advantage of algorithm provided turbulence resilience in the light field camera is offset by its loss of resolution accuracy, as discussed above. For the case of the plenoptic sensor, the spatial metric $M_s$ also provides marginal gains. The overall metric $M$ only lifts the correlation coefficient metric from 0.969 to 0.991 in this period, which can also be viewed as a marginal improvement. When the turbulence level continues to increase, the latter $6$ seconds start to report significant improvement granted by each correction algorithm. Specifically, the original light field imaging (that acts like a normal camera) quickly loses recognition of the target, while the correction retains a recognizable target until the $9^{th}$ second, as can be witnessed in Visualization 2 and Visualization 3. For this reason we empirically labeled the latter $6$ seconds as a “strong visual distortion” period.
In the case of the plenoptic sensor configuration, the gain through spatial metric selection $M_s$ also becomes significant, which can likewise be witnessed in Visualization 2 and Visualization 3. When the overall 3D metric $M$ is applied over the latter $6$ seconds, the correlation coefficient is lifted from 0.918 to 0.982, an extra improvement.
For additional visualization demonstrations, we present the algorithm processed results at the $6^{th}$ second and the $12^{th}$ second in Fig. 8 and Fig. 9, respectively. We also show corresponding snapshots when the correction algorithms are turned off at the $6^{th}$ second and the $12^{th}$ second in Fig. 10 and Fig. 11, respectively.
The snapshots in Figs. (8–11) confirm the claims of the above discussion: 1) both light field imaging approaches provide effective correction over non-trivial turbulence distortions; 2) the plenoptic sensor has additional tolerance of small Fried parameter values (lower spatial coherence length) at the cost of lower resolution due to diffraction. Therefore, the generalized rules in Eqs. (8–9) regarding applying different light field techniques to imaging through turbulence have been validated through our lab experiments. It is also clear from both our theory and experimental studies that a light field camera with its turbulence correction algorithm works relatively close to a normal camera, but gains extra resilience when turbulence levels increase. In harsher situations, the plenoptic sensor configuration can be applied to work against stronger turbulence distortions for target recognition. However, there is still an upper limit of turbulence level, inferred as $r_0<\frac {D}{N}$ by Eq. (9), indicating that even the plenoptic sensor configuration may not work beyond this range.
4. Conclusions and discussions
In this study, we have systematically analyzed the differences between two light field approaches for imaging through turbulent media by way of theory and proof-of-concept experiments. Our results show that different light field imaging platforms point to unique approaches to correct turbulence degraded images based upon their respective principles of 4D light field intensity mapping. In generalized light field imaging configurations, known as focused plenoptic cameras [33,37,38], where the imaging results can be either point-based or sub-angular-view-based per MLA lenslet, the image correction algorithm can be engineered based on its proximity to either of the two major configurations. Correspondingly, its performance in restoring turbulence degraded images shall fall between that of a light field camera and a plenoptic sensor.
Funding
Office of Naval Research (ONR) (N000141812008).
Acknowledgments
The authors sincerely thank Ms. Sarwat Chappell for her foresight and strong support of the plenoptic sensor development over the past many years.
References
1. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Comput. Sci. Tech. Rep. CSTR 2, 1–11 (2005).
2. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006).
3. H. Wang, L. Niu, W. Dai, X. Zhang, H. Wang, and C. Xie, “Matrix distributed liquid-crystal microlens arrays driven by electrically scanning voltage signals,” in Tenth International Conference on Information Optics and Photonics, vol. 10964 (International Society for Optics and Photonics, 2018), p. 109641V.
4. W. Dai, X. Xie, D. Li, X. Han, Z. Liu, D. Wei, Z. Xin, X. Zhang, H. Wang, and C. Xie, “Liquid-crystal microlens array with swing and adjusting focus and constructed by dual patterned ITO-electrodes,” in MIPPR 2017: Multispectral Image Acquisition, Processing, and Analysis, vol. 10607 (International Society for Optics and Photonics, 2018), p. 106070A.
5. A. Pan, T. Chen, C. Li, and X. Hou, “Parallel fabrication of silicon concave microlens array by femtosecond laser irradiation and mixed acid etching,” Chin. Opt. Lett. 14(5), 052201 (2016). [CrossRef]
6. R. J. Lin, V.-C. Su, S. Wang, M. K. Chen, T. L. Chung, Y. H. Chen, H. Y. Kuo, J.-W. Chen, J. Chen, Y.-T. Huang, J.-H. Wang, C. H. Chu, P. C. Wu, T. Li, Z. Wang, S. Zhu, and D. P. Tsai, “Achromatic metalens array for full-colour light-field imaging,” Nat. Nanotechnol. 14(3), 227–231 (2019). [CrossRef]
7. S. You, Y. Lu, W. Zhang, B. Yang, R. Peng, and S. Zhuang, “Micro-lens array based 3-D color image encryption using the combination of gravity model and Arnold transform,” Opt. Commun. 355, 419–426 (2015). [CrossRef]
8. P. Paudyal, F. Battisti, A. Neri, and M. Carli, “A study of the impact of light fields watermarking on the perceived quality of the refocused data,” in 2015 3DTV-Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON), (IEEE, 2015), pp. 1–4.
9. Y. Endo, K. Wakunami, T. Shimobaba, T. Kakue, D. Arai, Y. Ichihashi, K. Yamamoto, and T. Ito, “Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera,” Opt. Commun. 356, 468–471 (2015). [CrossRef]
10. J.-H. Park and M. Askari, “Non-hogel-based computer generated hologram from light field using complex field recovery technique from Wigner distribution function,” Opt. Express 27(3), 2562–2574 (2019). [CrossRef]
11. F. V. Pepe, F. Di Lena, A. Mazzilli, E. Edrei, A. Garuccio, G. Scarcelli, and M. D’Angelo, “Diffraction-limited plenoptic imaging with correlated light,” Phys. Rev. Lett. 119(24), 243602 (2017). [CrossRef]
12. F. V. Pepe, G. Scarcelli, A. Garuccio, and M. D’Angelo, “Plenoptic imaging with second-order correlations of light,” Quantum Meas. Quantum Metrol. 3(1), 20–26 (2016). [CrossRef]
13. L. Su, Y. Liu, and Y. Yuan, “Spectrum reconstruction of the light-field multimodal imager,” IEEE Access 7, 9688–9696 (2019). [CrossRef]
14. G. Scala, M. D’Angelo, A. Garuccio, S. Pascazio, and F. V. Pepe, “Signal-to-noise properties of correlation plenoptic imaging with chaotic light,” Phys. Rev. A 99(5), 053808 (2019). [CrossRef]
15. F. Di Lena, F. Pepe, A. Garuccio, and M. D’Angelo, “Correlation plenoptic imaging: An overview,” Appl. Sci. 8(10), 1958 (2018). [CrossRef]
16. C. Wu, J. Ko, and C. C. Davis, “Determining the phase and amplitude distortion of a wavefront using a plenoptic sensor,” J. Opt. Soc. Am. A 32(5), 964–978 (2015). [CrossRef]
17. Z. Xin, D. Wei, M. Chen, X. Wang, X. Zhang, H. Wang, and C. Xie, “Polarized wavefront measurement using an electrically tunable focused plenoptic camera,” in Photonic Instrumentation Engineering VI, vol. 10925 (International Society for Optics and Photonics, 2019), p. 1092517.
18. Y.-S. Luan, B. Xu, P. Yang, and G.-M. Tang, “Wavefront analysis for plenoptic camera imaging,” Chin. Phys. B 26(10), 104203 (2017). [CrossRef]
19. C. Wu, D. A. Paulson, J. R. Rzasa, and C. C. Davis, “Extracting phase distortion from laser glints on a remote target using phase space plenoptic mapping,” J. Opt. Soc. Am. B 36(7), 1964–1971 (2019). [CrossRef]
20. M. Loktev, O. Soloviev, S. Savenko, and G. Vdovin, “Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation,” Opt. Lett. 36(14), 2656–2658 (2011). [CrossRef]
21. C. Wu, J. Ko, and C. C. Davis, “Imaging through strong turbulence with a light field approach,” Opt. Express 24(11), 11975–11986 (2016). [CrossRef]
22. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015). [CrossRef]
23. S. Zhu, A. Lai, K. Eaton, P. Jin, and L. Gao, “On the fundamental comparison between unfocused and focused light field cameras,” Appl. Opt. 57(1), A1–A11 (2018). [CrossRef]
24. L. Andrews, R. Phillips, R. Crabbs, and T. Leclerc, “Deep turbulence propagation of a Gaussian-beam wave in anisotropic non-Kolmogorov turbulence,” in Laser Communication and Propagation through the Atmosphere and Oceans II, vol. 8874 (International Society for Optics and Photonics, 2013), p. 887402.
25. M. Vorontsov, J. Riker, G. Carhart, V. R. Gudimetla, L. Beresnev, T. Weyrauch, and L. C. Roberts Jr., “Deep turbulence effects compensation experiments with a cascaded adaptive optics system using a 3.63 m telescope,” Appl. Opt. 48(1), A47–A57 (2009). [CrossRef]
26. I. Toselli, L. C. Andrews, R. L. Phillips, and V. Ferrero, “Free-space optical system performance for laser beam propagation through non-Kolmogorov turbulence,” Opt. Eng. 47(2), 026003 (2008). [CrossRef]
27. S. Gladysz, M. Segel, C. Eisele, R. Barros, and E. Sucher, “Estimation of turbulence strength, anisotropy, outer scale and spectral slope from an LED array,” in Laser Communication and Propagation through the Atmosphere and Oceans IV, vol. 9614 (International Society for Optics and Photonics, 2015), p. 961402.
28. M. C. Roggemann, B. M. Welsh, and B. R. Hunt, Imaging Through Turbulence (CRC Press, 2018).
29. R. G. Lane, A. Glindemann, and J. C. Dainty, “Simulation of a Kolmogorov phase screen,” Waves Random Media 2(3), 209–224 (1992). [CrossRef]
30. D. A. Paulson, C. Wu, and C. C. Davis, “Randomized spectral sampling for efficient simulation of laser propagation through optical turbulence,” arXiv preprint (2019).
31. N. Joshi and M. Cohen, “Seeing Mt. Rainier: lucky imaging for multi-image denoising, sharpening, and haze removal,” (Microsoft Research, 2010).
32. N. M. Law, C. D. Mackay, and J. E. Baldwin, “Lucky imaging: high angular resolution imaging in the visible from the ground,” Astron. Astrophys. 446(2), 739–745 (2006). [CrossRef]
33. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in 2009 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2009), pp. 1–8.
34. M. D’Angelo, F. V. Pepe, A. Garuccio, and G. Scarcelli, “Correlation plenoptic imaging,” Phys. Rev. Lett. 116(22), 223602 (2016). [CrossRef]
35. G. Rousset, J. Fontanella, P. Kern, P. Gigan, and F. Rigaut, “First diffraction-limited astronomical images with adaptive optics,” Astron. Astrophys. 230, L29–L32 (1990).
36. D. R. Iskander, “Computational aspects of the visual Strehl ratio,” Optom. Vis. Sci. 83(1), 57–59 (2006). [CrossRef]
37. T. G. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging 19(2), 021106 (2010). [CrossRef]
38. Y. Li, R. Olsson, and M. Sjöström, “Compression of unfocused plenoptic images using a displacement intra prediction,” in 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), (IEEE, 2016), pp. 1–4.