
Comparison between the plenoptic sensor and the light field camera in restoring images through turbulence

Open Access

Abstract

Similar to the “lucky imaging” technique that selects the best local features over time, spatial redundancy allows for the localization of turbulence induced image distortions and selection of the best features that are least distorted by turbulence. A new technique to restore turbulence degraded images is proposed based on imaging with spatial redundancies. Two imaging frameworks that are candidates for implementation of the technique are the plenoptic sensor and the light field camera, which collect multiple depictions of the target through sub-aperture imaging. Preliminary studies have demonstrated the effectiveness of either device in imaging through turbulence. However, as visual distortions vary significantly from weak to strong turbulence conditions, it is unclear when and how a light field approach should be applied to enhance target recognition over distorted media. We present an in-depth study on the fundamental differences between the two devices with regards to turbulence distortion, as well as their image restoration mechanisms. Our analysis, combined with proof-of-concept experiments, shows that the turbulence resilience of light field imaging techniques depends strongly on the mechanism of mapping the light field. This universal finding serves as guidance for imaging and object recognition with light field approaches.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Light field cameras were initially developed for refocusable photography and semi-3D microscopic imaging [1,2]. Their significant impact quickly spread to many applications such as lens manufacturing [3–6], optical encryption [7,8], holography [9,10], optical sensing and imaging near the diffraction limit [11–15], and wavefront sensing [16–19].

Recently, new applications for the light field camera, and modifications thereof, have been proposed to solve the enduring problems of imaging through turbulent media [20,21]. Essentially, these approaches use the spatial redundancy collected through light field images to synthesize a result from a collection of non-distorted features or pixels. In other words, they also provide selections of good results, in a similar fashion to conventional “lucky imaging” methods. Two major approaches have proved effective: the pixel-based light field camera and the view-based plenoptic sensor. The former traces the rays which form an image point back to different sub-aperture points at the entrance pupil of the camera, and thereafter differentiates regional distortions. The latter maps the angular spectra of the image formation to an image array to avoid turbulence affected regions. Both methods isolate turbulence degradation by analyzing the image formation process, where steady patterns from the target and temporal turbulence induced distortions can be identified, respectively. In the literature, the two light field approaches are also termed the “plenoptic 1.0 camera” and “plenoptic 2.0 camera” [22], or the “unfocused plenoptic camera” and “focused plenoptic camera” [23].

As there is a wide range of possible turbulence scenarios, such as weak, medium or strong turbulence [24,25], Kolmogorov or non-Kolmogorov [26,27], and isotropic or anisotropic [24,27], various limitations may be imposed on the light field based imaging approaches. For example, the plenoptic sensor approach requires that the sub-aperture size defined by the micro-lens array (MLA) be smaller than the Fried parameter for object recognition through turbulence [21]. It remains unclear whether these two major light field approaches are dual to each other (sharing the same fundamental limitations), or whether each enhances imaging results under specific turbulence regimes.

To comprehensively analyze the effectiveness of different light field imaging approaches over a turbulent channel, we have built a hybrid plenoptic sensor system that can switch between the two major configurations of light field imaging over the same view. A water-tube based turbulence generator is integrated with a programmable heater to ensure similarity in turbulence generation over repeated runs. We deduce through theoretical analysis and experiments that the two approaches handle turbulence quite differently. For example, our study finds that the pixel-based light field camera approach requires image filters to suppress turbulence distortion, while the view-based plenoptic sensor relies on a metric selected cell image for a clear view of the target. Our comparative study provides the most fundamental understanding to date regarding the application of light field imaging techniques to overcome turbulence effects, where turbulence distortions can be selectively suppressed through a wise choice of light field imaging device.

The rest of the work is arranged as follows: Part 2 establishes the theoretical comparison between the two light field approaches, Part 3 illustrates the experimental validations, and conclusions are drawn in Part 4.

2. Comparison of two light field approaches in imaging through turbulence

The fundamental difference between a light field camera approach and a plenoptic sensor approach lies in the use of the lenslets in the MLA. The light field camera uses a lenslet as an image point resolver, and the plenoptic sensor uses a lenslet to provide a view of the scene. Their respective functions in imaging through turbulence based on geometric optics are illustrated in Fig. 1.

Fig. 1. Structural difference of imaging through turbulence using a light field camera and a plenoptic sensor with turbulence effects simplified as wavefront distortion.

In Fig. 1, we have neglected the fact that the two systems have different MLA pitch sizes, and have drawn the components at equal size. Turbulence distortion is simplified as wavefront perturbation following convention [28], and is represented by the dashed red curves to the left of the objective lenses. Consequently, the gray dashed lines represent ray tracing in the absence of turbulence, and the red dashed and arrowed curves to the right of the MLA represent the results of turbulence perturbation. It is evident that the same turbulence scenario manifests differing impacts on the two light field imaging platforms. In the light field camera configuration, turbulence blurs adjacent pixels under each lenslet, which confuses the ray tracing mechanism. It is also worth noting that this influence extends to all of the MLA lenslets that sample different points of the target. In the plenoptic sensor configuration, as views of the same target are formed by an array of MLA lenslets in concert with sub-aperture regions of the front objective lens, turbulence distorts each view independently (and with differing severity) in each sub-aperture region. There are, of course, other light field recording structures in which turbulence induced image distortions are visualized differently from the two systems mentioned above. Therefore, it is paramount to understand whether any approach holds dominant advantages in restoring turbulence degraded images when selecting light field recording schemes. In other words, we are interested in finding whether turbulence correction can be optimized through the selection of a suitable light field system.

Our comparison focuses on the two representative systems shown in Fig. 1, due to their popularity; a generalized discussion will be provided in Part 4 to extend our findings to arbitrary light field imaging systems. The comparison is conducted through the framework of generalized equations describing their mechanisms in imaging through turbulence, followed by experimental validations. A few additional assumptions are necessary to make fair and practical comparisons between the two systems. First, pixel quantization on the imaging sensor is ignored. Second, the paraxial approximation is used by assuming that the target geometry is much smaller than its distance $L$ from the imaging system. And third, the turbulence distortion is approximated as a 2D phase screen [29,30] located at distance $z$ from the target.
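The phase screen in the third assumption can be simulated numerically with the standard spectral method [29]. The following is a minimal sketch, assuming a pure Kolmogorov spectrum; the grid size, grid spacing, and Fried parameter are illustrative placeholders rather than values from our experiment, and the simple FFT filtering under-samples the lowest spatial frequencies (subharmonics would be added in practice).

```python
import numpy as np

def kolmogorov_phase_screen(n=256, dx=2e-3, r0=0.05, seed=None):
    """One realization of a Kolmogorov phase screen phi [rad] on an
    n x n grid with spacing dx [m] and Fried parameter r0 [m]."""
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dx)                      # frequency grid spacing [1/m]
    fx = np.fft.fftfreq(n, dx)
    fx, fy = np.meshgrid(fx, fx)
    f = np.hypot(fx, fy)
    f[0, 0] = np.inf                         # remove the singular DC term
    # Kolmogorov phase power spectral density: 0.023 r0^(-5/3) f^(-11/3)
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    # filter complex Gaussian white noise by sqrt(PSD), then invert the FFT
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n
    return screen.real

phi = kolmogorov_phase_screen(seed=0)        # plays the role of phi in Eq. (1)
```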

Because both systems in Fig. 1 use a common camera lens (objective lens), we express the scalar field on the ideal image plane of the camera lens in Eq. (1) and use it as a central point for comparison.

$$\begin{aligned} U_i(s,t) & = \frac{1}{M^3}U_o(Ms,Mt)\mathcal{F}[\phi(\gamma x,\gamma y)]\Big|_{f_x=0, f_y=0}\\ & \quad+\frac{1}{M^3}\iint_{x\neq{Ms},y\neq{Mt}}{U_o(x,y)\,\mathcal{F}[\phi(\gamma x,\gamma y)]\Big|_{f_x=\frac{x-Ms}{\lambda M F_\textrm{eff}}, f_y=\frac{y-Mt}{\lambda M F_\textrm{eff}}}\,dx\,dy}. \end{aligned}$$
In Eq. (1), we have applied the lens law, which ideally renders linearity between the target field $U_o$ and the ideal image field $U_i$ with a scaling ratio of $M =-\frac {L}{F_{e\!f\!f}}$ in the absence of turbulence. We temporarily neglect the finite aperture of the camera lens in using the Fourier transform to describe the influence of turbulence induced phase distortion $\phi$ on the image point $(s,t)$. The parameter $\gamma \,=\frac {z}{L}$ is evaluated as the ratio between the target to phase screen distance ($z$) and the target to camera distance ($L$). The effective focal length is expressed by $F_{\textrm{eff}}$. For convenience of illustrating turbulence effects in light field imaging, we have separated its influence into two terms. The first term describes turbulence distortion on the ideal point-to-point field transformation to the image point $(s,t)$, while the second term characterizes the summation of all the other distorted image points with part of their point spread function (PSF) overlaying image point $(s,t)$. Heuristically, we denote the first term in Eq. (1) as the “signal” term and the second term as the “noise” term, based on the analogy of characterizing turbulence effects as type I and II errors in an imaging process.

In further propagation of $U_i(s,t)$, the light field camera transforms the field point through an MLA lenslet into an image field that can be described by Eq. (2).

$$U_l(u,v;N_s,N_t) =\frac{1}{j\!\lambda\!F_{MLA}}\iint{U_i(s,t)\,P_{MLA}(s,t)\exp{\bigg [}{-}j\frac{2\pi}{\lambda\!F_{MLA}}(su+tv){\bigg ]}\,ds\,dt}.$$
In Eq. (2), the local coordinates and the indices of the lenslet are expressed through $(u,v)$ and the integer pair $(N_s,N_t)$, respectively. We have dropped the parabolic phase term per $(u,v)$ point on the plane of the image sensor, as it does not affect the image sampling process. The finite aperture of the MLA lenslet has been accounted for by the pupil function $P_{MLA}$, which evaluates to 1 inside its domain and 0 otherwise. The image sensor of the light field camera is typically placed at the back focal plane of the MLA, whose focal length is represented by $F_{M\!L\!A}$ (a numerical FFT sketch of this lenslet transform is given below Eq. (6)). As Eq. (2) takes the form of a Fourier transform, we can further simplify it as:
$$U_l(u,v;N_s,N_t) =\kappa\,\mathcal{F}[U_i(s-N_sd,t-N_td)]*\mathcal{F}[P_{MLA}(s,t)]\Big|_{f_s={-}\frac{u}{\lambda\!F_{MLA}}, f_t={-}\frac{v}{\lambda\!F_{MLA}}}.$$
In Eq. (3), $\kappa$ represents the constant coefficient that arises in rewriting Eq. (2) as the convolution between the Fourier transforms of the image field and of the pupil function of an MLA lenslet. The MLA lenslet indices are $(N_s,N_t)$, and $d$ represents the pitch of the MLA. The first Fourier spectrum term in Eq. (3) agrees with the geometric understanding of light field cameras, where an image point spreads into a “disk” pattern after the MLA to make “rays” tractable. The second Fourier transform in Eq. (3) accounts for the diffraction limit of the small MLA lenslets. With turbulence involved, we discuss its influence on the light field camera based on the two terms derived in Eq. (1) as
$$U_{l}^{sig}(u,v;N_s,N_t) =\frac{\kappa}{M^3}\,\frac{\iint_A{\phi(\gamma x,\gamma y)dx\,dy}}{||A||}{\bigg [}e^{{-}j2\pi (N_sdf_s+N_tdf_t)}\widehat{U_o}{\bigg (}\frac{f_s}{M},\frac{f_t}{M}{\bigg )}{\bigg ]}*\widehat{P_{MLA}}(f_s,f_t).$$
$$\begin{aligned} U_{l}^{noise}(u,v;N_s,N_t) & =\frac{\kappa F_{M\!L\!A}^2}{M^3 F_{e\!f\!f}^2}{\bigg \{} \iint_{x\neq{M\!s},y\neq{M\!t}}{U_o(x,y)e^{-\frac{j2\pi\,\lambda\,F_{e\!f\!f}}{M}{\big [}(x+MN_sd)\,f_s+(y+MN_td)\,f_t{\big ]}}dx\,dy}\\ & \cdot{\phi}(\gamma\lambda\,F_{e\!f\!f}f_s,\gamma\lambda\,F_{e\!f\!f}f_t){\bigg \}} *\widehat{P_{MLA}}(f_s,f_t). \end{aligned}$$
$$U_{l}(u,v;N_s,N_t) =U_{l}^{sig}(u,v;N_s,N_t)+U_{l}^{noise}(u,v;N_s,N_t) .$$
In Eq. (4), we have dropped the repetitive notation for evaluating the frequency components as discussed in Eq. (3). The spatial Fourier transforms of the fields $U_o$ and $P_{M\!L\!A}$ based on image plane coordinates $(s,t)$ are represented with the hat annotation. The integral area $A$ covers the entrance aperture of the camera lens. The exponential term serves as a region selector on $U_o$, which is further convolved with the Fourier spectrum of the MLA lenslet pupil function to include its diffraction effects. The frequency terms $(f_s,f_t)$ correspond to the angular rays converging through each image point $U_i(s,t)$, and can be matched with the ray explanation of the light field camera. Eq. (4) shows that turbulence creates a uniform complex field attenuation of the “signal” term in the light field camera for each realization. Similarly, in Eq. (5) the evaluation notations for frequencies have been dropped. The “noise” term in a light field camera, unlike the “signal” term, depends on the local geometry under each MLA lenslet. In other words, the group of pixels behind an MLA lenslet that interpret the same image point field are distorted differently, as shown by the $(u,v)$ dependent phase screen function $\phi$ in the convolution. Additionally, the integral in Eq. (5) is non-trivial in the vicinity of $U_o(-M\!N_s d,-M\!N_t d)$, which means its convolution will be non-trivial in the same vicinity. Overall, Eq. (5) interprets the “noise” term as fans of rays from nearby points on the target which are locally distorted in reaching sub-aperture areas of the camera lens, get convolved with the pupil function of the diffraction limited MLA lenslets, and consequently spoil the ray spreading after the MLA.
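Returning to Eq. (2): numerically, the lenslet transform amounts to a single focal-plane Fourier transform of the image field windowed by one lenslet pupil, which is straightforward to evaluate with an FFT. The sketch below is illustrative only: the wavelength, MLA focal length, pitch, and sampling are assumed placeholder values, and the field $U_i$ must already be sampled on the image plane.

```python
import numpy as np

wavelength = 532e-9     # assumed wavelength [m]
F_mla = 2.2e-3          # assumed MLA lenslet focal length [m]
d = 150e-6              # assumed MLA pitch [m]
n = 64                  # samples across one lenslet
ds = d / n              # image-plane sampling [m]

def lenslet_field(U_i_cell):
    """Sensor-plane field behind one MLA lenslet per Eq. (2).
    U_i_cell: complex image field already cropped to the lenslet pupil
    (i.e., multiplied by P_MLA), sampled on an n x n grid."""
    U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U_i_cell))) * ds ** 2
    return U / (1j * wavelength * F_mla)

# the sensor coordinate u under the lenslet maps to spatial frequency via
# f_s = -u / (lambda * F_MLA), matching the evaluation points in Eq. (3)
f_s = np.fft.fftshift(np.fft.fftfreq(n, ds))
u = -f_s * wavelength * F_mla
```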

With turbulence's contribution expressed in the summation form of Eq. (6), how a light field camera works in imaging through turbulence is straightforward to convey: the cluster center of pixel intensities behind each MLA lenslet represents a relatively good image point, assuming that the “signal” terms prevail over the “noise” terms in a single frame, or over a fractional observation time. The mechanism is essentially the same as “lucky imaging” [31,32], with the additional capacity to identify sharp features through the fan of rays that forms each image point. Although the light field camera loses its capacity to refocus under non-trivial turbulence influence, it does serve as a turbulence suppressor in imaging tasks.

We now analyze the plenoptic sensor mechanism in a fashion very similar to Eq. (1). The analysis can be understood as dividing Eq. (1) into its sub-aperture contributions from the camera lens, which are imaged under individual MLA lenslets. The function of the plenoptic sensor can be expressed as

$$\begin{aligned} U_p(u,v;N_a,N_b) & =\frac{1}{M^3M'}{\bigg \{} U_o(MM'u,MM'v)\mathcal{F}[\phi(\gamma x,\gamma y)P(x,y;N_a,N_b)]\Big |_{f_x=0, f_y=0}\\ & +\iint\limits_{(x,y)\in \overline{O}}{U_o(x,y)\,\mathcal{F}[\phi(\gamma x,\gamma y)P(x,y;N_a,N_b)]\Big |_{f_x=\frac{x-M\!M'u}{\lambda M F_\textrm{eff}}, f_y=\frac{y-M\!M'v}{\lambda M F_\textrm{eff}}}\,dx\,dy}{\bigg \}}. \end{aligned}$$
In Eq. (7), the pupil function $P$ represents the equivalent aperture stop on the camera lens for the MLA lenslet indexed with $(N_a,N_b)$. This also accounts for the diffraction limited MLA lenslets in the plenoptic imaging results. The additional scaling factor between the field point after the camera lens $U_i(s,t)$ and the field point on the plenoptic sensor's image plane $U_p(u,v;N_a,N_b)$ is represented by $M'$ by direct application of the lens law. We have also simplified the integral limit, with $(x,y)\in \overline {O}$ representing the same limit as in Eq. (1). Because the “signal” and “noise” terms in the plenoptic sensor remain separated in each cell image, as shown by Eq. (7), one can always select the best performing cell image that experiences the least turbulence distortion over a recording period. An image metric has been engineered in previous work to automatically select the best performing cell image over a static scene through turbulence [21]. Overall, the key mechanism of the plenoptic sensor approach in imaging through turbulence is to image within the coherence time of the turbulence. In this manner, a sub-aperture area with the least phase distortion can be identified to render good imaging results.

It is worth pointing out that turbulence has distinctive impacts on different light field imaging platforms. The light field camera, as limited by the “noise” term in Eq. (5), works best when the Fried parameter (transverse coherence length) of the turbulent channel is larger than or comparable to the diameter of the camera lens. The plenoptic sensor, under the same conditions, does not perform as well as a light field camera due to the diffraction limit. When turbulence grows to the level where the Fried parameter drops below the diameter of the camera lens, the plenoptic sensor remains effective until the dimension limit set by the sub-aperture stop of an MLA lenslet, defined through $P(x,y;N_a,N_b)$ in Eq. (7), is reached. It is also worth pointing out that the relative depth of the equivalent phase screen $\gamma$ plays a geometric scaling role, with ratio $1/\gamma$. In other words, the equivalent transverse coherence length is magnified by $1/\gamma$; for example, a phase screen at the midpoint of the path ($\gamma =0.5$) behaves as if its transverse coherence length were doubled. This leads to the finding that the light field camera's suppression of turbulence is enhanced when the turbulence resides closer to the target. The plenoptic sensor, on the other hand, can be claimed effective against deep or strong turbulence, where the path-aggregated Fried parameter is expected to be less than the camera lens diameter.

Overall, a simple criterion can be summarized regarding the effectiveness of each light field approach in suppressing turbulence distortion. We denote $D$ as the effective diameter of the objective lens with its magnification ratio fixed at 1:1, and the aforementioned Fried parameter of the channel as $r_0$. We assume that the number of sub-apertures along a given dimension of the plenoptic sensor is $N$, and that the light field camera has matched f-numbers between the MLA and the objective lens. This yields:

$$U_l^*\geq U_l\sim U_c(\min\{D,r_0\})\,,$$
for the light field camera, and
$$U_p^*\geq U_p \sim U_c(\min{\big \{} \frac{D}{N},r_0{\big \}} )\,.$$
for the plenoptic sensor. In Eqs. (8)–(9), $U_l^*$ and $U_p^*$ refer to image processing algorithm results (best features) over a recording period on the light field camera and the plenoptic sensor, respectively, while $U_l$ and $U_p$ refer to the per-frame image processing results of the two configurations. Because the Fried coherence length provides the fundamental resolution limit in imaging through turbulence, we can hypothetically denote an “adaptive” camera whose lens diameter matches the Fried parameter for the best imaging result. Such an ideal camera is represented by $U_c$, with its aperture diameter as a variable in Eqs. (8)–(9). To be comparable with the light field approaches using the same objective lens, we mark the maximum lens diameter for such a camera as $D$. When $r_0\!<\!D$, the camera image $U_c(r_0)$ with aperture size $r_0$ renders less distorted PSFs than the diffraction limited camera image $U_c(D)$ with the maximum size $D$, as the phase distortion within the $r_0$ aperture is still coherent. The light field camera result $U_l$ performs similarly to $U_c(r_0)$: it acts like an “adaptive” camera by filtering out the “noise” term, which is incoherent, and favoring the “signal” term, which is still coherent. We use the symbol $\sim$ to denote similar performance between $U_l$ and $U_c$. Based on the frame-by-frame results, the optimized result $U_l^*$ can be selected. Eq. (9) can be understood similarly for the plenoptic sensor configuration, whose frame-by-frame performance stays near $U_c(\frac {D}{N})$ in the turbulence regime $\frac {D}{N}<r_0<D$. Intuitively, this means stationary performance over a wide range of turbulence until $r_0=\frac {D}{N}$ is reached. This stationary property makes a metric selection rule for $U_p^*$ feasible, as discussed in our earlier work [21]. Meanwhile, the plenoptic sensor's resolution is diffraction limited by the aperture size $\frac {D}{N}$, which is worse than the light field camera (limited by the aperture size $r_0$ in the same regime). In the case of $r_0<\frac {D}{N}$, the two hardware configurations $U_l$ and $U_p$ have both reached a hard limit where the coherence length is less than the width of an MLA lenslet. In this situation, it is uncertain whether the processing algorithms can still restore turbulence degraded images. One may conclude that the criterion suggests the plenoptic sensor has better turbulence tolerance with stationary performance, but a worse diffraction related resolution limit.
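The criterion in Eqs. (8)–(9) can be phrased as a small decision rule. The sketch below is illustrative only; it merely encodes the regime boundaries discussed above, with $D$, $r_0$, and $N$ supplied by the user (the example numbers are arbitrary).

```python
def lightfield_regime(D, r0, N):
    """Expected per-frame behavior from Eqs. (8)-(9).
    D: objective lens diameter [m]; r0: Fried parameter [m];
    N: sub-apertures per dimension in the plenoptic sensor."""
    if r0 >= D:
        # weak turbulence: both approach the diffraction-limited camera U_c(D)
        return "weak: light field camera preferred (higher resolution)"
    if r0 >= D / N:
        # U_l ~ U_c(r0) degrades with r0, while U_p stays near U_c(D/N)
        return "intermediate: plenoptic sensor performance stationary"
    # r0 < D/N: coherence length below one MLA sub-aperture
    return "strong: beyond the hard limit of both configurations"

print(lightfield_regime(D=0.05, r0=0.01, N=22))   # example numbers only
```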

In summary, Eqs. (4)–(6) describe the involvement of turbulence in the image formation process of a light field camera by splitting its influence into the “signal” term and the “noise” term. Similarly, Eq. (7) presents the analysis for a plenoptic sensor. Because turbulence distortion behaves differently in the two light field approaches, their corresponding image processing algorithms to suppress turbulence effects are fundamentally different. In either configuration, one needs to apply the matching algorithm to restore the turbulence degraded images, as discussed in this section. The simple criterion described through Eqs. (8)–(9) outlines the two approaches' expected performance in imaging through turbulence for a specified hardware configuration and turbulence condition.

3. Experimental comparison

A hybrid plenoptic imaging system has been designed and implemented based on the above analysis of imaging through turbulent media with a light field approach. The system makes two improvements upon the conventional plenoptic sensor. First, a commercial camera lens, modified by binding its last lens-piece with a thin achromatic negative lens (to counteract the last lens-piece's converging power), has been used to replace the objective lens. In this fashion, both the achromatic property of a commercial camera lens and a much extended effective focal length (for $f\!/\#\geq \!16$ in full aperture) are obtained. The markers on a commercial camera lens indicating the original focal distances also facilitate calculation of the modified effective focal length, for precise control of the hybrid system when converting between the light field imaging and plenoptic sensor imaging configurations. Second, the long light path behind the camera lens is folded through mirrors to make the hybrid system compact and practical in use. In this manner, the compact imaging system can balance its weight distribution, and we may register its rotational axis in line with the plane of the image sensor. This design arrangement makes the system compatible with a gimbal for extended pointing, acquisition and tracking (PAT) applications. The overall system design and implementation is shown in Fig. 2.

Fig. 2. The hybrid plenoptic imaging system with (a) structural design, (b) exposed view from the viewpoint of the camera, and (c) in weatherized housing.

In Fig. 2, we show the two-deck optical design of the hybrid plenoptic sensor system in plot (a), with the upper deck holding the key optical instruments such as the modified camera lens, filters, the MLA and the image sensor. The lower deck employs high reflectivity mirrors to wrap the long optical path after the objective lens within the compact system. Two sets of adjustable mirrors are used to calibrate the alignment of the light path upon leaving and re-entering the upper deck, respectively. The effective focal length of the camera lens is empirically set at 800 mm as a central operating point to facilitate both plenoptic sensing and light field imaging (as shown in Fig. 1) over the same view. By tuning the focal distance of the main camera lens, the two specific operating points for light field camera imaging and plenoptic sensor imaging shown in Fig. 1 can be achieved. Note that this simple interchangeability is also facilitated by the fact that the intermediate image in the plenoptic sensor configuration is relatively far from the MLA (significantly farther than $F_{M\!L\!A}$). Otherwise, the spacing between the MLA and the image sensor would have to be adjusted to render focused cell images [33].
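As a quick consistency check with the lens law, the sketch below uses the values given in this section ($F_{\textrm{eff}} = 800$ mm, target at 10 m); the MLA focal length is an assumed placeholder, used only to verify that the intermediate image lies far beyond $F_{M\!L\!A}$.

```python
def image_distance(f, u):
    """Gaussian thin-lens law 1/u + 1/v = 1/f, solved for the image
    distance v (all distances in meters, real-is-positive convention)."""
    return 1.0 / (1.0 / f - 1.0 / u)

F_eff = 0.8        # modified camera lens focal length [m] (this section)
L = 10.0           # target distance [m] (this section)
F_mla = 2.2e-3     # assumed MLA focal length [m], a placeholder value

v = image_distance(F_eff, L)
print(f"intermediate image {v:.2f} m behind the lens")        # ~0.87 m
print(f"v / F_MLA = {v / F_mla:.0f}, i.e. far beyond F_MLA")  # ~400
```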

For turbulence generation, we employ a 1.5-meter-long water tube with wire heaters embedded at the bottom to create optical turbulence. The wire heaters are driven by an external programmable Variac transformer to ensure very similar turbulence scenarios over repeated trials. In fact, aside from small scale discrepancies, we find that the “programmed” turbulence distortion patterns over the first 60 seconds are largely repeatable per realization after sufficient reset time. The water tube is placed near the plenoptic system for efficient turbulence distortion generation (with the $\gamma$ value close to unity). The target under test is an LED array placed 10 meters from the plenoptic imaging system. Two additional mirrors are used to fold the free space imaging channel within the lab space. The targeted LED array makes alignment undemanding, whereas real-world alignment of typical targets requires aiding via a side-view camera. In acquiring images, a neutral density filter ($N=4.0$) is used to prevent saturation. The camera exposure time is set to $0.83$ ms (1/1200 s) to capture instantaneous turbulence degraded images. A view of the experimental arrangement is shown in Fig. 3.

Fig. 3. Experimental setup of light field camera and plenoptic sensor comparison in imaging through turbulent media.

In Fig. 3, we show the primary optical modules of the comparison experiments with labels indicating key components. Module A is the $1^{st}$ view folding mirror for the target located on another optical table. It pairs with a $2^{nd}$ view folding mirror that sits near the target to multiply the target-to-water-tube distance by a factor of 3 within the limited lab space. Module B is the water tube system acting as a turbulence generator to create non-trivial channel distortion. The programmable Variac transformer (Compact Power Systems, Titan Mac-01SH) is kept off the optical table (not shown in the setup picture) to avoid vibrations. Module C is the hybrid plenoptic imaging system, with individual parts explained in Fig. 2. The side-view camera is not used for this experiment because the water tube (Module B) blocks its view of the target; the alignment is achieved indirectly by tuning the adjustable mirrors inside the hybrid plenoptic imaging system.

In the comparison experiment, we set the waveform of the Variac's output voltage as 60 Hz AC that linearly increases from 10 Volts to 60 Volts in 20 seconds to sweep through increasing levels of turbulence. Sufficient reset time is given between adjacent runs to minimize differences in turbulence realizations, so that the light field camera configuration and the plenoptic sensor configuration deal with almost the same channel distortion and can be compared side by side. The corresponding views from the light field camera configuration and the plenoptic sensor configuration are shown in Fig. 4(a) and Fig. 4(b), respectively.

Fig. 4. Imaging through artificial turbulence with (a) the light field camera configuration, and (b) the plenoptic sensor configuration (see Visualization 1).

In Fig. 4, we have manually added the red grid lines to indicate the boundaries of the cell images in both configurations. As each cell image in a light field camera essentially records rays converging to an image point, the hexagonal pattern of the 7 green LEDs can be outlined across the cell array depicted in Fig. 4(a). In the plenoptic sensor configuration, Fig. 4(b) presents individual views of the LEDs per sub-aperture area of the camera lens. For demonstration purposes, we only show the central parts of the images, while the actual number of cells used by the algorithms is 22-by-22 in both configurations. Visualization 1 shows the first 12 seconds, during which the heating voltage increases from 10 Volts to 40 Volts and the light field imaging approaches remain effective. At higher levels of simulated turbulence (heating voltages above 40 Volts), the target is persistently unrecognizable, and neither light field approach assures convergence to good results. In other words, the system operates beyond its limits for heating voltages above 40 Volts. For this reason, results after the $12^{th}$ second are not shown in the comparisons.

To suppress turbulence distortions, the light field camera should utilize the cluster center of pixel intensities in each cell image to stabilize the image performance point by point. To do this, we simply used the pixel histogram and picked the intensity with maximum frequency per MLA lenslet. The cell-picked pixels are assembled and linearly interpolated to synthesize a “good” light field image. In the plenoptic sensor configuration, a metric based method is used to select the best performing cell image automatically [21]; a numerical sketch of both correction procedures is given after Eq. (11). The metric is summarized as

$$\begin{aligned} M(N_1,N_2,N_t): & =M_{s}(N_1,N_2,N_t)\cdot\alpha_{time}(N_1,N_2,N_t),\\ \alpha_{time}(N_1,N_2,N_t): & =\sqrt{\Sigma_{u,v}{\big [}I_{N_1,N_2,N_t+1}(u,v)+I_{N_1,N_2,N_t-1}(u,v)-2I_{N_1,N_2,N_t}(u,v) {\big ]}^2}. \end{aligned}$$
In Eq. (10), $N_1$ and $N_2$ represent the cell indices along the vertical and horizontal directions, respectively. The frame number, ordered in time, is represented by $N_t$. The factor $\alpha _{time}$ evaluates the second order image difference for each cell image based on the same cell's images in adjacent frames. Similarly, $M_s$ handles the spatial evaluation of the metric $M$ for each frame, with its calculation defined as:
$$\begin{aligned} M_{s}(N_1,N_2,N_t): & =\alpha_{vertical}(N_1,N_2,N_t)\cdot\alpha_{horizontal}(N_1,N_2,N_t),\\ \alpha_{vertical}(N_1,N_2,N_t): & =\sqrt{\Sigma_{u,v}{\big [}I_{N_1+1,N_2,N_t}(u,v)+I_{N_1-1,N_2,N_t}(u,v)-2I_{N_1,N_2,N_t}(u,v) {\big ]}^2},\\ \alpha_{horizontal}(N_1,N_2,N_t): & =\sqrt{\Sigma_{u,v}{\big [}I_{N_1,N_2+1,N_t}(u,v)+I_{N_1,N_2-1,N_t}(u,v)-2I_{N_1,N_2,N_t}(u,v) {\big ]}^2}. \end{aligned}$$
In Eq. (11), $\alpha _{vertical}$ and $\alpha _{horizontal}$ track the second order cell image differences along the vertical and horizontal directions in the same fashion as $\alpha _{time}$. Because only nearest neighbors are used in the metric, the viewing angle difference among the closest cells is trivial and can be compensated by small shifts of the cell image centers when computing Eq. (11); in our experimental case, the shift is $1$ pixel per direction. The overall metric $M$ multiplies all three $\alpha$ factors, and its peak value indicates a “best” performing cell image. The “best” performing cell image reveals fundamental truths of the target as if turbulence distortion had been largely removed. The metric was designed to search for particular wavefront shapes among sub-aperture areas which render good cell images: it does not literally find the cell image closest to ground truth, but one that yields a similarly high correlation to the ground truth. For this reason, we have used quotation marks to denote this special meaning.
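For concreteness, a minimal sketch of both correction procedures follows, assuming the raw frames have already been split into per-lenslet cell images stored as NumPy arrays. The array shapes, histogram bin count, and upsampling factor are illustrative assumptions, and the one-pixel view-angle shifts mentioned above are omitted for brevity.

```python
import numpy as np
from scipy.ndimage import zoom

def lucky_pixel_image(cells, upsample=8):
    """Light field camera correction (sketch): per MLA lenslet, take the
    modal intensity of the pixel histogram, then assemble and linearly
    interpolate.  cells: real array of shape (N1, N2, u, v)."""
    N1, N2 = cells.shape[:2]
    picked = np.empty((N1, N2))
    for i in range(N1):
        for j in range(N2):
            hist, edges = np.histogram(cells[i, j], bins=64)
            k = np.argmax(hist)                     # most frequent intensity
            picked[i, j] = 0.5 * (edges[k] + edges[k + 1])
    return zoom(picked, upsample, order=1)          # linear interpolation

def metric_M(I):
    """Plenoptic sensor metric of Eqs. (10)-(11) (sketch).
    I: cell image stack of shape (N1, N2, Nt, u, v).  Returns M on the
    (N1, N2, Nt) grid; boundary cells and frames are left at zero."""
    def d2(a, axis):                                # root-sum-square of the
        return np.sqrt((np.diff(a, n=2, axis=axis) ** 2).sum(axis=(-2, -1)))
    a_v, a_h, a_t = d2(I, 0), d2(I, 1), d2(I, 2)    # second differences
    M = np.zeros(I.shape[:3])
    M[1:-1, 1:-1, 1:-1] = (a_v[:, 1:-1, 1:-1]       # alpha_vertical
                           * a_h[1:-1, :, 1:-1]     # alpha_horizontal
                           * a_t[1:-1, 1:-1, :])    # alpha_time
    return M                                        # "best" cell peaks in M
```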

In order to show frame-by-frame comparisons between the two light field approaches in restoring degraded images, we first adopted the metric $M_s$ to process the plenoptic sensor images; the comparison with the light field camera's filtering algorithm is shown in Fig. 5. Later, we indicate the “best” cell images selected by the full metric $M$ over all three dimensions in a summarized comparison chart. In other words, $M$ provides better results than $M_s$ through the additional consideration of distortion evolution over adjacent frames in the plenoptic sensor [21].

Fig. 5. Frame-by-frame comparison in correcting turbulence degraded images by using (a) the light field camera configuration, and (b) the plenoptic sensor configuration (see Visualization 2).

In Fig. 5 and Visualization 2, we show the image correction results through the light field camera and the plenoptic sensor under increasing levels of turbulence. During the first 6 seconds, the light field camera correction clearly performs better than the metric $M_s$ selected results. In particular, the light field camera corrected images are less diffraction limited and reveal the shapes and patterns of the LEDs more clearly. During the latter 6 seconds, however, the performance outcome flips. The light field camera image correction begins to be ineffective and faulty towards the last few seconds. The plenoptic sensor, on the other hand, still reflects major portions of the LEDs and their layout during the same period. This observation matches the theoretical prediction of Eqs. (8)–(9), where the plenoptic sensor produces better turbulence corrected results once the Fried parameter $r_0$ drops below the camera lens diameter $D$. For weak turbulence levels, on the other hand, the light field camera correction evidently restores a sharper and clearer view of the target.

It is also of great interest to show the frame-by-frame comparison with the algorithms turned off. Therefore, we fixed the focus of the light field camera at the exact plane of the LEDs to render light field imaging results through turbulence without invoking the correction algorithm. These settings can also be treated as camera views, because a light field camera with fixed focal depth acts the same as a normal camera. Similarly, we turned off the cell selection on the plenoptic sensor and only used the central cell image in the results. The corresponding results with the correction turned off are shown in Fig. 6.

Fig. 6. Frame-by-frame comparison with the correction algorithms turned off for (a) the light field camera configuration, and (b) the plenoptic sensor configuration (see Visualization 3).

From Fig. 6 and Visualization 3, the light field camera image (no correction) grows unrecognizable after the $6^{th}$ second, and the plenoptic sensor image (no correction) grows unrecognizable after the $11^{th}$ second. In the first $2$ seconds, however, the ordinary light field camera image actually appears better with the correction algorithm turned off. This is because the correction algorithm essentially acts like an image filter that operates cell by cell traversing the MLA. Consequently, the corrected image is discretized by the MLA cells and may deviate from the LEDs' round shape, while the original light field camera image is not limited by such quantization. Moreover, detailed studies by Pepe [11] and D'Angelo [34] further remove the resolution loss due to either diffraction or MLA quantization for a light field camera based on correlation studies among cell images. This means the image correction algorithms for light field cameras will inevitably degrade the original image resolution as a trade-off for turbulence resilience.

As resolution loss is inevitable for both the light field camera (mainly due to the MLA discretization) and the plenoptic sensor (mainly due to the diffraction limit), we obtained the ground truth (reference images) for both configurations with the same procedures used to render Fig. 6(a) and Fig. 6(b), but with turbulence removed. In this manner, the correction results shown in Fig. 5 and Visualization 2 can be measured through the correlation coefficient with the reference images for the two light field approaches. The measured correlation coefficients help us understand the performance of the two image correction procedures under increasing levels of turbulence. To avoid potential bias caused by common background patterns and differences in the field of view (FOV), we use a threshold (0.2 times the maximum pixel value per image) to cut off low illumination background patterns, and tailor the region of interest (ROI) to the centralized 7 LEDs before calculating the correlation coefficients. We apply the same measures to the results shown in Fig. 6 and Visualization 3 to indicate turbulence degradation with the correction algorithms turned off. The metric based overall comparison is shown in Fig. 7. In addition, using the $6^{th}$ second as a dividing line, we apply the full metric search (metric $M$) on the plenoptic sensor over the two 6-second periods to show the measure of the “best” cell images based upon the 3D selection. Similar searches over the processed frame sequences of the light field camera configuration are not conducted, due to the lack of a clear “guide-star” to indicate the Strehl ratio [35,36] over time.
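The scoring procedure just described reduces to a thresholded correlation coefficient. A minimal sketch follows, assuming the ROI cropping to the 7 LEDs has already been applied to both images.

```python
import numpy as np

def masked_corrcoef(img, ref, thresh=0.2):
    """Correlation coefficient against the reference image after zeroing
    pixels below thresh x (per-image maximum), as described above."""
    a = np.where(img >= thresh * img.max(), img, 0.0).ravel()
    b = np.where(ref >= thresh * ref.max(), ref, 0.0).ravel()
    return float(np.corrcoef(a, b)[0, 1])
```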

Fig. 7. Improvement curves through correlation coefficient with ground truth under increasing levels of turbulence distortion.

In Fig. 7, we co-plot the improvement curves for the light field camera and the plenoptic sensor results. Note that the comparison between curves is meaningful only when they are referenced to the same light field device configuration. Although we take procedures to reduce the influence of background light and differences in FOV, which improve the correctness of the general trends, a numerical cross-comparison between the two configurations' metrics does not reflect their performance differences precisely. In the first $6$ seconds, the correlation coefficients of the imaging results (with or without the correction algorithms) all fall from near 1.0 to values close to 0.9, which shows very marginal improvements granted by the correction algorithms. For this reason, we empirically labeled this period as having “normal visual distortion”. The visualizations show that the images only suffer from normal visual distortions in this period, where the shape and structure of the target remain recognizable. In this regime, the turbulence resilience provided by the light field camera's algorithm is offset by its loss of resolution accuracy, as discussed above. For the plenoptic sensor, the spatial metric $M_s$ also provides marginal gains, and the overall metric $M$ only lifts the correlation coefficient from 0.969 to 0.991 in this period, which can also be viewed as a marginal improvement. As the turbulence level continues to increase, the latter $6$ seconds start to report significant improvement granted by each correction algorithm. Specifically, the original light field imaging (acting like a normal camera) quickly loses recognition of the target, while the correction retains a recognizable target until the $9^{th}$ second, as can be witnessed in Visualization 2 and Visualization 3. For this reason, we empirically labeled the latter $6$ seconds as a “strong visual distortion” period. In the plenoptic sensor configuration, the gain through the spatial metric selection $M_s$ also becomes significant, which can likewise be witnessed in Visualization 2 and Visualization 3. When the overall 3D metric $M$ is applied over the latter $6$ seconds, the correlation coefficient is lifted from 0.918 to 0.982, an extra improvement.

For additional visualization demonstrations, we present the algorithm processed results at the $6^{th}$ second and the $12^{th}$ second in Fig. 8 and Fig. 9, respectively. We also show corresponding snapshots when the correction algorithms are turned off at the $6^{th}$ second and the $12^{th}$ second in Fig. 10 and Fig. 11, respectively.

Fig. 8. Correction results at the $6^{th}$ second for (a) the light field camera, and (b) the plenoptic sensor.

Fig. 9. Correction results at the $12^{th}$ second for (a) the light field camera, and (b) the plenoptic sensor.

Fig. 10. Imaging results at the $6^{th}$ second for (a) the light field camera without correction algorithm, and (b) the plenoptic sensor’s central cell image.

Fig. 11. Imaging results at the $12^{th}$ second for (a) the light field camera without correction algorithm, and (b) the plenoptic sensor’s central cell image.

The snapshots in Figs. 8–11 confirm the claims of the above discussion: 1) both light field imaging approaches provide effective correction over non-trivial turbulence distortions; 2) the plenoptic sensor has additional tolerance of small Fried parameter values (lower spatial coherence length) at the cost of lower resolution due to diffraction. Therefore, the generalized rules in Eqs. (8)–(9) regarding applying different light field techniques to imaging through turbulence have been validated through our lab experiments. It is also clear from both our theory and experimental studies that a light field camera with its turbulence correction algorithm works relatively close to a normal camera, but gains extra resilience as turbulence levels increase. In harsher situations, the plenoptic sensor configuration can be applied against stronger turbulence distortions for target recognition. However, there is still an upper limit on the turbulence level, inferred as $r_0<\frac {D}{N}$ by Eq. (9), beyond which even the plenoptic sensor configuration may not work.

4. Conclusions and discussions

In this study, we have systematically analyzed the differences between two light field approaches for imaging through turbulent media by way of theory and proof-of-concept experiments. Our results show that different light field imaging platforms point to unique approaches for correcting turbulence degraded images based upon their respective principles of 4D light field intensity mapping. In generalized light field imaging configurations, known as focused plenoptic cameras [33,37,38], where the imaging result per MLA lenslet can be either point-based or sub-angular-view-based, the image correction algorithm can be engineered based on the configuration's proximity to either of the two major configurations. Correspondingly, its performance in restoring turbulence degraded images should fall between that of a light field camera and that of a plenoptic sensor.

Funding

Office of Naval Research (ONR) (N000141812008).

Acknowledgments

The authors sincerely thank Ms. Sarwat Chappell for her foresight and strong support of the plenoptic sensor development over the past many years.

References

1. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Comput. Sci. Tech. Rep. CSTR 2, 1–11 (2005).

2. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006). [CrossRef]  

3. H. Wang, L. Niu, W. Dai, X. Zhang, H. Wang, and C. Xie, “Matrix distributed liquid-crystal microlens arrays driven by electrically scanning voltage signals,” in Tenth International Conference on Information Optics and Photonics, vol. 10964 (International Society for Optics and Photonics, 2018), p. 109641V.

4. W. Dai, X. Xie, D. Li, X. Han, Z. Liu, D. Wei, Z. Xin, X. Zhang, H. Wang, and C. Xie, “Liquid-crystal microlens array with swing and adjusting focus and constructed by dual patterned ito-electrodes,” in MIPPR 2017: Multispectral Image Acquisition, Processing, and Analysis, vol. 10607 (International Society for Optics and Photonics, 2018), p. 106070A.

5. A. Pan, T. Chen, C. Li, and X. Hou, “Parallel fabrication of silicon concave microlens array by femtosecond laser irradiation and mixed acid etching,” Chin. Opt. Lett. 14(5), 052201 (2016). [CrossRef]  

6. R. J. Lin, V.-C. Su, S. Wang, M. K. Chen, T. L. Chung, Y. H. Chen, H. Y. Kuo, J.-W. Chen, J. Chen, Y.-T. Huang, J.-H. Wang, C. H. Chu, P. C. Wu, T. Li, Z. Wang, S. Zhu, and D. P. Tsai, “Achromatic metalens array for full-colour light-field imaging,” Nat. Nanotechnol. 14(3), 227–231 (2019). [CrossRef]  

7. S. You, Y. Lu, W. Zhang, B. Yang, R. Peng, and S. Zhuang, “Micro-lens array based 3-d color image encryption using the combination of gravity model and arnold transform,” Opt. Commun. 355, 419–426 (2015). [CrossRef]  

8. P. Paudyal, F. Battisti, A. Neri, and M. Carli, “A study of the impact of light fields watermarking on the perceived quality of the refocused data,” in 2015 3DTV-Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON), (IEEE, 2015), pp. 1–4.

9. Y. Endo, K. Wakunami, T. Shimobaba, T. Kakue, D. Arai, Y. Ichihashi, K. Yamamoto, and T. Ito, “Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera,” Opt. Commun. 356, 468–471 (2015). [CrossRef]  

10. J.-H. Park and M. Askari, “Non-hogel-based computer generated hologram from light field using complex field recovery technique from wigner distribution function,” Opt. Express 27(3), 2562–2574 (2019). [CrossRef]  

11. F. V. Pepe, F. Di Lena, A. Mazzilli, E. Edrei, A. Garuccio, G. Scarcelli, and M. D’Angelo, “Diffraction-limited plenoptic imaging with correlated light,” Phys. Rev. Lett. 119(24), 243602 (2017). [CrossRef]  

12. F. V. Pepe, G. Scarcelli, A. Garuccio, and M. D’Angelo, “Plenoptic imaging with second-order correlations of light,” Quantum Meas. Quantum Metrol. 3(1), 20–26 (2016). [CrossRef]  

13. L. Su, Y. Liu, and Y. Yuan, “Spectrum reconstruction of the light-field multimodal imager,” IEEE Access 7, 9688–9696 (2019). [CrossRef]  

14. G. Scala, M. D’Angelo, A. Garuccio, S. Pascazio, and F. V. Pepe, “Signal-to-noise properties of correlation plenoptic imaging with chaotic light,” Phys. Rev. A 99(5), 053808 (2019). [CrossRef]  

15. F. Di Lena, F. Pepe, A. Garuccio, and M. D’Angelo, “Correlation plenoptic imaging: An overview,” Appl. Sci. 8(10), 1958 (2018). [CrossRef]  

16. C. Wu, J. Ko, and C. C. Davis, “Determining the phase and amplitude distortion of a wavefront using a plenoptic sensor,” J. Opt. Soc. Am. A 32(5), 964–978 (2015). [CrossRef]  

17. Z. Xin, D. Wei, M. Chen, X. Wang, X. Zhang, H. Wang, and C. Xie, “Polarized wavefront measurement using an electrically tunable focused plenoptic camera,” in Photonic Instrumentation Engineering VI, vol. 10925 (International Society for Optics and Photonics, 2019), p. 1092517.

18. Y.-S. Luan, B. Xu, P. Yang, and G.-M. Tang, “Wavefront analysis for plenoptic camera imaging,” Chin. Phys. B 26(10), 104203 (2017). [CrossRef]  

19. C. Wu, D. A. Paulson, J. R. Rzasa, and C. C. Davis, “Extracting phase distortion from laser glints on a remote target using phase space plenoptic mapping,” J. Opt. Soc. Am. B 36(7), 1964–1971 (2019). [CrossRef]  

20. M. Loktev, O. Soloviev, S. Savenko, and G. Vdovin, “Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation,” Opt. Lett. 36(14), 2656–2658 (2011). [CrossRef]  

21. C. Wu, J. Ko, and C. C. Davis, “Imaging through strong turbulence with a light field approach,” Opt. Express 24(11), 11975–11986 (2016). [CrossRef]  

22. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015). [CrossRef]  

23. S. Zhu, A. Lai, K. Eaton, P. Jin, and L. Gao, “On the fundamental comparison between unfocused and focused light field cameras,” Appl. Opt. 57(1), A1–A11 (2018). [CrossRef]  

24. L. Andrews, R. Phillips, R. Crabbs, and T. Leclerc, “Deep turbulence propagation of a gaussian-beam wave in anisotropic non-kolmogorov turbulence,” in Laser Communication and Propagation through the Atmosphere and Oceans II, vol. 8874 (International Society for Optics and Photonics, 2013), p. 887402.

25. M. Vorontsov, J. Riker, G. Carhart, V. R. Gudimetla, L. Beresnev, T. Weyrauch, and L. C. Roberts Jr, “Deep turbulence effects compensation experiments with a cascaded adaptive optics system using a 3.63 m telescope,” Appl. Opt. 48(1), A47–A57 (2009). [CrossRef]  

26. I. Toselli, L. C. Andrews, R. L. Phillips, and V. Ferrero, “Free-space optical system performance for laser beam propagation through non-kolmogorov turbulence,” Opt. Eng. 47(2), 026003 (2008). [CrossRef]  

27. S. Gladysz, M. Segel, C. Eisele, R. Barros, and E. Sucher, “Estimation of turbulence strength, anisotropy, outer scale and spectral slope from an led array,” in Laser Communication and Propagation through the Atmosphere and Oceans IV, vol. 9614 (International Society for Optics and Photonics, 2015), p. 961402.

28. M. C. Roggemann, B. M. Welsh, and B. R. Hunt, Imaging Through Turbulence (CRC Press, 2018).

29. R. G. Lane, A. Glindemann, and J. C. Dainty, “Simulation of a kolmogorov phase screen,” Waves random media 2(3), 209–224 (1992). [CrossRef]  

30. D. A. Paulson, C. Wu, and C. C. Davis, “Randomized spectral sampling for efficient simulation of laser propagation through optical turbulence,” arXiv preprint (2019).

31. N. Joshi and M. Cohen, “Seeing mt. rainier: Lucky imaging for multi-image denoising, sharpening, and haze removal,” (Microsoft Research, 2010).

32. N. M. Law, C. D. Mackay, and J. E. Baldwin, “Lucky imaging: high angular resolution imaging in the visible from the ground,” Astron. Astrophys. 446(2), 739–745 (2006). [CrossRef]  

33. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in 2009 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2009), pp. 1–8.

34. M. D’Angelo, F. V. Pepe, A. Garuccio, and G. Scarcelli, “Correlation plenoptic imaging,” Phys. Rev. Lett. 116(22), 223602 (2016). [CrossRef]  

35. G. Rousset, J. Fontanella, P. Kern, P. Gigan, and F. Rigaut, “First diffraction-limited astronomical images with adaptive optics,” Astron. Astrophys. 230, L29–L32 (1990).

36. D. R. Iskander, “Computational aspects of the visual strehl ratio,” Optom. Vis. Sci. 83(1), 57–59 (2006). [CrossRef]  

37. T. G. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging 19(2), 021106 (2010). [CrossRef]  

38. Y. Li, R. Olsson, and M. Sjöström, “Compression of unfocused plenoptic images using a displacement intra prediction,” in 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), (IEEE, 2016), pp. 1–4.

Supplementary Material (3)

Visualization 1: Frame by frame comparison of turbulence degraded light field images between the light field camera setting and the plenoptic sensor setting.
Visualization 2: Frame by frame comparison of turbulence corrected images between the light field camera setting and the plenoptic sensor setting.
Visualization 3: Frame by frame comparison of turbulence degraded images between the light field camera output and the plenoptic sensor output without using the correction algorithms.
