Optica Publishing Group

Edge detection based on joint iteration ghost imaging

Open Access

Abstract

Imaging and edge detection have been widely applied and play an important role in security checking and medical diagnosis. However, most edge detection schemes based on ghost imaging require a large number of measurements and cannot provide the target object image directly. In this work, a new edge detection method based on the joint iteration of projected Landweber iteration regularization and guided-filter ghost imaging is proposed, which improves the feature detection quality in ghost imaging. The method can also achieve high-quality imaging. Simulation and experimental results show that the spatial information and edge information of the target object are successfully recovered from random speckle patterns without special coding at a low number of measurements, and the edge image quality is improved remarkably. This approach improves the applicability of ghost imaging and can satisfy the practical application fields of imaging and edge detection at the same time.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) is a novel optical imaging technology that is rather different from conventional modalities. In conventional optical imaging, the object image is directly acquired by a multi-pixel detector. Surprisingly, GI uses a single-pixel detector to detect the total light signal strength of the object, while a detector with spatial resolution measures the information about the light source. The image can be retrieved merely by correlating the signals of these two detectors, but not by either one alone [1]. Remarkably, GI offers higher detection sensitivity and better resistance to atmospheric disturbance than conventional optical imaging [2]. Hence, increasing attention has been paid to GI applications such as remote sensing [3–5] and optical coherence tomography [6,7].

In 2008, a computational ghost imaging (CGI) theoretical scheme that requires only one single-pixel detector was proposed by Shapiro [8]. Subsequently, the CGI scheme was experimentally demonstrated by Bromberg [9]. Then, many methods were proposed to improve the imaging quality, including compressive GI [10], differential GI [11], pseudo-inverse GI [12,13] and so on [14–17]. In particular, compressive GI can achieve high-quality image reconstruction under undersampling [10], which promotes the practical application of GI technology [18–26].

Recently, edge information detection of the target object for GI has been considered [27–33]. Edge detection is widely used in computer vision, target recognition, earth observation and security checking [34,35]. As is well known, traditional edge detection methods (e.g. Canny [36], Sobel [37], Roberts [38], etc.) rely on the original image. However, in many practical application scenarios with harsh or noisy environments, traditional edge detection methods are ineffective because the image information of the target object is difficult to obtain. Different from traditional edge detection methods, edge detection based on a GI scheme can detect edge information directly without needing the original image. Hence, edge detection based on GI can solve the problem of disturbance owing to its good anti-disturbance imaging and direct edge detection of unknown objects.

Here, some edge detection methods based on GI are reviewed [27–33]. Liu et al. proposed a gradient GI (GGI) scheme which directly obtains the edge information of an unknown target object [27]. Subsequently, a more optimized edge detection method named speckle-shifting GI (SSGI) was reported by Mao [28]. The SSGI scheme doesn't need the gradient angle or any other prior knowledge of the object required in GGI. Then, Wang et al. [29] proposed another similar method called subpixel-speckle-shifting GI (SPSGI), which is based on a set of subpixel-shifted Walsh-Hadamard speckle pattern pairs and has the advantage of enhancing the resolution of the edge detection. Meanwhile, Yuan et al. [30] used structured illuminations based on the interference principle to obtain edge information, and the method can extract the edges of binary and gray targets in any direction at the same time. From the perspective of light field coding, special sinusoidal patterns for the x-direction and y-direction edges of the unknown object were designed by Ren [31]. Furthermore, a novel variable-size Sobel operator whose coefficients are isotropic and sensitive to all directions was designed and used for edge detection based on GI by Ren et al., whereby the edges of an unknown object can be obtained directly without choosing the gradient angle or any other prior knowledge of the object [32]. However, these methods still have some shortcomings, such as a high number of measurements and poor quality of the acquired edge information.

In order to improve the efficiency of edge detection based on ghost imaging, Guo et al. proposed a compressed ghost edge imaging (CGEI) scheme, which designs special random patterns with different speckle-shifting characteristics and uses compressed sensing technology and the Sobel operator, whereby the measurements required for edge detection can be further reduced [33]. Notably, these methods can obtain only the edge information, unless the reconstruction of the whole image is carried out again. If the edge information and the whole image information of the object could be obtained simultaneously with fewer measurements, this would greatly promote the air surveillance and ocean monitoring applications of GI. We find that this problem can be well solved by using compressive GI based on the guided filtering method [39].

In this paper, we demonstrate an edge detection method based on joint iteration ghost imaging (JIGI) for simultaneously acquiring the global edge and whole image information. Because the JIGI method is based on projected Landweber iteration regularization and a guided filter, where the guided filter is an edge-preserving filter that can enhance the signal-to-noise ratio of edge detection, the proposed method has three benefits: 1) High efficiency: simultaneous acquisition of high-quality edge and whole image information is achieved with fewer measurements; 2) More convenience: edge detection based on ghost imaging can be realized in any light field without designing a special light field or paired measurements; 3) Strong universality: it is not limited to computational ghost imaging using light field modulation equipment, but is also suitable for other ghost imaging methods such as dual-path pseudo-thermal ghost imaging based on rotating ground glass.

2. Theoretical analysis

In a CGI system, the detection light source is generated from a light beam through a spatial light modulator (SLM) [or a digital micromirror device (DMD)], and then passes through an optical lens to adjust the size of the light beam. The transmitted or reflected light field $S^{(m)}(i,\;j)$ ($m=1,2,3,\ldots ,M$ denotes the index of the measurement) passing through the target object with a transmission coefficient of $T(i,\;j)$ is recorded by a single-pixel detector, and the detection value obtained from the $m$-th sampling is expressed as $B^{(m)}$.

Here, by reconfiguring the elements of each speckle pattern (dimensions $r\times c$) pre-generated by computer into a row vector of length $K=r\times c$ to form one row of the matrix, we obtain the following $M\times K$ measurement matrix $A$, based on $M$ measurements:

$$A=\left[ \begin{array}{cccc} S_1(1,1) & S_1(1,2) & \cdots & S_1(r,\;c) \\ S_2(1,1) & S_2(1,2) & \cdots & S_2(r,\;c) \\ \vdots & \vdots & \ddots & \vdots \\ S_M(1,1) & S_M(1,2) & \cdots & S_M(r,\;c) \\ \end{array} \right],$$
The $M$ results from the single-pixel detector can be arranged into an $M\times 1$ column vector $y$:
$$y=\left[B^{(1)},B^{(2)},\ldots,B^{(M)}\right]^T,$$
Then, if we denote the unknown target object $O(i,\;j)$ as a $K$-dimensional column vector $o$ $(K\times 1)$, we have the framework $y=Ao$, whose matrix form is expressed as:
$$\left[ \begin{array}{c} B^{(1)} \\ B^{(2)} \\ \vdots \\ B^{(M)} \\ \end{array} \right]= \left[ \begin{array}{cccc} S^{(1)}(1,1) & S^{(1)}(1,2) & \cdots & S^{(1)}(r,\;c) \\ S^{(2)}(1,1) & S^{(2)}(1,2) & \cdots & S^{(2)}(r,\;c) \\ \vdots & \vdots & \ddots & \vdots \\ S^{(M)}(1,1) & S^{(M)}(1,2) & \cdots & S^{(M)}(r,\;c) \\ \end{array} \right] \left[ \begin{array}{c} T(1,1) \\ T(1,2) \\ \vdots \\ T(r,\;c) \\ \end{array} \right].$$
In common use, we can get the image information through the second-order correlation imaging equation:
$$G^{(2)}(i,\;j)=\langle B^{(m)}S^{(m)}(i,\;j) \rangle,$$
where $\langle \cdot \rangle$ denotes the ensemble average. Eq. (4) can then be expressed in matrix form as:
$$G^{(2)}(i,\;j)=\frac{1}{M}A^{T}y.$$
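As a concrete illustration (not from the paper), the correlation reconstruction of Eq. (5) can be sketched in a few lines of Python; the sizes, seed and block-shaped object below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: an r x c scene probed by M random binary speckle patterns.
r, c, M = 8, 8, 2000
K = r * c

# Unknown object T(i, j) flattened into a K-vector o (here a simple bright block).
o = np.zeros((r, c))
o[2:6, 2:6] = 1.0
o = o.ravel()

# Measurement matrix A: each row is one speckle pattern S^(m) reshaped to 1 x K.
A = rng.integers(0, 2, size=(M, K)).astype(float)

# Single-pixel (bucket) detector values: y = A o, as in Eq. (3).
y = A @ o

# Second-order correlation reconstruction, Eq. (5): G = (1/M) A^T y.
G2 = (A.T @ y) / M
```

Reshaping `G2` back to $r\times c$ yields a noisy but recognizable image of the block, with the noise floor set by the finite number of patterns $M$.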
However, the quality of the edge detection and reconstructed image obtained by the second-order correlation equation is poor at a low number of measurements. Ideally, the limited detection values and the corresponding optical field coding information should be fully utilized to obtain a high-quality edge detection and reconstructed image at the same time with fewer measurements. Therefore, we propose the joint iteration ghost imaging method based on iterative regularization and edge-preserving filtering, in which the regularization mainly uses the measurement information in an uninterrupted and efficient way, and the filter continuously improves the quality of the edge detection and reconstructed image. In this paper, projected Landweber iteration regularization and guided filtering are used.

The idea is first to get an initial estimate by the projected Landweber regularization, and then use the guided filter to effectually remove undersampling noise for a higher PSNR of the ghost imaging. In our JIGI method, the ghost imaging result improves as the number of iterations increases, so we perform a joint iteration rather than a single pass. The guided filter has a fast, non-approximate linear-time operator whose computational complexity depends only on the image size: it admits an exact $O(r\times c)$-time algorithm (in the number of pixels $r\times c$) for both gray-scale and color images. So every application of the guided filter costs little computation. In recent versions of MATLAB (R2014 and later), the function “imguidedfilter” for guided filtering is built in (this function actually implements the fast guided filter, which runs in $O(r\times c/s^2)$ time, where $s$ denotes the subsampling ratio).

2.1 Step 1: projected Landweber iteration regularization

As is well known, some special regularization methods can be exploited to solve Eq. (3), such as projected Landweber iteration regularization (PLIR) [39–41]. Here, we use PLIR to obtain the preliminary reconstructed image. PLIR is defined as [41]:

$$x_{t} = x_{t-1}+\alpha PA^T(y-Ax_{t-1}), ~~~~~~t=1,2,3,\ldots,$$
where $P$ is the pseudo-inverse of $A^T A$, $\alpha$ is the gain factor controlling the convergence speed, $x_{t}$ is the approximate solution of Eq. (3), and $x_{t-1}$ is the approximate solution from the previous iteration. With the initial guess $x_0=\left [0,0,\ldots ,0\right ]^T$ and $t=1$, Eq. (6) reduces to:
$$x_{1} =\alpha PA^Ty,$$
where $\alpha$ is a preset value, $P$ is a square matrix with a dominant diagonal which can be used to analyze the optical properties of the speckle light field in GI, and the remaining factor $A^Ty$ is identical to the matrix form of the second-order correlation imaging equation [Eq. (5)], which is the key to imaging.

Thus, we first obtain the initial reconstructed image $x_1$.
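Under the definitions above, the PLIR update of Eq. (6) is straightforward to sketch (a hypothetical toy example; the sizes, random patterns, gain factor and iteration count are illustrative choices, with $P=\operatorname{pinv}(A^TA)$ as stated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes: K = 64 unknowns, M = 48 measurements (undersampled).
r, c, M = 8, 8, 48
K = r * c
A = rng.integers(0, 2, size=(M, K)).astype(float)   # random binary speckle rows
o = np.zeros(K); o[18:22] = 1.0                     # unknown object (flattened)
y = A @ o                                           # bucket detector values, Eq. (3)

alpha = 1.0                       # gain factor controlling convergence speed
P = np.linalg.pinv(A.T @ A)       # pseudo-inverse of A^T A

# Iterate Eq. (6) from x_0 = 0; the first step reduces to Eq. (7): x_1 = alpha P A^T y.
x = np.zeros(K)
for t in range(10):
    x = x + alpha * P @ (A.T @ (y - A @ x))
```

Because $PA^T(y-Ax)$ projects the residual onto the row space of $A$, the iterate quickly becomes consistent with the measurements ($Ax\approx y$) even in the undersampled regime, which is exactly the initial estimate the guided filter refines in Step 2.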

2.2 Step 2: guided filter

In order to realize edge detection based on GI, the resulting image $x_1$ is reshaped from a $K\times 1$ vector into an $r\times c$ matrix and processed with a guided filter. Here, we denote the guided filter as [42,43]:

$$q_t=\textrm{guidefilter}(I_t,\;x_t), ~~~~~~t=1,2,3,\ldots,$$
where $x_t$ is the filter input image (i.e., the reconstruction result of PLIR), $I_t$ is the guidance image ($t=1$: $I_1=x_1$; $t>1$: $I_t=q_{t-1}$, an $r\times c$ matrix), and $q_t$ is the output image. The filtering output at a pixel $i$ is expressed as
$$q_{ti}=\sum_j W_{i,\;j}(I_t)x_{tj},$$
where $i$ and $j$ are pixel indexes. The filter kernel $W_{i,\;j}$ is a function of the guidance image $I$ and is independent of $x$; following [42], it is defined as
$$W_{i,\;j}(I)=\frac{1}{|\omega|^2} \sum_{k:(i,\;j)\in \omega_k} \left[1+\frac{(x'_i-\mu_k)(x'_j-\mu_k)}{\sigma_k^2+\epsilon}\right],$$
where $x'_i$ and $x'_j$ are the pixel values of $I$ at pixels $i$ and $j$, $\omega _k$ is the $k$-th kernel function window, $|\omega |$ is the number of pixels in $\omega _k$, and $\epsilon$ is a regularization parameter. Here, $\mu _k$ and $\sigma _k^2$ are the mean and variance of $I$ in $\omega _k$.

The guided filter assumes a local linear relationship between the guidance image $I_t$ and the output image $q_{ti}$ in a window $\omega _k$ centered at pixel $k$:

$$q_{ti}= a_kI_{ti}+b_k, \forall_i\in \omega_k,$$
where $(a_k, b_k)$ are linear coefficients assumed to be constant in $\omega _k$. Taking the gradient of both sides of Eq. (11) gives:
$$\nabla q= a\nabla I.$$
This local linear model ensures that $q$ has an edge only if $I$ has an edge. The model [Eq. (12)] has proven useful in image matting [44], image super-resolution [45], and haze removal [46]. To determine the linear coefficients $(a_k, b_k)$, we minimize the following cost function in the window $\omega _k$:
$$E(a_k, b_k)=\sum_{i\in \omega k}((a_kI_{ti}+b_k-x_{ti})^2+\epsilon a_k^2),$$
where $\epsilon$ is a regularization parameter penalizing large $a_k$. Using linear ridge regression, the coefficients minimizing Eq. (13) are obtained as follows:
$$a_k=\frac{\frac{1}{|\omega|}\sum_{i\in \omega_k}I_{ti}x_{ti}-\mu_k \bar{x}_{tk}}{\sigma^2_k+\epsilon},$$
$$b_k=\bar{x}_{tk}-a_k\mu_k.$$
Here, $\bar {x}_{tk}=\frac {1}{|\omega |}\sum _{i\in \omega _k}x_{ti}$ is the mean of $x$ in $\omega _k$. Ordinarily, we average all the possible values of $q_{ti}$ to obtain the final $q_{ti}$. Hence, after computing $(a_k,\;b_k)$ for all windows $\omega _k$ in the image, we compute the filtering output by
$$q_{ti} = \frac{1}{|\omega|}\sum_{k:i\in \omega_k}(a_kI_{ti}+b_k), $$
$$ = \bar{a}_iI_{ti}+\bar{b}_i , $$
where $\bar {a}_i=\frac {1}{|\omega |}\sum _{k:i\in \omega _k}a_k$ and $\bar {b}_i=\frac {1}{|\omega |}\sum _{k:i\in \omega _k}b_k$. In order to obtain the edge information of GI, we add $a_k$ as a second output of Eq. (8). Hence, the new guided filter that returns both the global edge and the whole image information of the object is expressed as:
$$[q_t,\;a_k]=\textrm{guidefilter}(I_t,\;x_t), ~~~~~~t=1,2,3,\ldots.$$
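Eqs. (14)–(17), together with the extra edge output of Eq. (18), translate into a compact sketch (a simplified reference implementation in Python, not the paper's code; the border-clipped loop-based box mean is an illustrative choice, not the fast $O(r\times c)$ algorithm mentioned above):

```python
import numpy as np

def box_mean(img, rad):
    # Mean over the window of radius `rad` around each pixel (clipped at borders).
    r, c = img.shape
    out = np.empty((r, c))
    for i in range(r):
        for j in range(c):
            out[i, j] = img[max(0, i - rad):i + rad + 1,
                            max(0, j - rad):j + rad + 1].mean()
    return out

def guided_filter(I, x, rad=2, eps=1e-3):
    # Guidance image I, input x; returns filtered image q and the edge map (mean a_k).
    mu = box_mean(I, rad)                                    # mu_k
    mu_x = box_mean(x, rad)                                  # \bar{x}_{tk}
    var_I = box_mean(I * I, rad) - mu * mu                   # sigma_k^2
    a = (box_mean(I * x, rad) - mu * mu_x) / (var_I + eps)   # Eq. (14)
    b = mu_x - a * mu                                        # Eq. (15)
    a_bar, b_bar = box_mean(a, rad), box_mean(b, rad)        # \bar{a}_i, \bar{b}_i
    q = a_bar * I + b_bar                                    # Eq. (17)
    return q, a_bar
```

On a noise-free step edge with $I=x$, the returned edge map is close to 1 near the edge and close to 0 in flat regions, while $q$ reproduces the flat values, consistent with the edge-preserving behaviour described above.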

2.3 Step 3: joint iteration

The output result $q_t$ of Eq. (18) in Step 2 is taken as the input image $x_{t-1}$ of Eq. (6) in Step 1 for the next iteration, i.e., $x_{t-1}=q_t$. The loop from Step 1 to Step 2 is then repeated until the output results $q_t$ and $a_k$ converge at high quality. In this way, edge detection based on JIGI is realized by the joint iteration of PLIR and the guided filter, which obtains high-quality edge and whole image information at the same time. The joint iteration process is shown in Fig. 1.
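The three steps above can be sketched end-to-end (a toy Python example with simulated binary speckles; as simplifying assumptions, the guidance image is taken as the current PLIR output at every step, and the scene size, window radius, $\epsilon$, $\alpha$ and iteration count are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def box_mean(img, rad=1):
    # Mean over the window of radius `rad` around each pixel (clipped at borders).
    r, c = img.shape
    out = np.empty((r, c))
    for i in range(r):
        for j in range(c):
            out[i, j] = img[max(0, i - rad):i + rad + 1,
                            max(0, j - rad):j + rad + 1].mean()
    return out

def guided_filter(I, x, rad=1, eps=1e-3):
    mu, mu_x = box_mean(I, rad), box_mean(x, rad)
    var_I = box_mean(I * I, rad) - mu * mu
    a = (box_mean(I * x, rad) - mu * mu_x) / (var_I + eps)   # Eq. (14)
    b = mu_x - a * mu                                        # Eq. (15)
    return box_mean(a, rad) * I + box_mean(b, rad), box_mean(a, rad)

# Toy scene: 12x12 binary object, undersampled (M < K) binary speckle patterns.
r, c, M = 12, 12, 100
K = r * c
T = np.zeros((r, c)); T[3:9, 3:9] = 1.0      # unknown object
A = rng.integers(0, 2, size=(M, K)).astype(float)
y = A @ T.ravel()                            # bucket detector values

alpha, P = 1.0, np.linalg.pinv(A.T @ A)
x = np.zeros(K)
for t in range(20):
    x = x + alpha * P @ (A.T @ (y - A @ x))                    # Step 1: PLIR, Eq. (6)
    q, edge = guided_filter(x.reshape(r, c), x.reshape(r, c))  # Step 2, Eq. (18)
    x = q.ravel()                                              # Step 3: feed q back
```

After the loop, `x` holds the whole-image estimate and `edge` the edge map from the last pass; even with $M<K$, the object region is recovered with clear contrast against the background.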


Fig. 1. Schematic diagram of edge detection based on joint iteration ghost imaging.


More specifically, in our edge detection method based on joint iteration ghost imaging, PLIR is used to constantly inject the limited measurement information into the reconstruction results, so that a small number of signals is fully utilized. Meanwhile, according to the result of PLIR and the detail information of the guidance image (the previous guided filter output image), both the whole image information and the feature (edge) information of the image can be acquired. In the whole process, the key is the acquisition of edges: the method obtains image information through edges. In each iteration, the guided filter first extracts the edge information from the guidance image $I_t$, and then produces a higher-quality output image $q_t$.

The mechanism for obtaining edge information through $a_k$ is explained in detail below. Eqs. (14)–(15) show that the guided filter has the edge-preserving smoothing property [43]. First, let $I_t=x_t$. Then we rewrite Eq. (14) as:

$$a_k=\frac{\frac{1}{|\omega|}\sum_{i\in \omega_k}I_{ti}^2-\mu_k ^2 }{\sigma^2_k+\epsilon},$$
where,
$$\sigma^2_k=\frac{1}{|\omega|}\sum_{i\in \omega_k}I_{ti}^2-\mu_k ^2.$$
Here, $\sigma ^2_k$ denotes the variance of the region $\omega _k$, i.e., the local variance of the guidance image $I_{t}$. So, Eq. (19) can be further expressed as:
$$a_k=\frac{\sigma^2_k}{\sigma^2_k+\epsilon}.$$
This can be explained intuitively as follows. If the region $\omega _k$ contains more texture and edge features, then $a_k$ becomes close to 1 because $\sigma ^2_k$ is large. Conversely, if the region $\omega _k$ is constant or relatively smooth, then $a_k$ becomes close to 0 because $\sigma ^2_k$ is very small. From Eqs. (19)–(21) we can see that $a_k$ carries the global edge information of the output image $q_{ti}$, while $b_k$ carries the internal information (i.e., contains no edge information).
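A quick numeric check of Eq. (21) (the patch contents and $\epsilon$ below are arbitrary illustrative choices):

```python
import numpy as np

eps = 1e-2
flat = np.full((5, 5), 0.5)                   # constant patch: sigma_k^2 = 0
edge = np.zeros((5, 5)); edge[:, 3:] = 1.0    # patch containing a vertical edge

def a_k(patch, eps):
    # Eq. (21): a_k = sigma_k^2 / (sigma_k^2 + eps), with I_t = x_t.
    return patch.var() / (patch.var() + eps)
```

Here `a_k(flat, eps)` is exactly 0, while `a_k(edge, eps)` is close to 1, matching the intuition above.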

2.4 Performance evaluation

In order to objectively evaluate the performance of our edge detection method, the signal-to-noise ratio (SNR) is used, which is defined as:

$$\textrm{SNR} = \frac{\textrm{mean}(q_{edge})-\textrm{mean}(q_{back})}{\sqrt{\textrm{var}(q_{back})}},$$
where $q_{edge}$ and $q_{back}$ are the intensities of the edge detection result in the object edge and background region respectively, and var stands for the variance. At the same time, we use a peak signal-to-noise ratio (PSNR) to estimate the quality of edge detection image. The PSNR reads:
$$\textrm{PSNR} = 10\times \log_{10}\left[ \frac{\textrm{maxVal}^2}{\textrm{MSE}}\right],$$
where $\textrm {maxVal}$ is the maximum possible pixel value of the image and
$${\textrm{MSE}}=\frac{1}{r\times c}\sum^{r}_{i=1}\sum^{c}_{j=1}\left[O_{edge}(i,\;j)-a_k(i,\;j)\right]^2,$$
where $O_{edge}$ represents the original edge image consisting of $r\times c$ pixels, and $a_k$ denotes the reconstructed edge image.
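Eqs. (22)–(24) translate directly into code (a hypothetical helper sketch; the boolean masks selecting the edge and background regions, and the `max_val` default, are assumptions for illustration):

```python
import numpy as np

def edge_snr(result, edge_mask, back_mask):
    # Eq. (22): contrast between edge and background regions, in background-noise units.
    back = result[back_mask]
    return (result[edge_mask].mean() - back.mean()) / np.sqrt(back.var())

def psnr(original, reconstructed, max_val=1.0):
    # Eqs. (23)-(24): peak signal-to-noise ratio in dB.
    mse = np.mean((original - reconstructed) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For instance, a reconstruction differing from the original by a uniform offset of 0.1 (with `max_val = 1.0`) gives a PSNR of 20 dB.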

3. Results

In this section, we carry out numerical simulations of the JIGI method for different target objects, and select an aircraft model as the real object for the actual experiment. The original images (numerical simulation), speckle patterns and result images all have a resolution of $128\times 128$ pixels.

3.1 Numerical simulation results

For different practical application scenarios, we demonstrate two types of numerical simulation that simultaneously carry out edge detection and imaging.

Edge detection in a sparse scene with a large field of view, e.g., airplanes in the sky or ships in the sea. To simulate this scenario, we use a white background (pixel value 1) as the large field-of-view environment, and the aircraft (pixel value 0) as the target object in the scene [see Fig. 2, Original image]. The original image size is $128\times 128$ pixels. With M=230–270 random binary speckle patterns, the JIGI results are shown in Fig. 2, and the SNR and PSNR values of the reconstructed images are listed below the corresponding results. With the definition in Eq. (18) and PLIR, the simultaneous acquisition of the whole image [Figs. 2(a)–2(e)] and the global edge [Figs. 2(f)–2(j)] information is realized. With M=230, a blurry image of the aircraft is obtained in Fig. 2(a). However, the corresponding edge image is clearer [as shown in Fig. 2(f)]: the PSNR of the edge image is higher than that of the whole image by 2.8891 dB. When the number of measurements is increased to 260, the edge and image quality are greatly improved; the SNR and PSNR of the edge increase to 13.2407 and 21.1543 dB, and the PSNR of the image increases to 18.3843 dB. Remarkably, with M=270, results converging to the original image are obtained by the joint iteration of PLIR and the guided filter [see Figs. 2(e) and 2(j)].


Fig. 2. The numerical simulation results of the aircraft object, where SNRs and PSNRs are presented together.


Edge detection in a complex scene with gray scale. Edge detection based on JIGI applies not only to binary objects but also to gray scale objects; in actual application scenes, unknown targets are more often gray scale objects. First of all, a simple gray scale image commonly used for edge detection is treated as the target object, as shown in Fig. 3, Original image. As can be seen from Fig. 3, the edge and image information of the simple gray scale object are effectively acquired, and the SNR and PSNR of the edge increase to 13.1372 and 21.7232 dB. Similar to the phenomenon in Figs. 2(e) and 2(j), when the number of measurements is 170, the obtained results converge to the original image, as shown in Figs. 3(e) and 3(j). The results in Fig. 3 show that the JIGI method can still obtain high-quality reconstruction results for a simple gray scale object at a low number of measurements.


Fig. 3. The numerical simulation results of the simple gray scale object, where SNRs and PSNRs are presented together.


To further illustrate the effectiveness of JIGI, a gray scale image with more gray levels and higher edge complexity is used, and the results are shown in Fig. 4. Consistent with the previous simulations, when the number of measurements for this complex gray scale object reaches 780, JIGI still obtains reconstruction results almost identical to the original image, as shown in Figs. 4(e) and 4(j). Due to the multiple gray levels and edge complexity, the number of measurements required for convergence is higher than those in Fig. 2 (M=270) and Fig. 3 (M=170).


Fig. 4. The numerical simulation results of the complex gray scale object, where SNRs and PSNRs are presented together.


In order to illustrate the advantages of this method, we compare and analyze the edge detection and reconstructed image results with compressive GI based on orthogonal matching pursuit with guided filtering (OMPGF). Here, the OMPGF results are obtained by using the reconstructed image of compressive GI based on orthogonal matching pursuit as both the guidance image and the input image of the guided filter [as shown in Fig. 5]. All the measurement numbers for the target objects are those at which the JIGI method recovers completely. The results show that the OMPGF method obtains only fuzzy reconstructed images and edge detection results for the three objects when the JIGI method achieves perfect reconstruction.


Fig. 5. The numerical simulation results for compressive GI (OMP) with guided filter, where SNRs and PSNRs are presented together.


In order to further verify the performance of the JIGI scheme under background light noise, we calculate the SNR of the edge detection results under different detection signal-to-noise ratios [DSNR, i.e., the ratio of signal power to background noise power, expressed as $DSNR=10\log_{10}(\langle B^{(m)}\rangle /\langle Noise^{(m)}\rangle )$, where $\langle Noise^{(m)}\rangle$ is the average background noise power] for the aircraft. Figure 6 shows the curve of edge detection SNR versus DSNR when the number of measurements is 270. The curve shows that the edge detection SNR decreases with decreasing DSNR as Gaussian random noise of different intensities is added to the signal. However, the edge detection SNR can reach the noise-free edge detection level of 230 measurements [Fig. 2(f)] when DSNR>25 dB. Moreover, the edge detection SNR of the JIGI method is still higher than that of the OMP method without noise [Fig. 6(d)] when DSNR>10 dB. This proves that the JIGI reconstruction method has good robustness.


Fig. 6. The SNR performance of edge information against DSNR of JIGI method.


3.2 Experimental results

To demonstrate the feasibility of the JIGI scheme, an actual experiment is conducted. The experimental system configuration is illustrated in Fig. 7, which includes a camera lens, a DMD, two reflecting mirrors, a beam splitter, a collecting lens and a photomultiplier tube (PMT). The applied DMD is an excellent device for pixel-multiplexed modulation in this scheme and consists of $1024\times 768$ micromirrors, each of which can be switched between two directions of $\pm 12^\circ$, corresponding to 1 and 0. The DMD displays a preloaded sequence of random speckle patterns ($128~\textrm {pixel}\times 128~ \textrm {pixel}$) at a rate of 1000 patterns/s. Under ambient illumination (cold white LED), the speckle light field modulated by the DMD is imaged onto the target object by the camera lens.


Fig. 7. The experiment system diagram of computational ghost imaging.


The target object is an aircraft model (see Fig. 8, Object) with a size of $20~\textrm {cm}\times 17~\textrm {cm}$, positioned about 5 m away from the DMD. A current-output Hamamatsu H10721-01 PMT is placed in the reflection direction of the beam splitter to measure the total echo signal.


Fig. 8. Reconstructed images obtained at different measurement times. (a), (c) and (e) are the experimental imaging results of GI, OMPGF and JIGI respectively, (b) (d) and (f) are the edge detection experimental results of GGI, OMPGF and JIGI respectively.


For comparison, CGI [8], GGI [27], OMPGF and JIGI experiments are performed on the aircraft model. The imaging results of CGI are shown in Fig. 8(a). Since the GGI method needs to select the gradient vectors of $+45^\circ$ and $-45^\circ$ for edge detection, the number of PMT measurements needed to obtain the global edge image [as shown in Fig. 8(b)] is triple the number of the CGI random patterns ($M\times 3$ measurements in total). According to the experimental results in Fig. 8(b), GGI's edge detection quality is poor, and effective edge information can hardly be obtained. Figures 8(c)–8(d) show the results of OMPGF. They indicate that the edge detection and reconstructed images obtained by the OMPGF method are much better than those obtained by CGI and GGI: blurred aircraft image information and edge information can be obtained when M>2000. However, by using the JIGI algorithm, we can acquire a satisfactory image quality, as shown in Figs. 8(e)–8(f). Specifically, Figs. 8(e)–8(f) show five reconstructions of the aircraft model using different numbers of patterns. We can see that the quality of the reconstructed edges and images improves as the number of measurements increases. When the number of measurements is 500, the imaging and edge detection results of JIGI are obviously better than those of CGI and GGI, and the edge shape of the aircraft can be roughly distinguished. When the number of measurements increases to 1000, the edge information of the aircraft model can be clearly distinguished. As the number of measurements is further increased, the details of the reconstructed image gradually emerge: for example, the engines on both sides of the aircraft are reconstructed with 8000 measurements, and the edge and texture feature information is more abundant [as shown in Fig. 8(f), M=8000]. The JIGI experimental results verify the feasibility of simultaneously carrying out high-quality edge detection and imaging.

4. Conclusion

In this paper, we have proposed and demonstrated a new edge detection method, named joint iteration ghost imaging, which uses the joint iteration of projected Landweber iteration regularization and guided filtering to acquire high-quality edge and whole image information at the same time. The numerical simulations and experiments validate our JIGI method. Moreover, the proposed method can directly extract high-quality edges in any direction and in any GI experimental scheme (e.g. CGI, pseudo-thermal GI, etc.), no matter whether the unknown object is binary or grayscale. We have also compared the performance of CGI, GGI and OMPGF by experiments. The results show that the number of measurements can be dramatically reduced by using JIGI. As the guided filter has the ability of image matting, image super-resolution and haze removal, we believe that edge detection based on JIGI will be valuable in many real applications such as remote sensing, security checking and medical imaging [47,48].

Funding

Department of Science and Technology of Jilin Province (20170204023GX); Education Department of Jilin Province (2019LY508L35); Special Funds for Provincial Industrial Innovation in Jilin Province (2018C040-4, 2019C025).

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

2. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classical correlation,” Phys. Rev. Lett. 93(9), 093602 (2004). [CrossRef]  

3. C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints,” Appl. Phys. Lett. 101(14), 141123 (2012). [CrossRef]  

4. W. Gong and S. Han, “High-resolution far-field ghost imaging via sparsity constraint,” Sci. Rep. 5(1), 9280 (2015). [CrossRef]  

5. W. Gong, C. Zhao, H. Yu, M. Chen, W. Xu, and S. Han, “Three-dimensional ghost imaging lidar via sparsity constraint,” Sci. Rep. 6(1), 26133 (2016). [CrossRef]  

6. X.-F. Liu, X.-R. Yao, X.-H. Chen, L.-A. Wu, and G.-J. Zhai, “Thermal light optical coherence tomography for transmissive objects,” J. Opt. Soc. Am. A 29(9), 1922–1926 (2012).

7. C. Amiot, P. Ryczkowski, A. T. Friberg, J. M. Dudley, and G. Genty, “Ghost optical coherence tomography,” arXiv preprint arXiv:1810.03380 (2018).

8. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008).

9. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009).

10. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009).

11. F. Ferri, D. Magatti, L. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104(25), 253603 (2010).

12. C. Zhang, S. Guo, J. Cao, J. Guan, and F. Gao, “Object reconstitution using pseudo-inverse for ghost imaging,” Opt. Express 22(24), 30063–30073 (2014).

13. W. Gong, “High-resolution pseudo-inverse ghost imaging,” Photonics Res. 3(5), 234–237 (2015).

14. K. W. C. Chan, M. N. O’Sullivan, and R. W. Boyd, “High-order thermal ghost imaging,” Opt. Lett. 34(21), 3343–3345 (2009).

15. W. Wang, Y. P. Wang, J. Li, X. Yang, and Y. Wu, “Iterative ghost imaging,” Opt. Lett. 39(17), 5150–5153 (2014).

16. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012).

17. M.-J. Sun, M.-F. Li, and L.-A. Wu, “Nonlocal imaging of a reflective object using positive and negative correlations,” Appl. Opt. 54(25), 7494–7499 (2015).

18. H. Yu, E. Li, W. Gong, and S. Han, “Structured image reconstruction for three-dimensional ghost imaging lidar,” Opt. Express 23(11), 14541–14551 (2015).

19. W. Gong, H. Yu, C. Zhao, Z. Bo, M. Chen, and W. Xu, “Improving the imaging quality of ghost imaging lidar via sparsity constraint by time-resolved technique,” Remote Sens. 8(12), 991 (2016).

20. A. M. Kingston, D. Pelliccia, A. Rack, M. P. Olbinado, Y. Cheng, G. R. Myers, and D. M. Paganin, “Ghost tomography,” Optica 5(12), 1516–1520 (2018).

21. S. Ota, R. Horisaki, Y. Kawamura, M. Ugawa, I. Sato, K. Hashimoto, R. Kamesawa, K. Setoyama, S. Yamaguchi, K. Fujiu, K. Waki, and H. Noji, “Ghost cytometry,” Science 360(6394), 1246–1251 (2018).

22. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard x rays,” Phys. Rev. Lett. 117(11), 113901 (2016).

23. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental x-ray ghost imaging,” Phys. Rev. Lett. 117(11), 113902 (2016).

24. A. Schori and S. Shwartz, “X-ray ghost imaging with a laboratory source,” Opt. Express 25(13), 14822–14828 (2017).

25. A.-X. Zhang, Y.-H. He, L.-A. Wu, L.-M. Chen, and B.-B. Wang, “Tabletop x-ray ghost imaging with ultra-low radiation,” Optica 5(4), 374–377 (2018).

26. S. Li, F. Cropp, K. Kabra, T. Lane, G. Wetzstein, P. Musumeci, and D. Ratner, “Electron ghost imaging,” Phys. Rev. Lett. 121(11), 114801 (2018).

27. X.-F. Liu, X.-R. Yao, R.-M. Lan, C. Wang, and G.-J. Zhai, “Edge detection based on gradient ghost imaging,” Opt. Express 23(26), 33802–33811 (2015).

28. T. Mao, Q. Chen, W. He, Y. Zou, H. Dai, and G. Gu, “Speckle-shifting ghost imaging,” IEEE Photonics J. 8(4), 1–10 (2016).

29. L. Wang, L. Zou, and S. Zhao, “Edge detection based on subpixel-speckle-shifting ghost imaging,” Opt. Commun. 407, 181–185 (2018).

30. S. Yuan, D. Xiang, X. Liu, X. Zhou, and P. Bing, “Edge detection based on computational ghost imaging with structured illuminations,” Opt. Commun. 410, 350–355 (2018).

31. H. Ren, S. Zhao, and J. Gruska, “Edge detection based on single-pixel imaging,” Opt. Express 26(5), 5501–5511 (2018).

32. H.-D. Ren, L. Wang, and S.-M. Zhao, “Efficient edge detection based on ghost imaging,” OSA Continuum 2(1), 64–73 (2019).

33. H. Guo, L. Wang, and S. Zhao, “Compressed ghost edge imaging,” arXiv preprint arXiv:1902.09344 (2019).

34. M. Kmieć and A. Glowacz, “Object detection in security applications using dominant edge directions,” Pattern Recognit. Lett. 52, 72–79 (2015).

35. X. Li, S. Zhang, X. Pan, P. Dale, and R. Cropp, “Straight road edge detection from high-resolution remote sensing images based on the ridgelet transform with the revised parallel-beam radon transform,” Int. J. Remote. Sens. 31(19), 5041–5059 (2010).

36. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8(6), 679–698 (1986).

37. I. Sobel, “Camera models and machine perception,” Tech. rep., Computer Science Department, Technion (1972).

38. I. E. Abdou and W. K. Pratt, “Quantitative design and evaluation of enhancement/thresholding edge detectors,” Proc. IEEE 67(5), 753–763 (1979).

39. H. Huang, C. Zhou, T. Tian, D. Liu, and L. Song, “High-quality compressive ghost imaging,” Opt. Commun. 412, 60–65 (2018).

40. M. Piana and M. Bertero, “Projected Landweber method and preconditioning,” Inverse Probl. 13(2), 441–463 (1997).

41. Q. Jin and U. Amato, “A discrete scheme of Landweber iteration for solving nonlinear ill-posed problems,” J. Math. Anal. Appl. 253(1), 187–203 (2001).

42. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013).

43. K. He, J. Sun, and X. Tang, “Guided image filtering,” in European Conference on Computer Vision (Springer, 2010), pp. 1–14.

44. A. Levin, D. Lischinski, and Y. Weiss, “A closed-form solution to natural image matting,” IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 228–242 (2008).

45. A. Zomet and S. Peleg, “Multi-sensor super-resolution,” in IEEE Workshop on Applications of Computer Vision (2002), pp. 27–31.

46. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” in IEEE Conference on Computer Vision and Pattern Recognition (2009), pp. 1956–1963.

47. A. M. Kingston, G. R. Myers, M. P. Olbinado, A. Rack, D. Pelliccia, and D. M. Paganin, “Practical x-ray ghost imaging,” Microsc. Microanal. 24(S2), 134–135 (2018).

48. A. M. Kingston, G. R. Myers, D. Pelliccia, I. D. Svalbe, and D. M. Paganin, “X-ray ghost-tomography: Artefacts, dose distribution, and mask considerations,” IEEE Trans. Comput. Imaging 5(1), 136–149 (2019).

Figures (8)

Fig. 1. Schematic diagram of edge detection based on joint iteration ghost imaging.
Fig. 2. The numerical simulation results of the aircraft object, where SNRs and PSNRs are presented together.
Fig. 3. The numerical simulation results of the simple gray-scale object, where SNRs and PSNRs are presented together.
Fig. 4. The numerical simulation results of the complex gray-scale object, where SNRs and PSNRs are presented together.
Fig. 5. The numerical simulation results for compressive GI (OMP) with guided filter, where SNRs and PSNRs are presented together.
Fig. 6. The SNR performance of edge information against DSNR of the JIGI method.
Fig. 7. The experiment system diagram of computational ghost imaging.
Fig. 8. Reconstructed images obtained at different measurement times. (a), (c) and (e) are the experimental imaging results of GI, OMPGF and JIGI respectively; (b), (d) and (f) are the edge detection experimental results of GGI, OMPGF and JIGI respectively.

Equations (24)


$$A = \begin{bmatrix} S_1(1,1) & S_1(1,2) & \cdots & S_1(r,c) \\ S_2(1,1) & S_2(1,2) & \cdots & S_2(r,c) \\ \vdots & \vdots & \ddots & \vdots \\ S_M(1,1) & S_M(1,2) & \cdots & S_M(r,c) \end{bmatrix},$$
$$y = \left[ B(1), B(2), \ldots, B(M) \right]^T,$$
$$\begin{bmatrix} B(1) \\ B(2) \\ \vdots \\ B(M) \end{bmatrix} = \begin{bmatrix} S^{(1)}(1,1) & S^{(1)}(1,2) & \cdots & S^{(1)}(r,c) \\ S^{(2)}(1,1) & S^{(2)}(1,2) & \cdots & S^{(2)}(r,c) \\ \vdots & \vdots & \ddots & \vdots \\ S^{(M)}(1,1) & S^{(M)}(1,2) & \cdots & S^{(M)}(r,c) \end{bmatrix} \begin{bmatrix} T(1,1) \\ T(1,2) \\ \vdots \\ T(r,c) \end{bmatrix}.$$
$$G^{(2)}(i,j) = \left\langle B^{(m)} S^{(m)}(i,j) \right\rangle,$$
$$G^{(2)}(i,j) = \frac{1}{M} A^T y.$$
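As a numerical illustration of the correlation reconstruction above, the sketch below builds a toy measurement matrix and bucket vector and recovers the object with a mean-removed (differential) variant of $G^{(2)} = \frac{1}{M} A^T y$. The image size, measurement number, uniform random speckle model and test object are assumptions of this example only, not values from the paper:

```python
import numpy as np

# Toy conventional-GI reconstruction: each row of A is one speckle pattern
# S_m flattened to r*c pixels, and y holds the bucket values B(m).
rng = np.random.default_rng(0)
r, c, M = 8, 8, 5000
T = np.zeros((r, c))
T[2:6, 2:6] = 1.0                      # toy binary object
A = rng.random((M, r * c))             # random speckle patterns, one per row
y = A @ T.ravel()                      # bucket signal B(m) = sum_ij S_m(i,j) T(i,j)

# Mean-removed correlation (a differential variant of G = (1/M) A^T y,
# which suppresses the constant background term of the raw correlation):
G = ((A - A.mean(axis=0)).T @ (y - y.mean()) / M).reshape(r, c)

print(np.corrcoef(G.ravel(), T.ravel())[0, 1])  # approaches 1 for large M
```

For this toy case the correlation image closely matches the object; with fewer measurements the background noise grows, which is the regime the joint iteration method targets.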
$$x_t = x_{t-1} + \alpha P A^T \left( y - A x_{t-1} \right), \qquad t = 1, 2, 3, \ldots,$$
$$x_1 = \alpha P A^T y,$$
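A minimal sketch of the projected Landweber update $x_t = x_{t-1} + \alpha P A^T (y - A x_{t-1})$ follows. Taking non-negativity as the projection $P$, an overdetermined noise-free toy system, and $\alpha = 1/\|A\|_2^2$ are all assumptions of this example; the paper's constraint set and measurement sizes may differ:

```python
import numpy as np

# Projected Landweber iteration on a toy ghost-imaging system.
rng = np.random.default_rng(1)
r, c, M = 8, 8, 200
T = np.zeros((r, c))
T[2:6, 2:6] = 1.0                         # toy object
A = rng.random((M, r * c))                # measurement (speckle) matrix
y = A @ T.ravel()                         # noise-free bucket data

alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # step size within (0, 2/||A||_2^2)
x = np.clip(alpha * (A.T @ y), 0, None)   # x_1 = alpha * P[A^T y]
for t in range(10000):
    x = x + alpha * (A.T @ (y - A @ x))   # Landweber step
    x = np.clip(x, 0, None)               # projection P: non-negativity

print(np.abs(x.reshape(r, c) - T).max())  # residual error shrinks as t grows
```

The clipping plays the role of $P$: it re-imposes the physical prior (non-negative transmittance) after every gradient step, which is what distinguishes projected Landweber from the plain iteration.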
$$q_t = \mathrm{guidefilter}(I_t, x_t), \qquad t = 1, 2, 3, \ldots,$$
$$q_t^i = \sum_j W_{i,j}(I_t)\, x_t^j,$$
$$W_{i,j}(I) = \frac{1}{|\omega|^2} \sum_{k:(i,j) \in \omega_k} \left[ 1 + \frac{(I_i - \mu_k)(I_j - \mu_k)}{\sigma_k^2 + \epsilon} \right],$$
$$q_t^i = a_k I_t^i + b_k, \qquad \forall i \in \omega_k,$$
$$\nabla q = a \nabla I,$$
$$E(a_k, b_k) = \sum_{i \in \omega_k} \left( \left( a_k I_t^i + b_k - x_t^i \right)^2 + \epsilon a_k^2 \right),$$
$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_t^i x_t^i - \mu_k \bar{x}_t^k}{\sigma_k^2 + \epsilon},$$
$$b_k = \bar{x}_t^k - a_k \mu_k,$$
$$q_t^i = \frac{1}{|\omega|} \sum_{k: i \in \omega_k} \left( a_k I_t^i + b_k \right) = \bar{a}_i I_t^i + \bar{b}_i,$$
$$\left[ q_t, a_k \right] = \mathrm{guidefilter}(I_t, x_t), \qquad t = 1, 2, 3, \ldots,$$
$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} \left( I_t^i \right)^2 - \mu_k^2}{\sigma_k^2 + \epsilon},$$
$$\sigma_k^2 = \frac{1}{|\omega|} \sum_{i \in \omega_k} \left( I_t^i \right)^2 - \mu_k^2,$$
$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \epsilon}.$$
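The self-guided special case above (guide $I$ equal to the filter input, so that $a_k = \sigma_k^2/(\sigma_k^2 + \epsilon)$ acts as an edge indicator) can be sketched as follows. The window radius, the value of $\epsilon$, and the cumulative-sum box filter are assumptions of this example:

```python
import numpy as np

def window_mean(img, rad):
    """Mean over (2*rad+1)^2 windows, edge-padded, via 2-D cumulative sums."""
    k = 2 * rad + 1
    p = np.pad(img, rad, mode='edge')
    cs = np.cumsum(np.cumsum(p, axis=0), axis=1)
    cs = np.pad(cs, ((1, 0), (1, 0)))
    s = cs[k:, k:] - cs[:-k, k:] - cs[k:, :-k] + cs[:-k, :-k]
    return s / k ** 2

def self_guided_filter(x, rad=2, eps=1e-3):
    """Guided filter with the input as its own guide: a_k = var/(var+eps)."""
    mu = window_mean(x, rad)
    var = window_mean(x * x, rad) - mu ** 2
    a = var / (var + eps)          # near 1 at edges, near 0 in flat regions
    b = (1.0 - a) * mu
    q = window_mean(a, rad) * x + window_mean(b, rad)   # a_bar*I + b_bar
    return q, a

step = np.zeros((16, 16))
step[:, 8:] = 1.0                  # toy vertical step edge
q, a = self_guided_filter(step)
print(a[8, 7], a[8, 1])            # large on the edge, ~0 in the flat area
```

Because local variance $\sigma_k^2$ dominates $\epsilon$ near intensity jumps and vanishes in flat regions, the $a$ map itself serves as the edge image while $q$ remains an edge-preserving smoothing of the input.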
$$\mathrm{SNR} = \frac{\mathrm{mean}(q_{\mathrm{edge}}) - \mathrm{mean}(q_{\mathrm{back}})}{\left( \mathrm{var}(q_{\mathrm{back}}) \right)^{0.5}},$$
$$\mathrm{PSNR} = 10 \times \log_{10} \left[ \frac{\mathrm{maxVal}^2}{\mathrm{MSE}} \right],$$
$$\mathrm{MSE} = \frac{1}{r \times c} \sum_{i=1}^{r} \sum_{j=1}^{c} \left[ O_{\mathrm{edge}} - a_k \right]^2.$$
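The figures of merit can be computed as below. The toy ground-truth edge map, the noise level on the recovered edge image, and $\mathrm{maxVal} = 1$ (a normalized image) are assumptions of this example:

```python
import numpy as np

# Toy evaluation of the SNR / PSNR / MSE metrics defined above.
rng = np.random.default_rng(2)
O_edge = (rng.random((32, 32)) > 0.9).astype(float)       # toy true edge map
a_k = O_edge + 0.05 * rng.standard_normal(O_edge.shape)   # noisy recovered edges

q_edge = a_k[O_edge == 1]          # recovered values on the true edge pixels
q_back = a_k[O_edge == 0]          # recovered values on the background
snr = (q_edge.mean() - q_back.mean()) / np.sqrt(q_back.var())

mse = np.mean((O_edge - a_k) ** 2)        # MSE averaged over all r x c pixels
psnr = 10 * np.log10(1.0 ** 2 / mse)      # maxVal = 1 for a normalized image

print(round(snr, 1), round(psnr, 1))
```

Both metrics grow as the recovered edge map separates more cleanly from the background, which is how the simulation figures quantify the improvement of the joint iteration method.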