Three-dimensional rapid flame chemiluminescence tomography via deep learning

Open Access

Abstract

Flame chemiluminescence tomography (FCT) plays an important role in combustion monitoring and diagnostics owing to its easy implementation and non-intrusive nature. However, because of the high data throughput and the inefficiency of conventional iterative methods, 3D reconstructions in FCT are typically conducted off-line and are time-consuming. In this work, we present a 3D rapid FCT reconstruction system based on a convolutional neural network (CNN) model for practical combustion measurement, which is able to reconstruct the 3D flame distribution rapidly once training is complete. First, numerical simulations were performed by creating three cases of phantoms designed to mimic 3D conical flames. Next, after evaluating the loss function and training time, the optimal CNN architecture was determined and verified quantitatively. Finally, a real-time FCT system consisting of 12 color CCD cameras was realized, and a multispectral separation algorithm was adopted to extract the CH* and C2* components. Verified by practical measurements, the proposed CNN model is able to reconstruct the 3D flame structure from projections captured in real time with credible accuracy and structural similarity. Furthermore, compared with the conventional iterative reconstruction method, the proposed CNN model markedly improves the reconstruction speed and is expected to enable rapid 3D monitoring of flames.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

As a non-intrusive and instantaneous measurement method, chemiluminescence-based combustion diagnostics plays an important role in active combustion control in practical industrial environments [1–3]. Especially for hydrocarbon flames, the radical chemiluminescence emissions of CH*, C2* and OH* can be treated as “fingerprints” for flame diagnostics [4–6], reflecting flame information such as equivalence ratio [7–9], heat release [10,11] and level of additives [12,13]. Compared with one- or two-dimensional detection tactics, such as photomultiplier tube (PMT) sensing [14,15], planar laser-induced fluorescence (PLIF) [16–18], planar laser-induced phosphorescence (PLIP) [19] and filtered Rayleigh scattering (FRS) [20], flame chemiluminescence tomography (FCT), which combines optical computerized tomography with radical chemiluminescence emissions, can retrieve three-dimensional (3D) spatial distributions and provide more details of flames. With the advantage of easy implementation, meaning that no excitation light source is required, FCT has attracted increasing interest in combustion research [21–27]. Floyd et al. measured the instantaneous 3D structure of CH* emission of a matrix burner and a turbulent opposed jet flame [28,29]. Mohri et al. used 24 projections to reconstruct turbulent gaseous flames with obvious highly wrinkled structures [30]. Cai et al. developed a time-resolved endoscopic CTC system and a multi-camera system, respectively, to reduce the experimental cost as well as to achieve both good spatial and temporal resolution of a non-premixed turbulent swirl flame [23,31–33]. Wiseman et al. [34] calculated the flame surface area, curvature, thickness, and the normal component of the flame propagation velocity of a laminar flame by reconstructing the 3D distribution of propane–air flames. Wang et al. took the bokeh effect into account and reconstructed a steady diffusion flame over a large field angle [35].

In addition to applications of FCT, significant efforts have also been invested in the development of tomographic reconstruction algorithms. Owing to the limited projection data in combustion diagnostics, the well-established algebraic reconstruction technique (ART) [28,29] and multiplicative algebraic reconstruction technique (MART) [36,37] have been demonstrated to work effectively on the ill-posedness of the inversion problem. Meanwhile, new algorithms and methods have been developed to either address or exploit unique aspects such as the integration of a priori information [38]. For instance, Zhou and co-workers reconstructed 3D temperature distributions in a large-scale furnace numerically and experimentally based on Tikhonov regularization [39,40]. Daun et al. reconstructed axisymmetric flame properties using Tikhonov regularization [41]. Besides Tikhonov regularization, the sparseness of the temperature field can be used as prior information, which is exploited by total variation (TV) regularization [23,42,43]. Compared with Tikhonov regularization, TV regularization performs better in preserving sharp discontinuities between distinct regions of the domain [44]. Additionally, Bayesian inference is a versatile tool that is adept at combining measurement data and prior information in a statistically robust manner [42,45,46]. Grauer et al. reconstructed the instantaneous refractive index field of a turbulent flame based on a Bayesian framework [47]. Unterberger et al. developed an evolutionary reconstruction technique (ERT), comprising a genetic algorithm (GA) and ray-tracing software, for three different kinds of flame; their results showed good agreement between FCT based on ART and ERT [48,30]. It is important to note that for the ART reconstruction algorithm, the number of iterations and the termination criterion are decided by experience and have an obvious effect on the accuracy of the reconstruction results. Furthermore, the computational cost of the iterations makes FCT difficult to apply to rapid combustion monitoring and diagnostics. As for regularization algorithms, the selection of the regularization parameter, which controls the trade-off between data fidelity and the solution, exerts a crucial effect on the solution and must be made carefully [49]. Although a number of methods have been developed to determine the optimal regularization parameter for inverse problems in mathematics [50,51], such as the L-curve criterion [52], the discrepancy principle (DP) [53] and generalized cross-validation (GCV) [54], or the parameter is typically selected by experience [55], it is still time-consuming to find the final suitable value.

In the last few years, deep learning has achieved very good performance on a variety of problems, such as visual recognition, speech recognition and natural language processing. Among the different types of deep neural networks, convolutional neural networks (CNN) have been most extensively studied [56]. Moreover, CNNs have also been widely used in computed tomography imaging in the medical field [57]. Wang et al. proposed a new noise reduction method for low-dose CT via deep learning, which showed a substantial improvement in quantitative metrics and computational speed [58]. Furthermore, they combined the autoencoder, deconvolution network and shortcut connections into a residual encoder–decoder convolutional neural network (RED-CNN) for low-dose CT imaging [59]. CNNs perform well not only in the medical field but also in digital holographic imaging [60,61] and ghost imaging [62]. In the field of combustion diagnostics, Cai et al. proposed inversion methods based on the extreme learning machine (ELM) and CNN for nonlinear tomographic absorption spectroscopy [63–65]. Qiu et al. proposed an unsupervised classification framework based on the convolutional auto-encoder (CAE), principal component analysis (PCA) and the hidden Markov model (HMM), which was capable of identifying the change of combustion condition when the combustion deteriorates as the coal feed rate falls [77]. Huang et al. developed a data-driven approach to predict 3D flame evolution, which combines the state-of-the-art volumetric tomography technique with deep learning algorithms. In their research, 10 instantaneous projections from nine views were input to the trained CNN–LSTM model to obtain the prediction [78].

In order to achieve rapid 3D reconstruction for combustion diagnostics with a simple measurement system, a 3D rapid FCT reconstruction system based on a CNN model for practical flame measurement is proposed in this paper. First, the principle of FCT as well as the CNN architecture is described in detail in Section 2. In Section 3, numerical simulations are performed by creating three cases of phantoms designed to mimic 3D conical single and multimodal flames. Moreover, the optimal CNN architecture is designed by comparing the loss function and training time for different numbers of convolutional layers and convolutional kernels. The performance of the CNN model is verified by calculating the RMSE and SSIM of the 3D reconstruction results with respect to the created phantoms. Finally, a real-time FCT system consisting of 12 color CCD cameras covering ∼180° is realized for 3D candle flame measurement. A multispectral separation algorithm is adopted to extract the CH* and C2* components of the candle flame. A large set of simultaneous projections and the corresponding 3D reconstruction results obtained by the ART algorithm are used as the dataset for the CNN model training process. Verified by practical measurements, the proposed CNN model is able to reconstruct the 3D candle flame structure from projections captured in real time with credible accuracy and structural similarity. Compared with the ART algorithm for the FCT problem, the proposed CNN model shows obvious potential for rapid combustion monitoring and measurement, benefiting from its capability and high computational efficiency.

2. Principle

2.1 Three-dimensional rapid flame chemiluminescence tomography

A 3D projection model based on lens imaging theory has been presented to solve the 3D flame reconstruction problem [37], as shown in Fig. 1.

Fig. 1. 3D projection model of reconstruction.

The 3D distribution of the flame chemiluminescence species (e.g., CH* and C2*) to be measured is denoted as F. F is divided into N discrete voxels in the measurement volume. An imaging system is applied to record the 2D projection of F, which is the line-of-sight superposition of the chemiluminescence intensity. Each projection is recorded by one camera with p × q pixels. Neglecting self-absorption by the flame, the relation between the projection I and the reconstructed field F is given by Eq. (1).

$${I_j} = \sum\limits_{i = 1}^N {{w_{ij}}{f_i}\quad 1 \le j \le p \times q}$$
The voxels are represented by the single index i, and j is defined as the index of the projection pixel (line of sight). Ij is the projection value at pixel j, and fi refers to the intensity of voxel i. The weight factor wij can be considered the contribution coefficient of voxel i to pixel j.
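As an illustration of Eq. (1), the following NumPy sketch computes a projection as the weighted sum of voxel intensities; the toy dimensions, the random sparse weight matrix and the random phantom are placeholders chosen only for demonstration, not the geometry or data of this work.

```python
import numpy as np

# Toy illustration of Eq. (1): I_j = sum_i w_ij * f_i.
rng = np.random.default_rng(0)
N, pq = 1000, 100                                         # number of voxels, number of projection pixels (toy sizes)
W = rng.random((pq, N)) * (rng.random((pq, N)) < 0.01)    # weight factors w_ij (mostly zero, assumed sparsity)
f = rng.random(N)                                         # voxel intensities of the field F
I = W @ f                                                 # projection: one value per pixel j
```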

Equation (1) shows that the reconstruction problem can be considered as a set of linear equations, as shown in Eq. (2).

$$\left\{ \begin{array}{l} {w_{11}}{f_1} + {w_{21}}{f_2} + {w_{31}}{f_3} + \ldots + {w_{N1}}{f_N} = {I_1}\\ {w_{12}}{f_1} + {w_{22}}{f_2} + {w_{32}}{f_3} + \ldots + {w_{N2}}{f_N} = {I_2}\\ \qquad \vdots \\ {w_{1j}}{f_1} + {w_{2j}}{f_2} + {w_{3j}}{f_3} + \ldots + {w_{Nj}}{f_N} = {I_j} \end{array} \right.\quad 1 \le j \le p \times q$$
The 3D FCT problem can be solved with the algebraic reconstruction technique (ART), as described in Eq. (3).
$$f_i^{(h + 1)} = f_i^{(h)}+ {\alpha {w_{ij}}\frac{{{I_j} - \sum\limits_{i = 1}^N {{w_{ij}}f_i^{(h)}} }}{{\sum\limits_{i = 1}^N {{{({w_{ij}})}^2}} }}}\quad{1 \le j \le p \times q} $$
where α is a relaxation factor that improves convergence in the presence of sensor noise, and h is the iteration index. The reconstruction is considered converged once the absolute difference of the sum of f between successive iterations falls below the threshold value Δc. To increase the speed of data transfer from the hard disk to memory, memory-mapping technology was adopted.
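A minimal sketch of the ART update of Eq. (3) on a small synthetic system is given below; the relaxation factor, non-negativity constraint and threshold Δc are assumed values for illustration rather than the settings used in this work.

```python
import numpy as np

# Build a toy linear system I = W f as in Eq. (1) (placeholder data).
rng = np.random.default_rng(0)
N, pq = 1000, 100
W = rng.random((pq, N)) * (rng.random((pq, N)) < 0.01)
f_true = rng.random(N)
I = W @ f_true                                  # synthetic measured projections

# ART iterations of Eq. (3).
alpha, delta_c, max_iter = 0.2, 1e-6, 200       # assumed values
f_rec = np.zeros(N)                             # initial guess f^(0)
row_norm = (W ** 2).sum(axis=1)                 # sum_i (w_ij)^2 for each pixel j
prev_sum = f_rec.sum()
for h in range(max_iter):
    for j in range(pq):                         # sweep over all projection pixels
        if row_norm[j] == 0:
            continue
        residual = I[j] - W[j] @ f_rec          # I_j - sum_i w_ij f_i^(h)
        f_rec += alpha * W[j] * residual / row_norm[j]
    f_rec = np.clip(f_rec, 0.0, None)           # non-negativity (common, assumed here)
    if abs(f_rec.sum() - prev_sum) < delta_c:   # convergence criterion on the sum of f
        break
    prev_sum = f_rec.sum()
```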

2.2 Convolutional neural networks

In this work, we propose a CNN-based method for 3D rapid FCT. As one of the most popular classes of deep neural networks, CNN architectures achieve superior performance by featuring local receptive fields, weight sharing and pooling [66,67], which reduce the number of network parameters and increase the feature-extraction ability. Generally, a CNN model includes an input layer, multiple hidden layers, and an output layer, as shown in Fig. 2. The hidden layers are basically composed of convolutional layers, batch normalization (BN) layers (a widely adopted technique that enables faster and more stable training of neural networks [68]), activation layers, pooling layers and fully connected layers. The training of a CNN consists of a forward propagation process and a back propagation process.

Fig. 2. Demonstration of convolutional neural networks (CNN).

2.2.1 Forward propagation process

During the forward propagation process, the data from the input layer are first processed by multiple convolution kernels to extract the most salient features. Then, the results of the convolution operation, together with the biases, are fed into the following activation function, and a series of feature maps is ultimately acquired, as illustrated in Eq. (4),

$$\overrightarrow A = \mathrm{ReLU}(\overrightarrow M \otimes \overrightarrow z + \overrightarrow b )$$
where $\overrightarrow M$ represents the weight matrix of each convolution kernel, $\overrightarrow z$ is the input data, $\overrightarrow M \otimes \overrightarrow z$ denotes the convolution operation, $\overrightarrow b$ is the bias term, and $\overrightarrow A$ indicates the feature maps transferred to the next layer. The rectified linear unit (ReLU) [69] is utilized as the activation function in our 3D rapid FCT problem; compared with the sigmoid function, it allows faster and more effective training of neural network architectures on large and complex datasets [70], as shown in Eq. (5):
$$\mathrm{ReLU}(x) = \left\{ {\begin{array}{cc} x&{\textrm{if}\;x \ge 0}\\ 0&{\textrm{if}\;x < 0} \end{array}} \right.$$
The pooling layer following the convolutional layer downsamples the feature maps of the incoming data and passes them to the next convolutional block. It significantly reduces the spatial dimension of the representation and the number of internal weights [71].

The convolution and pooling operations are repeated, and the original input data are eventually transformed into a one-dimensional feature column vector. This feature vector is then multiplied by the coefficient matrix of the fully connected layer and added to the biases to produce the prediction.
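The forward-propagation steps of Eqs. (4)–(5) can be sketched as follows for a single channel and a single 3×3 kernel; the array sizes, the kernel values and the use of 'same' padding are assumptions made only for illustration.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
z = rng.random((100, 12))                     # one input layer (100 pixels x 12 views)
M = 0.1 * rng.standard_normal((3, 3))         # one 3x3 convolution kernel (random placeholder)
b = 0.05                                      # bias term (placeholder)

A = correlate2d(z, M, mode='same') + b        # convolution with bias, Eq. (4)
A = np.maximum(A, 0.0)                        # ReLU activation, Eq. (5)

h, w = A.shape                                # 2x2 max pooling (assumes even dimensions)
pooled = A.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```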

2.2.2 Back propagation process

The difference between the prediction and the ground truth is computed pixel by pixel and then propagated back from layer to layer. The error between the prediction and the ground truth is measured by a loss function. The fundamental principle of the backpropagation algorithm is to gradually adjust the parameters and thereby train the model through gradient descent. In the network training process, a learning rate is first set, and then the parameters of each layer are updated from back to front according to the gradient descent principle, so as to find the values of all parameters in the network that minimize the error between the expected value and the true value [79,80]. In this context, we define the loss function as the mean square error (MSE):

$${L_{MSE}} = \frac{1}{T}\sum\limits_{t = 1}^T {\sum\limits_{u = 1}^U {\sum\limits_{v = 1}^V {{{(\tilde{P}_{u,v}^{(t)} - P_{u,v}^{(t)})}^2}} } }$$
where U and V are the width and height of the prediction image, u and v represent the pixel indices of the prediction image, and T indicates the batch size of the CNN model. $\tilde{P}$ is the predicted image and P is the corresponding ground-truth image. The backpropagation algorithm [72] is applied to propagate the error back into the network.

Meanwhile, a CNN architecture has two types of parameters. The first type should be determined before batch training and consists of the total number of hidden layers, the number of convolutional kernels in each layer, the size of the convolutional kernels, and so on. These parameters specify the structure of the CNN model. The other type comprises the internal weights of the different convolutional kernels, which are adjusted automatically during the training of the neural network [72]. Here we use Adaptive Moment Estimation (Adam) [73] based optimization to optimize the weight parameters. This process can be considered batch training, which is repeated until the loss function converges. Once training is completed, the CNN structure parameters are determined and ready to be used to retrieve the images for rapid FCT directly.
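A single batch-training step combining the MSE loss of Eq. (6) with the Adam optimizer could look like the TensorFlow sketch below; `model`, `x_batch`, `y_batch` and the learning rate are assumed placeholders, not the authors' actual training code.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)   # learning rate is an assumed value

def train_step(model, x_batch, y_batch):
    with tf.GradientTape() as tape:
        y_pred = model(x_batch, training=True)                 # forward propagation
        loss = tf.reduce_mean(tf.square(y_pred - y_batch))     # MSE loss of Eq. (6) over the batch
    grads = tape.gradient(loss, model.trainable_variables)     # back propagation of the error
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```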

2.3 3D rapid FCT based on CNN

In this paper, we propose a 3D rapid FCT reconstruction system based on a CNN. The reconstruction procedure is composed of a training stage and a testing stage, as schematically shown in Fig. 3. For the numerical simulation, we generate 3D fields and projections from different views. In the training part, the projections as well as the slices of the simulated field, which serve as the corresponding ground-truth images of the projections, are fed into the CNN model together. After the weight parameters of the CNN model are optimized, the structure of the CNN is determined, and it is able to give predictions from the testing projections during the testing procedure. For the practical 3D FCT experiment, the multi-directional projections are captured simultaneously after camera calibration, and the 3D reconstruction results are retrieved directly by the ART algorithm. The CNN model is then involved in the system: in the training part, the projections are fed into the CNN model directly, while the slices of the ART reconstruction results, which are considered the ground truth, are sent to the CNN model. As a result, the CNN model is able to recover the slices of the field from the instantaneous projections.

Fig. 3. The flowchart of 3D FCT using convolutional neural networks. The blue part represents the training stage and the yellow part represents the testing stage.

3. Numerical simulation

In order to investigate the reliability and quality of the CNN for 3D rapid FCT problems, simulation studies have been conducted in this section. Twelve projection directions were implemented with 12 CCD cameras evenly spaced over 180°; the angle between two adjacent cameras was 15°, as illustrated in Fig. 4. The focal length of the lenses was 8 mm and the maximum aperture was 5.7 mm. Meanwhile, the object distance was set to 340 mm.

Fig. 4. Locations of CCD array in numerical simulation.

Three kinds of 3D distributed emission fields were generated artificially as standard samples for the numerical simulations. All of the phantoms cover a region of 5×5×5 mm3 divided into 50×50×50 voxels, so that the grid width was 0.1 mm. Phantom #1 attempts to mimic a 3D single conical distribution, as expressed in Eq. (7):

$$\left\{ {\begin{array}{cc} {{F_s}(x,y,z) = \frac{{{V_{\max }} \cdot {r_{x,y}}}}{{{l_2}}}}&{\textrm{if}\;{l_1} \ge 0,\;{l_1} \le z \le {l_2}}\\ {{F_s}(x,y,z) = \frac{{{V_{mp}} \cdot {r_{x,y}}}}{{{l_2}}}}&{\textrm{if}\;{l_1} < 0,\;z \le {l_2}} \end{array}} \right.\quad x,y,z \in [{1,50}]$$
The parameters in Eq. (7) were defined in Eq. (8).
$$\left\{ {\begin{array}{c} {{r_{x,y}} = \sqrt {{{(x - {c_x})}^2} + {{(y - {c_y})}^2}} }\\ {{l_1} = {k_1} - (z - 1) \cdot {g_1}}\\ {{l_2} = {k_2} - (z - 1) \cdot {g_2}} \end{array}} \right.$$
where Vmax is the maximum value of the phantom. cx and cy represent the center point of each layer in the x and y directions respectively. l1 and l2 refer to the distribution boundaries of each layer z. k1 and k2, as well as g1 and g2, control the distribution boundaries of layer z and the rate of change of the inclination of the phantoms. Moreover, Vmp is regarded as the maximum value of the layer for which l1 equals zero.

Phantom #2 features a bimodal simulated field composed of two fields similar to Phantom #1 but with different maximum values and change trends. Similarly, Phantom #3 is characterized by three artificial fields generated according to Phantom #1 with various maximum values and change rates. Additionally, all of the phantoms were smoothed with a Gaussian filter of size 5 × 5.

In order to create the training dataset for the CNN model, a total of 750 samples were generated artificially, covering the three phantom cases with randomly distributed Vmax, center points and shapes. One example of each of the three phantom cases is demonstrated in Figs. 5(a)–5(c). Meanwhile, the horizontal slices corresponding to the yellow dashed lines (1.5 mm above the bottom of the phantom) are shown in Figs. 5(d)–5(f).

Fig. 5. One example of the three phantom cases. (a)–(c) 3D distribution of the three phantom cases, (d)–(f) the horizontal slices corresponding to the yellow dashed lines in (a)–(c).

During the numerical simulation, 12 view projections of each phantom were obtained with a dimensionality of 100×100. To generate the dataset for the CNN model training process, one horizontal slice of the 3D simulated field and the corresponding data from the 12 projection views were used as one output layer and one input layer respectively, so that the size of the input layer was 100×12. Moreover, 20 horizontal slices at different heights of each phantom were selected, and consequently a total of 15000 input layers and output layers were created for the CNN model training process.
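One possible way to assemble such input/output pairs is sketched below with random placeholder arrays; the choice of slice heights and the mapping from a slice height to a projection row are assumptions, since the paper does not spell them out.

```python
import numpy as np

rng = np.random.default_rng(0)
n_phantoms, n_views, n_slices = 750, 12, 20
projections = rng.random((n_phantoms, n_views, 100, 100))   # simulated projections (placeholders)
fields = rng.random((n_phantoms, 50, 50, 50))                # simulated 3D phantoms (placeholders)

inputs, outputs = [], []
for p in range(n_phantoms):
    for s in range(n_slices):
        z = 15 + s                                    # assumed set of 20 slice heights
        inputs.append(projections[p, :, 2 * z, :].T)  # 100x12 input: one row per view (assumed mapping)
        outputs.append(fields[p, :, :, z])            # 50x50 ground-truth slice
inputs = np.asarray(inputs)      # shape (15000, 100, 12)
outputs = np.asarray(outputs)    # shape (15000, 50, 50)
```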

According to the dimensionality of the input layer, the design of the CNN architecture is illustrated in Fig. 6. The hidden layers contained four convolutional layers and one max pooling layer. We set up 8 kernels of size 3 × 3 in the first convolutional layer and 16 kernels of size 3 × 3 in the second one. For the last two layers, 32 kernels of size 3 × 3 and 64 kernels of size 2 × 2 were selected respectively. The max pooling operation was performed with 2×2 filters. After flattening and full connection, the feature vector was converted into a column of size 2500, which can then be easily reshaped into 50 × 50 as the expected output. Meanwhile, all convolution strides in this work were set to 1. Furthermore, random batching was adopted before each epoch during CNN model training: the 15000 training samples were divided randomly into 300 batches, which improves not only the convergence rate but also the prediction accuracy.
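A Keras sketch of the architecture described above is given below; the 'same' padding, the placement of the pooling layer after the last convolution, the ReLU activations and the omission of the batch normalization layers mentioned in Section 2.2 are assumptions, since the description does not fix these details.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(100, 12, 1)),                              # 12 projections x 100 pixels, one channel
    layers.Conv2D(8,  (3, 3), padding='same', activation='relu'),
    layers.Conv2D(16, (3, 3), padding='same', activation='relu'),
    layers.Conv2D(32, (3, 3), padding='same', activation='relu'),
    layers.Conv2D(64, (2, 2), padding='same', activation='relu'),
    layers.MaxPooling2D(pool_size=(2, 2)),                         # 100x12 -> 50x6
    layers.Flatten(),
    layers.Dense(2500),                                            # feature vector of length 2500
    layers.Reshape((50, 50)),                                      # one horizontal slice of the field
])
model.compile(optimizer='adam', loss='mse')
```

With this sketch, the 15000 training pairs divided into 300 random batches correspond to a batch size of 50, e.g. `model.fit(inputs[..., None], outputs, batch_size=50, epochs=300)`, where the epoch count is assumed from the convergence behavior reported in Fig. 7.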

Fig. 6. Architecture of CNN model for 3D FCT.

To quantitatively evaluate the precision of the CNN architecture, the root-mean-square error (RMSE) was used in the numerical simulation, as defined in Eq. (9).

$$RMSE = {[\frac{1}{{UV}}\sum\limits_{u = 1}^U {\sum\limits_{v = 1}^V {{{({{\tilde{P}}_{u,v}} - {P_{u,v}})}^2}} } ]^{\frac{1}{2}}}$$
Meanwhile, the structural similarity index (SSIM), shown in Eq. (10), was utilized to measure the structural similarity between two images; its best value is 1.
$$SSIM = \frac{{(2{\mu _P}{\mu _{\tilde{P}}} + {c_1})(2{\sigma _{P\tilde{P}}} + {c_2})}}{{(\mu _P^2 + \mu _{\tilde{P}}^2 + {c_1})(\sigma _P^2 + \sigma _{\tilde{P}}^2 + {c_2})}}$$
where ${\mu _P}$ and ${\mu _{\tilde{P}}}$ represent the means of $P$ and $\tilde{P}$ respectively, and ${\sigma _P}$ and ${\sigma _{\tilde{P}}}$ indicate the corresponding standard deviations. Moreover, ${\sigma _{P\tilde{P}}}$ is the covariance of $P$ and $\tilde{P}$, and c1 and c2 are regularization parameters [71],
$$\left\{ {\begin{array}{c} {{c_1} = {{({k_1}L)}^2}}\\ {{c_2} = {{({k_2}L)}^2}} \end{array}} \right.$$
where L is the dynamic range of the pixel values, which generally equals 255; k1 is 0.01 and k2 is 0.03 [74].
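For reference, the two metrics of Eqs. (9)–(10) can be evaluated as in the sketch below; the arrays are placeholders, and scikit-image's structural_similarity (whose default constants are k1 = 0.01 and k2 = 0.03) is used here as a stand-in for the authors' own SSIM implementation.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
P = rng.random((50, 50))                            # ground-truth slice (placeholder)
P_hat = P + 0.01 * rng.standard_normal((50, 50))    # predicted slice (placeholder)

rmse = np.sqrt(np.mean((P_hat - P) ** 2))           # RMSE, Eq. (9)
ssim = structural_similarity(P, P_hat, data_range=P.max() - P.min())   # SSIM, Eq. (10)
```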

The prediction accuracy of the CNN model is affected by several parameters, such as the number of convolution layers NL and the number of convolution kernels NK. The optimum value of each parameter is determined by modifying one parameter while keeping the others fixed and evaluating the loss function during the training process.

For the number of convolution layers, four values were tested: 3, 4, 5 and 6. The corresponding results are demonstrated in Fig. 7. Figure 7(a) indicates that the loss functions of the four configurations converge within 300 epochs, where one epoch means that all samples in the training dataset pass once through the forward and back propagation processes of the neural network [81]. NL=4 and NL=5 perform better than NL=3, with a smaller convergence value, while the loss function of NL=6 oscillates during training. Additionally, the training times of the four cases with different NL are compared in Fig. 7(b). It can be clearly seen that the training time of NL=6 (47.1 min) is almost 2.5 times that of NL=4 (19.9 min). Furthermore, the case of NL=5 takes almost twice as long to train as NL=4. Therefore, the number of convolution layers was set to 4.

Fig. 7. Evolution of four configurations of convolution layers of CNN model. (a) the performance of loss function with different number of layers, (b) the training time with different number of layers.

Similarly, a comparative trial on the number of convolution kernels NK was carried out. Note that there is still no clear standard for selecting NK in each layer of a neural network. Following many classical CNN architectures, such as LeNet [79], VGG Net [80], AlexNet [82], GoogLeNet [83] and ResNet [84], NK is generally chosen as a power of two (2^n). Four cases with different NK were tested. For Case 1, the arrangement of NK over the four convolution layers was 8 + 16 + 16 + 16; similarly, NK of Case 2 was 8 + 16 + 32 + 32, NK of Case 3 was 8 + 16 + 32 + 64 and NK of Case 4 was 16 + 32 + 64 + 128, all following the classical NK setting. The CNN model was trained with all other conditions identical, and the results are illustrated in Fig. 8. The performance of the loss function improves as the number of convolution kernels increases, whereas when NK is increased to the arrangement of Case 4 the loss function shows obvious oscillations. Meanwhile, the training time of Case 4 evidently increases, as the number of parameters in the neural network grows significantly. Finally, considering both the training time and the convergence of the loss function, the convolution kernel arrangement in this work is chosen as NK = 8 + 16 + 32 + 64.

Fig. 8. Evolution of four cases with different number of convolution kernels. (a) the performance of loss function with different number of convolution kernels, (b) the training time with different number of convolution kernels.

Based on the above simulation study, an optimal CNN architecture was designed. 60 testing samples were created to validate the feasibility of the proposed CNN model. Three representative ground-truth samples and the corresponding 3D results retrieved from the CNN model are demonstrated in Fig. 9. Meanwhile, the distributions of three horizontal slices (2.5 mm, 3.5 mm and 4 mm above the bottom of the field) of the samples in Fig. 9 are depicted in Fig. 10.

Fig. 9. Comparisons between three representative phantoms (the first row) and the corresponding prediction results via CNN model (the second row).

Fig. 10. Comparisons of three horizontal slices between three representative phantoms and the corresponding prediction results.

In addition to the qualitative comparison, five phantoms of each sample type (single field, bimodal field and trimodal field) in the testing dataset were randomly selected to evaluate the performance of the proposed CNN model for the 3D FCT system. Quantitative analysis was carried out according to Eq. (9) and Eq. (10) between the reconstruction results from the CNN model and the ground-truth samples. It can be seen from Table 1 that the proposed CNN model can retrieve the 3D distribution with credible accuracy and structural similarity, thus providing a useful technique for rapid flame measurement in combustion diagnostics.

Table 1. The verification results of CNN model

4. 3D instantaneous candle flame rapid measurement based on CNN model

An FCT system consisting of 12 color CCD cameras (Allied Vision Technologies, Guppy Pro F-125C: 1292 × 964, pixel size 3.75 µm) covering ∼180° was established, as shown in Fig. 11. Lenses (Computar M1214) with a focal length of 12 mm were used, and customized double-channel bandpass filters centered at 431.5 nm and 516.5 nm (full width at half maximum (FWHM) of 35 nm) were placed between the lenses and the CCDs to obtain the light intensity signals emitted by CH* and C2* of the candle flame [75,76]. All CCD cameras were connected to a computer and triggered at the same time to capture the projections from 12 directions simultaneously. Here, a candle flame was used as the test sample.

Fig. 11. FCT system with 12 CCD cameras.

Owing to inevitable errors introduced during the installation of the different optical elements, such as distortions and location deviations, both the accuracy and the resolution of the 3D candle flame reconstructions would decrease. Therefore, FCT system calibration [27] should be performed before practical measurements, and those errors can be compensated using the calibration results. All camera parameters after calibration are given in Table 2, where the Euler angles (ψ, θ, Φ) (“yaw”, “pitch” and “roll” respectively) and the three translations (Tx, Ty, Tz) uniquely define the orientation and position of a CCD camera in space. Im.dis denotes the imaging distance of the lens.

Table 2. Camera parameters results

A uniform machine-vision light source was utilized to perform the radiation intensity calibration of the multi-camera system. The multispectral separation algorithm, which was validated in our previous work [37], was adopted to extract the double-channel components. Four intensity cases were applied to calibrate the cameras. Figure 12 demonstrates the normalized radiation intensity difference of the double channels of the multi-camera system.

Fig. 12. Normalized radiation intensity difference of multi-camera. (a) CH* channel, (b) C2* channel.

According to the radiation intensity calibration of all cameras utilized in our 3D candle flame experiment, the intensity responses among these cameras were compensated before the experiments.

Before using the CNN model, the accuracy of the ART should be verified. Although our previous work [22] proved that the adopted algorithm can provide accurate reconstructions, the re-projection method was still adopted here to quantitatively estimate the accuracy of the adopted ART. Eleven projections were used to reconstruct the 3D flame structure, which was then used to predict the twelfth projection. If the reconstruction is correct, the estimate of the twelfth projection should be consistent with the measured one. The correlation coefficient (R) [85] between the estimate and the measurement of the twelfth projection was used to quantify the quality of the reconstructions. The definition of R between two projections X and Y is shown in Eq. (12), where (·) represents the dot product between two vectors and || ||2 is the 2-norm of a vector.

$$R(X,Y) = \frac{{(X\cdot {Y^T})}}{{{{||X ||}_2} \times {{||Y ||}_2}}}$$
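In NumPy, the correlation coefficient of Eq. (12) between a measured projection X and its re-projection Y, both flattened to vectors, can be computed as follows; the arrays below are placeholders rather than measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 100)).ravel()     # measured twelfth projection (placeholder)
Y = rng.random((100, 100)).ravel()     # re-projection from the 11-view reconstruction (placeholder)

R = (X @ Y) / (np.linalg.norm(X) * np.linalg.norm(Y))   # correlation coefficient, Eq. (12)
```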
The re-projection validation of 10 flames of each sample case in the training dataset was implemented, and one example of each sample case is shown in Fig. 13. Furthermore, the correlation coefficients shown in Fig. 13 are consistently larger than 0.98 for all 30 selected flames, suggesting good accuracy of the ART reconstruction. Thus, the reconstructions of the flames using the ART algorithm can be considered as the ground truth and used to train and test the proposed CNN model.

Fig. 13. Re-projection results and correlation coefficients of the twelfth camera.

The FCT system was applied to 3D rapid candle flame reconstruction in practical diagnostics after calibration. In order to generate the training dataset of a suitable CNN model for 3D practical flame measurement, three sample cases were utilized: Sample A was one candle flame, Sample B contained two candle flames and Sample C consisted of three candle flames. 1000 single-shot projections of each sample were captured. To generate the training and testing datasets, 250 consecutive flame projections of each sample (750 samples in total), taken from the 12 views with an exposure time of 15 µs at a rate of 10 fps, were randomly selected. Moreover, 20 horizontal slices at different heights of each reconstruction result, together with the corresponding positions in the projections, were selected, and therefore a total of 15000 (750×20) input layers and output layers were created for the CNN model training process. Based on the FCT reconstruction theory and the ART algorithm introduced in Section 2.1, the 3D distributions of the samples were retrieved directly from the instantaneous projections. Here, the reconstructed volume was divided into 100 × 100 × 120 voxels in the x, y and z directions with a spatial resolution of 0.55 mm. The averaged flame image based on all 250 instantaneous shots is placed next to the instantaneous image from the same camera view. The details are shown in Figs. 14–16.

Fig. 14. Instantaneous projections from 12 views and averaged flame projections of Sample A. (A movie is available online. See Visualization 1)

Fig. 15. Instantaneous projections from 12 views and averaged flame projections of Sample B. (A movie is available online. See Visualization 2)

Fig. 16. Instantaneous projections from 12 views and averaged flame projections of Sample C. (A movie is available online. See Visualization 3)

The projections and the corresponding FCT reconstruction results, which can be regarded as the ground-truth values, were fed into the CNN model to determine the appropriate CNN architecture for 3D rapid flame diagnostics. The training process was implemented in TensorFlow with GPU acceleration on a graphics card (NVIDIA GeForce GTX 1060); it took about 1.3 hours to optimize the CNN model. After an evaluation similar to that in the numerical simulation, the optimal CNN framework was selected as follows: four convolutional layers with the convolutional kernel arrangement NK = 8 + 16 + 32 + 32, together with one max pooling layer.

To investigate the feasibility of the CNN architecture, 5 single-shot projections were randomly selected from the remaining 750 projections of Samples A, B and C as the testing dataset. Figures 17–19 demonstrate one example of the 12-view projections of Sample A obtained at 40.5 s, of Sample B captured at 60.7 s and of Sample C obtained at 64.2 s. Using the quantitative flame chemiluminescence multispectral separation algorithm [37], the intensities corresponding to CH* and C2* can be separated, as depicted in Figs. 17(a), 18(a) and 19(a) and Figs. 17(b), 18(b) and 19(b) respectively.

Fig. 17. Projections of instantaneous candle flame from 12 directions of sample A at 40.5 s. (a) chemiluminescence emission intensity images of CH*, (b) chemiluminescence emission intensity images of C2*.

Fig. 18. Projections of instantaneous candle flame from 12 directions of sample B at 60.7 s. (a) chemiluminescence emission intensity images of CH*, (b) chemiluminescence emission intensity images of C2*.

Fig. 19. Projections of instantaneous candle flame from 12 directions of sample C at 64.2 s. (a) chemiluminescence emission intensity images of CH*, (b) chemiluminescence emission intensity images of C2*.

The 3D CH* and C2* concentration distributions of the candle flame in Fig. 17 recovered via the ART algorithm are shown in Figs. 20(a) and 20(c) respectively. Similarly, Figs. 20(e) and 20(g) illustrate the 3D CH* and C2* configuration of the flame in Fig. 18 obtained by the ART algorithm, and Figs. 20(i) and 20(k) illustrate the 3D CH* and C2* configuration of the flame in Fig. 19. Meanwhile, the corresponding 3D reconstruction results predicted by the proposed CNN model for 3D rapid flame measurement are depicted in Figs. 20(b), 20(d), 20(f), 20(h), 20(j) and 20(l). It can be clearly seen that the reconstruction results recovered by the proposed CNN model show flame distributions similar to those from the ART algorithm. The noise appearing in the CNN reconstruction results is caused by the noise in the ART reconstruction results: as shown in Fig. 20, there is still noise in the ART reconstructions, because the noise can hardly be completely removed by ART [43]. Since the ART reconstruction results were the ground-truth training data of the CNN model, this noise can hardly be avoided.

Fig. 20. Comparison of 3D candle flame reconstruction results with different methods. (a) and (c) the CH* and C2* concentration reconstruction results of sample A via ART algorithm, (b) and (d) the corresponding reconstruction results of sample A via CNN model, (e) and (g) the CH* and C2* concentration reconstruction results of sample B via ART algorithm, (f) and (h) the corresponding reconstruction results of sample B via CNN model, (i) and (k) the CH* and C2* concentration reconstruction results of sample C via ART algorithm, (j) and (l) the corresponding reconstruction results of sample C via CNN model.

Additionally, a quantitative comparison of five cases of each sample in the testing dataset was performed to evaluate the performance of the proposed CNN model for the 3D FCT system. According to Eq. (9) and Eq. (10), the RMSE and SSIM were calculated between the reconstruction results of the CH* and C2* concentrations from the ART algorithm and the CNN model, as listed in Table 3. Table 3 indicates that the structural complexity of the samples affects the reconstruction accuracy and structural similarity obtained with the CNN model.

Table 3. Evaluation results of 3D candle flames reconstruction based on CNN model

Additionally, to investigate the versatility of the proposed CNN method, two sets of comparative tests were implemented with different camera exposure times. The CNN model was trained with the data acquired with an exposure time of 15 µs and a camera lens focal length of 12 mm. Five single-shot projections of each sample, obtained from the 12 views with exposure times of 10 µs and 20 µs respectively, were considered as the testing datasets. The details are shown in Figs. 21 and 22. The RMSE and SSIM were calculated between the reconstruction results of the CH* and C2* concentrations from the ART algorithm and the CNN model, as listed in Tables 4 and 5. According to the results in Figs. 21–22 and Tables 4–5, both the qualitative and quantitative analyses indicate that the proposed CNN model is robust to different camera exposure times, taken here as an example.

Fig. 21. Comparison of 3D candle flame reconstruction results with different methods at 10 µs. (a) and (c) the CH* and C2* concentration reconstruction results of sample A via ART algorithm, (b) and (d) the corresponding reconstruction results of sample A via CNN model, (e) and (g) the CH* and C2* concentration reconstruction results of sample B via ART algorithm, (f) and (h) the corresponding reconstruction results of sample B via CNN model, (i) and (k) the CH* and C2* concentration reconstruction results of sample C via ART algorithm, (j) and (l) the corresponding reconstruction results of sample C via CNN model.

Fig. 22. Comparison of 3D candle flame reconstruction results with different methods at 20 µs. (a) and (c) the CH* and C2* concentration reconstruction results of sample A via ART algorithm, (b) and (d) the corresponding reconstruction results of sample A via CNN model, (e) and (g) the CH* and C2* concentration reconstruction results of sample B via ART algorithm, (f) and (h) the corresponding reconstruction results of sample B via CNN model, (i) and (k) the CH* and C2* concentration reconstruction results of sample C via ART algorithm, (j) and (l) the corresponding reconstruction results of sample C via CNN model.

Table 4. Evaluation results of 3D candle flames reconstruction based on CNN model at 10 µs

Table 5. Evaluation results of 3D candle flames reconstruction based on CNN model at 20µs

Furthermore, it is worth noting that the proposed CNN model shows a prominent advantage in terms of computational efficiency compared with the ART algorithm, the multiplicative ART (MART) [36,37] and the total variation regularization with projections-onto-convex-sets (TV-POCS) algorithm [43]. Three kinds of samples from the testing dataset were considered as the reconstruction fields. They were divided into 100×100×120 voxels in the x, y and z directions with a spatial resolution of 0.55 mm, and the iterations of the different methods used the same termination criterion. Table 6 shows the time consumption of the three kinds of flames for the different methods.

Table 6. Time consumption of 3D FCT reconstruction via different methods

In the current system, all algorithms were implemented on the same computer with an Intel Core i7-8750H CPU at 2.20 GHz. Last but not least, once the CNN model architecture is established, it can be applied continuously to process the data. Verified by both numerical simulations and practical measurements, it is believed that the CNN model-based reconstruction technique can promisingly be applied to real-time flame monitoring and testing in combustion diagnostics.

5. Summary

In summary, we proposed a 3D rapid FCT reconstruction system based on a CNN model for practical flame measurement. First, numerical simulations were performed by generating three cases of phantoms designed to mimic 3D conical single and multimodal flames. After evaluating the loss function and training time, the optimal CNN architecture was designed and verified by calculating the RMSE and SSIM of the reconstruction results with respect to the created phantoms. Furthermore, a real-time FCT system consisting of 12 color CCD cameras covering ∼180° was realized for 3D candle flame reconstruction. A multispectral separation algorithm was adopted to extract the CH* and C2* intensities of the candle flames. The ART algorithm was first implemented to retrieve the 3D distributions of the three cases of candle flames, which were treated as the ground-truth values for the CNN model training process. With a large set of training samples, the proposed CNN model is able to reconstruct the 3D flame structure of the CH* and C2* components from projections captured in real time with credible accuracy and structural similarity. Compared with conventional inversion algorithms for the FCT problem, the capability and high computational efficiency of the proposed CNN model in 3D rapid FCT measurements make rapid data processing and rapid monitoring possible. We believe the CNN model-based reconstruction technique can in the future be applied to combustion monitoring and to the measurement of crucial physical parameters.

References

1. M. Bozkurt, M. Fikri, and C. Schulz, “Investigation of the kinetics of OH* and CH* chemiluminescence in hydrocarbon oxidation behind reflected shock waves,” Appl. Phys. B: Lasers Opt. 107(3), 515–527 (2012). [CrossRef]  

2. V. N. Nori and J. M. Seitzman, “CH* chemiluminescence modeling for combustion diagnostics,” Proc. Combust. Inst. 32(1), 895–903 (2009). [CrossRef]  

3. J. B. Michael, P. Venkateswaran, J. D. Miller, M. N. Slipchenko, J. R. Gord, S. Roy, and T. R. Meyer, “100 kHz thousand-frame burst-mode planar imaging in turbulent flames,” Opt. Lett. 39(4), 739–742 (2014). [CrossRef]  

4. A. G. Gaydon and H. G. Wolfhard, Flames, their structure, radiation, and temperature (Halsted Press, 1979).

5. A. G. Gaydon, The spectroscopy of flames (Science & Business Media, 2012).

6. P. Nau, J. Krüger, A. Lackner, M. Letzgus, and A. Brockhinke, “On the quantification of OH*, CH*, and C2* chemiluminescence in flames,” Appl. Phys. B: Lasers Opt. 107(3), 551–559 (2012). [CrossRef]  

7. Y. K. Jeong, C. H. Jeon, and Y. J. Chang, “Evaluation of the equivalence ratio of the reacting mixture using intensity ratio of chemiluminescence in laminar partially premixed CH4-air flames,” Exp. Therm. Fluid Sci. 30(7), 663–673 (2006). [CrossRef]  

8. J. Kojima, Y. Ikeda, and T. Nakajima, “Basic aspects of OH(A), CH(A), and C2(d) chemiluminescence in the reaction zone of laminar methane-air premixed flames,” Combust. Flame 140(1-2), 34–45 (2005). [CrossRef]  

9. H. Ax and W. Meier, “Experimental investigation of the response of laminar premixed flames to equivalence ratio oscillations,” Combust. Flame 167, 172–183 (2016). [CrossRef]  

10. Y. Hardalupas and M. Orain, “Local measurements of the time-dependent heat release rate and equivalence ratio using chemiluminescent emission from a flame,” Combust. Flame 139(3), 188–207 (2004). [CrossRef]  

11. A. Hossain and Y. Nakamura, “A numerical study on the ability to predict the heat release rate using CH* chemiluminescence in non-sooting counterflow diffusion flames,” Combust. Flame 161(1), 162–172 (2014). [CrossRef]  

12. S. S. Shy, Y. C. Chen, C. H. Yang, C. C. Liu, and C. M. Huang, “Effects of H2 or CO2 addition, equivalence ratio, and turbulent straining on turbulent burning velocities for lean premixed methane combustion,” Combust. Flame 153(4), 510–524 (2008). [CrossRef]  

13. D. Sun, G. Lu, H. Zhou, Y. Yan, and S. Liu, “Quantitative assessment of flame stability through image processing and spectral analysis,” IEEE Trans. Instrum. Meas. 64(12), 3323–3333 (2015). [CrossRef]  

14. S. A. Farhat, W. B. Ng, and Y. Zhang, “Chemiluminescent emission measurement of a diffusion flame jet in a loudspeaker induced standing wave,” Fuel 84(14-15), 1760–1767 (2005). [CrossRef]  

15. A. Vandersickel, M. Hartmann, K. Vogel, Y. M. Wright, M. Fikri, R. Starke, C. Schulz, and K. Boulouchos, “The autoignition of practical fuels at HCCI conditions: high-pressure shock tube experiments and phenomenological modeling,” Fuel 93, 492–501 (2012). [CrossRef]  

16. Z. Li, B. Li, Z. Sun, X. Bai, and M. Aldén, “Turbulence and combustion interaction: High resolution local flame front structure visualization using simultaneous single-shot PLIF imaging of CH, OH, and CH2O in a piloted premixed jet flame,” Combust. Flame 157(6), 1087–1096 (2010). [CrossRef]  

17. J. Sjoholm, J. Rosell, B. Li, M. Richter, Z. Li, X. Bai, and M. Aldén, “Simultaneous visualization of OH, CH, CH2O and toluene PLIF in a methane jet flame with varying degrees of turbulence,” Proc. Combust. Inst. 34(1), 1475–1482 (2013). [CrossRef]  

18. J. Miller, S. Peltier, M. Slipchenko, J. Mance, T. Ombrello, J. Gord, and C. Carter, “Investigation of transient ignition processes in a model scramjet pilot cavity using simultaneous 100 kHz formaldehyde planar laser-induced fluorescence and CH* chemiluminescence imaging,” Proc. Combust. Inst. 36(2), 2865–2872 (2017). [CrossRef]  

19. A. Charogiannis and F. Beyrau, “Laser induced phosphorescence imaging for the investigation of evaporating liquid flows,” Exp. Fluids 54(5), 1518 (2013). [CrossRef]  

20. D. Most and A. Leipertz, “Simultaneous two-dimensional flow velocity and gas temperature measurements by use of a combined particle image velocimetry and filtered Rayleigh scattering technique,” Appl. Opt. 40(30), 5379–5387 (2001). [CrossRef]  

21. T. D. Upton, D. D. Verhoeven, and D. E. Hudgins, “High-resolution computed tomography of a turbulent reacting flow,” Exp. Fluids 50(1), 125–134 (2011). [CrossRef]  

22. Y. Jin, Y. Song, X. Qu, Z. Li, Y. Ji, and A. He, “Hybrid algorithm for three-dimensional flame chemiluminescence tomography based on imaging overexposure compensation,” Appl. Opt. 55(22), 5917–5923 (2016). [CrossRef]  

23. W. Cai, X. Li, F. Li, and L. Ma, “Numerical and experimental validation of a three-dimensional combustion diagnostic based on tomographic chemiluminescence,” Opt. Express 21(6), 7050–7064 (2013). [CrossRef]  

24. X. Li and L. Ma, “Volumetric imaging of turbulent reactive flows at kHz based on computed tomography,” Opt. Express 22(4), 4768–4778 (2014). [CrossRef]  

25. X. Li and L. Ma, “Capabilities and limitations of 3D flame measurements based on computed tomography of chemiluminescence,” Combust. Flame 162(3), 642–651 (2015). [CrossRef]  

26. Y. Ishino, K. Takeuchi, S. Shiga, and N. Ohiwa, “Measurement of Instantaneous 3D-Distribution of Local Burning Velocity on a Turbulent Premixed Flame by Non-Scanning 3D-CT Reconstruction,” 4th European Combustion Meeting, (2009), pp. 14–17.

27. T. Yu, H. Liu, and W. Cai, “On the quantification of spatial resolution for three-dimensional computed tomography of chemiluminescence,” Opt. Express 25(20), 24093–24108 (2017). [CrossRef]  

28. J. Floyd and A. M. Kempf, “Computed Tomography of Chemiluminescence (CTC): High resolution and instantaneous 3-D measurements of a Matrix burner,” Proc. Combust. Inst. 33(1), 751–758 (2011). [CrossRef]  

29. J. Floyd, P. Geipel, and A. Kempf, “Computed tomography of chemiluminescence (CTC): instantaneous 3D measurements and phantom studies of a turbulent opposed jet flame,” Combust. Flame 158(2), 376–391 (2011). [CrossRef]  

30. A. Unterberger, M. Röder, A. Giese, A. Al-Halbouni, A. Kempf, and K. Mohri, “3D instantaneous reconstruction of turbulent industrial flames using computed tomography of chemiluminescence (CTC),” J. Combust. 2018, 1–6 (2018). [CrossRef]  

31. H. Liu, J. Zhao, C. Shui, and W. Cai, “Reconstruction and analysis of non-premixed turbulent swirl flames based on kHz-rate multi-angular endoscopic volumetric tomography,” Aerosp. Sci. Technol. 91, 422–433 (2019). [CrossRef]  

32. C. Ruan, T. Yu, F. Chen, S. Wang, W. Cai, and X. Lu, “Experimental characterization of the spatiotemporal dynamics of a turbulent flame in a gas turbine model combustor using computed tomography of chemiluminescence,” Energy 170, 744–751 (2019). [CrossRef]  

33. T. Yu, H. Liu, J. Zhang, W. Cai, and F. Qi, “Toward real-time volumetric tomography for combustion diagnostics via dimension reduction,” Opt. Lett. 43(5), 1107–1110 (2018). [CrossRef]  

34. S. M. Wiseman, M. J. Brear, R. L. Gordon, and I. Marusic, “Measurements from flame chemiluminescence tomography of forced laminar premixed propane flames,” Combust. Flame 183, 1–14 (2017). [CrossRef]  

35. K. Wang, F. Li, H. Zeng, and X. Yu, “Three-dimensional flame measurements with large field angle,” Opt. Express 25(18), 21008–21018 (2017). [CrossRef]  

36. J. Wang, Y. Song, Z. Li, A. Kempf, and A. He, “Multi-directional 3D flame chemiluminescence tomography based on lens imaging,” Opt. Lett. 40(7), 1231–1234 (2015). [CrossRef]  

37. Y. Jin, Y. Song, X. Qu, Z. Li, Y. Ji, and A. He, “Three-dimensional dynamic measurements of CH* and C2* concentrations in flame using simultaneous chemiluminescence tomography,” Opt. Express 25(5), 4640–4654 (2017). [CrossRef]  

38. L. Ma and W. Cai, “Determination of the optimal regularization parameters in hyperspectral tomography,” Appl. Opt. 47(23), 4186–4192 (2008). [CrossRef]  

39. H. Zhou, S. Han, F. Sheng, and C. Zheng, “Visualization of three-dimensional temperature distributions in a large-scale furnace via regularized reconstruction from radiative energy images: numerical studies,” J. Quant. Spectrosc. Radiat. Transfer 72(4), 361–383 (2002). [CrossRef]  

40. H. Zhou, C. Lou, Q. Cheng, Z. Jiang, J. He, B. Huang, Z. Pei, and C. Lu, “Experimental investigations on visualization of three-dimensional temperature distributions in a large-scale pulverized-coal-fired boiler furnace,” Proc. Combust. Inst. 30(1), 1699–1706 (2005). [CrossRef]  

41. K. J. Daun, K. A. Thomson, F. Liu, and G. J. Smallwood, “Deconvolution of axisymmetric flame properties using Tikhonov regularization,” Appl. Opt. 45(19), 4638–4646 (2006). [CrossRef]  

42. J. Dai, T. Yu, L. Xu, and W. Cai, “On the regularization for nonlinear tomographic absorption spectroscopy,” J. Quant. Spectrosc. Radiat. Transfer 206, 233–241 (2018). [CrossRef]  

43. T. Yu and W. Cai, “Benchmark evaluation of inversion algorithms for tomographic absorption spectroscopy,” Appl. Opt. 56(8), 2183–2194 (2017). [CrossRef]  

44. D. Strong and T. Chan, “Edge-preserving and scale-dependent properties of total variation regularization,” Inv. Probl. 19(6), S165–S187 (2003). [CrossRef]  

45. K. J. Daun, S. J. Grauer, and P. J. Hadwin, “Chemical species tomography of turbulent flows: Discrete ill-posed and rank deficient problems and the use of prior information,” J. Quant. Spectrosc. Radiat. Transfer 172, 58–74 (2016). [CrossRef]  

46. S. J. Grauer, P. J. Hadwin, and K. J. Daun, “Improving chemical species tomography of turbulent flows using covariance estimation,” Appl. Opt. 56(13), 3900–3912 (2017). [CrossRef]  

47. S. J. Grauer, A. Unterberger, A. Rittler, K. J. Daun, A. M. Kempf, and K. Mohri, “Instantaneous 3D flame imaging by background-oriented schlieren tomography,” Combust. Flame 196, 284–299 (2018). [CrossRef]  

48. A. Unterberger, A. Kempf, and K. Mohri, “3D Evolutionary Reconstruction of Scalar Fields in the Gas-Phase,” Energies 12(11), 2075 (2019). [CrossRef]  

49. R. Hou, Y. Xia, Y. Bao, and X. Zhou, “Selection of regularization parameter for l1-regularized damage detection,” J. Sound Vib. 423, 141–160 (2018). [CrossRef]  

50. R. Guo, W. Zhang, R. Liu, C. Duan, and F. Wang, “Phase unwrapping in dual-wavelength digital holographic microscopy with total variation regularization,” Opt. Lett. 43(14), 3449–3452 (2018). [CrossRef]  

51. W. Lu, D. Lighter, and I. B. Styles, “L1-norm based nonlinear reconstruction improves quantitative accuracy of spectral diffuse optical tomography,” Biomed. Opt. Express 9(4), 1423–1444 (2018). [CrossRef]  

52. E. O. Åkesson and K. J. Daun, “Parameter selection methods for axisymmetric flame tomography through Tikhonov regularization,” Appl. Opt. 47(3), 407–416 (2008). [CrossRef]  

53. Y. Wen and R. H. Chan, “Parameter selection for total-variation-based image restoration using discrepancy principle,” IEEE Trans. Image Process. 21(4), 1770–1781 (2012). [CrossRef]  

54. X. Zhang, B. Javidi, and M. K. Ng, “Automatic regularization parameter selection by generalized cross-validation for total variational Poisson noise removal,” Appl. Opt. 56(9), D47–D51 (2017). [CrossRef]  

55. T. Liu, J. Rong, P. Gao, H. Pu, W. Zhang, X. Zhang, Z. Liang, and H. Lu, “Regularized reconstruction based on joint L1 and total variation for sparse-view cone-beam X-ray luminescence computed tomography,” Biomed. Opt. Express 10(1), 1–17 (2019). [CrossRef]  

56. J. Gu, Z. Wang, J. Kuen, L. Ma, A. Shahroudy, B. Shuai, T. Liu, X. Wang, G. Wang, J. Cai, and T. Chen, “Recent advances in convolutional neural networks,” Pattern Recogn. 77, 354–377 (2018). [CrossRef]  

57. G. Wang, “A Perspective on Deep Imaging,” IEEE Access 4, 8914–8924 (2016). [CrossRef]  

58. H. Chen, Y. Zhang, W. Zhang, P. Liao, K. Li, J. Zhou, and G. Wang, “Low-dose CT via convolutional neural network,” Biomed. Opt. Express 8(2), 679–694 (2017). [CrossRef]  

59. H. Chen, Y. Zhang, M. K. Kalra, F. Lin, Y. Chen, P. Liao, J. Zhou, and G. Wang, “Low-dose CT with a residual encoder-decoder convolutional neural network,” IEEE Trans. Med. Imaging 36(12), 2524–2535 (2017). [CrossRef]  

60. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018). [CrossRef]  

61. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018). [CrossRef]  

62. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017). [CrossRef]  

63. J. Huang, H. Liu, J. Dai, and W. Cai, “Reconstruction for limited-data nonlinear tomographic absorption spectroscopy via deep learning,” J. Quant. Spectrosc. Radiat. Transfer 218, 187–193 (2018). [CrossRef]  

64. T. Yu, W. Cai, and Y. Liu, “Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics,” Rev. Sci. Instrum. 89(4), 043101 (2018). [CrossRef]  

65. J. Huang, J. Zhao, and W. Cai, “Compressing convolutional neural networks using POD for the reconstruction of nonlinear tomographic absorption spectroscopy,” Comput. Phys. Commun. 241, 33–39 (2019). [CrossRef]  

66. S. M. Ahn, “Deep learning architectures and applications,” J. Intell. Inf. Syst. 22(2), 127–142 (2016). [CrossRef]  

67. S. Colburn, Y. Chu, E. Shilzerman, and A. Majumdar, “Optical frontend for a convolutional neural network,” Appl. Opt. 58(12), 3179–3186 (2019). [CrossRef]  

68. S. Santurkar, D. Tsipras, A. Ilyas, and A. Madry, “How does batch normalization help optimization?” in Advances in Neural Information Processing Systems 31 (NIPS 2018), pp. 2483–2493.

69. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

70. V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” Proc. Int. Conf. Mach. Learn., 807–814 (2010).

71. H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018). [CrossRef]  

72. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” in Cognitive modeling, T. A. Polk and C. M. Seifert, eds. (The MIT press, 1988).

73. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

74. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]  

75. M. C. Thomsen, A. Fuentes, R. Demarco, C. Volkwein, J.-L. Consalvi, and P. Reszka, “Soot measurements in candle flames,” Exp. Therm. Fluid Sci. 82, 116–123 (2017). [CrossRef]  

76. D. H. Cuttler and N. S. Girgis, “The use of high speed photography as a diagnostic tool for advanced combustion research in S.I. engines,” in 16th International Congress on High Speed Photography and Photonics (1985), pp. 316–323.

77. T. Qiu, M. Liu, G. Zhou, L. Wang, and K. Gao, “An unsupervised classification method for flame image of pulverized coal combustion based on convolutional auto-encoder and hidden Markov model,” Energies 12(13), 2585 (2019). [CrossRef]  

78. A. J. Huang, H. Liu, and W. Cai, “Online in situ prediction of 3-D flame evolution from its history 2-D projections via deep learning,” J. Fluid Mech. 875, R2 (2019). [CrossRef]  

79. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86(11), 2278–2324 (1998). [CrossRef]  

80. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556 (2014).

81. G. E. Hinton, S. Osindero, and Y. W. Teh, “A Fast Learning Algorithm for Deep Belief Nets,” Neural Comput. 18(7), 1527–1554 (2006). [CrossRef]  

82. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems (2012), pp. 1097–1105.

83. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.

84. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.

85. L. Shi, Y. Liu, and J. Yu, “PIV measurement of separated flow over a blunt plate with different chord-to-thickness ratios,” J. Fluids Struct. 26(4), 644–657 (2010). [CrossRef]  

Supplementary Material (3)

Visualization 1: 3D radical concentration distribution of CH* and C2* of a candle flame (Sample A) at different moments.
Visualization 2: 3D radical concentration distribution of CH* and C2* of a candle flame (Sample B) at different moments.
Visualization 3: 3D radical concentration distribution of CH* and C2* of a candle flame (Sample C) at different moments.



Figures (22)

Fig. 1. 3D projection model of the reconstruction.
Fig. 2. Demonstration of convolutional neural networks (CNN).
Fig. 3. Flowchart of 3D FCT using convolutional neural networks. The blue part represents the training stage and the yellow part represents the testing stage.
Fig. 4. Locations of the CCD array in the numerical simulation.
Fig. 5. One example of the three phantom cases. (a)–(c) 3D distributions of the three phantom cases; (d)–(f) the horizontal slices indicated by the yellow dashed lines in (a)–(c).
Fig. 6. Architecture of the CNN model for 3D FCT.
Fig. 7. Evolution of four configurations of convolution layers of the CNN model. (a) Performance of the loss function with different numbers of layers; (b) training time with different numbers of layers.
Fig. 8. Evolution of four cases with different numbers of convolution kernels. (a) Performance of the loss function with different numbers of convolution kernels; (b) training time with different numbers of convolution kernels.
Fig. 9. Comparisons between three representative phantoms (first row) and the corresponding prediction results of the CNN model (second row).
Fig. 10. Comparisons of three horizontal slices between the three representative phantoms and the corresponding prediction results.
Fig. 11. FCT system with 12 CCD cameras.
Fig. 12. Normalized radiation intensity difference of the multi-camera setup. (a) CH* channel; (b) C2* channel.
Fig. 13. Re-projection results and correlation coefficients of the twelfth camera.
Fig. 14. Instantaneous projections from 12 views and averaged flame projections of Sample A. (A movie is available online; see Visualization 1.)
Fig. 15. Instantaneous projections from 12 views and averaged flame projections of Sample B. (A movie is available online; see Visualization 2.)
Fig. 16. Instantaneous projections from 12 views and averaged flame projections of Sample C. (A movie is available online; see Visualization 3.)
Fig. 17. Projections of the instantaneous candle flame from 12 directions for Sample A at 40.5 s. (a) Chemiluminescence emission intensity images of CH*; (b) chemiluminescence emission intensity images of C2*.
Fig. 18. Projections of the instantaneous candle flame from 12 directions for Sample B at 60.7 s. (a) Chemiluminescence emission intensity images of CH*; (b) chemiluminescence emission intensity images of C2*.
Fig. 19. Projections of the instantaneous candle flame from 12 directions for Sample C at 64.2 s. (a) Chemiluminescence emission intensity images of CH*; (b) chemiluminescence emission intensity images of C2*.
Fig. 20. Comparison of 3D candle flame reconstruction results with different methods. (a) and (c) CH* and C2* concentration reconstruction results of Sample A via the ART algorithm; (b) and (d) the corresponding reconstruction results of Sample A via the CNN model; (e) and (g) CH* and C2* concentration reconstruction results of Sample B via the ART algorithm; (f) and (h) the corresponding reconstruction results of Sample B via the CNN model; (i) and (k) CH* and C2* concentration reconstruction results of Sample C via the ART algorithm; (j) and (l) the corresponding reconstruction results of Sample C via the CNN model.
Fig. 21. Comparison of 3D candle flame reconstruction results with different methods at 10 µs; panel layout as in Fig. 20 (ART results for Samples A–C in (a)/(c), (e)/(g), (i)/(k); corresponding CNN results in (b)/(d), (f)/(h), (j)/(l)).
Fig. 22. Comparison of 3D candle flame reconstruction results with different methods at 20 µs; panel layout as in Fig. 20.

Tables (6)

Table 1. Verification results of the CNN model
Table 2. Camera parameter results
Table 3. Evaluation results of 3D candle flame reconstructions based on the CNN model
Table 4. Evaluation results of 3D candle flame reconstructions based on the CNN model at 10 µs
Table 5. Evaluation results of 3D candle flame reconstructions based on the CNN model at 20 µs
Table 6. Time consumption of 3D FCT reconstruction via different methods

Equations (12)


$$ I_j = \sum_{i=1}^{N} w_{ij} f_i, \qquad 1 \le j \le p \times q \tag{1} $$

$$ \begin{cases} w_{11} f_{11} + w_{21} f_{21} + w_{31} f_{31} + \cdots + w_{N1} f_{N1} = I_1 \\ w_{12} f_{12} + w_{22} f_{22} + w_{32} f_{32} + \cdots + w_{N2} f_{N2} = I_2 \\ \qquad \vdots \\ w_{1j} f_{1j} + w_{2j} f_{2j} + w_{3j} f_{3j} + \cdots + w_{Nj} f_{Nj} = I_j \end{cases} \qquad 1 \le j \le p \times q \tag{2} $$

$$ f_i^{(h+1)} = f_i^{(h)} + \alpha \, w_{ij} \, \frac{I_j - \sum_{i=1}^{N} w_{ij} f_i^{(h)}}{\sum_{i=1}^{N} (w_{ij})^2}, \qquad 1 \le j \le p \times q \tag{3} $$
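Equations (1)–(3) define the discretized projection model and the ART update that serves as the iterative baseline in this work. Below is a minimal NumPy sketch of one ART sweep under the same notation; the weight-matrix layout, the relaxation factor alpha, and the final non-negativity clipping are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def art_sweep(W, I, f, alpha=0.1):
    """One ART sweep over all rays, following Eq. (3).

    W : (num_rays, num_voxels) array, W[j, i] = w_ij, contribution of voxel i to ray j
    I : (num_rays,) measured line-of-sight projections
    f : (num_voxels,) current estimate of the emission field
    """
    for j in range(W.shape[0]):
        w_j = W[j]                           # weights of ray j over all voxels
        denom = np.dot(w_j, w_j)             # sum_i (w_ij)^2
        if denom == 0:
            continue                         # ray does not intersect the domain
        residual = I[j] - np.dot(w_j, f)     # I_j - sum_i w_ij f_i^(h)
        f = f + alpha * w_j * residual / denom
    return np.clip(f, 0.0, None)             # chemiluminescence emission is non-negative

# Hypothetical usage on synthetic data (sizes chosen only for illustration)
rng = np.random.default_rng(0)
W = rng.random((200, 500))
f_true = rng.random(500)
I = W @ f_true
f_est = np.zeros(500)
for _ in range(20):
    f_est = art_sweep(W, I, f_est, alpha=0.2)
```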
$$ A = \mathrm{ReLU}(Mz + b) \tag{4} $$

$$ \mathrm{ReLU}(x) = \begin{cases} x & \text{if } x \ge 0 \\ 0 & \text{if } x < 0 \end{cases} \tag{5} $$

$$ L_{\mathrm{MSE}} = \frac{1}{T} \sum_{t=1}^{T} \sum_{u=1}^{U} \sum_{v=1}^{V} \left( \tilde{P}_{u,v} - P_{u,v} \right)^2 \tag{6} $$
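Equations (4)–(6) describe a fully connected layer with ReLU activation and the mean-squared-error loss used to train the network. A minimal NumPy sketch of these three operations, with tensor shapes assumed purely for illustration:

```python
import numpy as np

def relu(x):
    """Eq. (5): ReLU(x) = x for x >= 0 and 0 otherwise."""
    return np.maximum(x, 0.0)

def dense_relu(M, z, b):
    """Eq. (4): A = ReLU(Mz + b) for one fully connected layer."""
    return relu(M @ z + b)

def mse_loss(P_pred, P_true):
    """Eq. (6): average over T samples of the summed squared error on U x V fields."""
    P_pred = np.asarray(P_pred, dtype=float)   # shape (T, U, V)
    P_true = np.asarray(P_true, dtype=float)
    return np.mean(np.sum((P_pred - P_true) ** 2, axis=(1, 2)))
```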
$$ \begin{cases} F_s(x,y,z) = V_{\max} \, \dfrac{r_{x,y}}{l_2} & \text{if } l_1 \ge 0, \ l_1 \le z \le l_2 \\[4pt] F_s(x,y,z) = V_{mp} \, \dfrac{r_{x,y}}{l_2} & \text{if } l_1 < 0, \ z \le l_2 \end{cases} \qquad x, y, z \in [1, 50] \tag{7} $$

$$ \begin{cases} r_{x,y} = \sqrt{(x - c_x)^2 + (y - c_y)^2} \\ l_1 = k_1 (z - 1) - g_1 \\ l_2 = k_2 (z - 1) - g_2 \end{cases} \tag{8} $$
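Equations (7)–(8) define the conical flame phantoms on a 50 × 50 × 50 grid used to train and test the network. The sketch below follows that definition; since some operators are lost in the rendered equations, the signs inside l1 and l2 and every parameter value (cx, cy, Vmax, Vmp, k1, g1, k2, g2) are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def conical_phantom(size=50, cx=25.0, cy=25.0, v_max=1.0, v_mp=0.6,
                    k1=0.5, g1=5.0, k2=0.8, g2=2.0):
    """Generate a conical phantom F_s(x, y, z) following Eqs. (7)-(8).

    cx, cy locate the cone axis; k1, g1, k2, g2 set the boundaries l1 and l2.
    All default values are placeholders, not the values used in the paper.
    """
    axis = np.arange(1, size + 1)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    r = np.sqrt((x - cx) ** 2 + (y - cy) ** 2)    # Eq. (8): radial distance from the axis
    l1 = k1 * (z - 1) - g1                        # lower boundary (sign assumed)
    l2 = k2 * (z - 1) - g2                        # upper boundary (sign assumed)

    ratio = np.divide(r, l2, out=np.zeros_like(r), where=(l2 != 0))
    F = np.zeros_like(r)
    mask_a = (l1 >= 0) & (z >= l1) & (z <= l2)    # first branch of Eq. (7)
    mask_b = (l1 < 0) & (z <= l2)                 # second branch of Eq. (7)
    F[mask_a] = v_max * ratio[mask_a]
    F[mask_b] = v_mp * ratio[mask_b]
    return F

phantom = conical_phantom()
print(phantom.shape)  # (50, 50, 50)
```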
$$ \mathrm{RMSE} = \left[ \frac{1}{UV} \sum_{u=1}^{U} \sum_{v=1}^{V} \left( \tilde{P}_{u,v} - P_{u,v} \right)^2 \right]^{1/2} \tag{9} $$

$$ \mathrm{SSIM} = \frac{\left( 2 \mu_P \mu_{\tilde{P}} + c_1 \right) \left( 2 \sigma_{P\tilde{P}} + c_2 \right)}{\left( \mu_P^2 + \mu_{\tilde{P}}^2 + c_1 \right) \left( \sigma_P^2 + \sigma_{\tilde{P}}^2 + c_2 \right)} \tag{10} $$

$$ c_1 = (k_1 L)^2, \qquad c_2 = (k_2 L)^2 \tag{11} $$

$$ R(X, Y) = \frac{X Y^{T}}{\|X\|_2 \times \|Y\|_2} \tag{12} $$
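Equations (9)–(12) give the evaluation metrics used to compare reconstructions: RMSE, SSIM with the constants of Eq. (11) (k1 = 0.01 and k2 = 0.03 are the conventional defaults of Ref. [74], with L the data range), and the normalized correlation coefficient. A minimal sketch assuming the global, single-window form of SSIM rather than a sliding-window implementation:

```python
import numpy as np

def rmse(P_pred, P_true):
    """Eq. (9): root-mean-square error between a reconstruction and a reference field."""
    diff = np.asarray(P_pred, dtype=float) - np.asarray(P_true, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def ssim_global(P, P_tilde, L=1.0, k1=0.01, k2=0.03):
    """Eqs. (10)-(11): single-window SSIM; k1, k2 follow Ref. [74]."""
    P = np.asarray(P, dtype=float)
    P_tilde = np.asarray(P_tilde, dtype=float)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_p, mu_q = P.mean(), P_tilde.mean()
    var_p, var_q = P.var(), P_tilde.var()
    cov_pq = np.mean((P - mu_p) * (P_tilde - mu_q))
    num = (2 * mu_p * mu_q + c1) * (2 * cov_pq + c2)
    den = (mu_p ** 2 + mu_q ** 2 + c1) * (var_p + var_q + c2)
    return float(num / den)

def correlation(X, Y):
    """Eq. (12): R(X, Y) = X.Y / (||X||_2 ||Y||_2), applied to flattened fields."""
    x, y = np.ravel(X).astype(float), np.ravel(Y).astype(float)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```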