
Implementation of energy-efficient convolutional neural networks based on kernel-pruned silicon photonics

Abstract

Silicon-based optical neural networks offer the prospect of high-performance computing on integrated photonic circuits. However, the scalability of on-chip deep optical networks is restricted by limited energy and space resources. Here, we present a silicon-based photonic convolutional neural network (PCNN) combined with kernel pruning, in which the optical convolutional computing core of the PCNN is a tunable micro-ring weight bank. Our numerical simulations demonstrate the effect of weight mapping accuracy on PCNN performance: the performance of the PCNN decreases significantly when the weight mapping accuracy is less than 4.3 bits. Additionally, the experimental demonstration shows that the accuracy of the PCNN on the MNIST dataset suffers only a slight loss compared to the original CNN even when 93.75% of the convolutional kernels are pruned. With kernel pruning, removing one convolutional kernel saves about 202.3 mW, and the overall saving scales linearly with the number of kernels removed. The methodology is scalable and provides a feasible solution for implementing faster and more energy-efficient large-scale optical convolutional neural networks on photonic integrated circuits.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Artificial neural networks have emerged as a powerful tool for information processing, and ever deeper and larger-scale models are being proposed to tackle increasingly complex tasks. As a typical representative of deep neural network architectures, convolutional neural networks (CNNs) have become one of the most important models for large-scale image processing, such as image classification and segmentation [1–3], target detection [4,5], and face recognition [6,7]. The excellent learning ability of deep learning models is attributed to their complex network structures and huge numbers of parameters. However, delivering such ultra-high computational power requires ever greater energy consumption and cost [8]. This trend has motivated research into deep learning accelerators. Although novel microelectronic hardware architectures dedicated to deep learning, such as GPUs [9], ASICs [10], and FPGAs [11], are developing rapidly, they have not escaped the inherent bottlenecks of electronic processing and information transfer. Photonics has emerged as a promising route to the intensive linear computation in neural networks, owing to its enormous advantages over electronics in parallel computing, ultra-low energy consumption, and high transmission speed. Moreover, silicon-based photonic integrated circuits compatible with complementary metal-oxide-semiconductor (CMOS) technology are the subject of intense interest and research, because they promise the high arithmetic power and low power consumption required by modern deep learning. After decades of research, many high-performance computing results based on photonic integrated circuits have been achieved, for instance, programmable silicon photonic neural networks using reconfigurable optical devices such as coherent Mach-Zehnder interferometers [12,13] and microring resonator arrays [14–16], deep neural networks based on free-space diffraction [17–19], optical vector convolution accelerators based on an integrated microcomb source combined with multiplexing techniques [20,21], and all-optical neurosynaptic networks built on phase change materials (PCMs) [22].

The number of layers in convolutional neural networks has grown 25-fold over the past 30 years, from the original 6-layer structure [23], which included two convolutional layers, to 152 layers [24]. Although much research has demonstrated the feasibility of optical convolutional neural network architectures [15,20,21,25–27], the implementation of large-scale photonic convolutional neural networks remains extremely challenging, for three main reasons:

  • There is a limit to the scale of neural network models that can be implemented on integrated optical platforms due to the loss of light propagation in the optical path.
  • On-chip space resources are insufficient, so it is difficult to integrate too many optoelectronic devices.
  • Excessive numbers of integrated optical components make system tuning more complicated.

Pruning is a typical model compression method that reduces the size of deep learning models by removing redundant parameters in modern convolutional neural networks [28]. Model compression is therefore of great value in the field of optical neuromorphic computing.

In this paper, a wavelength-division-multiplexed optical convolutional neural network based on convolution kernel pruning is demonstrated, in which parallel microring resonators (MRRs) are central to the implementation of the convolution operations. Exploiting the physical mapping between MRR arrays and convolutional kernels, we prune the convolutional kernels to compress the scale and number of devices in the optical convolutional neural network architecture. To the best of our knowledge, this is the first work to apply the convolution kernel pruning method to an optical convolutional neural network. The results show that the PCNN with a 31.25% kernel pruning rate achieves the highest prediction accuracy of 98.44% on the MNIST dataset. We also investigate the energy savings of the pruned optical convolutional neural network. In addition, we use numerical simulation to demonstrate the effect of weight expression accuracy on PCNN performance; the simulation results offer important guidance for practical MRR accuracy control. Although we focus on microring weight-bank-based convolutional neural networks, this work can be extended to other large-scale optical neural networks, contributing to the large-scale integration of silicon-based photonic chips.

This paper is organized as follows. In Sec. 2, we investigate the propagation characteristics of parallel microring resonators (MRRs) and propose a silicon-based photonic convolutional neural network (PCNN) combined with kernel pruning. In Sec. 3, we demonstrate the pruned PCNN, analyze the change in energy consumption, and use numerical simulation to verify the influence of weight expression accuracy on PCNN performance. Finally, we give a brief conclusion in Sec. 4.

2. Method and architecture

2.1 Convolutional neural network

Deep convolutional neural networks (DCNNs) are constructed from several convolutional layers, pooling layers, and fully connected layers. Compared to fully connected neural networks, CNNs can learn features of the target image with far fewer weight connections. CNNs obtain weight sharing and connection sparsity from the convolutional kernels in the convolutional layers: each kernel processes the data in one receptive field (the region of the input image covered by the kernel at one position of its sliding window) at a time, and the same kernel is applied to all receptive fields. The convolutional calculations are the most intensive operations, accounting for 86% to 94% of the computing duty of a CNN [29]. In a convolutional layer, the kernel window slides sequentially over the input, performing a convolution on each receptive field and producing an output value that becomes the element at the corresponding spatial location of the output feature map. When the sliding convolution is finished, a complete output feature map has been generated.

Figure 1 illustrates the 2D image convolution operation for the case of one convolution kernel. Assume the images and kernels are square. The input image is represented as a feature map group with D feature maps of dimensionality $\mathrm{H} \times \mathrm{W}$; each element $P_{i, j}$ indicates the intensity of a pixel. The convolutional kernel is expressed by a tensor of dimensionality $\mathrm{D} \times \mathrm{KH} \times \mathrm{KW}$. In this particular case, $\mathrm{H}=\mathrm{W}=3$, $\mathrm{KH}=\mathrm{KW}=2$, and the patching stride is set to 1. The input image is divided into four kernel-sized sub-images, and the kernel performs a convolution calculation with each sub-image in turn, producing the output feature map. As shown at the bottom of Fig. 1, this feature extraction process for a complete feature map can be converted into a vector-matrix multiplication $O=F * P$ [30]. The matrix multiplication essentially computes, in parallel, the dot products between the convolution kernel vector and the receptive-field vectors of the input image.
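To make this "im2col" view concrete, the following NumPy sketch reproduces the single-channel Fig. 1 example (H = W = 3, KH = KW = 2, stride 1); the function name and the numeric values are illustrative, not taken from the original work.

```python
import numpy as np

def im2col_conv(image, kernel):
    """Single-channel convolution via the vector-matrix product O = F * P."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1  # Eqs. (3)-(4)
    # Each column of P is one vectorized receptive field (patch).
    P = np.stack([image[i:i + kh, j:j + kw].ravel()
                  for i in range(oh) for j in range(ow)], axis=1)
    F = kernel.ravel()                 # kernel flattened into a vector
    return (F @ P).reshape(oh, ow)     # all dot products at once, Eq. (1)

image = np.arange(9.0).reshape(3, 3)           # H = W = 3, as in Fig. 1
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])   # KH = KW = 2
print(im2col_conv(image, kernel))              # 2 x 2 output feature map
```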

Fig. 1. Illustration of the convolution operation for one kernel within a convolutional layer. At the top of the figure, an input image is represented as a feature map group with D feature maps of dimensionality $\mathrm{H} \times \mathrm{W}$, where H, W, and D are the height, width, and number of channels of the image, respectively. Each element $P_{i, j}$ indicates the intensity of a pixel. The convolutional kernel is expressed by a feature map group with D feature maps of dimensionality $\mathrm{KH} \times \mathrm{KW}$, where each element $F_{i, j}$ is a real number. The input tensor is divided into kernel-sized patches with a fixed stride, and the number of patches equals $\mathrm{OH} \times \mathrm{OW}$. The convolution kernel slides over these patches sequentially to perform the convolution operation and obtain the output. The bottom of the figure shows the single matrix-vector multiplication that is equivalent to the above convolution process, where the kernel is transformed into a vector F with $\mathrm{D} \times \mathrm{KH} \times \mathrm{KW}$ elements and the image is transformed into a matrix P of dimensionality $(\mathrm{D} \times \mathrm{KH} \times \mathrm{KW}) \times (\mathrm{OH} \times \mathrm{OW})$.

Each output element $O_{i, j}$ is the result of multiply-accumulate (MAC) operations:

$$O_{i, j}=\sum_{k=1}^{K W} \sum_{l=1}^{K H} F_{k, l} P_{i+k-1, j+l-1}$$

When the feature map contains multiple channels (say, D channels), the convolution of the input data with the convolution kernel is performed channel by channel, and the results are summed to obtain the output value. The output feature map elements can then be expressed as:

$$O_{i, j}=\sum_{d=1}^D \sum_{k=1}^{K W} \sum_{l=1}^{K H} F_{d, k, l} P_{d, i+k-1, j+l-1}$$

The output feature map size is determined by both the input image size and the kernel size:

$$O W=W-K W+1$$
$$O H=H-K H+1$$

The total number of multiply-accumulate operations performed by each convolution kernel in layer i is:

$$M A C s=D_i \times K W \times K H \times O H \times O W$$
where ${D_i}$ denotes the number of input channels of layer i.

A single convolutional kernel can capture only one specific feature from the input data. Thus, the number of convolutional kernels grows with the number of feature descriptions to be extracted, which in turn means heavier computing tasks.
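As a quick check of Eq. (5), the helper below counts the MACs per kernel for stride-1, unpadded convolution. The 28 x 28 single-channel input matches the MNIST images used later; the 3 x 3 kernel size is only an illustrative assumption.

```python
def conv_macs(d_in, kh, kw, h, w):
    """Multiply-accumulate count per kernel in layer i, following Eq. (5)."""
    oh, ow = h - kh + 1, w - kw + 1     # output size, Eqs. (3)-(4)
    return d_in * kh * kw * oh * ow

# A single-channel 28 x 28 image with a 3 x 3 kernel:
print(conv_macs(d_in=1, kh=3, kw=3, h=28, w=28))   # 6084 MACs per kernel
```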

2.2 Micro-ring resonator

Microring resonators (MRRs) based on silicon-on-insulator (SOI) platforms play an essential role in the success of our optical convolutional neural network and can be fabricated with proven CMOS manufacturing technology [31–33]. The generic configuration of the add-drop microring resonator is illustrated in Fig. 2(a): a looped waveguide coupled to two parallel straight waveguides, with four ports called the input port, through port, drop port, and add port. ${k_1}$ and ${k_2}$ are the coupling coefficients between the ring waveguide and the upper and lower straight waveguides, respectively, while ${t_1}$ and ${t_2}$ are the corresponding transmission coefficients of light in the upper and lower straight waveguides. In the case of lossless coupling, $\left |k_1\right |^2+\left |t_1\right |^2=1$ and $\left |k_2\right |^2+\left |t_2\right |^2=1$.

Fig. 2. (a) The general configuration of an add-drop microring resonator. (b) The propagating optical field within the add-drop MRR for wavelengths in the range 1.46557 $\mu \mathrm{m}$ to 1.56557 $\mu \mathrm{m}$: light enters the MRR from the input port; part of it propagates forward along the bus waveguide and exits from the through port, while the rest couples into the ring waveguide, circulates, and exits from the drop port.

The optical response of the MRR at different wavelengths was simulated using Lumerical MODE, as shown in Fig. 2(b). Light enters the bus waveguide from the input port and propagates forward. At the coupling region between the straight waveguide and the ring waveguide, the input light is divided into two parts: one part continues to propagate forward, and the other is transferred into the ring cavity by evanescent field coupling.

An input optical wave whose wavelength satisfies the resonance condition of Eq. (6) interferes constructively with itself on each round trip of the ring, building up resonance [34]. Such a wavelength is called a resonant wavelength of the MRR. It is worth noting that the resonant wavelength is not unique but periodic.

$$m \lambda=2 \pi r n_{e f f}, \quad m=1,2,3 \ldots$$
where m is the resonance order, and the parameters $r$, $\lambda$, and $n_{\text{eff}}$ represent the radius of the ring waveguide, the wavelength of light, and the effective refractive index of the waveguide mode, respectively.

A phase shift ${\theta }$ is accumulated as light propagates around the ring waveguide, which changes the transmitted optical power [34].

$$\theta=\beta L=\frac{4 \pi^2 n_{e f f} r}{\lambda}$$
where ${\beta }$ is the propagation constant of light and ${L}$ is the circumference of the ring cavity.

In general, the optical power at resonant wavelengths accumulates inside the ring cavity and is output entirely from the drop port, while light at non-resonant wavelengths exits from the through port instead. However, wavelengths near the resonance peak are not entirely output from a single port.

A microring resonator has different transmission responses at different wavelengths. Figure 3 shows the transmission spectrum of a single silicon-based add-drop microring resonator (radius 6 $\mu \mathrm{m}$) as the optical wavelength varies from 1420 nm to 1520 nm; the incident power is normalized to unity. Four resonance peaks are clearly observed, at which the input light is output entirely from the drop port. The two curves are complementary to each other, as are the light intensities from the two ports. In addition, although the transmission response is very steep near a resonance peak, its variation with wavelength can still be observed upon magnification (as seen in the black curve in Fig. 4). The intensity transfer functions of the drop and through ports with respect to the original light at the input port can be derived for CW operation by matching the fields, giving Eq. (8) and Eq. (9) [34].

$$T_{\text{drop }}=\frac{P_{\text{drop }}}{P_{\text{input }}}=\frac{\left(1-t_1^2\right)\left(1-t_2^2\right) \alpha}{1-2 t_1 t_2 \alpha \cos (\theta)+\left(t_1 t_2 \alpha\right)^2}$$
$$T_{\text{through }}=\frac{P_{\text{through }}}{P_{\text{input }}}=\frac{t_2^2 \alpha^2-2 t_1 t_2 \alpha \cos (\theta)+t_1^2}{1-2 t_1 t_2 \alpha \cos (\theta)+\left(t_1 t_2 \alpha\right)^2}$$
where $P_{\text {input }}$, $P_{\text {drop }}$, and $P_{\text {through }}$ are the incident power and the transmitted powers at the drop and through ports, respectively, and $\alpha$ accounts for the propagation loss within the ring and the loss in the couplers.
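For reference, Eqs. (7)–(9) can be evaluated directly; the sketch below sweeps the wavelength over the range plotted in Fig. 3. The coupling coefficients, loss factor, and effective index used here are illustrative assumptions, not parameters fitted to the simulated device.

```python
import numpy as np

def mrr_transfer(wavelength, r=6e-6, n_eff=2.6, t1=0.95, t2=0.95, alpha=0.99):
    """Add-drop MRR intensity transfer, Eqs. (8)-(9); theta from Eq. (7)."""
    theta = 4 * np.pi**2 * n_eff * r / wavelength        # round-trip phase
    denom = 1 - 2 * t1 * t2 * alpha * np.cos(theta) + (t1 * t2 * alpha)**2
    t_drop = (1 - t1**2) * (1 - t2**2) * alpha / denom
    t_through = (t2**2 * alpha**2
                 - 2 * t1 * t2 * alpha * np.cos(theta) + t1**2) / denom
    return t_drop, t_through

wl = np.linspace(1.42e-6, 1.52e-6, 5000)                 # sweep as in Fig. 3
t_d, t_t = mrr_transfer(wl)
# Drop transmission peaks (and through transmission dips) at the resonances
# where cos(theta) = 1, i.e. exactly where Eq. (6) is satisfied.
print(t_d.max(), t_t.min())
```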

Fig. 3. The transfer functions $T_{\text {drop}}$ and $T_{\text {through }}$ at different wavelengths. The red curve represents the drop port, and the blue one the through port.

Fig. 4. (a) Optical weight unit configuration. MRR, silicon microring resonator; BPD, balanced photodetector; TIA, linear transimpedance amplifier. (b) The effective transfer function $T_{\text {drop }}-T_{\text {through }}$ obtained when the add-drop microring is connected to a balanced photodetector.

The add-drop microring in SOI possesses wavelength-selective transmission, which can be used to control the intensity of the input light wave at a specific wavelength, optically implementing the multiplication between the input data and the weights [35]. In this paper, we use the value of the transfer function of the microring as the weight element ${W_i}$ of the convolution kernel. From the magnified transfer functions (red and blue curves in Fig. 4(b)), the transfer response of either the drop port or the through port ranges only from 0 to 1. Since convolution kernels may contain negative weight elements beyond this range, we connect a balanced photodetector and an amplifier after the add-drop microring, as shown in Fig. 4(a); using both absorption and amplification provides a wider dynamic gain range [14,36]. The photodetector converts the light intensities output from the drop and through ports into a current ${I_\text {(D-T)}}$ via the photoelectric effect, which is then amplified by a linear amplifier. The transfer function of the structure shown in Fig. 4(a) can be described by Eq. (10).

$$y=g I_{D-T}=g h_{oe}\left|E_0\right|^2 F$$
$$F=T_{\text{drop}}-T_{\text{through}}$$
where $\left |E_{0}\right |^{2}$ is the incident power, ${F}$ represents the transfer function of the add-drop microring, and ${g}$ and ${h_\text {oe}}$ are the gain of the amplifier (TIA) and the photoelectric conversion coefficient, respectively.

2.3 Silicon photonic kernel structure

Provided that adjacent microrings are coupling-free and their resonance peaks do not overlap, multiple ring resonators connected in parallel can each control the transmitted intensity of a single wavelength without affecting the transmission of the others. Because the photoelectric effect in the detector is independent of optical wavelength and depends only on light intensity, and because we do not need to keep the wavelength information of each channel separate, the balanced detector can also implement the weighted addition. As a result, the optical convolutional neuron structure shown in Fig. 5 optically implements the vector-vector multiplication of the convolution operation, and its function can be described by Eq. (12).

$$y_{\text{out }}=g h_{\text{oe }} \sum_{n=1}^N\left|E_0\right|_n^2 F_n$$
where $y_{\text {out }}$ is the output of the optical convolutional calculation, ${N}$ is the number of ring resonators in the MRR weight bank, which also equals the size of the convolution kernel, $\left |E_0\right |_n^2$ represents the intensity of the input light signal at each wavelength, and ${F_n}$ represents the transfer function of the corresponding optical weight unit.
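Numerically, the weighted sum of Eq. (12) is just a signed dot product once each MRR's $T_{\text{drop}}-T_{\text{through}}$ value is treated as a weight. A minimal sketch follows, with illustrative gains and intensities (none taken from the experiment):

```python
import numpy as np

g, h_oe = 1.0, 1.0                          # TIA gain, responsivity (assumed)
powers = np.array([0.2, 0.8, 0.5, 0.1])     # |E0|_n^2 on N = 4 wavelengths
F = np.array([0.7, -0.3, 0.9, -1.0])        # per-MRR T_drop - T_through weights
y_out = g * h_oe * np.sum(powers * F)       # balanced detection sums the terms
print(y_out)                                # 0.25: one MAC result, Eq. (12)
```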

Fig. 5. The schematic of an implemented N-input photonic convolutional neuron, in which the weights of N-input optical signals are calibrated using N-parallel silicon micro-rings and summed after photodetection using a BPD. The photocurrent $i_{\text {sum }}$ is amplified using a TIA.

Given a microresonator geometry, the transmission is determined by the wavelength of the light and by the effective refractive indices of the straight and ring waveguides at that wavelength. In recent years, several studies have confirmed that techniques such as thermal tuning can alter the transmission response of the microring to the input optical wave, making the microring weights reconfigurable [37,38].

Inspired by the work of Tait and Bangari et al. [25,36], a framework comprising multiple silicon-based microresonator photonic convolutional neurons can implement multiple convolution operations (matrix multiplication) simultaneously with reconfigurable weights when combined with the wavelength-division multiplexing (WDM) technique. It should be noted that crosstalk and cross-gain between channels intensify with the number of microrings in the system, making precise control of the optical weights more complicated and challenging [39].

2.4 Kernel pruning

Although recent research has demonstrated optical convolutional neural networks on silicon-based photonic platforms, and various studies have confirmed the massive advantages of implementing multiply-accumulate operations optically, large-scale optical convolutional neural networks remain exceedingly challenging given the limits on on-chip area and on channel density in non-ideal MRR weight banks [40]; these limits become more notable for more complex networks with more convolution operations or convolutional layers. Large convolutional neural networks typically contain substantial parameter redundancy, for example in convolutional kernels and feature map channels. With an appropriate pruning method and pruning rate, the model size can be reduced, sometimes even with improved model accuracy. In recent years, a significant amount of work has been done on improving the computational efficiency of network models, reducing computational cost, and shortening inference time through model compression [41,42].

Model pruning makes it far more practical to deploy large convolutional neural networks on silicon-based photonic platforms. Compared with magnitude-based weight pruning schemes, which lead to sparse connections and remove parameters mainly from the fully connected layers, convolutional kernel pruning shows superior performance in reducing computational cost and inference time, because it targets the convolutional layers, which contribute most of the computational cost in deep convolutional neural networks.

As shown in Fig. 6, the blue squares indicate the output feature map set of each convolutional layer, which also serves as the input feature map set of the next layer; we define it as $Z_i \in \mathbb{R}^{C_i \times H_i \times W_i}$ $(i \in [1, M])$, where $C_i$ denotes the number of channels, and $H_i$ and $W_i$ denote the height and width of a single feature map, respectively. We use $Z_i^j$ to denote channel j of the output feature map set of convolutional layer i. The convolutional layer i contains ${C_i}$ convolutional kernels, and we denote kernel k of layer i as $K_i^k$. The red squares indicate the pruned convolutional kernels. As Fig. 6 shows, removing a kernel in convolutional layer i causes one channel of that layer's output feature map set to disappear; accordingly, the corresponding channel of every convolutional kernel in the next layer is removed, because the number of channels of a convolutional kernel must equal the number of channels of the previous layer's output feature maps.
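A minimal NumPy sketch of this shape bookkeeping, assuming kernels stored as (out_channels, in_channels, KH, KW) arrays; the layer sizes below are illustrative:

```python
import numpy as np

def prune_kernel(w_layer, w_next, k):
    """Remove kernel k of layer i and the matching channel in layer i+1."""
    w_layer = np.delete(w_layer, k, axis=0)   # drop the k-th kernel
    w_next = np.delete(w_next, k, axis=1)     # drop its input channel downstream
    return w_layer, w_next

w_i = np.random.randn(64, 1, 3, 3)       # 64 kernels, as in the experiments
w_ip1 = np.random.randn(16, 64, 3, 3)    # hypothetical next convolutional layer
w_i, w_ip1 = prune_kernel(w_i, w_ip1, k=5)
print(w_i.shape, w_ip1.shape)            # (63, 1, 3, 3) (16, 63, 3, 3)
```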

Fig. 6. Illustration of the convolution kernel pruning principle.

There are many possible criteria for deciding which kernels to remove. In our experiments, the importance of a kernel is measured by the absolute change in the network loss function estimated via a first-order Taylor expansion (see Eq. (13) and Eq. (14)), since this ranking method has the highest Spearman correlation with the best ranking of loss-function changes, namely the oracle ranking proposed by Nvidia [43].

Let D denote the dataset $\left \{\mathrm{X}=\left \{x_0, x_1, \ldots, x_N\right \}, \mathrm{Y}=\left \{y_0, y_1, \ldots, y_N\right \}\right \}$, where ${X}$ and ${Y}$ are the set of inputs and the set of target outputs, and ${W}$ represents the parameters of the network (weights and biases).

$$\left|\Delta C\left(m_i\right)\right|=\left|C\left(D \mid W\right)-C\left(D \mid W^{\prime}\right)\right|$$
$$\left|\Delta C\left(m_i\right)\right|=\left|\frac{\delta C}{\delta m_i} m_i\right|$$
where the change in the cost value is expressed by $\Delta C\left (m_i\right )$, ${m_i}$ is one of the output feature maps in $m=\left \{\mathrm{Z}_0^{(1)}, \mathrm{Z}_0^{(2)}, \ldots, \mathrm{Z}_L^{\left (C_i\right )}\right \}$, $C\left (D \mid W\right )$ denotes the cost function of the model before pruning, and $C\left (D \mid W^{\prime }\right )$ denotes the cost function of the model after pruning.
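In practice, Eq. (14) reduces to the product of a feature map with its gradient, averaged over spatial positions, which is the form used in [43]. A minimal sketch with randomly generated stand-ins for the activation and its gradient:

```python
import numpy as np

def taylor_importance(activation, gradient):
    """First-order Taylor score of Eq. (14) for one feature map m_i."""
    return abs(np.mean(activation * gradient))

# Stand-ins for one feature map and its gradient from a backward pass:
act = np.random.randn(26, 26)
grad = np.random.randn(26, 26)
print(taylor_importance(act, grad))   # low-scoring maps are pruning candidates
```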

The procedure for pruning filters is shown in Fig. 7. In this work, we adopt a greedy strategy: for large-scale convolutional neural networks, a one-shot multilayer pruning-and-retraining scheme based on greedy selection has been shown to achieve higher accuracy than layer-by-layer pruning and retraining [44]. Finally, the relatively unimportant convolutional kernels are removed according to the chosen pruning rate, yielding a simplified pruned neural network. As shown in Fig. 8, pruning a convolutional kernel likewise removes the corresponding optical weight units in the optical convolutional neural network.

Fig. 7. Scheme of convolution kernel pruning.

Fig. 8. Illustration of pruning on MRR weight banks.

3. Experiments and results

3.1 Experimental demonstration

In our experiments, we demonstrated a custom convolutional neural network with one convolutional layer, one pooling layer, and one fully connected layer on the MNIST dataset. In the first experiment, we trained the custom convolutional neural network offline; the classification accuracy during training is shown in Fig. 9(a). The prediction accuracy improves significantly with the number of training epochs in the early stage; after about 90 epochs the accuracy curve levels off, and the network prediction accuracy stays around 97.62% from then on. We then optimized the trained model using the convolutional kernel pruning algorithm proposed in Sec. 2.4. To examine how removing convolutional kernels during pruning and fine-tuning affects the prediction ability of the network, we tested the network after each pruning iteration and recorded its prediction accuracy both before and after fine-tuning, as shown in Fig. 9(b). In this experiment, we set the number of pruning-and-fine-tuning iterations to 6 and removed 10 convolutional kernels each time. The experimental data indicate that fine-tuning plays a significant role in restoring the accuracy of the network after most of the convolutional kernels have been removed: when the pruning rate reaches 93.75% (only 6.25% of the kernels remain), the accuracy plummets to 17.62%, yet the fine-tuned network still reaches 96.79%.

Fig. 9. (a) Prediction accuracy versus training epoch for the model trained before pruning. (b) The performance of filter pruning. The light blue curve shows the prediction accuracy immediately after pruning ten filters at a time; the ink-blue curve shows the accuracy after fine-tuning.

To judge the ground-truth performance of the pruned convolutional network at various pruning rates, we tested the pruned models on the test set of the MNIST dataset. The test data contain 10,000 instances with 10 numeric labels from 0 to 9; the number of instances per digit is shown in Table 1. By setting different pruning rates, we obtain the inference accuracies of the pruned networks shown in Table 2, with the corresponding confusion matrices depicted in Fig. 10. It is worth noting that the final accuracy of the pruned network after kernel removal and fine-tuning is slightly better than that of the network before pruning, which is consistent with the trend in previous work. From Table 2, we can also see that the highest prediction accuracy of 98.44% was obtained when 20 of the 64 convolutional kernels were pruned over 2 iterations (a 31.25% pruning rate).

Fig. 10. Confusion matrices for the pruned model. (a) Pruning rate of 93.75%. (b) Pruning rate of 78.12%. (c) Pruning rate of 62.5%. (d) Pruning rate of 46.87%.

Table 1. The number of instances of each digit 0-9 in the MNIST test set.

Table 2. The accuracy of pruned networks with different pruning rates.

Then, we demonstrated a 3-layer pruned optical convolutional network architecture (input layer, convolutional layer, and fully connected layer), composed of multiple photonic convolutional computing core units as depicted in Fig. 5, to perform image recognition on the MNIST dataset. An overview of the proposed PCNN accelerator architecture is shown in Fig. 11. To determine the required parameters of the optical components, we compressed the ConvNet by kernel pruning, trained it on a digital computer (PC), and downloaded the resulting weights and biases. These parameters were fed into the controllers of the optical devices to configure the optical calculation units.

Fig. 11. Schematic of the implemented on-chip photonic convolutional neural network (PCNN) performing handwritten digit recognition on the MNIST dataset, realized using on-chip MRR weight banks combined with WDM technology. M: Mach-Zehnder modulator.

The target image from the MNIST dataset was pre-processed on the computer: the image (28 $\times$ 28) was divided into m sub-images according to the configured convolution kernel size and vectorized, yielding m vectors with n elements each. We then fed these vectors into the modulators sequentially, generating optical signals that reflect the values of the input data vector by modulating optical carriers at unique wavelengths from the lasers. A WDM multiplexer concentrated the modulated optical signals from the electro-optic modulators, and N delay lines routed the input light signals to N MRR weight banks of n MRRs each, forming the convolutional layer. Here N, m, and n correspond to the number of distinct convolutional kernels, the convolved feature map size, and the convolutional kernel size, respectively. Repeating this process m times yields N convolutional feature maps. In our experiments, n was 9, which satisfies the maximum-WDM-channel requirement for microring weight banks in [40], and the weight mapping accuracy was 10.96 bits. The optical signal, after the multiply-accumulate calculation in the optical convolutional computing core, was output to a digital computer for nonlinear activation and pooling. Finally, the transformed vector was passed to the fully connected layer with ten nodes, which produced the MNIST classification result. From our experimental results, the recognition accuracy of the pruned-kernel optical convolutional neural network on the MNIST dataset is comparable to that of the electronic neural network shown above.
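As a software stand-in for the optical pipeline just described (patching, weighting in N weight banks of n = 9 MRRs each, balanced detection), the sketch below emulates one pass through the convolutional layer; the normalization, kernel count N = 4, and weight values are illustrative assumptions:

```python
import numpy as np

n, N = 9, 4                     # kernel size (3 x 3 = 9 MRRs per bank), N banks
image = np.random.rand(28, 28)  # stand-in for a normalized MNIST image
F = np.random.uniform(-1, 1, size=(N, n))   # MRR weights, T_drop - T_through

# Patching: divide the image into kernel-sized sub-images and vectorize them.
patches = np.stack([image[i:i + 3, j:j + 3].ravel()
                    for i in range(26) for j in range(26)])   # (m, 9), m = 676

# Each patch drives the modulators once; every weight bank produces one MAC,
# so repeating m times yields N convolved feature maps (Eq. (12) per element).
feature_maps = (patches @ F.T).T.reshape(N, 26, 26)
print(feature_maps.shape)       # (4, 26, 26)
```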

3.2 Numerical simulation of MRR weight mapping accuracy

The accuracy of microring tuning is constrained by the practical tuning process, so we also performed comparative simulations of the effect of different microring weight-configuration accuracies on the recognition performance of the optical convolutional neural network. From our results (see Fig. 12), the recognition accuracy of the optical convolutional neural network on the MNIST dataset decreases significantly when the weights are resolved to only one decimal place (i.e., 4.3 bits), compared to the corresponding post-pruning model. When the weight accuracy reaches two decimal places (i.e., 7.6 bits), the recognition performance of the network improves greatly and is comparable to, or even better than, the post-pruning model. We also find that the higher the pruning rate of the model, the more sensitive it is to changes in weight accuracy; this is related to the robustness of the compressed model. These results indicate that the accuracy of the optical convolutional neural network can be improved by improving the microring weight-control process. However, once the weight mapping accuracy reaches three decimal places (i.e., 10.96 bits), the prediction accuracy of the PCNN architecture no longer changes significantly, and further enhancement of the weight tuning has a negligible effect. It is noteworthy that handwritten digit recognition is a relatively simple inference task; for more complicated tasks, the accuracy of the PCNN will be more sensitive to the mapping accuracy of the microring weights.
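Under the reading that d decimal places over a weight range of [-1, 1] correspond to 2 x 10^d distinguishable levels, the bit figures quoted above follow directly; the sketch below shows that arithmetic together with the rounding used to emulate limited mapping accuracy (this quantization scheme is our assumption, not a published detail):

```python
import numpy as np

def quantize(weights, decimals):
    """Round weights to d decimal places before mapping them onto the MRRs."""
    return np.round(weights, decimals)

print(quantize(np.array([0.4444, -0.8765]), 2))   # [ 0.44 -0.88]
for d in (1, 2, 3):
    levels = 2 * 10**d            # states spanning [-1, 1] in steps of 10^-d
    print(d, "decimal place(s) ->", round(np.log2(levels), 2), "bits")
# 1 -> 4.32 bits, 2 -> 7.64 bits, 3 -> 10.97 bits (cf. 4.3, 7.6, 10.96 above)
```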

Fig. 12. Comparison of recognition results for weight mappings with different numbers of decimal digits.

3.3 Energy analysis

In general, the energy consumed by the proposed PCNN architecture depends mainly on the power consumption of each component and on the number of components; the total power consumption is therefore strongly correlated with the number of parameters of the PCNN. As shown in Table 3, convolution kernel pruning reduces the number of convolution kernels and correspondingly the hardware needed to perform the convolution operations in the PCNN, thus reducing the energy consumption. Figure 13 shows the variation in the number of devices (MRRs, BPDs, TIAs) required by the architecture during pruning. As introduced above, the number of kernel weight channels is set to 9, and, as the schematic in Fig. 5 shows, each photonic convolution kernel consists of 9 MRRs, one TIA, and one BPD. Thus, with 2.8 mW per photodetector [45], 19.5 mW per MRR [46], and 24 mW per TIA [47], we obtain a power saving of 202.3 mW for each convolutional kernel pruned. This means that the deletion of 10 kernels in each pruning iteration brings 2023 mW of savings. For larger deep convolutional neural networks with more redundant, prunable convolutional kernels, the energy saving is correspondingly larger.
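The per-kernel figure can be checked with back-of-envelope arithmetic from the component powers quoted above:

```python
# Power per photonic convolution kernel: 9 MRRs + 1 TIA + 1 BPD (see Fig. 5).
P_MRR, P_TIA, P_BPD = 19.5, 24.0, 2.8        # mW, from Refs. [46], [47], [45]
per_kernel = 9 * P_MRR + P_TIA + P_BPD
print(per_kernel)                            # 202.3 mW per pruned kernel
print(10 * per_kernel)                       # 2023 mW per 10-kernel iteration
```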

Table 3. Total number of kernels and fully connected layer parameters required in the PCNN for different pruning iterations.

Fig. 13. Total number of components required in the PCNN for different pruning iterations.

4. Conclusion

In this article, we used on-chip tunable MRR weight banks to perform optical convolution calculations and achieved an efficient photonic CNN through convolution kernel pruning. The experimental demonstration of the pruning-based single-layer PCNN achieved an average prediction accuracy of 97.97% on the MNIST dataset with 60 kernels pruned from the convolutional layer. In terms of energy, the power saved per pruned convolutional kernel is estimated to be 202.3 mW. Additionally, we simulated and compared the recognition performance of the PCNN under different MRR weight mapping accuracies; the results show that the accuracy of optical convolutional neural networks can be improved by improving the MRR weight-control process. A weight mapping accuracy of 4.3 bits or less results in significant inference accuracy losses, while beyond three decimal places (i.e., 10.96 bits) further enhancement of the weight tuning has a negligible effect on the PCNN prediction accuracy. Extensions of this work could include larger-scale integration, on-chip parameter training, and on-chip nonlinear activation operations.

In summary, this work opens up the possibility of integrating and deploying compact, low-power silicon optical neural networks on chip, especially large-scale deep neural networks, and provides important guidance on improving the inference performance of integrated optical neural networks.

Funding

Natural Science Foundation of Hunan Province (2019JJ40352); National Natural Science Foundation of China (61801522).

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are available in Ref. [48].

References

1. S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017), pp. 1492–1500.

2. Y. Chen, J. Li, H. Xiao, X. Jin, S. Yan, and J. Feng, “Dual path networks,” Advances in neural information processing systems 30 (2017).

3. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE Trans. Pattern Anal. Mach. Intell. 40(4), 834–848 (2018). [CrossRef]  

4. W. Chu, Y. Liu, C. Shen, D. Cai, and X.-S. Hua, “Multi-task vehicle detection with region-of-interest voting,” IEEE Trans. on Image Process. 27(1), 432–441 (2018). [CrossRef]  

5. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” Advances in Neural Information Processing Systems 28, 1 (2015).

6. J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” Journal of Machine Learning Research 12(61), 2121–2159 (2011).

7. H. Ben Fredj, S. Bouguezzi, and C. Souani, “Face recognition in unconstrained environment with cnn,” Vis. Comput. 37(2), 217–226 (2021). [CrossRef]  

8. OpenAI, “AI and compute,” [Online]. Available: https://openai.com/research/ai-and-compute.

9. A. Shafiee, A. Nag, N. Muralimanohar, R. Balasubramonian, J. P. Strachan, M. Hu, R. S. Williams, and V. Srikumar, “Isaac: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars,” SIGARCH Comput. Archit. News 44(3), 14–26 (2016). [CrossRef]  

10. T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, “Diannao: A small-footprint high-throughput accelerator for ubiquitous machine-learning,” SIGARCH Comput. Archit. News 42(1), 269–284 (2014). [CrossRef]  

11. A. Aimar, H. Mostafa, E. Calabrese, A. Rios-Navarro, R. Tapiador-Morales, I.-A. Lungu, M. B. Milde, F. Corradi, A. Linares-Barranco, S.-C. Liu, and T. Delbruck, “Nullhop: A flexible convolutional neural network accelerator based on sparse representations of feature maps,” IEEE Trans. Neural Netw. Learning Syst. 30(3), 644–656 (2019). [CrossRef]  

12. Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017). [CrossRef]  

13. S. Pai, B. Bartlett, O. Solgaard, and D. A. Miller, “Matrix optimization on universal unitary photonic devices,” Phys. Rev. Appl. 11(6), 064044 (2019). [CrossRef]  

14. A. N. Tait, A. X. Wu, T. F. De Lima, E. Zhou, B. J. Shastri, M. A. Nahmias, and P. R. Prucnal, “Microring weight banks,” IEEE J. Sel. Top. Quantum Electron. 22(6), 312–325 (2016). [CrossRef]  

15. S. Xu, J. Wang, and W. Zou, “Optical convolutional neural network with wdm-based optical patching and microring weighting banks,” IEEE Photonics Technol. Lett. 33(2), 89–92 (2021). [CrossRef]  

16. A. N. Tait, T. F. De Lima, E. Zhou, A. X. Wu, M. A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Neuromorphic photonic networks using silicon photonic weight banks,” Sci. Rep. 7(1), 7430 (2017). [CrossRef]  

17. T. Zhou, X. Lin, J. Wu, Y. Chen, H. Xie, Y. Li, J. Fan, H. Wu, L. Fang, and Q. Dai, “Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit,” Nat. Photonics 15(5), 367–373 (2021). [CrossRef]  

18. J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8(1), 12324 (2018). [CrossRef]  

19. X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018). [CrossRef]  

20. X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, “11 tops photonic convolutional accelerator for optical neural networks,” Nature 589(7840), 44–51 (2021). [CrossRef]  

21. J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589(7840), 52–58 (2021). [CrossRef]  

22. J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran, and W. H. Pernice, “All-optical spiking neurosynaptic networks with self-learning capabilities,” Nature 569(7755), 208–214 (2019). [CrossRef]  

23. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86(11), 2278–2324 (1998). [CrossRef]  

24. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2016), pp. 770–778.

25. V. Bangari, B. A. Marquez, H. Miller, A. N. Tait, M. A. Nahmias, T. F. De Lima, H.-T. Peng, P. R. Prucnal, and B. J. Shastri, “Digital electronics and analog photonics for convolutional neural networks (deap-cnns),” IEEE J. Sel. Top. Quantum Electron. 26(1), 1–13 (2020). [CrossRef]  

26. F. Sunny, M. Nikdast, and S. Pasricha, “Sonic: A sparse neural network inference accelerator with silicon photonics for energy-efficient deep learning,” in 2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC), (IEEE, 2022), pp. 214–219.

27. A. Mehrabian, Y. Al-Kabani, V. J. Sorger, and T. El-Ghazawi, “Pcnna: A photonic convolutional neural network accelerator,” in 2018 31st IEEE International System-on-Chip Conference (SOCC), (IEEE, 2018), pp. 169–173.

28. J.-H. Luo, H. Zhang, H.-Y. Zhou, C.-W. Xie, J. Wu, and W. Lin, “Thinet: pruning cnn filters for a thinner net,” IEEE Trans. Pattern Anal. Mach. Intell. 41(10), 2525–2538 (2019). [CrossRef]  

29. X. Li, G. Zhang, H. H. Huang, Z. Wang, and W. Zheng, “Performance analysis of gpu-based convolutional neural networks,” in 2016 45th International conference on parallel processing (ICPP), (IEEE, 2016), pp. 67–76.

30. A. Vasudevan, A. Anderson, and D. Gregg, “Parallel multi channel convolution using general matrix multiplication,” in 2017 IEEE 28th international conference on application-specific systems, architectures and processors (ASAP), (IEEE, 2017), pp. 19–24.

31. L. Kimerling, D. Ahn, A. Apsel, et al., “Electronic-photonic integrated circuits on the cmos platform,” in Silicon photonics, vol. 6125 (SPIE, 2006), pp. 6–15.

32. W. Bogaerts, S. K. Selvaraja, P. Dumon, J. Brouckaert, K. De Vos, D. Van Thourhout, and R. Baets, “Silicon-on-insulator spectral filters fabricated with cmos technology,” IEEE J. Sel. Top. Quantum Electron. 16(1), 33–44 (2010). [CrossRef]  

33. W. Bogaerts, P. Dumon, D. Van Thourhout, D. Taillaert, P. Jaenen, J. Wouters, S. Beckx, V. Wiaux, and R. G. Baets, “Compact wavelength-selective functions in silicon-on-insulator photonic wires,” IEEE J. Sel. Top. Quantum Electron. 12(6), 1394–1401 (2006). [CrossRef]  

34. W. Bogaerts, P. De Heyn, T. Van Vaerenbergh, K. De Vos, S. Kumar Selvaraja, T. Claes, P. Dumon, P. Bienstman, D. Van Thourhout, and R. Baets, “Silicon microring resonators,” Laser Photonics Rev. 6(1), 47–73 (2012). [CrossRef]  

35. S. Feng, T. Lei, H. Chen, H. Cai, X. Luo, and A. W. Poon, “Silicon photonics: from a microresonator perspective,” Laser Photonics Rev. 6(2), 145–177 (2012). [CrossRef]  

36. A. N. Tait, M. A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Broadcast and weight: an integrated network for scalable photonic spike processing,” J. Lightwave Technol. 32(21), 4029–4041 (2014). [CrossRef]  

37. P. Pintus, M. Hofbauer, C. L. Manganelli, M. Fournier, S. Gundavarapu, O. Lemonnier, F. Gambini, L. Adelmini, C. Meinhart, C. Kopp, F. Testa, H. Zimmermann, and C. J. Oton, “Pwm-driven thermally tunable silicon microring resonators: design, fabrication, and characterization,” Laser Photonics Rev. 13(9), 1800275 (2019). [CrossRef]  

38. C. Huang, S. Bilodeau, T. Ferreira de Lima, A. N. Tait, P. Y. Ma, E. C. Blow, A. Jha, H.-T. Peng, B. J. Shastri, and P. R. Prucnal, “Demonstration of scalable microring weight bank control for large-scale photonic integrated circuits,” APL Photonics 5(4), 040803 (2020). [CrossRef]  

39. A. N. Tait, T. F. De Lima, M. A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Multi-channel control for microring weight banks,” Opt. Express 24(8), 8895–8906 (2016). [CrossRef]  

40. A. N. Tait, “Silicon photonic neural networks,” Ph.D. thesis, Princeton University (2018).

41. T. Choudhary, V. Mishra, A. Goswami, and J. Sarangapani, “A comprehensive survey on model compression and acceleration,” Artif. Intell. Rev. 53(7), 5113–5155 (2020). [CrossRef]  

42. A. Berthelier, T. Chateau, S. Duffner, C. Garcia, and C. Blanc, “Deep model compression and architecture optimization for embedded systems: A survey,” J. Sign. Process. Syst. 93(8), 863–878 (2021). [CrossRef]  

43. P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz, “Pruning convolutional neural networks for resource efficient inference,” arXiv, arXiv:1611.06440 (2016). [CrossRef]  

44. H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, “Pruning filters for efficient convnets,” arXiv, arXiv:1608.08710 (2016). [CrossRef]  

45. B. Wang, Z. Huang, W. V. Sorin, X. Zeng, D. Liang, M. Fiorentino, and R. G. Beausoleil, “A low-voltage si-ge avalanche photodiode for high-speed and energy efficient silicon photonic links,” J. Lightwave Technol. 38(12), 3156–3163 (2020). [CrossRef]  

46. J. Sun, R. Kumar, M. Sakib, J. B. Driscoll, H. Jayatilleka, and H. Rong, “A 128 gb/s pam4 silicon microring modulator with integrated thermo-optic resonance tuning,” J. Lightwave Technol. 37(1), 110–115 (2019). [CrossRef]  

47. A. Karimi-Bidhendi, H. Mohammadnezhad, M. M. Green, and P. Heydari, “A silicon-based low-power broadband transimpedance amplifier,” IEEE Trans. Circuits Syst. I 65(2), 498–509 (2018). [CrossRef]  

48. Y. LeCun, C. Cortes, and C. J. C. Burges, “The mnist database of handwritten digits,” [Online]. Available: http://yann.lecun.com/exdb/mnist/.
