Compensation schemes for uneven illumination and LED light-emitting instability in optical camera communication system


Abstract

In order to increase the data rate of the optical camera communication (OCC) system, the 8-composite-amplitude-shift-keying (8CASK) modulation OCC system is used in this work. However, if static decision thresholds are employed to demodulate the multi-level ASK signal, uneven illumination of the LED lamps and LED light-emitting instability cause the gray range in the picture to fluctuate and degrade the bit-error-rate (BER) performance. In this work, we propose and demonstrate a demodulation scheme, comprising an uneven illumination compensation algorithm, a pixel matrix threshold overall update algorithm and a secondary decision algorithm, to mitigate the impact of illumination unevenness and LED light-emitting instability. The BER performance is evaluated and compared with other demodulation schemes. The experimental results demonstrate that the communication rate of our proposed scheme can reach 9 kbit/s at a distance of 250 cm where the illumination is 135 lux, with a BER of $8.01 \times 10^{-5}$.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

As a complementary technology to 5G wireless communication, visible light communication has attracted much attention. It can be classified into two types based on the receiver: photodiode-based visible light communication [1] and camera-based visible light communication [2,3]. High-resolution CMOS cameras have recently been embedded in smartphones at low cost, and their rolling shutter effect allows the data rate recognized by the camera to exceed the frame rate [4,5]. Therefore, the OCC system holds promising prospects and is suitable for various scenarios, such as indoor positioning [6], vehicle communication [7], underwater-to-air communication [8], etc.

Recently, OCC systems have mainly employed low-order modulation [9–11]. At the 7% pre-forward error correction (pre-FEC) limit (BER = $3.8 \times {10^{ - 3}}$), communication rates reached 4.08 kbit/s at a distance of 30 cm in [9], data rates of 3.6 kbit/s were achieved at a transmission distance of 100 cm in [10], and in [11] data rates of 4.5 kbit/s (2.5 kbit/s) were achieved at transmission distances of 50 cm (175 cm). It can be seen that low-order modulations such as on-off keying (OOK) limit the communication rate ($\sim$4 kbit/s). To improve the communication rate of the OCC system, some higher-order modulations have been considered, such as pulse width modulation (PWM) [12], color shift keying (CSK) [13], and amplitude-shift keying (ASK) [14]. Nevertheless, PWM and CSK modulations are more complex and may lead to noticeable flickering of the LED lights, and traditional ASK modulation is not feasible due to the nonlinear characteristics of LEDs. Composite amplitude-shift keying (CASK) modulation was proposed in [15] to generate ASK symbols through physical light composition.

It is well known that the illumination of an LED follows the Lambertian radiation model. The light intensity of the visible light signal received by a mobile phone camera is therefore uneven, leading to fluctuation of the gray ranges in the pixel matrix. Furthermore, the light intensity emitted by the LED may vary at different moments even under a fixed driving current [16], causing further fluctuation in the pixel matrix. For shorter transmission distances or low-order modulation, such as OOK or 4ASK, the differences in gray values between adjacent light levels are relatively large, and the impact of gray fluctuation is not significant. If higher-order modulation and longer distances ($\sim$250 cm) are considered, compensation for uneven illumination and unstable LED emission is necessary. In [15], a method was employed where continuous brightest symbols were inserted in the data packet headers. The data packet and its gray-value envelope were determined based on two consecutive packet headers, and detection thresholds for all symbols were empirically set to demodulate the data using this envelope. The demodulation algorithm and the use of LED light strips compensated for uneven illumination to some extent. However, it operated at a relatively short transmission distance (100 cm), and the insertion of continuous brightest symbols in the frame header limited the transmission rate (2 kbit/s). In [17], the K-MEANS algorithm was proposed for demodulating data, with the algorithm's initial mean values pre-set, called the pilot-aided K-means (PAK) algorithm. The algorithm provided a certain compensation for the instability of LED emission and performed well at a short transmission distance (60 cm). In [18], the training threshold matrix (TTM) algorithm was proposed to obtain the thresholds for different light levels. The TTM algorithm can compensate for uneven illumination and performed well over a longer transmission distance (200 cm) under a 4PWM intensity-modulation scheme. However, the algorithm did not take into account the time-varying emission of LED light, so its performance may decline when higher-order modulation is employed. In recent years, neural networks have also been applied in OCC systems. In [19], an LED light panel and rolling-shutter image sensor-based OCC system was proposed, using a frame-averaging background removal (FABR) technique, Z-score normalization, and a neural network (NN). This scheme employed neural networks to adapt to the changing OCC channel, and a data rate of 1200 bit/s fulfilling the pre-FEC requirement was achieved over a free-space transmission of 250–300 cm. A rolling-shutter 4-level pulse amplitude modulation (PAM4) demodulation scheme for OCC systems using a pixel-per-symbol labeling neural network (PPSL-NN) was proposed in [20], which performed well at a transmission distance of 200 cm. However, if environmental changes or issues such as LED jittering over time are considered, neural-network-based systems may require periodic retraining.

In this work, to mitigate the impact of uneven illumination caused by the Lambertian radiation pattern of the LEDs, we use the brightest and darkest matrices to compensate for the unevenness of the initial data gray values, stabilizing the data gray values. On the other hand, to reduce the impact of the light-intensity fluctuation generated by the LED at different times, we use the gray value decided at the current moment as a reference for the next decision point within a single picture when making decisions for each payload point. In addition, to mitigate the combined effects of uneven illumination and unstable LED emission, we save reliable data from the demodulated pictures as a reference for demodulating subsequent pictures. The performance of the proposed algorithm is evaluated in an 8CASK-based OCC system and compared with several alternative demodulation methods. The experimental results demonstrate that our proposed algorithm outperforms the other demodulation methods under long-distance transmission and higher-order modulation.

2. Impact of the uneven illumination and LED light-emitting instability

In visible light communication, LEDs serve as signal transmitters and can be considered point light sources emitting visible light signals outward. The channel model can be established based on the Lambertian radiation model, and the receiver is characterized by its effective signal collection area [21]. For PD-based visible light communication, the relatively small photosensitive area of the PD makes the impact of uneven lighting insignificant when the transmission location is fixed. Therefore, in PD-based visible light communication, more attention is paid to other issues such as LED nonlinearity, and various equalization techniques are employed to address them [22,23]. In OCC systems, however, because the photosensitive area of a CMOS camera is much larger than that of a PD, the pixel matrix obtained by the CMOS camera exhibits significant fluctuation in magnitude even at a fixed position. As shown in Fig. 1, uneven illumination causes the pixel curves of all modulation formats to exhibit higher amplitudes in the middle than at the edges. Additionally, automatic intensity adjustment functionality is embedded in the firmware of image sensors [20], and the response of camera pixels to changes in light intensity is nonlinear [24]. When capturing images with consumer-grade cameras or cameras on mobile devices, a nonlinear camera response function (CRF) maps scene irradiance to intensity. Owing to design factors such as compressing the scene's dynamic range or emulating the conventional irradiance response of film, the CRF varies among camera manufacturers and models [25,26]. These factors make it difficult to achieve precise compensation for uneven illumination through simple equalization methods. For OCC systems employing OOK modulation, which carries only binary data of "0" and "1" in the visible light signal, demodulation can be relatively straightforward using algorithms such as polynomial fitting [27], moving exponential averaging [10], or methods based on the boundary pixels of stripes [11]. However, for OCC systems using higher-order modulation formats, the uneven illumination poses a significant challenge in retrieving multiple intensity levels.

Fig. 1. Pixel matrix curves collected by different modulation formats, (a) OOK, (b) 4ASK, (c) 8ASK, (d) 16ASK.

On the other hand, the instability of LED emission also complicates demodulation. To investigate the impact of LED emission instability on different modulation formats, we modulate the visible light with OOK, 4ASK, 8ASK, and 16ASK by cyclically transmitting the symbol sequences "01", "0123", "${\rm {012}} \cdots {\rm {7}}$", and "${\rm {012}} \cdots {\rm {F}}$", respectively. Figure 1 shows the resulting pixel matrix curves. We select edge and middle data and compare the quantization intervals of these two regions in Fig. 2(a). It can be observed that as the modulation order of amplitude keying increases, the quantization interval gradually decreases. Taking the 8ASK modulation format as an example, Fig. 2(b) shows the gray values of symbols "2" and "3" in 200 received images. It can be seen that the gray values of the same symbol and pixel jitter ($\sigma \sim 3$) because of the instability of LED emission. When the OCC system employs low-order modulation formats such as OOK, the impact of LED emission instability can be neglected owing to the relatively large quantization interval. However, when the OCC system employs high-order modulation formats such as 8ASK, the quantization interval is very small, especially at the edges, as shown in Fig. 2(a). In this case, the fluctuation of gray values results in decision errors.

Fig. 2. (a) Quantization intervals for mid and edge regions, (b) gray values of symbols "2" and "3" obtained at 250 cm under 8ASK modulation.

In summary, it is necessary to compensate for both uneven illumination and LED emission instability so that the high-order ASK OCC system can transmit over longer distances. Therefore, this work proposes compensation algorithms to mitigate the impact of uneven illumination and LED light-emitting instability.

3. Experiment and algorithm

In our work, we adopt 8CASK modulation [15] and use LED chips as light sources. The experimental setup of the 8CASK-based OCC system with LED arrays is shown in Fig. 3(a). The messages are converted by an FPGA (ALTERA, EP4CE10F17C8) and applied to the LED lights (1 W, white LEDs) via an LED driver board. Every two LEDs represent one amplitude level. In the experiment, there are a total of 15 LEDs, with one LED serving as a backup. The on-off status of each LED corresponding to the different amplitude levels is shown in Table 1.

Fig. 3. (a) Experimental setup of the 8CASK-based OCC system, (b) structure of data packet, where NRZ is non-return-to-zero format, RZ is return-to-zero format, (c) flow diagram of the picture collection and demodulation processes.

Table 1. ASK symbols produced by LEDs

After free-space transmission, the visible light signal is received by a mobile phone image sensor (HUAWEI, Honor 10) with a resolution of $4608 \times 3456$ pixels. The exposure time of the CMOS camera is $10\ \mu s$ and the focal length is fixed. In our experiments, the offline pictures captured by the mobile phone are transferred to a computer for demodulation. The imaging principle of CMOS cameras is illustrated in Fig. 4(a). The row pixels in the image sensor are exposed sequentially, filling different pixel rows at different times. When the frequency of the visible light signal is lower than the exposure frequency of a single pixel row but higher than the camera's frame rate, the visible light signal is recorded as bright and dark stripes. Since the image sensor of a smartphone camera is typically arranged horizontally, the columns of images captured vertically correspond to the rows of the CMOS sensor, as can be seen in Fig. 4(b). Similar to the row matrix selection in [8], we sort the row pixel values in descending order and select the row with the 85% highest gray values to represent the received light intensity, which avoids errors caused by pixel saturation. The sorted data image and the selected row are shown in Fig. 4(c). Figure 4(d) depicts the gray values of the selected row.
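As an illustration, the sketch below implements one plausible reading of this row-selection step: each column of the gray image is sorted brightest-first and the row covering the 85% highest gray values is taken, so saturated pixels near the very top are avoided. The exact rank convention of [8] may differ, and the function name is ours.

```python
import numpy as np

def select_signal_row(gray_image, keep=0.85):
    """Sketch of the row-selection step (assumed convention, may differ from [8]).

    Each column of the gray image is sorted brightest-first; the row just below
    the top (1 - keep) fraction is returned, so saturated pixels near the very
    top are skipped while a strong signal row is still used.
    """
    sorted_desc = np.sort(gray_image, axis=0)[::-1, :]   # sort every column, brightest first
    idx = int(gray_image.shape[0] * (1.0 - keep))        # row index at the 85%-highest level
    return sorted_desc[idx, :]
```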

Fig. 4. Matrix acquisition process, (a) rolling-shutter operation in CMOS camera, (b) captured image, (c) sorted data image and selected row (red line), (d) gray values of the selected row.

The structure of the data packet is shown in Fig. 3(b). Each packet consists of headers and payloads, and the headers consist of the sync data and the pilot data. The sync data symbols are "0, 7, 7, 0", used for data synchronization. The pilot symbols are "7, 1, 4", provided to the pixel matrix threshold overall update algorithm for calculating the initial decided gray values and decision thresholds. Multiple experiments are conducted under different conditions; in each experiment, about 2,000 offline pictures are collected for the BER calculation of the system. Figure 3(c) shows the flow diagram of the picture collection and demodulation processes, and the detailed processing procedure is as follows.

3.1 Uneven illumination compensation algorithm

During the experiment, pictures are collected while the LEDs emit all-ON, all-OFF, and data-carrying visible light, respectively, and then converted into the brightest matrix, the darkest matrix, and the data matrix, as shown in Fig. 5(a). It can be seen from the figure that the gray value of the data matrix fluctuates: the gray value of the middle part is higher than that of the edge part because of the uneven illumination. This fluctuation leads to errors in the calculated decision thresholds, which in turn lead to wrong data decisions, especially when 8CASK modulation is used in the OCC system. To address this issue, we propose the uneven illumination compensation algorithm, which uses the difference between the brightest matrix and the darkest matrix to normalize the initial data gray values. The ratio of the data matrix ${S}$ to the difference between the brightest matrix ${{S}(\max )}$ and the darkest matrix ${{S}(\min )}$ is first calculated, and this ratio is then scaled by the amplification factor ${G_{\rm P\hbox{-}P}}$ (P-P denotes peak-to-peak). The compensated data matrix ${S}^\prime$ is expressed as:

$${S_i}^\prime = \frac{{{S_i} - {S_i}(\min )}}{{{S_i}(\max ) - {S_i}(\min )}} \times {G_{P - P}}\ \ i = 1,2,3{\ldots}N$$
where $i$ is the symbol index and $N$ is the number of symbols contained in a picture. We set ${G_{\rm P\hbox{-}P}}$ to the maximum value in the brightest matrix, which is 200 in our experiment. Figure 5(b) depicts the compensated data matrix curves; the gray range of the compensated data matrix $S'$ is stable and lies within the set range $[0,200]$.
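A minimal sketch of this compensation step, assuming the matrices are handled as NumPy arrays; the function name and the explicit clipping to $[0, G_{\rm P\hbox{-}P}]$ are our additions:

```python
import numpy as np

def compensate_uneven_illumination(data_row, brightest_row, darkest_row, g_pp=200.0):
    """Eq. (1): S'_i = (S_i - S_i(min)) / (S_i(max) - S_i(min)) * G_P-P.

    data_row, brightest_row and darkest_row hold the gray values of the data
    picture, the all-ON picture and the all-OFF picture at the same pixels.
    """
    data = np.asarray(data_row, dtype=float)
    bright = np.asarray(brightest_row, dtype=float)
    dark = np.asarray(darkest_row, dtype=float)
    span = np.where(bright - dark == 0, np.finfo(float).eps, bright - dark)  # avoid divide-by-zero
    s_prime = (data - dark) / span * g_pp
    return np.clip(s_prime, 0.0, g_pp)  # keep the compensated values inside [0, G_P-P]
```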

Fig. 5. Curves of the data matrix, the brightest matrix and the darkest matrix, (a) initial curves, (b) curves after algorithm compensation.

3.2 Pixel matrix threshold overall update algorithm

In the 8CASK-based OCC system, the actual light intensity is unstable even when the same LED in the LED array is ON, and the response of the CMOS image sensor to the light intensity is nonlinear [16,28]. Therefore, the ratio of the same light-intensity level to the difference between the brightest and the darkest matrix is unstable. To solve this problem, we propose the pixel matrix threshold overall update algorithm, in which the decision thresholds and the decided gray values are dynamically updated as the OCC system runs to reduce the impact of this instability on the data decision.

In our experiment, each data packet consists of sync data, pilot data, and payload data. We can find the packet header based on the sync data and obtain the value of the pilot data, since the sync data is immediately followed by the pilot data. The relationship among the pilot data $({P^7},{P^1},{P^4})$, the decision threshold matrix ${T}$ and the decided gray value matrix ${G}$ is shown in Fig. 6. Assume that the matrices ${G}$ and ${T}$ are expressed as:

$$G = \left[ \begin{array}{@{}cccccc@{}} {g_0}(0) &{g_1}(0) & \ldots &{g_i}(0) & \ldots &{g_N}(0)\\ {g_0}(1) &{g_1}(1) & \ldots &{g_i}(1) & \ldots &{g_N}(1)\\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ {g_0}(l) &{g_1}(l) & \ldots &{g_i}(l) & \ldots &{g_N}(l)\\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ {g_0}(7) &{g_1}(7) & \ldots &{g_i}(7) & \ldots &{g_N}(7) \end{array}\right] \; \; \; T = \left[ \begin{array}{@{}cccccc@{}} {t_0}(1) &{t_1}(1) & \ldots &{t_i}(1) & \ldots &{t_N}(1)\\ {t_0}(2) &{t_1}(2) & \ldots &{t_i}(2) & \ldots &{t_N}(2)\\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ {t_0}(l) &{t_1}(l) & \ldots &{t_i}(l) & \ldots &{t_N}(l)\\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ {t_0}(7) &{t_1}(7) & \ldots &{t_i}(7) & \ldots &{t_N}(7) \end{array} \right]$$

Fig. 6. Process for determining payload data. The right axis shows relationship among the pilot data, decision threshold matrix and decided gray value matrix.

The initial decided gray values ${g_0}$ and decision thresholds ${t_0}$ of each illumination amplitude level $l$ are calculated by:

$$\left\{ \begin{array}{ll} {g_0}(l) = {P^4} + (l - 4) \times \frac{{{P^7} - {P^4}}}{3} &l = 4,5,6,7\\ {g_0}(l) = {P^1} + (l - 1) \times \frac{{{P^4} - {P^1}}}{3} &l = 1,2,3\\ {g_0}(l) = 0 &l = 0\\ {t_0}(l) = [\frac{{{g_0}(l - 1) + {g_0}(l)}}{2}] &l = 1,2 \ldots 7 \end{array} \right.$$

Then the decided level ${l_i}$ is obtained by comparing the gray value ${S_i}^\prime$ with the thresholds ${t_{i - 1}}$:

$${l_i} = \left\{ {\begin{array}{ll} 7 &{S_i}^\prime > {t_{i - 1}}(7)\\ l &{t_{i - 1}}(l) \le {S_i}^\prime \le {t_{i - 1}}(l + 1)\\ 0 &{S_i}^\prime < {t_{i - 1}}(1) \end{array}} \right.$$
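The two steps above can be sketched as follows (a non-authoritative Python rendering of Eqs. (3) and (4); the square brackets in the threshold formula are treated here as ordinary parentheses):

```python
import numpy as np

def init_gray_and_thresholds(p7, p1, p4):
    """Eq. (3): initial decided gray values g0(l) and thresholds t0(l) from the pilot symbols "7", "1", "4"."""
    g = np.zeros(8)
    for l in range(4, 8):                      # levels 4..7 interpolated between P4 and P7
        g[l] = p4 + (l - 4) * (p7 - p4) / 3.0
    for l in range(1, 4):                      # levels 1..3 interpolated between P1 and P4
        g[l] = p1 + (l - 1) * (p4 - p1) / 3.0
    t = np.zeros(8)                            # t[0] is unused; thresholds exist for l = 1..7
    for l in range(1, 8):
        t[l] = (g[l - 1] + g[l]) / 2.0
    return g, t

def decide_level(s_prime, t):
    """Eq. (4): decide the level of a compensated gray value against the previous thresholds."""
    if s_prime > t[7]:
        return 7
    if s_prime < t[1]:
        return 0
    for l in range(1, 7):
        if t[l] <= s_prime <= t[l + 1]:
            return l
    return 7                                    # unreachable in practice; kept as a safe fallback
```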

Afterwards, we update the decided gray values ${g_i}$ for each amplitude level based on the current point's gray value ${S_i}^\prime$. First, we update the decided gray value ${g_i}({l_i})$ of the decided level ${l_i}$ and calculate the difference ${d_i}$ between the current gray value and the previous decided gray value of that level:

$$\left\{ \begin{array}{l} {g_i}({l_i}) = {S_i}^\prime \\ d_i = {S_i}^\prime - {g_{i - 1}}({l_i}) \end{array} \right.$$

In the process of updating the decided gray value matrix, we ensure that the decided gray values for each amplitude level always satisfy the following relationship:

$$\left\{ \begin{array}{l} {g_i}(6) = \frac{{2{g_i}(7) + {g_i}(4)}}{3}\ \ \ {g_i}(3) = \frac{{2{g_i}(4) + {g_i}(1)}}{3}\\ {g_i}(5) = \frac{{2{g_i}(4) + {g_i}(7)}}{3}\ \ \ {g_i}(2) = \frac{{2{g_i}(1) + {g_i}(4)}}{3}\\ {g_i}(0) = 0 \end{array} \right.$$

In this way, we update the decided gray values of the payload data in four different cases based on the value of ${l_i}$:

  • If ${l_i}$ is equal to 1, 4, or 7, we update the decided gray values using the following equation:
    $${g_i}(l) =\left\{ \begin{array}{lllllll} {S_i}^\prime & l = {l_i}\\ {g_{i - 1}}(l) &l = 1,4 &or &7 &and &l \ne {l_i}\\ \frac{{2{g_i}(7) + {g_i}(4)}}{3}\ \ \ \ \ & l = 6\\ \frac{{{g_i}(7) + 2{g_i}(4)}}{3} & l = 5\\ \frac{{2{g_i}(4) + {g_i}(1)}}{3} & l = 3\\ \frac{{{g_i}(4) + 2{g_i}(1)}}{3} &l = 2\\ 0 & l = 0 \end{array} \right.$$
  • If ${l_i}$ is equal to 5 or 6, we update the decided gray values using the following equation:
    $${g_i}(l) = \left\{ \begin{array}{ll} {g_{i - 1}}(l) + {d_i} &4 \le l \le 7\\ {g_{i - 1}}(l) & l = 1\\ \frac{{2{g_i}(4) + {g_i}(1)}}{3} & l = 3\\ \frac{{{g_i}(4) + 2{g_i}(1)}}{3} &l = 2\\ 0 & l = 0 \end{array} \right.$$
  • If ${l_i}$ is equal to 2 or 3, we update the decided gray values using the following equation:
    $${g_i}(l) = \left\{ \begin{array}{ll} {g_{i - 1}}(l) + {d_i} &1 \le l \le 4\\ {g_{i - 1}}(l) & l = 7\\ \frac{{2{g_i}(7) + {g_i}(4)}}{3} &l = 6\\ \frac{{{g_i}(7) + 2{g_i}(4)}}{3} & l = 5\\ 0 &l = 0 \end{array} \right.$$
  • If ${l_i}$ is equal to 0, we update the decided gray values using the following equation:
    $${g_i}(l) = {g_{i - 1}}(l) \qquad 0 \le l \le 7$$

Finally, we recalculate the decision threshold matrix ${t_i}$ according to the updated ${g_i}$, thereby updating all values in the threshold matrix:

$${t_i}(l) = [\frac{{{g_i}(l - 1) + {g_i}(l)}}{2}] \qquad l = 1,2 \ldots 7$$

By comparing ${S_{i + 1}}^\prime$ with ${t_i}(l)$, we determine the symbol ${l_{i + 1}}$ from the gray value of the (i+1)-th payload data, and repeat this process until the last payload gray value.
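For concreteness, a minimal sketch of one update step of this procedure, assuming the relations of Eq. (6) already hold for ${g_{i-1}}$ (function and variable names are illustrative, not the authors' implementation):

```python
def update_gray_and_thresholds(g_prev, s_prime, level):
    """One step of the pixel matrix threshold overall update (Eqs. (5)-(11)).

    g_prev  : decided gray values g_{i-1}(0..7) of the previous symbol
    s_prime : compensated gray value S'_i of the current symbol
    level   : decided level l_i of the current symbol
    Returns the updated gray values g_i and the recomputed thresholds t_i.
    """
    g = list(g_prev)
    d = s_prime - g_prev[level]            # Eq. (5)
    if level in (1, 4, 7):                 # Eq. (7): replace the decided anchor level directly
        g[level] = s_prime
    elif level in (5, 6):                  # Eq. (8): shift the upper group (levels 4..7) by d_i
        for l in range(4, 8):
            g[l] = g_prev[l] + d
    elif level in (2, 3):                  # Eq. (9): shift the lower group (levels 1..4) by d_i
        for l in range(1, 5):
            g[l] = g_prev[l] + d
    # level 0 (Eq. (10)): g is left unchanged
    # Re-impose the linear relations of Eq. (6) on the interpolated levels
    g[6] = (2 * g[7] + g[4]) / 3.0
    g[5] = (g[7] + 2 * g[4]) / 3.0
    g[3] = (2 * g[4] + g[1]) / 3.0
    g[2] = (g[4] + 2 * g[1]) / 3.0
    g[0] = 0.0
    t = [0.0] * 8                          # t[0] unused; thresholds defined for l = 1..7
    for l in range(1, 8):
        t[l] = (g[l - 1] + g[l]) / 2.0     # Eq. (11)
    return g, t
```

Each payload symbol feeds its decided level back into this update before the next symbol is demodulated against ${t_i}$.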

3.3 Secondary decision algorithm

Because uneven illumination of the LED lamps and LED light-emitting instability are present simultaneously, we introduce a secondary decision process based on the first decision made with the algorithms described in Sections 3.1 and 3.2. Figure 7 is a magnified part of Fig. 6. From the figure, we can see that some payload points lie near the thresholds. The closer a point is to a threshold, the more likely a decision error occurs. We define points close to a threshold as unreliable data.

Fig. 7. The magnified part of Fig. 6. $ML(6) - ML(9)$ are the mid lines between the decision threshold and decided gray values for level "4".

To quantitatively distinguish between reliable and unreliable data, we add mid lines $ML$ between the decision thresholds and the decided gray values. Assume $ML$ is expressed as:

$$ML = \left[ \begin{array}{cccccc} m{l_0}(1) &m{l_1}(1) & \ldots & m{l_i}(1) & \ldots &m{l_N}(1)\\ m{l_0}(2) &m{l_1}(2) & \ldots & m{l_i}(2) & \ldots &m{l_N}(2)\\ \vdots & \vdots &\ddots & \vdots & \ddots & \vdots \\ m{l_0}(2l - 1) &m{l_1}(2l - 1) & \ldots &m{l_i}(2l - 1) & \ldots &m{l_N}(2l - 1)\\ m{l_0}(2l) &m{l_1}(2l) &\ldots &m{l_i}(2l) &\ldots &m{l_N}(2l) \\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ m{l_0}(14) &m{l_1}(14) &\ldots &m{l_i}(14) & \ldots &m{l_N}(14) \end{array}\right]$$

And the element values in $ML$ can be calculated using the following formula:

$$\left\{ \begin{array}{l} m{l_i}(2l - 1) = ({g_i}(l - 1) + {t_i}(l))/2\\ m{l_i}(2l) = ({g_i}(l) + {t_i}(l))/2 \end{array} \right. \qquad l = 1,2,\ldots,7$$
where $l$ represents the amplitude level. If the gray value ${S_i}^\prime$ is decided as ${l_i}$, it is classified as reliable data if ${S_i}^\prime$ satisfies the following:
$$\left\{ \begin{array}{l} {g_{i - 1}}(0)< {S_i}^\prime < {ml_{i - 1}}(2{l_i} + 1)\ \ \ \ \ \ \ if \ {l_i} = 0\\ {ml_{i - 1}}(2{l_i})< {S_i}^\prime < {ml_{i - 1}}(2{l_i} + 1)\ \ \ \ if \ 0 < {l_i} < 7\\ {ml_{i - 1}}(2{l_i})< {S_i}^\prime < {g_{i - 1}}(7)\ \ \ \ \ \ \ \ \ \ \ \ if \ {l_i} = 7 \end{array} \right.$$

We define data that does not satisfy the above formula as unreliable data. For example, the green box point in Fig. 7 is decided as level 4, but its gray value ${S_i}^\prime$ does not satisfy ${ml_{i - 1}}(8) < {S_i}^\prime < {ml_{i - 1}}(9)$, i.e., it does not fall between the mid lines $ML(8)$ and $ML(9)$. Therefore, we define it as unreliable data.
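A short sketch of this reliability test, assuming $ML$ is stored with indices 1..14 as in Eq. (13) (the helper names `mid_lines` and `is_reliable` are ours):

```python
def mid_lines(g, t):
    """Eq. (13): mid lines between each threshold t(l) and the decided gray values on both sides.

    ml[2l-1] lies halfway between g(l-1) and t(l); ml[2l] lies halfway between g(l) and t(l).
    Index 0 is unused so that the indices match Eq. (12).
    """
    ml = [0.0] * 15
    for l in range(1, 8):
        ml[2 * l - 1] = (g[l - 1] + t[l]) / 2.0
        ml[2 * l] = (g[l] + t[l]) / 2.0
    return ml

def is_reliable(s_prime, level, g_prev, ml_prev):
    """Eq. (14): a symbol is reliable if its gray value falls between the two mid
    lines that bracket the decided gray value of its level (g(0) and g(7) act as
    the outer bounds for levels 0 and 7)."""
    if level == 0:
        return g_prev[0] < s_prime < ml_prev[1]
    if level == 7:
        return ml_prev[14] < s_prime < g_prev[7]
    return ml_prev[2 * level] < s_prime < ml_prev[2 * level + 1]
```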

During the first decision process, we continuously save reliable data, forming the set of reliable data points, and store them in matrix ${M_{\rm {1}}}$, which is used for the secondary decision of unreliable points. Assume ${M_{\rm {1}}}$ is expressed as:

$${M_{\rm{1}}} = \left[ {\begin{array}{llcccc} {m_{1,0}}(0) &{m_{1,1}}(0)&\ldots&{m_{1,k}}(0)&\ldots &{m_{1,C}}(0)\\ {m_{1,0}}(1) &{m_{1,1}}(1) &\ldots &{m_{1,k}}(1)& \ldots&{m_{1,C}}(1)\\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ {m_{1,0}}(l) &{m_{1,1}}(l)& \ldots &{m_{1,k}}(l)& \ldots&{m_{1,C}}(l)\\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ {m_{1,0}}(7) &{m_{1,1}}(7)&\ldots &{m_{1,k}}(7)& \ldots &{m_{1,C}}(7) \end{array}} \right]$$
where $k$ represents the column where the reliable data is located, $l$ represents its decided level, and $C$ is the total number of columns in the picture, which is 3456 in this work. The elements in ${M_{\rm {1}}}$ represent the gray values of reliable data. At the beginning, all elements in ${M_{\rm {1}}}$ are null.

The secondary decision algorithm is interspersed within the pixel matrix threshold overall update algorithm, as shown in Fig. 8. After using the pixel matrix threshold overall update algorithm to calculate the decided level, we assess the reliability of the data. If the data is reliable, its payload data is output directly, and its gray value ${S_i}^\prime$ is stored in the corresponding element of matrix ${M_{\rm {1}}}$, based on the column in which the reliable data is located and the decided level obtained from the first decision. To ensure that ${M_{\rm {1}}}$ retains reliable data from only the latest demodulated set of pictures, we introduce ${M_{\rm {2}}}$ to assist in updating ${M_{\rm {1}}}$. The form of ${M_{\rm {2}}}$ is the same as that of ${M_{\rm {1}}}$. Each time reliable data is stored in ${M_{\rm {1}}}$, a copy is also stored in ${M_{\rm {2}}}$. Every time decisions are completed for $n$ pictures, we replace the data in ${M_{\rm {1}}}$ with the data in ${M_{\rm {2}}}$ and then empty ${M_{\rm {2}}}$. With this method, the number ${n_1}$ of pictures contained in ${M_{\rm {1}}}$ and the number ${n_2}$ of pictures contained in ${M_{\rm {2}}}$ when demodulating the $j$-th picture are given by:

$${n_1} = \left\{ \begin{array}{l} j - 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ j \le n + 1\\ n + (j - 1){\%} n\ \ \ j > n + 1 \end{array} \right.$$
$${n_2} = (j - 1){\%} n$$
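The double-buffer bookkeeping of ${M_{\rm {1}}}$ and ${M_{\rm {2}}}$ can be sketched as below, with NaN marking empty elements; the class and method names are our assumptions, not the authors' code:

```python
import numpy as np

class ReliableDataBuffer:
    """Rolling storage of reliable gray values (Eqs. (15)-(17)).

    M1 is used for the secondary decision and holds the reliable data of the
    most recent n to 2n-1 pictures; M2 accumulates the newest pictures and
    replaces M1 every n pictures.
    """
    def __init__(self, n_levels=8, n_columns=3456, n=50):
        self.n = n
        self.m1 = np.full((n_levels, n_columns), np.nan)   # NaN = empty element
        self.m2 = np.full((n_levels, n_columns), np.nan)
        self.pictures_done = 0

    def store_reliable(self, level, column, gray_value):
        """Store a reliable symbol in both buffers."""
        self.m1[level, column] = gray_value
        self.m2[level, column] = gray_value

    def finish_picture(self):
        """Call after each demodulated picture; roll the buffers every n pictures."""
        self.pictures_done += 1
        if self.pictures_done % self.n == 0:
            self.m1 = self.m2                              # M1 <- latest n pictures
            self.m2 = np.full_like(self.m1, np.nan)        # start a fresh M2
```

With this rotation, the picture counts held in ${M_{\rm {1}}}$ and ${M_{\rm {2}}}$ reproduce Eqs. (16) and (17).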

Fig. 8. The main process of the proposed algorithms.

From the formulas, it can be seen that the number of pictures contained in ${M_{\rm {1}}}$ is between $0$ and $n$ when we make decisions for the first $n$ pictures; otherwise, the number of pictures contained in ${M_{\rm {1}}}$ is between $n$ and $2n-1$. The ${M_{\rm {1}}}$ formed by reliable data from different numbers of pictures is shown in Fig. 9(a)–(c), and (d), (e) are the magnified parts of (b), (c). The red triangles and green diamonds represent the payload data in the picture to be demodulated, and the black dots represent reliable data stored in ${M_{\rm {1}}}$. From the figure, we can see that the ${M_{\rm {1}}}$ formed from 30 pictures has relatively few points, while the ${M_{\rm {1}}}$ formed from 200 pictures contains many outdated points. Both situations can lead to decision errors.

Fig. 9. Payload data in the picture to be demodulated and ${M_{\rm {1}}}$ formed by reliable data in different numbers of pictures. (a) formed by 30 pictures, (b) formed by 50 pictures, (c) formed by 200 pictures. (d), (e) are the magnified parts of (b), (c).

From Fig. 9, it can be seen that the unreliable points are located in the edge region. To determine a reasonable value for $n$, we calculate the quantization interval $\Delta$ and the root mean square error $\sigma$ of the gray values for the left and right 10% edge regions, considering the compensated gray values ${S_i}^\prime$. Figure 10 shows the ratio of the root mean square error $\sigma$ to the average quantization interval $\Delta$ in the edge region of the reliable dataset ${M_{\rm {1}}}$ formed from different numbers of images. It can be observed that when the number of images forming ${M_{\rm {1}}}$ is less than 50, the ratio $\sigma/\Delta$ is relatively small; this is because with fewer data in ${M_{\rm {1}}}$, the fluctuation in the gray values is less noticeable. When ${M_{\rm {1}}}$ is formed from 50 images, the ratio $\sigma/\Delta$ reaches a relatively large, representative value, indicating that ${M_{\rm {1}}}$ formed from 50 images contains sufficient data for the secondary decision. Therefore, we set $n$ to 50. This ensures the reliable data stored in ${M_{\rm {1}}}$ is sufficient, while also avoiding an excessive amount of outdated data in ${M_{\rm {1}}}$. We incorporate the calculation of $n$ into the algorithm, so that when the experimental setup or camera changes, the value of $n$ is adjusted automatically.
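For illustration, one way the ratio in Fig. 10 could be computed from the edge columns of ${M_{\rm {1}}}$ is sketched below; the per-level statistics used for $\Delta$ and $\sigma$ here are our assumptions, since the exact definitions are not spelled out:

```python
import numpy as np

def edge_sigma_over_delta(m1, edge_fraction=0.10):
    """Assumed computation of sigma/Delta over the left and right edge regions of M1.

    Delta is taken as the average gap between the mean gray values of adjacent
    levels in the edge columns; sigma is the RMS deviation of the edge-region
    gray values from their per-level means. Empty elements are NaN and ignored.
    """
    n_levels, n_cols = m1.shape
    edge = int(n_cols * edge_fraction)
    edge_vals = np.concatenate([m1[:, :edge], m1[:, -edge:]], axis=1)
    level_means = np.nanmean(edge_vals, axis=1)       # mean gray value of each level
    delta = np.nanmean(np.diff(level_means))          # average quantization interval
    sigma = np.sqrt(np.nanmean((edge_vals - level_means[:, None]) ** 2))
    return sigma / delta
```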

Fig. 10. Ratio of the root mean square error $\sigma$ to the average quantization interval $\Delta$ in the edge region of the reliable dataset ${M_{\rm {1}}}$, formed by different numbers of images.

If unreliable data appears, with the picture column being ${c}$, gray value ${S_i}^\prime$, and preliminary decision ${l_i}$, we use ${M_{\rm {1}}}$ for its secondary decision. First, we search for the elements existing in ${M_{\rm {1}}}$ and perform linear interpolation on them to calculate ${m_{1,c}}({l_i} - 1)$, ${m_{1,c}}({l_i})$ and ${m_{1,c}}({l_i} + 1)$. If there are not enough elements in ${M_{\rm {1}}}$ to calculate any of the three values mentioned above, we skip the secondary decision process for this unreliable point; generally, this occurs only for the first few pictures in the decision process. Next, if ${m_{1,c}}({l_i} - 1)$, ${m_{1,c}}({l_i})$ and ${m_{1,c}}({l_i} + 1)$ are successfully calculated, we calculate the differences between the gray value of this unreliable data and the three values:

$$\left\{ \begin{array}{l} {d_{l - 1}} = \left| {{S_i}' - {m_{1,c}}({l_i} - 1)} \right|\\ {d_l} = \left| {{S_i}' - {m_{1,c}}({l_i})} \right|\\ {d_{l + 1}} = \left| {{S_i}' - {m_{1,c}}({l_i} + 1)} \right| \end{array} \right.$$

Finally, we compare the magnitudes of these differences to obtain the final output decided level ${l_i}$:

$${l_i} = \left\{ \begin{array}{ll} {l_i} - 1 &if \ \ {d_{l - 1}} = \min ({d_{l - 1}},{d_l},{d_{l + 1}})\\ {l_i} & if \ \ {d_l} = \min ({d_{l - 1}},{d_l},{d_{l + 1}})\\ {l_i} + 1 &if \ \ {d_{l + 1}} = \min ({d_{l - 1}},{d_l},{d_{l + 1}}) \end{array} \right.$$
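A sketch of this secondary decision, assuming ${M_{\rm {1}}}$ is an $8 \times C$ array with NaN for empty elements (as in the buffer sketch above) and that a non-existent neighbouring level (below 0 or above 7) is simply dropped from the comparison:

```python
import numpy as np

def secondary_decision(s_prime, level, column, m1):
    """Re-decide an unreliable symbol using the reliable-data matrix M1 (Eqs. (18)-(19))."""
    refs = {}
    for l in (level - 1, level, level + 1):
        if l < 0 or l > 7:
            continue                                   # boundary level: drop the missing neighbour
        cols = np.flatnonzero(~np.isnan(m1[l]))        # columns holding reliable data of level l
        if cols.size == 0:
            return level                               # not enough data: keep the first decision
        refs[l] = np.interp(column, cols, m1[l, cols]) # linearly interpolate at this column
    # Eq. (19): output the level whose interpolated reference gray value is closest to S'_i
    return min(refs, key=lambda l: abs(s_prime - refs[l]))
```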

4. Results and discussions

In the experimental process, we use ${M_{\rm {2}}}$ to update ${M_{\rm {1}}}$ every time decisions are completed for $n$ pictures. The BER results obtained with different values of $n$ at different distances are shown in Fig. 11. It can be seen that the value of $n$ has no effect on the experimental results when the transmission distance is shorter than 175 cm. This is because the illumination is high, the quantization interval is large, and the data can be demodulated correctly based on the first-decision algorithms described in Sections 3.1 and 3.2. However, the value of $n$ has a relatively obvious impact on the experimental results as the transmission distance increases. If the value of $n$ is 30, ${M_{\rm {1}}}$ may contain relatively little reliable data, which could lead to decision errors, as shown in Fig. 9(a). If the value of $n$ is 100 or larger, there will be a considerable amount of outdated data in ${M_{\rm {1}}}$, also leading to decision errors.

Fig. 11. BER results obtained with different values of $n$ at 100–250 cm.

In [17], the K-MEANS algorithm was proposed to demodulate data with the initial mean values set in advance, called the pilot-aided K-means (PAK) algorithm, which achieved good results at transmission distances of 10–60 cm in its 4ASK system. In this 8CASK experiment, we calculate the gray values of each level from the pilot and use them as the initial mean values of the K-MEANS algorithm to demodulate the payload data. The training threshold matrix (TTM) algorithm was proposed in [18], in which the thresholds for different light levels are pre-trained using training data. To evaluate the performance of the uneven illumination compensation algorithm, the pixel matrix threshold overall update algorithm, and the secondary decision algorithm, our scheme is compared with the PAK and TTM methods in the 8CASK-OCC system.

Figure 12 depicts the BER obtained by these algorithms and the illuminance at the corresponding distance. Figure 12 shows that the BER is high when only the uneven illumination compensation algorithm (UIC) is used with a fixed decision threshold obtained from the pilot. When the transmission distance exceeds 175 cm, the BER exceeds the 7% pre-FEC limit, shown by the red dotted line. The BER obtained with the PAK algorithm is always higher than the 7% pre-FEC limit when the transmission distance is longer than 100 cm. With the TTM algorithm, the BER is lower than $2 \times 10^{-5}$ when the communication distance is within 175 cm. As the distance continues to increase, the BER begins to rise significantly and reaches $1.01 \times 10^{-3}$ when the communication distance reaches 250 cm and the illumination is only 135 lux. The BER obtained with our proposed algorithm (UIC-PMTOU-SDA) is $8.01 \times {10^{ - 5}}$ at a communication distance of 250 cm. The results demonstrate that the algorithm proposed in this paper achieves an order-of-magnitude improvement over the TTM algorithm.

Fig. 12. BER curves obtained by using different algorithms in the OCC system under different distances (left axis) and the corresponding illuminance values (right axis).

Table 2 displays the time consumption of the different algorithms. The offline processing is conducted on a laptop with 16 GB of memory and an AMD Ryzen 9 5900H processor. The algorithms, on the whole, have relatively long processing times, with the majority of the time spent on reading picture data and converting it into gray values. Since capturing photos with a smartphone camera involves first obtaining the three-color data of the picture and then forming the picture from that data, in the future the picture-generation step could be skipped and the three-color data acquired directly, reducing the time required to obtain the gray values. The core algorithm's execution time is relatively short, as shown in Table 2. In [18], the TTM algorithm has a relatively low time consumption due to pretraining. The algorithm proposed in this work employs a secondary decision logic, leading to longer processing times than the PAK and TTM algorithms. Nevertheless, the processing time for a single picture still meets real-time communication requirements: when the camera frame rate is 30 fps, the processing-time budget is 1/30 s per picture, whereas our proposed algorithm requires $6.5 \times {10^{ - 3}}$ s, which is significantly lower than this threshold.

Table 2. Time consumption of different algorithms

In the 8CASK-based OCC system, each picture contains about 100 payload symbols, that is, the data volume of one picture is $100 \times 3$ bits. Therefore, when the camera frame rate is 30 fps, the data rate of the 8CASK-based OCC system is 9 kbit/s, and the BER is $8.01 \times {10^{ - 5}}$ at a communication distance of 250 cm and an illumination of 135 lux.
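Spelled out, the data rate follows directly from the symbol count per picture and the camera frame rate:

$$R = 100\ \frac{\textrm{symbols}}{\textrm{picture}} \times 3\ \frac{\textrm{bits}}{\textrm{symbol}} \times 30\ \frac{\textrm{pictures}}{\textrm{s}} = 9000\ \textrm{bit/s} = 9\ \textrm{kbit/s}$$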

5. Conclusion

In this work, to mitigate the impact of uneven illumination caused by the Lambertian radiation pattern of the LEDs, we proposed the uneven illumination compensation algorithm to stabilize the data gray values. To reduce the impact of the light-intensity fluctuation generated by the LED at different times, we proposed the pixel matrix threshold overall update algorithm. In addition, we proposed the secondary decision algorithm to mitigate the combined effects of uneven illumination and unstable LED emission. By evaluating the BER performance obtained with our proposed algorithms, the PAK algorithm, and the TTM algorithm in the 8CASK-based OCC system, we show that our proposed scheme has better BER performance. The experimental results demonstrate that the communication rate of our proposed scheme can reach 9 kbit/s at a distance of 250 cm with an illumination of 135 lux, and the BER is $8.01 \times {10^{ - 5}}$. The proposed scheme is therefore advantageous for high-order amplitude modulation in OCC systems.

Funding

Science and Technology Major Project of Guangxi (2020AA21077007); Guangdong Guangxi Joint Science Key Foundation (2021GXNSFDA076001); National Natural Science Foundation of China (61971046, 62075012).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. X. Huang, Z. Wang, J. Shi, et al., “1.6 Gbit/s phosphorescent white LED based VLC transmission using a cascaded pre-equalization circuit and a differential outputs PIN receiver,” Opt. Express 23(17), 22034–22042 (2015). [CrossRef]

2. Y. Liu, C. W. Chow, K. Liang, et al., “Comparison of thresholding schemes for visible light communication using mobile-phone image sensor,” Opt. Express 24(3), 1973–1979 (2016). [CrossRef]  

3. W. C. Wang, C. W. Chow, L. Y. Wei, et al., “Long distance non-line-of-sight (NLOS) visible light signal detection based on rolling-shutter-patterning of mobile-phone camera,” Opt. Express 25(9), 10103–10108 (2017). [CrossRef]  

4. C. Danakis, M. Afgani, G. Povey, et al., “Using a CMOS camera sensor for visible light communication,” 2012 IEEE Globecom Workshops, 1244–1248 (2012).

5. C. W. Chow, C. Y. Chen, and S. H. Chen, “Visible light communication using mobile-phone camera with data rate higher than frame rate,” Opt. Express 23(20), 26080–26085 (2015). [CrossRef]  

6. R. Zhang, W.-D. Zhong, Q. Kemao, et al., “A single LED positioning system based on circle projection,” IEEE Photonics J. 9(4), 1–9 (2017). [CrossRef]  

7. M. K. Hasan, “Optical camera communication in vehicular applications: A review,” IEEE Trans. Intell. Transport. Syst. 23(7), 6260–6281 (2022). [CrossRef]  

8. S.-Y. Tsai, Y.-H. Chang, and C.-W. Chow, “Wavy water-to-air optical camera communication system using rolling shutter image sensor and long short term memory neural network,” Opt. Express 32(5), 6814–6822 (2024). [CrossRef]  

9. J. He, Z. Jiang, J. Shi, et al., “A Novel Column Matrix Selection Scheme for VLC System With Mobile Phone Camera,” IEEE Photonics Technol. Lett. 31(2), 149–152 (2019). [CrossRef]  

10. Z. Zhang, T. Zhang, J. Zhou, et al., “Performance Enhancement Scheme for Mobile-Phone Based VLC Using Moving Exponent Average Algorithm,” IEEE Photonics J. 9(2), 1–7 (2017). [CrossRef]  

11. Z. Zhang, T. Zhang, J. Zhou, et al., “Thresholding Scheme Based on Boundary Pixels of Stripes for Visible Light Communication With Mobile-Phone Camera,” IEEE Access 6, 53053–53061 (2018). [CrossRef]  

12. N. Jiang, B. Lin, and Q. Lai, “Non-line-of-sight WDM-MIMO optical camera communications with the DBPWR algorithm,” Opt. Commun. 518, 128371 (2022). [CrossRef]

13. H. Chen, X. Z. Lai, P. Chen, et al., “Quadrichromatic LED based mobile phone camera visible light communication,” Opt. Express 26(13), 17132–17144 (2018). [CrossRef]  

14. Y. Yang, J. Nie, and J. Luo, “ReflexCode: Coding with Superposed Reflection Light for LED-Camera Communication,” International Conference on Mobile Computing & Networking ACM, 193–205 (2017).

15. Y. Yang and J. Luo, “Composite amplitude-shift keying for effective LED-camera VLC,” IEEE Trans. on Mobile Comput. 19(3), 528–539 (2020). [CrossRef]  

16. H. Lv, L. Feng, A. Yang, et al., “High accuracy VLC indoor positioning system with differential detection,” IEEE Photonics J. 9(3), 1–13 (2017). [CrossRef]

17. J. Wu, X. Chi, and F. Ji, “Multi-level modulation scheme based on PAK algorithm for optical camera communications,” Optoelectron. Lett. 18(3), 152–157 (2022). [CrossRef]

18. V. P. Rachim and W. Y. Chung, “Multilevel Intensity-Modulation for Rolling Shutter-Based Optical Camera Communication,” IEEE Photonics Technol. Lett. 30(10), 903–906 (2018). [CrossRef]  

19. C. W. Chow, Y. Liu, C. H. Yeh, et al., “Display light panel and rolling shutter image sensor based optical camera communication (OCC) using frame-averaging background removal and neural network,” J. Lightwave Technol. 39(13), 4360–4366 (2021). [CrossRef]

20. Y.-S. Lin, “PAM4 rolling-shutter demodulation using a pixel-per-symbol labeling neural network for optical camera communications,” Opt. Express 29(20), 31680–31688 (2021). [CrossRef]  

21. M. V. Bhalerao, M. Sumathi, and S. S. Sonavane, “Line of sight model for visible light communication using Lambertian radiation pattern of LED,” Int. J. Commun. 30(11), e3250 (2017). [CrossRef]  

22. C.-W. Hsu, C.-H. Yeh, and C.-W. Chow, “Using adaptive equalization and polarization-multiplexing technology for gigabit-per-second phosphor-LED wireless visible light communication,” Opt. Laser Technol. 104, 206–209 (2018). [CrossRef]  

23. Y. Wang, “4.5-Gb/s RGB-LED based WDM visible light communication system employing CAP modulation and RLS based adaptive equalization,” Opt. Express 23(10), 13626–13633 (2015). [CrossRef]  

24. Y.-W. Tai, X. Chen, S. Kim, et al., “Nonlinear camera response functions and image deblurring: Theoretical analysis and practice,” IEEE Trans. Pattern Anal. Mach. Intell. 35(10), 2498–2512 (2013). [CrossRef]  

25. M. D. Grossberg and S. K. Nayar, “Modeling the space of camera response functions,” IEEE Trans. Pattern Anal. Machine Intell. 26(10), 1272–1282 (2004). [CrossRef]  

26. S. Lin and L. Zhang, “Determining the radiometric response function from a single grayscale image,” 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), Vol. 2, (IEEE, 2005).

27. C. Danakis, M. Afgani, G. Povey, et al., “Using a CMOS camera sensor for visible light communication,” 2012 IEEE Globecom Workshops, (IEEE, 2012).

28. A. El Gamal and H. Eltoukhy, “CMOS image sensors,” IEEE Circuits Devices Mag. 21(3), 6–20 (2005). [CrossRef]  
