Optica Publishing Group

Randomness evaluation of LD phase noise for use as a continuous-variable random-number generator

Open Access

Abstract

Binary random numbers with an occurrence ratio of p:(1-p) (0 < p < 1) are needed in some applications. They are obtainable if the phase noise of a laser diode (LD) is used as a continuous-variable random-number generator. The problem is the existence of extra noise in the measuring system. However, most of this noise can be removed by taking the difference between consecutively measured values. This report confirms this by evaluating the randomness of the phase noise of an LD as a continuous quantity. Our findings show that we can easily obtain spontaneous-emission-based continuous-variable random numbers that allow one to set any p.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A random number generator is a basic IT tool, and it is an important component in cryptography, Monte Carlo simulations, gaming, etc. Among the wide variety of methods for achieving a random number generator, using quantum fluctuations of light is advantageous from the viewpoints of its theoretical foundation and practicality. There are two methods for directly using quantum fluctuations of light: one is to divide single photons into two ports with a beamsplitter [1]; the other is to measure vacuum fluctuations with the homodyne method [2,3]. However, because of the difficulty of making these methods fast enough, an alternative, i.e., using amplified spontaneous emission originating from quantum fluctuations, has actively been studied; it can be accomplished by using the phase noise of a continuous-wave (cw) laser diode (LD) [4–6], the phase noise of a gain-switched LD [7,8], a super-luminescent diode [9], a fiber amplifier [10], etc.

A random number generator usually generates binary random numbers with a 1:1 ratio. However, there are applications that use binary random numbers with an occurrence ratio of p:(1-p) (0 < p < 1). For example, there is a secure communication method in which noise is intentionally added to make eavesdropping difficult, and a noise source with probability p is used, where p corresponds to a bit-error rate [11]. In this system, p is relatively small, e.g., p = 0.01 to 0.1, and it needs to be finely controlled. Binary random numbers with an arbitrary ratio can be made from continuous-variable random numbers by appropriately setting the threshold for judging 0 and 1. Fortunately, quantum fluctuations are continuous. Thus, this report evaluates the randomness of the phase noise of a cw LD as a continuous quantity. Because the phase noise is Gaussian distributed, the probability density is low in the tails. Therefore, if the threshold between 0 and 1 is set in a tail, p varies slowly as the threshold setting is changed; i.e., p can be finely controlled. In other words, the resolution of p is finer than the quantization step of the analog-to-digital (AD) converter would suggest. This property is particularly advantageous in applications such as the above-mentioned secure communications. Indeed, the prototype of the secure communication unit utilized this binary random-number generator with an occurrence ratio of p:(1-p) to intentionally set a bit-error rate of p [12].
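As a rough illustration of this thresholding, the sketch below draws standard-Gaussian samples in place of the measured phase noise and places the 0/1 threshold in the tail so that a fraction p of samples exceeds it. All names and parameter values here are our own choices, not part of the measurement system.

```python
import random
from statistics import NormalDist

def biased_bits(n, p, seed=1):
    """Turn continuous Gaussian noise into bits with ratio p:(1-p) by
    thresholding in the tail; Gaussian samples stand in for phase noise."""
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(1.0 - p)  # P(x > threshold) = p
    return [1 if rng.gauss(0.0, 1.0) > threshold else 0 for _ in range(n)]

bits = biased_bits(100_000, 0.05)
print(sum(bits) / len(bits))  # close to p = 0.05
```

Because the Gaussian density is low in the tail, a small shift of the threshold changes p only slightly, which is the fine control described above.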

Recent random number generators that take advantage of amplified spontaneous emission use an extractor that consists of algorithm-based post-processing to make the randomness complete [13,14]. The extractor removes any regularity added by the measuring system, and it outputs random numbers with a 1:1 ratio. In addition, the extractor simplifies the measuring system. The strategy of using the extractor is effective when random numbers with a 1:1 ratio are required. However, when the required random numbers are those with an occurrence ratio of p:(1-p) (p ≠ 1/2), we must use n bits of binary random numbers (n > 1), and the resolution of p is limited to 1/2ⁿ. This limitation is troublesome. Thus, it would be better to use phase noise as a continuous quantity, because even though the phase noise is digitally measured, p can be finely controlled when p is small, as mentioned above. However, when the phase noise is used as a continuous quantity, the extractor cannot simply be used, because it is an algorithm for binary random numbers with a 1:1 ratio. We need to devise a method of ensuring ideal randomness without the ordinary extractor.

The noise added by the measuring system has been discussed by Mitchell et al. [15]. It originates from the digitization and limited bandwidth of the measuring system. However, if the signal intensity is much greater than the intensity of the additional noise and the bandwidth is much larger than the sampling rate, these influences can be made negligible. Accordingly, this report evaluates the randomness of the phase noise of a cw LD by using raw data under these conditions. However, there is a correlation between consecutively measured values, due mainly to low-frequency components that come from the extra noise in the measuring system. For this reason, we take the difference between consecutively measured values as an output to remove the correlation. Thanks to this subtraction, we can judge 0 and 1 without the extra noise at the raw-data stage, which reflects the probability distribution of the phase noise of the LD; this makes the ordinary extractor unnecessary. That is the point of this report. Note that the subtraction is performed digitally to prevent the extra noise from entering at that stage.
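A minimal numerical sketch of this subtraction, with white Gaussian noise standing in for the phase noise and a slow sinusoid standing in for the low-frequency extra noise (both are our assumptions, not measured data):

```python
import math
import random

def lag1_corr(x):
    """Normalized lag-1 autocorrelation for zero-mean data."""
    num = sum(x[j] * x[j + 1] for j in range(len(x) - 1))
    return num / sum(v * v for v in x)

rng = random.Random(7)
N = 200_000
# White noise plus a slow drift mimicking low-frequency measuring-system noise.
raw = [rng.gauss(0, 1) + 0.5 * math.sin(2 * math.pi * j / 1000) for j in range(N)]
diff = [raw[j + 1] - raw[j] for j in range(N - 1)]

print(lag1_corr(raw))   # clearly nonzero: the drift correlates consecutive values
print(lag1_corr(diff))  # near -0.5, the offset expected for differential data
                        # even when the underlying values are uncorrelated
                        # (derived in Section 3.2)
```

Differencing removes the slowly varying component almost entirely, leaving only the lag-1 offset that is inherent to differential data.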

The differential method is widely used in signal processing. It is also used in random-number generators based on a chaotic LD to make the operation robust against environmental change and to make the output symmetric [16–20]. However, removing the low-frequency components of the extra noise in the measuring system has not received special attention there. This is probably because the spike-like behavior of the chaotic LD makes the low-frequency extra noise less important. In contrast, our system is sensitive to the low-frequency extra noise, as shown in Section 4; we must pay attention to removing it. The novelty of our method is to use the differential method to remove it instead of the ordinary extractor.

To show the effectiveness of taking the difference between consecutively measured values, we evaluated outputs from the phase noise of a cw LD by using the autocorrelation and a statistical test suite (NIST SP800-22 [21]). Because the latter is for evaluating binary random numbers with a 1:1 ratio, it might not sound like an appropriate way of treating the occurrence ratio of p:(1-p). However, the important point here is whether the measured phase noise is random or not. That does not depend on p. Thus, we used this test suite for one of the evaluations. In using it, the outputs from the Gaussian phase noise of the LD are mapped to a uniform distribution before testing. Because the test suite statistically judges the randomness as a pass or failure at a significance level, there may be events in which true random numbers are judged as a failure or, inversely, incomplete randomness is judged as a pass. For this reason, we do not judge the randomness from a single result, but rather by comparing the number of failure events with that expected for ideal random numbers. If there is no large difference between them, the results suggest that the test is a pass and that we can exclude an accidental misjudgment related to the significance level. The results of the autocorrelation test and the test suite suggest that the outputs for the phase noise of an LD are randomly distributed as a continuous quantity. In other words, the extra noise added by the measuring system can almost completely be removed at the accuracy level of the measurements and evaluations.

2. Principle of phase measuring

The phase of light can be measured with an asymmetric interferometer. Figure 1 schematically shows a phase-noise-measuring system. Let $\hat{a}(t )$ and $\hat{b}(t )$ be the annihilation operators of the signal and vacuum inputs into the beamsplitter 1 (BS1); let $\hat{c}(t )$ and $\hat{d}(t )$ be those of the outputs from BS1; let $\hat{e}(t )$ and $\hat{f}(t )$ be those of the outputs from the beamsplitter 2 (BS2); and let ξ and η be the conversion efficiencies of the photo-detector parts. The detector output is described as $\hat{I}(t )= \xi \;{\hat{e}^{\dagger }}(t )\hat{e}(t )- \eta \;{\hat{f}^{\dagger }}(t )\hat{f}(t )$, where $\hat{e}(t )$ and $\hat{f}(t )$ are described in terms of $\hat{a}(t )$ and $\hat{b}(t )$ through the transformation of the beam splitters [22]. Let us assume ξ = η and that the splitting ratio of BS1 and BS2 is 50:50. Then, terms of ${\hat{a}^{\dagger }}(t )\hat{a}(t )$ and ${\hat{a}^{\dagger }}({t + \tau } )\hat{a}({t + \tau } )$ in $\hat{I}(t )$ cancel, where τ is the delay time in the asymmetric interferometer. The effect of incomplete satisfaction of ξ = η and the 50:50 ratio is mentioned in section 3.1. We assume the signal intensity is sufficiently larger than quantum fluctuations. Then, terms with $\hat{b}(t )$ and ${\hat{b}^{\dagger }}(t )$ can be neglected, and

$$\hat{I}(t )\simeq - \frac{1}{2}\xi [{{{\hat{a}}^{\dagger }}({t + \tau } )\hat{a}(t )+ {{\hat{a}}^{\dagger }}(t )\hat{a}({t + \tau } )} ]$$
is obtained. Equation (1) consists of interfering terms for the two arms, and it contains phase information. In the following, we treat $\hat{a}(t )$ classically. Let a(t) = a0(t)exp[−iφ(t) − iωt], where a0(t) is selected to be real. We tune the interferometer such that ωτ = (n + 1/2)π, where n is an integer. Then,
$$I(t )\simeq {({ - 1} )^n}\xi {a_0}({t + \tau } ){a_0}(t )\sin \delta \varphi ({t,\tau } ).$$

Fig. 1. Model of phase-noise measuring system.

Here, δφ(t,τ) = φ(t + τ) – φ(t) is phase noise. Because we treat a cw LD, a0(t) is constant. The remaining parameter in Eq. (2) is δφ(t,τ), and I(t) is directly related to δφ(t,τ) through the sine function.

3. Experimental method

3.1 Measuring system

Figure 2 shows the block diagram of the measuring system. Output light from a constant current-driven cw distributed feedback (DFB) LD (Opnext LE7602LAP350S, λ = 1549 nm) was passed through a bandpass filter (Santec OTF-30M-06S2) and a variable attenuator; it was led to an asymmetric Michelson-Morley interferometer (AMM) with a 400-ps delay line (Optoplex) and received by balanced detectors. The balanced detectors consisted of two photodiodes (PD) (Eudyna FRM3Z232BS) with a preamplifier having a bandwidth of 2.45 GHz and a differential amplifier (OKI KGL4142KD) with a bandwidth of ∼11.3 GHz. The output from the balanced detectors was measured with a spectrum analyzer (HP 8563E) and digital oscilloscopes (Tektronix TDS784D with a 1-GHz bandwidth and MSO71604C with a 16-GHz bandwidth). The sampling rate fr was 0.1, 1, or 2.5 GSps. The TDS784D was used for the 0.1- and 1-GSps sampling, while the MSO71604C was used for the 2.5-GSps sampling. Because the MSO71604C cannot sample at 2.5 GSps directly, data were first sampled at 12.5 GSps and then every fifth sample was selected.

Fig. 2. Block diagram of experimental setup. The right figure shows an example of phase noise measured at 100 MSps.

The threshold current of the LD was ∼9 mA. The drive current was set to Id = 12 mA as a near-threshold condition and Id = 70 mA as a far-threshold condition. The input intensity to the PD was set to ∼−12 dBm, and the output of the differential amplifier was set to about half of the maximum output power, while the gain was finely adjusted so that the output would just cover the range of the oscilloscope. These conditions ensured linearity. The ωτ = (n + 1/2)π condition was adjusted by monitoring the tail of the output distribution; i.e., the interferometer was tuned to obtain a symmetric distribution.

In the process of obtaining Eq. (2), we assumed ξ = η and a 50:50 splitting ratio for BS1 and BS2. When these assumptions are not perfectly satisfied, extra terms, i.e., a constant DC term and an amplitude-fluctuation term, are added to Eq. (2). However, the DC term is cut by DC-blocks. The amplitude fluctuations are not only sufficiently smaller than the phase fluctuations [22]; they also have only a tiny effect in Eq. (2), because the corresponding term arises from the incompleteness of the asymmetric interferometer. Thus, they are negligible.

3.2 Autocorrelation

The randomness of the measured values xj can be evaluated through their autocorrelation. The autocorrelation is defined as $s(d )= \sum\nolimits_{j = 1}^N {({x_j} - \overline {{x_j}} )({x_{j + d}} - \overline {{x_{j + d}}} )} \big/ \sum\nolimits_{j = 1}^N {{{({x_j} - \overline {{x_j}} )}^2}}$, where d is the delay, the bar denotes the average for infinite samples, and we assume N ≫ 1. Because the DC term is cut by DC-blocks, we assume $\overline {{x_j}} = \overline {{x_{j + d}}} = 0$. In this case, s(d) simplifies to $\sum\nolimits_{j = 1}^N {{x_j}{x_{j + d}}} \big/ \sum\nolimits_{j = 1}^N {x_j^2}$. Hereafter, we will assume $\overline {{x_j}} = 0$ and omit the term. When s(d) is near zero and there is no dependence on d, the measured values are expected to be random. Whether s(d) is near zero or not can be checked by whether the fluctuations of s(d) are within the theoretical statistical fluctuations. Here, in addition to the theoretical statistical fluctuations, the mutual correlation, which is defined as $m(d )= \sum\nolimits_{j = 1}^N {{x_j}{y_j}} \big/ {\left( {\sum\nolimits_{j = 1}^N {x_j^2} \sum\nolimits_{j = 1}^N {y_j^2} } \right)^{1/2}}$ using independently measured xj and yj, is a good reference. Even though there is no correlation between xj and yj, m(d) statistically fluctuates; i.e., the statistical fluctuations can be obtained experimentally. Thus, we can check whether the fluctuations of s(d) are simply statistical or not by comparing s(d) with m(d).
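Both statistics are straightforward to compute; the sketch below implements them for zero-mean data and exercises them on simulated uncorrelated Gaussian records, which stand in for measured traces (the delay argument of the mutual correlation is omitted for simplicity).

```python
import random

def autocorr(x, d):
    """s(d) = sum_j x_j x_{j+d} / sum_j x_j^2 for zero-mean data."""
    num = sum(x[j] * x[j + d] for j in range(len(x) - d))
    return num / sum(v * v for v in x)

def mutual_corr(x, y):
    """m = sum_j x_j y_j / sqrt(sum_j x_j^2 * sum_j y_j^2) for independent records."""
    num = sum(a * b for a, b in zip(x, y))
    den = (sum(a * a for a in x) * sum(b * b for b in y)) ** 0.5
    return num / den

rng = random.Random(11)
N = 100_000
x = [rng.gauss(0, 1) for _ in range(N)]
y = [rng.gauss(0, 1) for _ in range(N)]
print(autocorr(x, 3))     # small, O(1/sqrt(N)), for uncorrelated data
print(mutual_corr(x, y))  # also small: sets the statistical reference level
```

For truly uncorrelated records, both quantities fluctuate around zero at the same statistical level, which is exactly the comparison described above.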

As shown in section 4.2, there is 1/f noise in the amplifiers of the balanced detectors. In addition, the amplifiers have a cut-off frequency on the low-frequency side, and DC-blocks are placed at the input and output terminals of the amplifiers. If the low-frequency components of the random noise are cut off, even the originally uncorrelated consecutive random-noise signals will have a correlation in accordance with the lack of low-frequency components. The correlation and the low-frequency components due to the 1/f noise can be removed by taking the difference between consecutive outputs, because the sampling rate is sufficiently larger than the frequency of the concerned components (see Fig. 6). Thus, we also tested the autocorrelation of δxj = xj+1 − xj. Here, we will consider the case of d = 1. Let us suppose the ideal case of $\overline {{x_{j + d}}{x_j}} $ = 0 (d ≠ 0) and $\overline {x_{j + d}^2} $=$\overline {x_j^2} $, where we assume fr ≤ 1/τ. Then,

$$\overline {\sum\limits_j {\delta x_j^2} } = \overline {\sum\limits_j {{{({{x_{j + 1}} - {x_j}} )}^2}} } = \overline {\sum\limits_j {({x_{j + 1}^2 - 2{x_{j + 1}}{x_j} + x_j^2} )} } = 2\sum\limits_j {\overline {x_j^2} } ,$$
$$\overline {\sum\limits_j {\delta {x_j}\delta {x_{j + 1}}} } = \overline {\sum\limits_j {({{x_{j + 1}} - {x_j}} )({{x_{j + 2}} - {x_{j + 1}}} )} } = \overline {\sum\limits_j {({{x_{j + 1}}{x_{j + 2}} - x_{j + 1}^2 - {x_j}{x_{j + 2}} + {x_j}{x_{j + 1}}} )} } = - \sum\limits_j {\overline {x_{j + 1}^2} } .$$

Thus,

$$s(1 )= \overline {\sum\limits_j {\delta {x_j}\delta {x_{j + 1}}} \Big/ \sum\limits_j {\delta x_j^2} } = \overline {\sum\limits_j {\delta {x_j}\delta {x_{j + 1}}} } \Big/ \overline {\sum\limits_j {\delta x_j^2} } = - 0.5.$$
Here, because the denominator is simply a normalization factor, we treated the denominator and numerator independently in the averaging. Equation (5) indicates that there is an offset in s(d) at d = 1 even if the outputs are random. This offset should be removed from the autocorrelation evaluation, and it was removed from the experimental results (see Fig. 7). There is no offset related to Eq. (4) for d > 1.

Statistical fluctuations of s(d) are estimated as follows for the ideal case by assuming that xj and xj+d are uncorrelated and that the average is $\overline {{x_j}} = 0$. Then, the variance of s(d) is $\sigma_s^2 = \overline { {{\left( {\sum\nolimits_{j = 1}^N {{x_j}{x_{j + d}}} } \right)}^2} \Big/ {{\left( {\sum\nolimits_{j = 1}^N {x_j^2} } \right)}^2} }$. We again treat the denominator and numerator independently. Because the average of $\sum\nolimits_{j = 1}^N {x_j^2}$ is O(N) and its standard deviation is O(√N), the fluctuations of the denominator can be neglected. Then, $\sum\nolimits_{j = 1}^N {x_j^2} \simeq N\overline {x_j^2}$, and $\sigma_s^2 \simeq \overline { {{\left( {\sum\nolimits_{j = 1}^N {{x_j}{x_{j + d}}} } \right)}^2} } \Big/ {({N\overline {x_j^2} } )^2} = N\overline {x_j^2} \cdot \overline {x_{j + d}^2} \Big/ {({N\overline {x_j^2} } )^2} = 1/N$.
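This 1/N scaling can be checked with a small Monte Carlo experiment on simulated Gaussian data (the sample sizes are arbitrary choices of ours):

```python
import random
from statistics import pvariance

def s(x, d):
    """Autocorrelation s(d) for zero-mean data."""
    return sum(x[j] * x[j + d] for j in range(len(x) - d)) / sum(v * v for v in x)

rng = random.Random(4)
N, trials = 2000, 400
# Estimate the variance of s(d) over many independent records.
vals = [s([rng.gauss(0, 1) for _ in range(N + 2)], 2) for _ in range(trials)]
print(pvariance(vals) * N)  # close to 1, i.e., the variance of s(d) is about 1/N
```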

The case of differential data can be calculated similarly. First, let us treat the case of d ≥ 2. The variance of the autocorrelation is $\sigma_{sd}^2 = \overline { {{\left( {\sum\nolimits_{j = 1}^N {\delta {x_j}\delta {x_{j + d}}} } \right)}^2} \Big/ {{\left( {\sum\nolimits_{j = 1}^N {\delta x_j^2} } \right)}^2} }$. We again treat the denominator and numerator independently and neglect the fluctuations of the denominator. Then, $\sigma_{sd}^2 \simeq \overline { {{\left( {\sum\nolimits_{j = 1}^N {\delta {x_j}\delta {x_{j + d}}} } \right)}^2} } \Big/ {({2N\overline {x_j^2} } )^2}$. The numerator is

$$\begin{array}{l} \overline {{{\left( {\sum\limits_{j = 1}^N {\delta {x_j}\delta {x_{j + d}}} } \right)}^2}} = \overline {{{\left[ {\sum\limits_{j = 1}^N {({{x_{j + 1}}{x_{j + 1 + d}} - {x_{j + 1}}{x_{j + d}} - {x_j}{x_{j + 1 + d}} + {x_j}{x_{j + d}}} )} } \right]}^2}} \\ = \sum\limits_{j = 1}^N {({\overline {x_{j + 1}^2x_{j + 1 + d}^2} + \overline {x_{j + 1}^2x_{j + d}^2} + \overline {x_j^2x_{j + 1 + d}^2} + \overline {x_j^2x_{j + d}^2} } )} + 2\sum\limits_{j = 2}^N {\overline {x_j^2x_{j + d}^2} } \\ = 4N{\overline {x_j^2} ^2} + 2({N - 1} ){\overline {x_j^2} ^2} \simeq 6N{\overline {x_j^2} ^2}. \end{array}$$
Thus, σsd² ≃ 3/(2N). The case of d = 1 is calculated similarly, with the offset of Eq. (4) subtracted, i.e., $\sigma_{sd}^2 \simeq \left[ {\overline { {{\left( {\sum\nolimits_{j = 1}^N {\delta {x_j}\delta {x_{j + 1}}} } \right)}^2} } - {{\left( {\overline {\sum\nolimits_{j = 1}^N {\delta {x_j}\delta {x_{j + 1}}} } } \right)}^2}} \right] \Big/ {({2N\overline {x_j^2} } )^2}$. Here, $\overline {\sum\nolimits_{j = 1}^N {\delta {x_j}\delta {x_{j + 1}}} } = - N\overline {x_j^2}$, and
$$\begin{array}{l} \overline {{{\left( {\sum\limits_{j = 1}^N {\delta {x_j}\delta {x_{j + 1}}} } \right)}^2}} = \overline {{{\left[ {\sum\limits_{j = 1}^N {({{x_{j + 1}}{x_{j + 2}} - x_{j + 1}^2 - {x_j}{x_{j + 2}} + {x_j}{x_{j + 1}}} )} } \right]}^2}} \\ = \sum\limits_{j = 1}^N {({\overline {x_{j + 1}^2x_{j + 2}^2} + \overline {x_j^2x_{j + 2}^2} + \overline {x_j^2x_{j + 1}^2} } )} + 2\sum\limits_{j = 2}^N {\overline {x_j^2x_{j + 1}^2} } + \overline {\sum\limits_{j = 1}^N {x_{j + 1}^2} \sum\limits_{j = 1}^N {x_{j + 1}^2} } \\ = 3N{\overline {x_j^2} ^2} + 2({N - 1} ){\overline {x_j^2} ^2} + N({\overline {x_j^4} - {{\overline {x_j^2} }^2}} )+ {N^2}{\overline {x_j^2} ^2}\\ \simeq 7N{\overline {x_j^2} ^2} + {N^2}{\overline {x_j^2} ^2}, \end{array}$$
where NN – 1 and $\overline {x_j^4} $=$3{\overline {x_j^2} ^2}$ for the Gaussian distribution were used at the last line. Thus, σsd2 ≃ 7/(4N). For the mutual correlation, the variance is simply σm2 ≃ 1/N; moreover, it is the same for the differential data, i.e., σmd2 ≃ 1/N.

3.3 Evaluation with test suite

Several test suites for evaluating binary random numbers have been developed, and NIST SP800-22 is one of the best known among them [21]. Because the test suite is for evaluating binary random numbers with a 1:1 ratio, it cannot be used directly to evaluate the intended random numbers with the occurrence ratio of p:(1-p). However, the important point is whether the phase noise as a whole is random or not. That does not depend on p. Thus, the test suite can be used by appropriately transforming the measured phase noise into binary values. We used the following transformation for that purpose.

Values measured with the oscilloscope consist of 8 bits. However, the amount of information is not 8 bits, because the distribution is not uniform. The entropy $I_e = -\sum\nolimits_i p_i \log_2 p_i$ obtained from the probability distribution of the sampled data is about 4.5 bits. Therefore, we transformed the 8-bit data into 4-bit data. Because 4 bits almost exactly match the entropy of the measured 8-bit data, the evaluation almost correctly reflects the whole characteristics of the phase noise.
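The entropy calculation itself is simple; the following sketch applies it to simulated 8-bit readings. The Gaussian width is our own choice, picked so the entropy lands near the ~4.5 bits quoted above; it is not a measured value.

```python
import math
import random
from collections import Counter

def entropy_bits(samples):
    """Shannon entropy -sum_i p_i log2 p_i of the empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

rng = random.Random(5)
# Gaussian noise quantized to 8-bit oscilloscope codes (0..255); the width
# sigma = 5.5 is an assumption chosen to give roughly 4.5 bits of entropy.
data = [min(255, max(0, round(128 + rng.gauss(0, 5.5)))) for _ in range(500_000)]
print(entropy_bits(data))  # about 4.5 bits, well below the 8-bit maximum
```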

The most direct method of transforming the data is to divide the probability distribution into 16 regions, as shown in Fig. 3(a), where the boundaries of the regions are determined such that the integrated probability of each region is the same. However, it is difficult to obtain a uniform distribution for the transformed 4-bit values owing to the insufficient resolution of the digitization. For this reason, we mapped the 8-bit values to 4-bit values on the basis of the probability distribution, as shown in Fig. 3(b). In SP800-22, 10⁶ random numbers compose one unit in a test, and 1000 or 2000 units are used in one test. Thus, we first calculated the probability distribution for 1000×10⁶/4 or 2000×10⁶/4 samples. Next, the 8-bit values were mapped to different 4-bit values in descending order of probability up to the 16th value, and the 17th-to-32nd values were mapped to 4-bit values in the reverse order to equalize the probability distribution. After that, each remaining 8-bit value was mapped, in descending order, to the 4-bit value with the smallest accumulated probability at that point. Through this mapping, the 4-bit values became almost uniformly distributed. Although this mapping process is exact only for infinite samples, we performed it on 1000×10⁶/4 or 2000×10⁶/4 samples; the process is therefore an approximation, but these sample sizes are sufficiently large for it. Here, the nonuniformity of the AD converter in the oscilloscope was automatically compensated for in the transformation of the 8-bit values into 4-bit values.
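The descending / reverse / least-loaded assignment described above can be sketched as a greedy balancing procedure. This is our illustrative reconstruction, not the authors' code, and the simulated 8-bit readings are our assumption:

```python
import random
from collections import Counter

def build_map(samples):
    """Greedy 8-bit -> 4-bit mapping that balances the 16 bin probabilities:
    ranks 1-16 get distinct bins, ranks 17-32 fill the bins in reverse order,
    and every later value goes to the currently least-loaded bin."""
    counts = Counter(samples)
    ordered = [v for v, _ in counts.most_common()]  # descending probability
    mapping, load = {}, [0] * 16
    for rank, v in enumerate(ordered):
        if rank < 16:
            bin_ = rank                   # 1st-16th values: distinct bins
        elif rank < 32:
            bin_ = 31 - rank              # 17th-32nd values: reverse order
        else:
            bin_ = load.index(min(load))  # rest: least-loaded bin so far
        mapping[v] = bin_
        load[bin_] += counts[v]
    return mapping

rng = random.Random(2)
# 8-bit Gaussian-like readings stand in for the measured data.
data = [min(255, max(0, round(128 + rng.gauss(0, 30)))) for _ in range(400_000)]
mapping = build_map(data)
freq = Counter(mapping[v] for v in data)
print(max(freq.values()) / min(freq.values()))  # close to 1: nearly uniform bins
```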

Fig. 3. How to translate 256-value fluctuations into 16-value random numbers. (a) Simple way. (b) Way of finely equalizing the probability distribution of the 4-bit random numbers.

As mentioned above, because there is a correlation between consecutively measured values owing to the 1/f noise and the low-frequency cutoff in the amplifiers, the randomness of the raw data is insufficient. Therefore, the correlation was removed by taking the difference between consecutively measured values, as in the autocorrelation evaluation. Although n differential data can be made from n + 1 measured values, we made each differential datum from its own two consecutively measured values; i.e., we used 2n measured values for n differential data. The reason is as follows. The differential data are 9 bits as a result of the subtraction. Because the original 8-bit data are transformed into 9-bit data, the differential data would have artificial redundancy if only n + 1 measured values were used. To avoid this artificial redundancy, we used 2n samples for n differential data.
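The non-overlapping pairing described above is simple to write down; a minimal sketch with toy integer samples:

```python
def nonoverlapping_diffs(x):
    """Build n differential data from 2n measured values, pairing samples
    (x0,x1), (x2,x3), ... so no measured value is reused between outputs."""
    return [x[2 * k + 1] - x[2 * k] for k in range(len(x) // 2)]

print(nonoverlapping_diffs([5, 9, 1, 4, 7, 7]))  # [4, 3, 0]
```

In contrast, overlapping differences (x1−x0, x2−x1, …) would reuse each sample twice, introducing the artificial redundancy mentioned above.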

3.4 About NIST SP800-22

NIST SP800-22 consists of 15 kinds of tests, and the total number of test terms is 188. The simplest test checks the numbers of 0s and 1s. Another example checks the appearance probability of specific patterns. The total number of terms is large compared with the number of kinds because the number of specific patterns is large. In each test, a quantity called the p-value is evaluated for a unit of 10⁶ random numbers. The p-value quantifies the deviation from randomness for each unit. It is defined as a value ranging from 0 to 1 for every kind of test. The definition has the property that the probability of the p-value being less than or equal to x is x for ideal random numbers. The p-values were evaluated for 1000 or 2000 units in each test, and the distribution of p-values was obtained. This evaluation was performed on all 188 terms. The obtained distribution was compared with the ideal case from the viewpoints of uniformity and proportion.

Uniformity is quantified as follows. The p-value range is divided into ten regions, in which ideal random numbers are equally distributed on average; i.e., the expectation for each region is 100 for 1000-unit samples. Next, the distribution over the regions is determined for the measured samples, and the deviations from the expectation are quantified with a quantity called the P-valueT, where 0 ≤ P-valueT ≤ 1 and the probability of the P-valueT being less than or equal to x is x for ideal random numbers. A typical criterion for passing the uniformity test is P-valueT ≥ 0.0001, which corresponds to a test with a significance level of 0.01%. To pass the uniformity test, P-valueT ≥ 0.0001 must be satisfied for all 188 terms. Because 0.9999¹⁸⁸ = 0.981…, the test as a whole has a significance level of 1.9% [23]. As is apparent from the concept of a significance level, even true random numbers sometimes fail the test. Therefore, we did not judge randomness simply from the obtained P-valueT; rather, we did so comprehensively by checking the failure probability at the significance level.
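The whole-suite significance level quoted here follows directly from combining the 188 per-term tests:

```python
# Each of the 188 terms passes with probability 0.9999 for ideal random
# numbers (per-term significance 0.01%), so for the whole suite:
pass_all = 0.9999 ** 188
print(round(pass_all, 3))      # 0.981
print(round(1 - pass_all, 3))  # 0.019, i.e., about 1.9%
```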

The proportion test typically counts the cases in which the p-value is ≥ 0.01. The probability of a p-value being equal to or greater than 0.01 is 99% for ideal random numbers, and the counts with a p-value ≥ 0.01 follow a binomial distribution with p = 0.99. The proportion test checks whether the counts with a p-value ≥ 0.01 are in the 3σ range of this binomial distribution. The 3σ range for 1000 samples is 980 ≤ counts ≤ 1000, and the probability of a sample being in the 3σ range is 0.9985…. Because 0.9985…¹⁸⁸ = 0.7591…, the test as a whole has a significance level of 24%. As with the uniformity test, randomness is judged comprehensively by checking the failure probability at the significance level.
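The quoted range and significance can be reproduced from the binomial model; a quick numerical check, with the normal 3σ bounds rounded outward to the integer range given in the text:

```python
import math

# Counts of units (out of 1000) with p-value >= 0.01 follow a binomial
# distribution with p = 0.99; the 3-sigma acceptance range is:
n, p = 1000, 0.99
sigma = math.sqrt(n * p * (1 - p))
low = math.floor(n * p - 3 * sigma)          # 980
high = min(n, math.ceil(n * p + 3 * sigma))  # 1000
print(low, high)

# Exact binomial probability of landing in that range, and the probability
# that all 188 terms do so for ideal random numbers:
prob_in = sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
              for k in range(low, high + 1))
print(round(prob_in, 4))         # about 0.9985
print(round(prob_in ** 188, 3))  # about 0.759, i.e., a ~24% significance level
```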

4. Experimental results

4.1 Probability distribution

Figure 4 plots the probability distribution of the measured phase noise. The number of data was 250×10⁶, and the sampling rate was 0.1, 1, or 2.5 GSps. The results for each sampling rate are shown together with Gaussian curves as a guide. Because the phase noise of an LD originates from many spontaneous-emission events, it is expected to be Gaussian distributed in accordance with the central limit theorem. As shown, when Id = 70 mA, the phase noise is small enough that sinδφ(t,τ) ≃ δφ(t,τ) holds in Eq. (2), and the experimental results fit Gaussian curves. When Id = 12 mA, the linear approximation of the sine function is not applicable, and the probability distribution drops at the tails. There is no dependence on the sampling rate. As will be shown in sections 4.3 and 4.4, the tails have no effect on the randomness of the measured phase noise; the shape of the distribution is not essential for randomness. The measured phase noise is sufficiently ideal with respect to the theoretical probability distribution.
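The drop at the tails can be reproduced qualitatively by passing Gaussian phase noise through the sine of Eq. (2); the noise widths below are arbitrary choices of ours, not measured values:

```python
import math
import random

def output_kurtosis(sigma, n=200_000, seed=9):
    """Kurtosis m4/m2^2 of sin(dphi) for Gaussian dphi of width sigma;
    it is 3 for a Gaussian output and smaller when the sine compresses
    the tails of the distribution."""
    rng = random.Random(seed)
    vals = [math.sin(rng.gauss(0.0, sigma)) for _ in range(n)]
    m2 = sum(v * v for v in vals) / n
    m4 = sum(v ** 4 for v in vals) / n
    return m4 / m2 ** 2

print(output_kurtosis(0.1))  # near 3: linear regime, Gaussian shape survives
print(output_kurtosis(1.0))  # well below 3: the tails of the distribution drop
```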

Fig. 4. Probability distribution of phase noise. Gaussian curves are also drawn. Data were acquired at three different sampling rates. Measured phase noise is ideally distributed with respect to probability distribution.

4.2 Noise spectra

Figures 5(a) and 5(b) show noise spectra at Id = 12 mA and 70 mA. Plots labeled “Phase noise” show total noise, and plots labeled “Amp. noise” show the case without input light, i.e., noise from the PD and the following amplifiers. The difference between these two plots is the net phase noise. The phase noise at the near-threshold condition in Fig. 5(a) is larger than that in Fig. 5(b). The dip at 2.5 GHz comes from an interference effect of the asymmetric interferometer.

Fig. 5. Power spectrum of phase noise.

Although Fig. 5 shows a smooth frequency dependence without any structure, there is a peak near zero frequency. This peak is more apparent in Fig. 5(b) than in Fig. 5(a) because the net phase noise is smaller in Fig. 5(b) than in Fig. 5(a). Figure 6 shows the spectrum expanded near zero frequency. The extra noise is mainly at 20 MHz and consists of many line spectra, although they are not resolved in Fig. 6(a). The extra noise seems to come from the temperature controller for the LD. If so, it could be checked by turning the temperature controller on and off. However, when the LD is not temperature-controlled, the operating wavelength cannot be maintained, and we cannot obtain an interference signal. Thus, we measured the intensity noise instead, which is shown in Fig. 6(b). The noise near 20 MHz could be made to disappear by turning the temperature controller off. However, the extra noise at 3–8 MHz and 10–15 MHz did not disappear even when the controller was turned off, although this is not clearly shown in Fig. 6(b). This noise is expected to come from the laser driver. Although these extra noise sources exist as long as the LD operates, a large part of them can be removed by taking the difference between consecutively measured values, because the sampling rate is 100 MSps or more. In addition, the bandwidth of the extra noise is negligibly narrow compared with the whole bandwidth of the random noise. Thus, the remaining extra noise is negligible for the differential data.

Fig. 6. (a) Power spectrum of phase noise. The horizontal axis of the inset is a log scale. (b) Power spectrum of intensity noise. The extra noise at 20 MHz comes from the temperature controller.

The amplifier noise grows on the low-frequency side, as shown in Fig. 6(a). It consists of 1/f noise, as is apparent from the inset. This noise, too, can be removed by taking the difference between consecutively measured values.
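As an illustrative sketch (ours, not the authors' code; the extra noise is modeled as a hypothetical slow sinusoid), the following Python snippet shows why differencing consecutive samples suppresses low-frequency extra noise while leaving the white component intact:

```python
import math
import random

random.seed(1)
N = 100_000
# White Gaussian "phase noise" plus a slow sinusoid standing in for
# low-frequency extra noise (e.g., temperature-controller pickup).
x = [random.gauss(0.0, 1.0) + 3.0 * math.sin(2 * math.pi * j / 1000)
     for j in range(N)]
dx = [x[j + 1] - x[j] for j in range(N - 1)]  # differential data

def lag1_corr(v):
    """Normalized lag-1 autocorrelation of a sequence."""
    m = sum(v) / len(v)
    num = sum((v[j] - m) * (v[j + 1] - m) for j in range(len(v) - 1))
    den = sum((s - m) ** 2 for s in v)
    return num / den

# Raw data: strongly correlated because of the slow component.
# Differential data: the slow component is almost entirely removed,
# leaving the roughly -0.5 lag-1 correlation of differenced white noise.
print(lag1_corr(x), lag1_corr(dx))
```

The slow sinusoid changes by only a tiny amount between consecutive samples, so its contribution to the differential data is negligible while the white component survives.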

4.3 Autocorrelation

Figure 7(a) shows the autocorrelation of the raw data, and Fig. 7(b) shows that of the differential data. For comparison, the mutual-correlation data are shown in the insets; N = 1, 2, 3, 4, and 5×10^5 for the mutual correlation and N = 4×10^5 for the autocorrelation.

Fig. 7. Autocorrelation. Insets show mutual correlation. The horizontal axis in the insets is the number of data used for calculating the mutual correlation, i.e., 1, 2, 3, 4, and 5×10^5. (a) Autocorrelation for raw data. Autocorrelation is large in the small-delay region owing mainly to the low-frequency extra noise in the measuring system. (b) Autocorrelation for differential data. The correlation due to the extra noise in the measuring system is mostly removed by differentiating the consecutively measured values. When the LD is driven at a near-threshold current, i.e., Id = 12 mA, the variance of the autocorrelation is almost constant except for the case of 2.5 GSps sampling. The dotted lines indicate 2σ_sd (or 2σ_md in the inset) for the ideal case.

The autocorrelation of the raw data is generally large in the small-delay region (Fig. 7(a)). The main causes are the 1/f noise, the low-frequency cutoff, and the LD-driver- and temperature-controller-related noise. This correlation can be mostly removed by taking the difference between consecutively measured values (Fig. 7(b)). When Id = 12 mA and fr = 100 MSps or 1 GSps (plots (1) and (2) in Fig. 7(b)), the variance of the autocorrelation is almost constant at a level of 2σ_sd. Because the mutual correlation is less than 2σ_md, the variance is slightly larger than that of the mutual correlation. However, the difference is minor, suggesting that the randomness is almost ideal at the accuracy of the measurements. Under the other conditions, by contrast, the autocorrelation in the small-delay region deteriorates. Generally, the autocorrelation for Id = 12 mA shows better performance than that for Id = 70 mA. This is because the phase noise for Id = 12 mA is larger than that for Id = 70 mA, as can be seen in Figs. 5(a) and 5(b), so the correlation due to the extra noise affects the autocorrelation more strongly at Id = 70 mA. Moreover, even when Id = 12 mA, the autocorrelation for fr = 2.5 GSps deteriorates in the small-delay region (plot (3) in Fig. 7(b)). This is due to the bandwidth limitation. As shown in Fig. 5(a), the measured phase noise has a dip at 2.5 GHz owing to the interference effect of the asymmetric interferometer, and the width of the spectrum is insufficient for 2.5-GSps sampling.

The above results suggest that when an LD is driven near the threshold current and the sampling rate is less than the bandwidth, the phase noise is almost ideally random at the accuracy of the measurements.
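The autocorrelation check can be sketched numerically for the ideal case (our own simulation, with white Gaussian samples standing in for ideal phase noise; σ_sd ≈ sqrt(3/(2N)) is our estimate of the ideal-case standard deviation of the normalized autocorrelation from the variance formulas):

```python
import math
import random

random.seed(2)
N = 100_000
# Ideal white Gaussian samples as a stand-in for ideal phase noise.
x = [random.gauss(0.0, 1.0) for _ in range(N + 20)]
dx = [x[j + 1] - x[j] for j in range(N + 10)]  # differential data

def s(d):
    """Normalized autocorrelation of the differential data at delay d."""
    num = sum(dx[j] * dx[j + d] for j in range(N))
    den = sum(dx[j] ** 2 for j in range(N))
    return num / den

sigma_sd = math.sqrt(3.0 / (2.0 * N))  # our ideal-case estimate, ~0.0039 here
# s(1) sits near -0.5 by construction of the differencing; s(d >= 2) should
# fluctuate within a band of about 2*sigma_sd around zero.
print(s(1), [s(d) for d in range(2, 6)])
```

This reproduces the qualitative picture of plot (1)/(2) in Fig. 7(b): a constant-variance autocorrelation for the differential data when the noise is ideal.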

4.4 Evaluation using NIST SP800-22

Table 1 shows the results of the NIST SP800-22 tests. The tests were done twice at each sampling rate for 1000-unit samples. A test on 2000-unit samples was also done for the differential data by combining two 1000-unit data sets. Table 1 indicates the number of failures among the 188 test terms with respect to uniformity (U) and proportion (P); entries of “0” mean “pass.” There are many failures for the raw data, particularly at 2.5 GSps sampling. One cause is the correlation due to the extra noise; the other is the bandwidth limitation.

Table 1. Results of NIST SP800-22 test.

The correlation can be removed by taking the difference between consecutively measured values. The effect of this differentiation is clearly seen in Table 1(a). Here, the effect of the bandwidth limitation is removed as well, and there is no clear dependence on the measuring conditions, i.e., the sampling rate and drive current. The reason why the effect of the bandwidth limitation is removed is that the main frequency component cut off is at 2.5 GHz, and the lack of this originally existing 2.5-GHz component is what causes the correlation between consecutively measured values. Because a 2.5-GHz component sampled at 2.5 GSps is in phase between consecutive samples, it is removed by taking the difference.
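The in-phase argument can be illustrated with a toy numerical check (ours, not from the paper): a 2.5-GHz component sampled at exactly 2.5 GSps lands at the same phase on every sample, so it aliases to a constant and differencing cancels it.

```python
import math

f_tone = 2.5e9      # hypothetical 2.5-GHz component
f_s = 2.5e9         # sampling rate, 2.5 GSps
phi = 0.7           # arbitrary phase offset

# Consecutive samples of the tone are identical (up to rounding error)
# because f_tone / f_s is an integer: the tone aliases to DC.
samples = [math.sin(2 * math.pi * f_tone * (j / f_s) + phi)
           for j in range(100)]
diffs = [samples[j + 1] - samples[j] for j in range(99)]
print(max(abs(d) for d in diffs))
```

Any component whose frequency is an integer multiple of the sampling rate behaves this way, which is why the missing 2.5-GHz band does not survive in the differential data.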

In Table 1(a), the uniformity test for the differential data was passed in every case regardless of the conditions. However, there are always statistical events in which true random numbers are judged as failures and incomplete random numbers are judged as passes. Therefore, random numbers should not be judged test by test, but rather statistically as a whole.

The criterion for passing the uniformity test in Table 1(a) is P-value_T ≥ 0.0001. This criterion means that the probability of failure is 1/10000 for ideal random numbers. Because the number of test terms is 188, the criterion might be too loose. Thus, we changed the criterion to P-value_T ≥ 0.01 (Table 1(b)). This stricter criterion leads to failures even for ideal random numbers once every 100 terms on average. The results in Table 1(b) have statistical fluctuations, and any dependence on the measuring conditions would be buried in those fluctuations if it existed. Thus, we collectively evaluated the failure probability without distinguishing the measuring conditions. The failure events numbered 24 for the 1000-unit case and 7 for the 2000-unit case. The failure probabilities were thus 24/(12×188) = 0.01063… and 7/(6×188) = 0.006205…, respectively. These values are near or less than the 0.01 expected for ideal random numbers. They are summarized in Table 2(a).
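The quoted failure rates are simple ratios over 12 (or 6) sets of 188 test terms and can be checked directly:

```python
# 12 data sets of 188 test terms each for the 1000-unit case,
# 6 sets for the 2000-unit case.
p_1000 = 24 / (12 * 188)   # 24 uniformity failures over 2256 terms
p_2000 = 7 / (6 * 188)     # 7 failures over 1128 terms
print(p_1000, p_2000)      # both near or below the ideal rate of 0.01
```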

Table 2. Failure counts and rates in NIST SP800-22 test.

The results of the proportion test can be analyzed similarly. The failure events in the proportion test for the differential data in Table 1(a) totaled 4 for the 1000-unit case and 1 for the 2000-unit case. The failure probabilities were 0.001773… and 0.0008865…, respectively. The failure probabilities for the 3σ criterion were 0.001496… for 1000 units and 0.001474… for 2000 units. Here, the range of 3σ was 980 ≤ proportion ≤ 1000 for 1000 units and 1966 ≤ proportion ≤ 1994 for 2000 units. The results are summarized in Table 2(b).

Only a few failure events occurred under the 3σ criterion. Their number can be increased by tightening the criterion to 2σ. The results are shown in Table 2(b). The range of 2σ is 983 ≤ proportion ≤ 997 for 1000 units and 1971 ≤ proportion ≤ 1989 for 2000 units.
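The 2σ and 3σ proportion ranges quoted above follow from the usual SP800-22 acceptance band p̂ ± kσ with p̂ = 0.99 and σ = sqrt(p̂(1−p̂)/n). A short script (ours, assuming the bounds are rounded outward to integers and capped at the sample count) reproduces them:

```python
import math

def proportion_range(n, k, p_hat=0.99):
    """k-sigma acceptance band for the SP800-22 proportion test
    with n samples and per-test pass probability p_hat."""
    sigma = math.sqrt(p_hat * (1 - p_hat) / n)
    lo = math.floor(n * (p_hat - k * sigma))          # round outward (down)
    hi = min(n, math.ceil(n * (p_hat + k * sigma)))   # round outward, cap at n
    return lo, hi

print(proportion_range(1000, 3))  # 3-sigma band, 1000 units
print(proportion_range(2000, 3))  # 3-sigma band, 2000 units
print(proportion_range(1000, 2))  # 2-sigma band, 1000 units
print(proportion_range(2000, 2))  # 2-sigma band, 2000 units
```

Under this outward-rounding assumption, the computed bands match the four ranges stated in the text.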

The analysis so far has been a two-sided test. However, the deviation of incomplete random numbers will generally be toward the lower side. Thus, a one-sided test was also done for the 2σ criterion. Those results are also shown in Table 2(b).

The test results in Table 2 suggest that differential data have a failure probability near the ideal case on the whole. Although NIST SP800-22 has room for improvement [23], the obtained results support the randomness of the differential data given the use of the present test suite.

5. Discussion

The phase noise of LDs operating near the threshold is dominated by noise originating from spontaneous emission [24–28]. Therefore, when the phase noise is much larger than the noise of the measuring system, we can in principle measure ideal random phase noise. This condition is satisfied under ordinary experimental conditions; as shown in Fig. 5(a), there is a large difference between the phase-noise and amplifier-noise intensities. The problem is the correlation due to the extra noise in the measuring system. However, it can be removed by taking the difference between consecutively measured values. The effect of this subtraction is clearly seen in Fig. 7(b). The other concern is the bandwidth limitation of the measuring system. As shown in Fig. 5, the system does not have sufficient bandwidth for 2.5-GSps sampling. Indeed, the autocorrelation for 2.5-GSps sampling deteriorates in the small-delay region, as shown in Fig. 7(b), so this result is reasonable. Because the bandwidth is sufficient for 0.1- and 1-GSps sampling, the autocorrelation shows no delay dependence in those cases (plots (1) and (2) in Fig. 7(b)). These results indicate that the measured phase noise of an LD is almost ideally random if the following conditions are satisfied: the LD is driven at a near-threshold current, the signal intensity is much larger than the amplifier noise of the measuring system, the necessary bandwidth is assured in the measuring system, and the correlation due to the extra noise is removed. The results of the NIST SP800-22 tests support this conclusion. We should note, however, that the NIST SP800-22 tests indicate a high level of randomness even in the far-threshold case of Id = 70 mA and the insufficient-bandwidth case of 2.5-GSps sampling. This suggests that evaluation using the NIST SP800-22 tests is less rigorous than evaluation using the autocorrelation.

The probability distribution of the phase noise fits a Gaussian distribution very well. This finding further supports the randomness of the phase noise. Thus, all the experimental results support the conclusion that the randomness of the measured phase noise of an LD is very high. This conclusion was reached simply by taking the differences between consecutively measured values as outputs. It means that the measured phase noise of an LD can be used as a continuous-variable random number generator. On this basis, by appropriately setting the threshold for the outputs, we can obtain binary random numbers with an occurrence ratio of p:(1-p). However, the resolution of p is limited by that of the AD converter, so a high-resolution AD converter should be used in the measurement to increase the accuracy of p.
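A minimal sketch (ours) of the thresholding step: for Gaussian-distributed outputs, placing the threshold at the (1−p) quantile of the fitted Gaussian yields binary numbers with an occurrence ratio of p:(1−p).

```python
import random
from statistics import NormalDist

random.seed(3)
p = 0.05                           # desired occurrence probability of "1"
noise = NormalDist(mu=0.0, sigma=1.0)
threshold = noise.inv_cdf(1 - p)   # threshold placed in the Gaussian tail

# Gaussian samples standing in for the differential phase-noise values.
samples = [random.gauss(0.0, 1.0) for _ in range(200_000)]
bits = [1 if s > threshold else 0 for s in samples]
print(sum(bits) / len(bits))       # occurrence rate of "1", close to p
```

Because the Gaussian density is flat-changing in the tail, small shifts of the threshold change p slowly, which is the fine-control property exploited in the text.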

Lastly, we should mention the min-entropy, which is often referred to in recent reports on quantum random number generators [5]. This quantity estimates the entropy that can be extracted from a nonuniform distribution. We directly calculated about 4.5 bits as the entropy obtainable from our experiment; this quantity corresponds to the min-entropy. Although it should in principle be corrected for the extra noise in the measuring system and for the small classical component included in the phase noise of an LD [25,27], the former is negligibly small, as described in section 4.2 and discussed in the first paragraph of this section, and the latter is also negligibly small because of the near-threshold operation [28]. Thus, the correction is also negligibly small. To use NIST SP800-22 as one of the evaluation tools, we transformed the measured phase noise into 4-bit values. In addition, to remove the artificial redundancy, we reduced the entropy of the measured values to half. Thus, the measured values are treated appropriately from the viewpoint of the min-entropy. However, our intention here is not to propose a method for increasing the entropy of measured values, but rather to generate binary random numbers with an occurrence ratio of p:(1-p) without the ordinary extractor that produces binary random numbers with a 1:1 ratio. In accordance with this purpose, we proposed a simple method that takes the difference between consecutively measured values and showed that the extra noise can thereby be almost completely removed. At the present stage, we envision taking one binary number from each set of consecutively measured values.
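For reference, the min-entropy of a quantized distribution is H_min = −log2(max_i p_i) over the quantization bins. The sketch below (ours) evaluates this for a Gaussian digitized by a hypothetical ADC covering ±4σ at 8-bit resolution; the paper's ~4.5-bit figure depends on its own range and bit settings, which are not reproduced here.

```python
import math
from statistics import NormalDist

def min_entropy_bits(n_bits=8, span_sigma=4.0):
    """Min-entropy of a unit Gaussian quantized to 2**n_bits equal bins
    covering [-span_sigma, +span_sigma]; tails fold into the edge bins."""
    g = NormalDist(0.0, 1.0)
    n_bins = 2 ** n_bits
    edges = [-span_sigma + 2 * span_sigma * i / n_bins
             for i in range(n_bins + 1)]
    probs = [g.cdf(edges[i + 1]) - g.cdf(edges[i]) for i in range(n_bins)]
    # Fold the probability beyond the ADC range into the outermost bins.
    probs[0] += g.cdf(edges[0])
    probs[-1] += 1 - g.cdf(edges[-1])
    return -math.log2(max(probs))

print(min_entropy_bits())
```

The maximum-probability bin is the one at the center of the Gaussian, so the min-entropy grows with ADC resolution and shrinks as the distribution narrows relative to the ADC range.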

6. Summary

We aimed at obtaining binary random numbers with an occurrence ratio of p:(1-p) from the phase noise of a cw LD. To establish its foundation, we evaluated the phase noise as a continuous quantity by using its autocorrelation and a statistical test suite, without the ordinary extractor for binary random numbers. Because the phase noise mainly originates from spontaneous emission, it should have ideal randomness if it is measured under appropriate conditions. These conditions are that (1) the LD is driven by a near-threshold current, (2) the signal intensity is much greater than the additional-noise intensity, (3) the bandwidth is larger than the sampling rate, and (4) the correlation due to the extra noise in the measuring system is removed by taking the difference between consecutively measured values. The autocorrelation results showed a reasonable dependence on the measuring conditions. Therefore, when a cw LD is driven under the above conditions, the measured phase noise has almost ideal randomness that is assured by spontaneous emission, making it possible to use the phase noise of an LD as a continuous-variable random-number generator and as a binary random-number generator with an occurrence ratio of p:(1-p). Because the phase noise is Gaussian distributed, the probability density is low in the tails, so p can be finely controlled. The results of the NIST SP800-22 tests support this conclusion. Because the test results suggested “pass” for all of the drive-current and sampling-rate conditions examined in this report, evaluation using the NIST SP800-22 tests seems to be less rigorous than evaluation using the autocorrelation.

References

1. T. Jennewein, U. Achleitner, G. Weihs, H. Weinfurter, and A. Zeilinger, “A fast and compact quantum random number generator,” Rev. Sci. Instrum. 71(4), 1675–1680 (2000). [CrossRef]  

2. Y. Shen, L. Tian, and H. Zou, “Practical quantum random number generator based on measuring the shot noise of vacuum states,” Phys. Rev. A 81(6), 063814 (2010). [CrossRef]  

3. C. Gabriel, C. Wittmann, D. Sych, R. Dong, W. Mauerer, U. L. Andersen, C. Marquardt, and G. Leuchs, “A generator for unique quantum random numbers based on vacuum states,” Nat. Photonics 4(10), 711–715 (2010). [CrossRef]  

4. B. Qi, Y.-M. Chi, H.-K. Lo, and L. Qian, “High-speed quantum random number generation by measuring phase noise of a single-mode laser,” Opt. Lett. 35(3), 312–314 (2010). [CrossRef]  

5. F. Xu, B. Qi, X. Ma, H. Xu, H. Zheng, and H.-K. Lo, “Ultrafast quantum random number generation based on quantum phase fluctuations,” Opt. Express 20(11), 12366–12377 (2012). [CrossRef]  

6. F. Raffaelli, P. Sibson, J. E. Kennard, D. H. Mahler, M. G. Thompson, and J. C. F. Matthews, “Generation of random numbers by measuring phase fluctuations from a laser diode with a silicon-on-insulator chip,” Opt. Express 26(16), 19730–19741 (2018). [CrossRef]  

7. Z. L. Yuan, M. Lucamarini, J. F. Dynes, B. Fröhlich, A. Plews, and A. J. Shields, “Robust random number generation using steady-state emission of gain-switched laser diodes,” Appl. Phys. Lett. 104(26), 261112 (2014). [CrossRef]  

8. C. Abellán, W. Amaya, M. Jofre, M. Curty, A. Acín, J. Capmany, V. Pruneri, and M. W. Mitchell, “Ultra-fast quantum randomness generation by accelerated phase diffusion in a pulsed laser diode,” Opt. Express 22(2), 1645–1654 (2014). [CrossRef]  

9. X. Li, A. B. Cohen, T. E. Murphy, and R. Roy, “Scalable parallel physical random number generator based on a superluminescent LED,” Opt. Lett. 36(6), 1020–1022 (2011). [CrossRef]  

10. C. R. S. Williams, J. C. Salevan, X. Li, R. Roy, and T. E. Murphy, “Fast physical random number generator using amplified spontaneous emission,” Opt. Express 18(23), 23584–23597 (2010). [CrossRef]  

11. T. Tomaru, “Secret key generation from channel noise with the help of a common key,” arXiv:1803.05090.

12. “Encrypted Communications Device Making Eavesdropping Practically Impossible,” Hitachi Review 68(2), 146 (2019); https://www.hitachi.com/rev/archive/2019/r2019_02/26/index.html#sec02.

13. N. Nisan and A. Ta-Shma, “Extracting Randomness: A Survey and New Constructions,” J. Comp. Sys. Sci. 58(1), 148–173 (1999). [CrossRef]  

14. L. Trevisan, “Extractors and pseudorandom generators,” J. Assoc. Comput. Mach. 48(4), 860–879 (2001). [CrossRef]  

15. M. W. Mitchell, C. Abellan, and W. Amaya, “Strong experimental guarantees in ultrafast quantum random number generation,” Phys. Rev. A 91(1), 012314 (2015). [CrossRef]  

16. I. Reidler, Y. Aviad, M. Rosenbluh, and I. Kanter, “Ultrahigh-speed random number generation based on a chaotic semiconductor laser,” Phys. Rev. Lett. 103(2), 024102 (2009). [CrossRef]  

17. J. Zhang, Y. Wang, M. Liu, L. Xue, P. Li, A. Wang, and M. Zhang, “A robust random number generator based on differential comparison of chaotic laser signals,” Opt. Express 20(7), 7496–7506 (2012). [CrossRef]  

18. P. Li, J. Zhang, L. Sang, X. Liu, Y. Guo, X. Guo, A. Wang, K. A. Shore, and Y. Wang, “Real-time online photonic random number generation,” Opt. Lett. 42(14), 2699–2702 (2017). [CrossRef]  

19. P. Li, Y. Guo, Y. Guo, Y. Fan, X. Guo, X. Liu, K. Li, K. A. Shore, Y. Wang, and A. Wang, “Ultrafast Fully Photonic Random Bit Generator,” J. Lightwave Technol. 36(12), 2531–2540 (2018). [CrossRef]  

20. P. Li, Y. Guo, Y. Guo, Y. Fan, X. Guo, X. Liu, K. A. Shore, E. Dubrova, B. Xu, Y. Wang, and A. Wang, “Self-balanced real-time photonic scheme for ultrafast random number generation,” APL Photonics 3(6), 061301 (2018). [CrossRef]  

21. http://csrc.nist.gov/publications/PubsSPs.html

22. T. Tomaru, “Quantum mechanically formulated fluctuation characterization method using symmetric and asymmetric interferometers,” J. Opt. Soc. Am. B 28(6), 1502–1513 (2011). [CrossRef]  

23. Only two tests are responsible for most of the 188 terms in NIST SP800-22. Therefore, SP800-22 does not evaluate binary random numbers equally with respect to the 15 kinds of tests. Although there is room for improvement, it is beyond the scope of our study. Thus, we decided to use SP800-22 as it stands in order to obtain one evaluation figure.

24. C. Henry, “Theory of the Linewidth of Semiconductor Lasers,” IEEE J. Quantum Electron. 18(2), 259–264 (1982). [CrossRef]  

25. K. Vahala, “Occupation fluctuation noise: A fundamental source of linewidth broadening in semiconductor lasers,” Appl. Phys. Lett. 43(2), 140–142 (1983). [CrossRef]  

26. D. Welford and A. Mooradian, “Observation of linewidth broadening in (GaAl)As diode lasers due to electron number fluctuations,” Appl. Phys. Lett. 40(7), 560–562 (1982). [CrossRef]  

27. K. Kikuchi, T. Okoshi, and R. Arata, “Measurement of linewidth and FM-noise spectrum of 1.52 µm InGaAsP lasers,” Electron. Lett. 20(13), 535–536 (1984). [CrossRef]  

28. A. Villafranca, J. A. Lázaro, I. Salinas, and I. Garcés, “Measurement of the Linewidth Enhancement Factor in DFB Lasers Using a High-Resolution Optical Spectrum Analyzer,” IEEE Photon. Technol. Lett. 17(11), 2268–2270 (2005). [CrossRef]  


Equations (7)

$$\hat{I}(t) \propto \tfrac{1}{2}\,\xi\left[\hat{a}^{\dagger}(t+\tau)\,\hat{a}(t)+\hat{a}^{\dagger}(t)\,\hat{a}(t+\tau)\right],$$

$$I(t) \propto (-1)^{n}\,\xi\,a_{0}(t+\tau)\,a_{0}(t)\sin\delta\varphi(t,\tau).$$

$$\overline{\sum_{j}\delta x_{j}^{2}}=\overline{\sum_{j}\left(x_{j+1}-x_{j}\right)^{2}}=\overline{\sum_{j}\left(x_{j+1}^{2}-2x_{j+1}x_{j}+x_{j}^{2}\right)}=2\,\overline{\sum_{j}x_{j}^{2}},$$

$$\overline{\sum_{j}\delta x_{j}\,\delta x_{j+1}}=\overline{\sum_{j}\left(x_{j+1}-x_{j}\right)\left(x_{j+2}-x_{j+1}\right)}=\overline{\sum_{j}\left(x_{j+1}x_{j+2}-x_{j+1}^{2}-x_{j}x_{j+2}+x_{j}x_{j+1}\right)}=-\,\overline{\sum_{j}x_{j+1}^{2}},$$

$$s(1)=\frac{\overline{\sum_{j}\delta x_{j}\,\delta x_{j+1}}}{\sqrt{\overline{\sum_{j}\delta x_{j}^{2}}\;\overline{\sum_{j}\delta x_{j}^{2}}}}=\frac{-\,\overline{\sum_{j}x_{j+1}^{2}}}{2\,\overline{\sum_{j}x_{j}^{2}}}=-0.5.$$

$$\overline{\left(\sum_{j=1}^{N}\delta x_{j}\,\delta x_{j+d}\right)^{2}}=\overline{\left[\sum_{j=1}^{N}\left(x_{j+1}x_{j+1+d}-x_{j+1}x_{j+d}-x_{j}x_{j+1+d}+x_{j}x_{j+d}\right)\right]^{2}}=\sum_{j=1}^{N}\left(\overline{x_{j+1}^{2}x_{j+1+d}^{2}}+\overline{x_{j+1}^{2}x_{j+d}^{2}}+\overline{x_{j}^{2}x_{j+1+d}^{2}}+\overline{x_{j}^{2}x_{j+d}^{2}}\right)+2\sum_{j=2}^{N}\overline{x_{j}^{2}x_{j+d}^{2}}=4N\overline{x_{j}^{2}}^{2}+2(N-1)\overline{x_{j}^{2}}^{2}\simeq 6N\overline{x_{j}^{2}}^{2},$$

$$\overline{\left(\sum_{j=1}^{N}\delta x_{j}\,\delta x_{j+1}\right)^{2}}=\overline{\left[\sum_{j=1}^{N}\left(x_{j+1}x_{j+2}-x_{j+1}^{2}-x_{j}x_{j+2}+x_{j}x_{j+1}\right)\right]^{2}}=\sum_{j=1}^{N}\left(\overline{x_{j+1}^{2}x_{j+2}^{2}}+\overline{x_{j}^{2}x_{j+2}^{2}}+\overline{x_{j}^{2}x_{j+1}^{2}}\right)+2\sum_{j=2}^{N}\overline{x_{j}^{2}x_{j+1}^{2}}+\overline{\left(\sum_{j=1}^{N}x_{j+1}^{2}\right)^{2}}=3N\overline{x_{j}^{2}}^{2}+2(N-1)\overline{x_{j}^{2}}^{2}+N\left(\overline{x_{j}^{4}}-\overline{x_{j}^{2}}^{2}\right)+N^{2}\overline{x_{j}^{2}}^{2}\simeq 7N\overline{x_{j}^{2}}^{2}+N^{2}\overline{x_{j}^{2}}^{2},$$