
Squeezing as a resource for time series processing in quantum reservoir computing

Open Access

Abstract

Squeezing is known to be a quantum resource in many applications in metrology, cryptography, and computing, being related to entanglement in multimode settings. In this work, we address the effects of squeezing in neuromorphic machine learning for time-series processing. In particular, we consider a loop-based photonic architecture for reservoir computing and address the effect of squeezing in the reservoir, considering a Hamiltonian with both active and passive coupling terms. Interestingly, squeezing can be either detrimental or beneficial for quantum reservoir computing when moving from ideal to realistic models accounting for experimental noise. We demonstrate that multimode squeezing enhances the reservoir's accessible memory, which improves the performance in several benchmark temporal tasks. The origin of this improvement is traced back to the robustness of the reservoir to readout noise, which is increased with squeezing.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Squeezing is a quantum phenomenon characterized by reduced light field quadrature fluctuations below shot noise levels [1–3]. Initially employed in fundamental quantum tests, such as Einstein–Podolsky–Rosen (EPR) paradox experiments [4,5], squeezing has emerged as a crucial resource in diverse quantum technologies. Notably, squeezed states have been extensively utilized in quantum metrology to enhance measurement sensitivity for parameter estimation [6,7], clock synchronization [8], and gravitational wave detection [9]. Moreover, their role as a resource for quantum entanglement has been harnessed for quantum cryptography protocols [10,11]. In boson sampling experiments, large multimode squeezed states have made it possible to achieve a quantum advantage [12,13]. Additionally, they serve as the primary resource for universal measurement-based quantum computing in continuous variables (CV) [14] through the generation of cluster states [15–17]. In the context of quantum machine learning, squeezing is fundamental for CV quantum neural networks to outperform their classical counterparts [18]. In this work, we will focus on the favorable impact of squeezing on time-series prediction and forecasting in the context of Quantum Reservoir Computing (QRC) [19].

Reservoir Computing (RC) constitutes an unconventional paradigm within the realm of machine learning techniques rooted in recurrent neural networks [20–22]. Particularly tailored for time series processing, RC allows fast learning with minimal training costs. The RC framework has demonstrated its effectiveness in real-world scenarios including temporal prediction tasks [23–26] as well as classification tasks [27,28]. By harnessing the information processing capabilities of high-dimensional dynamical systems, RC concepts have seamlessly transitioned to physical substrates [29], with photonic and optoelectronic implementations receiving attention for their high-speed attributes [30–33].

Recently, the scope of RC has expanded to encompass quantum systems, capitalizing on their augmented Hilbert space for enhanced performance [19]. Notably, quantum enhancements in temporal tasks under ideal conditions have been observed in both spin [34,35] and photonic setups [36,37]. Different aspects influencing quantum reservoir performance have been considered, and the effectiveness of complex task solving has been addressed considering different evolution maps [34,37–39], the role of statistics [40,41], or different quantum phases [42]. These investigations assumed ideal conditions, attributing performance improvements to factors such as improved memory properties, more favorable nonlinearities, and expanded accessible Hilbert spaces.

However, QRC faces several challenges, with substantial attention directed towards addressing the presence of noise in output observables [43–47]. While readout noise is relevant to classical RC as well [48,49], it acquires heightened significance in QRC due to intrinsic sampling noise arising from the stochastic nature of quantum measurements. This noise significantly hinders potential quantum enhancements [44–46]. The strategy for monitoring the output while accounting for the effects of quantum measurement in temporal tasks is also particularly critical [44,45]. Preserving quantum advantage within these non-ideal circumstances is needed to ensure the viability of QRC protocols.

In Ref. [44] it was shown how weak measurements, instead of projective ones, allow continuous monitoring in QRC without buffering inputs and rewinding the reservoir dynamics. Continuous homodyne monitoring in QRC was addressed in Ref. [45] in a photonic platform: a loop-based architecture was proposed, suitable for online time series processing. These works show the possibility of sustaining the RC performance in the quantum setting in the presence of non-ideal measurement conditions. Motivated by these results, in this work, we adopt a slightly simplified version of the photonic platform used in Ref. [45] and investigate how the presence of squeezing in the optical cavity can improve the performance of the reservoir. We numerically analyze the role of squeezing, addressing active and passive coupling terms in a photonic network, in both linear and nonlinear tasks. An analytical argument is given to justify this improvement.

The paper is structured as follows: in Sect. 2, the general framework of RC as well as a detailed description of our photonic platform are presented. In Sect. 3, the simulation results for some benchmark RC tasks under non-ideal noise conditions are shown: we test the linear memory of the system (Sect. 3.1) as well as the nonlinear memory (Sect. 3.2) and its performance on time-series forecasting (Sect. 3.3). In Sect. 4 we show numerical evidence to explain the noise robustness improvement caused by squeezing in the previous tasks. Finally, conclusions are given in Sect. 5.

2. Loop-based architecture

2.1 Reservoir computing

RC architectures are mainly composed of three distinct layers: the input layer, the reservoir, and the readout layer [50]. The external signal is encoded and fed into the system in the input layer. This is done sequentially at each time step. The reservoir layer (or just reservoir) is usually a complex dynamical system that applies a nonlinear map to the inputs. The reservoir must retain short-term memory of previous inputs to be able to perform temporal tasks. This short-term or fading memory, together with the echo state property, is part of the universality proofs of RC [51]. The readout layer is then made of a certain number of reservoir observables, which are monitored sequentially after the input is encoded. The output from this layer is a linear combination of the measured observables. Supervised learning is performed by optimizing this linear combination of the output to yield the desired target.

In more detail, if we have a training set consisting of a sequence of $L$ inputs $\left \{ s_{1}, s_{2}, \dots, s_{L} \right \}$, at time step $k$ the input $s_{k}$ is encoded and introduced into the reservoir. Denoting by $\mathbf {x}_{k}$ the reservoir degrees of freedom, we can write the reservoir map at time step $k$ as

$$\mathbf{x}_{k} = \mathcal{H}\left(\mathbf{x}_{k-1}, s_{k}\right) .$$

This map is fixed throughout the whole protocol. For the readout layer at time step $k$, we use $\mathbf {O}_{k}$ as readout observables (functions of $\mathbf {x}_{k}$). The reproduced function at each time is obtained by performing a linear regression on the readout observables,

$$y_{k} = w_{0} + \mathbf{O}_{k}^{\top} \mathbf{w} = \left( 1 , \mathbf{O}_{k}^{\top} \right) \left( \begin{array}{c} w_{0} \\ \mathbf{w} \end{array} \right) ,$$
where the weight vector $\mathbf {W} = \left ( w_{0}, \mathbf {w}^{\top } \right )^{\top }$ is optimized through training examples. The way training works is as follows: for the sequence of $L$ training inputs, we define the matrices
$$V = \left( \begin{array}{cc} 1 & \mathbf{O}_{1}^{\top} \\ 1 & \mathbf{O}_{2}^{\top} \\ \vdots & \vdots \\ 1 & \mathbf{O}_{L}^{\top} \end{array} \right) ; \quad \mathbf{y} = \left( \begin{array}{c} y_{1} \\ y_{2} \\ \vdots \\ y_{L} \end{array} \right) ,$$
so Eq. (2) can be rewritten as $\mathbf {y} = V \mathbf {W}$. If we want the output $\mathbf {y}$ to get as close as possible to the desired target function, $\bar {\mathbf {y}}$, we choose the set of weights in order to minimize the mean square error (MSE),
$$\text{MSE}\left( \mathbf{y}, \bar{\mathbf{y}} \right) = \frac{1}{L} \sum_{k=1}^{L} \left( y_{k} - \bar{y}_{k} \right)^{2} .$$

It can be shown that the optimal set of weights reaching the minimum of Eq. (4) is obtained as $\mathbf {W}_{\text {opt}} = V^{\text {MP}} \bar {\mathbf {y}}$, where $V^{\text {MP}} = \left (V^{\top } V\right )^{-1} V^{\top }$ is the Moore-Penrose inverse of $V$ [22]. The higher the value of $L$, the more precise our estimation of the optimal weights will usually be. Once the system has been trained, we consider a (smaller) test set of $L^{\prime }$ inputs to be fed into the reservoir afterwards. The RC performance is then checked on this test set of new unseen data using the MSE metric from Eq. (4).
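The training step above amounts to an ordinary least-squares fit. A minimal sketch in Python/NumPy follows; the toy observable matrix and linear target are invented purely for illustration:

```python
import numpy as np

def train_readout(O, y_target):
    """Optimize W = V^MP y_target, with V the observable matrix
    extended by a bias column (Eqs. (2)-(4))."""
    V = np.hstack([np.ones((O.shape[0], 1)), O])   # prepend bias column
    return np.linalg.pinv(V) @ y_target            # Moore-Penrose inverse

def predict(O, W):
    V = np.hstack([np.ones((O.shape[0], 1)), O])
    return V @ W

def mse(y, y_bar):
    return np.mean((y - y_bar) ** 2)

# toy check: a target that is exactly linear in the observables is recovered
rng = np.random.default_rng(0)
O = rng.uniform(-1, 1, size=(200, 5))              # 200 time steps, 5 observables
y_bar = 0.3 + O @ np.array([1.0, -2.0, 0.5, 0.0, 1.5])
W = train_readout(O, y_bar)
print(mse(predict(O, W), y_bar))                   # ~ 0 (machine precision)
```

In practice one would train on $L$ samples and evaluate `mse` on a held-out test sequence of $L'$ samples, as described above.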

2.2 Description of the platform

Our architecture works in the CV quantum optical regime. The physical substrate is an $N$-mode optical pulse traveling through a closed optical loop or cavity ($N$ denotes the size of the reservoir). The $N$-mode internal degrees of freedom inside each pulse can be attained via frequency multiplexing, in which the optical modes belong to different frequency bands [16,52–55]. In optics, frequency multiplexing has already been shown to be a useful strategy for classical RC [56]. In our approach, the external information is injected from a pulse-generating light source, which provides squeezed vacuum states, depicted as Source in Fig. 1. Each external pulse is multiplexed in the frequency domain, portrayed as a frequency comb in Fig. 1. We consider that the source pulse is in a product state of frequency modes that are squeezed identically. The input is encoded in the global phase of the pulse, so every frequency mode is rotated by the same amount, $\phi _{k} = f(s_{k})$. We have checked that the redundant input encoding in each frequency mode of the input pulse is beneficial to the performance of the reservoir. At each time step, a different input is encoded sequentially into a new incoming external pulse. Fast, accurate, and reconfigurable phase-setting devices have already been used in experiments with great impact [13]. Each input pulse is coupled to the loop pulse using a beam splitter (BS), with reflectivity $R$ shown in Fig. 1, yielding two output pulses. One of them remains in the loop and gives feedback to the next iteration (creating a quantum memory). In this way, the reservoir can retain information from previous inputs without the need for external memory. To perform this BS coupling sequentially, the loop length needs to be tuned so it matches the repetition rate of the external laser. The remaining output pulse is passed to a detector that measures each mode and uses the obtained observables for the readout layer.
The fraction of light that remains in the cavity on each round trip is determined by the BS reflectivity $R$.


Fig. 1. Scheme of the loop-based architecture. At a given time step, an external frequency multiplexed pulse (Source) in a vacuum squeezed state, with a classical input encoded in its phase (top-left inset), is injected into the system. Each input pulse is partially transmitted into the cavity via a beam splitter (BS) with reflectivity $R$ and goes through a nonlinear (NL) crystal that generates entanglement among the frequency modes. Light partially transmitted out of the cavity is monitored through a homodyne detector (top right), being coupled to a local oscillator (LO) through a 50:50 BS.


Inside the cavity, a nonlinear medium is placed (NL in Fig. 1), which applies a dynamical transformation to the loop pulse each time it passes through. This creates a complex optical network [53,54,57–59] and can be modeled by a Hamiltonian that is quadratic in the field operators,

$$\hat{H} = \frac{1}{2}\sum_{i,j = 1}^{N} \left( \alpha_{ij} \hat{a}_{i}^{{\dagger}} \hat{a}_{j} + \beta_{ij} \hat{a}_{i}^{{\dagger}} \hat{a}_{j}^{{\dagger}} + \text{h.c.} \right) ,$$
where $\hat {a}_{i}$ ($\hat {a}_{i}^{\dagger }$) is the annihilation (creation) operator of the $i$-th frequency mode. The coupling terms $\alpha _{ij}$ and $\beta _{ij}$ encode different network topologies and lead to entanglement among modes inside the loop pulse, at each round trip. If all the terms $\beta _{ij} = 0$, the dynamical transformation is called passive, whereas if there are any $\beta _{ij} \neq 0$ the transformation is active. Active transformations in CV quantum optics do not conserve the average number of photons of the quantum state and are known to generate squeezing, the main resource for entanglement [2,3]. It is important to note that a passive cavity also produces entangled states because the external input pulses are already squeezed (even though it does not generate additional squeezing).

The detector from Fig. 1 performs homodyne measurements on every mode in the incoming pulses. Concretely, the measured set of operators contains the $x$-quadrature of each optical mode, $\hat {x}_{i} = \frac {1}{\sqrt {2}} \left ( \hat {a}_{i} + \hat {a}_{i}^{\dagger } \right )$ ($i = 1, 2, \dots, N$). From these measurements, the moments of the operator vector $\hat {\mathbf {x}} = \left ( \hat {x}_{1}, \dots, \hat {x}_{N} \right )^{\top }$ can be computed and used as observables for the readout layer. As we are injecting squeezed vacuum states, odd-order moments vanish, and only even-order moments are considered. For the tasks shown in Sect. 3, the chosen set of observables is composed of second and fourth-order moments. Concretely, the chosen set is $\left \{ \left \langle \hat {x}_{i} \hat {x}_{j} \right \rangle, \left \langle \hat {x}_{i}^{2} \hat {x}_{j}^{2} \right \rangle, \left \langle \hat {x}_{i}^{3} \hat {x}_{j} \right \rangle \right \}_{i,j = 1}^{N} \equiv \left \{ O_{l} \right \}_{l=1}^{N (3N+1)/2}$, which has a total of $N (3N + 1)/2$ observables. In Gaussian states, fourth-order moments can be written as nonlinear functions of the second-order moments. In our case, they are useful for enhancing the accessible nonlinear terms, which are relevant for several tasks and come at no experimental expenditure [45]. We note that the readout size scales quadratically with the number of modes $N$. Usually, in optical reservoir computing the dimensionality scales linearly with $N$, as the information is encoded in field amplitudes [32,33,60]. By introducing the inputs in the field quantum fluctuations we access a broader dimensional space by generating mode correlations and entanglement, which allows more complex information processing in relatively smaller reservoirs [36,45]. To access the averaged values of the field correlations we consider averaging over an ensemble of realizations [34].
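The quoted readout size can be verified by direct enumeration. The sketch below assumes the symmetric moments are taken with $i \le j$, and the $\langle \hat{x}_{i}^{3} \hat{x}_{j} \rangle$ terms with $i < j$ (one convention that reproduces the stated count, since $\langle \hat{x}_{i}^{4} \rangle$ already appears among the $\langle \hat{x}_{i}^{2} \hat{x}_{j}^{2} \rangle$ terms):

```python
from itertools import combinations, combinations_with_replacement

N = 12
second = len(list(combinations_with_replacement(range(N), 2)))     # <x_i x_j>, i <= j
fourth_22 = len(list(combinations_with_replacement(range(N), 2)))  # <x_i^2 x_j^2>, i <= j
fourth_31 = len(list(combinations(range(N), 2)))                   # <x_i^3 x_j>, i < j

total = second + fourth_22 + fourth_31
print(total, N * (3 * N + 1) // 2)   # both give 222 for N = 12
```

For the reservoir size $N = 12$ used throughout, this yields 222 observables, quadratic in $N$ as stated.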

Each RC time step encompasses the whole process we have detailed: BS coupling of the input and loop pulses, detection of the output pulse, and transformation of the loop pulse under the nonlinear crystal. There are as many time steps as samples in the input sequence, and they will be labeled with the letter $k$. So at the $k$-th time step, the input $s_{k}$ will be encoded and introduced into the system and the vector of observables $\mathbf {O}^{(k)} = \left ( O_{1}^{(k)}, O_{2}^{(k)}, \dots \right )$ is measured and used for the readout layer.

The platform is based on the proposal in Ref. [45], where real-time information processing was reported. The main novelty here is that we specifically address the role of a squeezing reservoir, with both active and passive transformations. The goal is to assess the importance of quantum resources for the performance of QRC. In order to simplify the experimental footprint we consider a single NL crystal; indeed, this has no significant effect when processing past inputs. This design can also be adapted for single-loop ensemble processing by adding a fiber, as shown in [45].

3. Reservoir computing tasks under additive noise

Noise in the readout layer is known to be significantly detrimental to RC performance [48,49]. Some strategies have been developed in different architectures to make reservoirs more robust to noise [43,48]. In this section, we study the effect of the amount of squeezing produced by the cavity crystal on the noise robustness of the platform in the readout layer and compare it to the one obtained by tuning the BS reflectivity. In our simulations, readout noise is included as additive fluctuations in the measured observables, $\mathbf {O}_{\text {meas}}^{(k)}$, concretely

$$\mathbf{O}_{\text{meas}}^{(k)} = \mathbf{O}_{\text{ideal}}^{(k)} + \mathbf{\mathcal{E}}^{(k)}(0, \sigma_{\text{noise}}^{2}) ,$$
where $\mathbf {O}_{\text {ideal}}^{(k)}$ stands for the observables that would be measured in the ideal case of zero fluctuations, corresponding to an infinite number of measurements, and $\mathbf {\mathcal {E}}^{(k)}$ stands for the additive noise vector. We model the noise as normally distributed fluctuations of variance $\sigma _{\text {noise}}^{2}$ applied to the measured quadratures. Although the added noise is absolute (it does not depend on the magnitude of the observables), we can use the vacuum noise variance ($\sigma _{\text {vacuum}}^{2} = 1/2$ in our case), or shot noise, as a relative measure of the additive noise intensity. In that regard, added noise of variance equal to $0.1$ would be equivalent to $20{\% }$ of vacuum fluctuations.
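In simulation, Eq. (6) is straightforward to apply. A minimal sketch (the function name and array shapes are our own choices):

```python
import numpy as np

def add_readout_noise(O_ideal, sigma2, rng):
    """Additive Gaussian readout noise, Eq. (6): zero-mean fluctuations
    of variance sigma2 added to every measured observable."""
    return O_ideal + rng.normal(0.0, np.sqrt(sigma2), size=O_ideal.shape)

rng = np.random.default_rng(1)
O_ideal = np.zeros((100_000, 4))                 # toy ideal observables
O_meas = add_readout_noise(O_ideal, sigma2=0.1, rng=rng)
# variance 0.1 corresponds to 20% of the vacuum variance 1/2
print(O_meas.var())                              # ~ 0.1
```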

In our simulations, we generate every crystal Hamiltonian, Eq. (5), randomly with the condition that it applies an amount of squeezing of $e^{-r}$ per mode (see App. A for details). We have made sure that the chosen cavity squeezing parameters are within reasonable experimental capabilities, as the maximum value used in the manuscript ($r = 1.5$, around 6.5 dB), has already been reached in frequency multimode settings [54]. In every realization, the modes of the input pulses are squeezed with a fixed squeezing strength, $r_{\text {input}} = 2$ (approximately 8.7 dB). The encoding function of the squeezing phase, $\phi _{k} = f(s_{k})$, is tuned depending on the task we are considering. Concretely, we consider the family of linear functions $\phi _{k}^{(m)} = m \pi s_{k}$. In this respect, the smaller the value of $m$, the better the reservoir is at reproducing linear and quadratic functions of $s_{k}$. For increasing values of $m$, higher nonlinear contributions become more relevant [36,45]. In all of our simulations we have considered a fixed reservoir size of $N = 12$.

We consider three temporal tasks: the linear memory task, the nonlinear autoregressive moving average (NARMA) task, and the forecasting of the Mackey-Glass chaotic time series [61,62]. These three tasks provide a broad overall picture of the properties of the reservoir memory for both linear and nonlinear computations. After applying the training protocol described in Sect. 2.1 for a given target function, we check the performance of the trained reservoirs on an additional test input sequence. For evaluation, we mainly use the mean-square error, Eq. (4), after training optimization, $\text {min}_{\mathbf {W}} \text {MSE}\left ( \mathbf {y}, \bar {\mathbf {y}} \right )$. To normalize the MSE between 0 and 1, we standardize both the target and the reproduced function data to zero mean and unit standard deviation.

3.1 Linear memory task

The linear memory task is the simplest way to check the accessible memory of our reservoir in the presence of noise. We train the reservoir to reproduce past entries from the input series. That is, we consider the target function

$$\bar{y}_{k}(\mathbf{s},d) = s_{k-d} ,$$
where we aim to reproduce, at time step $k$, the input that was introduced $d$ time steps in the past (or the input at delay $d$). For simulations, we consider an input sequence composed of random entries from a uniform distribution in the interval $[-1, 1]$. We tune the squeezing angle encoding to be $\phi _{k} = \pi s_{k}/4$ to maximize linear contributions. For visualization purposes, we use the linear capacity, defined as $C(d) = 1 - \text {min}_{\mathbf {W}} \text {MSE}\left ( \mathbf {y}, \bar {\mathbf {y}}(d) \right )$, to test the performance of this task. The vector $\bar {\mathbf {y}}(d)$ is composed of the target function from Eq. (7) at each time step as a function of the delay $d$. Training and test set sizes are set to be 4000 and 1000, respectively (and also for the following tasks). These sizes ensure that we get to optimal error rates with minimal computational costs for the training step and have enough samples to perform statistics for the testing step.
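A toy check of the capacity definition, using the standardized target and output described above; the surrogate observable matrix, whose columns are simply delayed copies of the input, is invented for illustration (a stand-in for the reservoir's actual readout):

```python
import numpy as np

def linear_capacity(O, s, d):
    """C(d) = 1 - min_W MSE(y, y_bar(d)), with y and y_bar standardized
    to zero mean and unit standard deviation (Eqs. (4) and (7))."""
    y_bar = s[: len(s) - d] if d > 0 else s
    V = np.hstack([np.ones((len(y_bar), 1)), O[d:]])
    y = V @ (np.linalg.pinv(V) @ y_bar)          # optimal linear readout
    y = (y - y.mean()) / y.std()
    y_bar = (y_bar - y_bar.mean()) / y_bar.std()
    return 1.0 - np.mean((y - y_bar) ** 2)

rng = np.random.default_rng(2)
s = rng.uniform(-1, 1, size=1000)                # random inputs in [-1, 1]
O = np.column_stack([np.roll(s, j) for j in range(5)])   # perfect 5-step memory
print(round(linear_capacity(O, s, d=2), 3))      # 1.0: delay 2 is fully stored
```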

Figures 2(a) and 2(b) show the linear capacity as a function of the delay $d$ on the target function from Eq. (7) for different values of the reflectivity ($R=0.75$ in Fig. 2(a) and $R=0.9$ in Fig. 2(b)). In both plots, the noise variance is $10^{-2}$ ($2{\% }$ of vacuum fluctuations). We see that for both reflectivities, increasing the cavity squeezing improves the attainable memory. Having a higher reflectivity provides a longer ‘tail’ in the linear capacity at the expense of reducing it for small and intermediate delays. This is the effect of the smaller amount of light leaving the cavity for increasing values of $R$. In Fig. 2(c) the delay at which the linear capacity drops below $0.9$ (which we also call the delay cut) is plotted as a function of the noise variance. We find that cavity squeezing provides significant noise robustness. For the reflectivity $R = 0.75$, adding cavity squeezing equal to $r = 1.5$ provides a high linear capacity beyond delay $10$ for a noise intensity of $\sigma _{\text {noise}}^{2} = 0.1$. In Fig. 2(c) the drawback of increasing the BS reflectivity can also be noted: for small noise intensity ($\sigma _{\text {noise}}^{2} = 10^{-3}$) and no cavity squeezing (light color), having $R = 0.9$ (dashed line) improves the delay cut compared to $R = 0.75$ (solid line), as increasing the reflectivity improves the memory robustness to noise. However, when the noise intensity is increased, the delay cut for $R = 0.9$ drops below the one for $R = 0.75$. This is because the higher the reflectivity, the less light leaves the cavity and travels to the detector. If the readout fluctuations are large compared to the intensity of the light coming from the loop, the accessible memory of the reservoir will be severely degraded. The cavity squeezing increases significantly the accessible linear memory in the presence of a large noise intensity.


Fig. 2. Linear memory under additive noise: (a-b) linear capacity as a function of the delay for different values of cavity squeezing (green for $r = 0$, orange for $r = 0.75$ and purple for $r = 1.5$) and different reflectivities: $R = 0.75$ in (a) and $R = 0.9$ in (b). In both figures, the noise variance is $\sigma _{\text {noise}}^{2} = 10^{-2}$. The curves are taken from averaging among 100 different random realizations and the shadows depict the standard deviation. (c) delay at which the linear capacity drops below $0.9$ as a function of $\sigma _{\text {noise}}^{2}$, including the ideal case ($\sigma _{\text {noise}}^{2} = 0$) for different values of cavity squeezing (different colors) and different values of the reflectivity (line format).


3.2 NARMA10 task

In this section, we analyze the performance of the reservoir on the NARMA task, which requires high linear and nonlinear memory. It is one of the most common benchmark tasks and has been used to test several QRC proposals [42,43]. In this article, we consider the NARMA10 task, with the target function at time $k$ being

$$ \bar{y}_{k} = \alpha \bar{y}_{k-1} + \beta \bar{y}_{k-1} \sum_{i=1}^{10} \bar{y}_{k-i} + \gamma u_{k-1} u_{k-10} + \delta , $$
$$u_{k} = \mu + \nu s_{k} , $$
where the default constant parameters are set to $(\alpha, \beta, \gamma, \delta ) = (0.3, 0.05, 1.5, 0.1)$. The function input parameters $(\mu,\nu )$ are chosen to be $(0,0.2)$, where $s_{k}$ are taken from a uniform random distribution in the interval $[-1, 1]$ (they are the source inputs). To perform well on the NARMA10 task, the reservoir needs a high linear capacity up to delay 10 and a low error when reproducing the function $y_{k} = s_{k-1} \cdot s_{k-10}$ [63].
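The NARMA10 target of Eqs. (8)-(9) can be generated directly. A sketch with the stated parameters (initializing the first ten entries to zero is our choice, not specified in the text):

```python
import numpy as np

def narma10(s, alpha=0.3, beta=0.05, gamma=1.5, delta=0.1, mu=0.0, nu=0.2):
    """NARMA10 target, Eqs. (8)-(9), from source inputs s in [-1, 1]."""
    u = mu + nu * np.asarray(s)
    y = np.zeros(len(u))                  # y_0, ..., y_9 initialized to zero
    for k in range(10, len(u)):
        y[k] = (alpha * y[k - 1]
                + beta * y[k - 1] * y[k - 10:k].sum()
                + gamma * u[k - 1] * u[k - 10]
                + delta)
    return y

rng = np.random.default_rng(3)
s = rng.uniform(-1, 1, size=4000)         # training-set size used in the text
y_target = narma10(s)
```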

To perform this task we consider the same input encoding as in Sect. 3.1, namely $\phi _{k} = \pi s_{k}/4$. To test the performance we use the MSE defined in Eq. (4). In Figs. 3(a) and 3(b) the performance is shown, comparing the effect of cavity squeezing (x-axis in Fig. 3(a)) and BS reflectivity (x-axis in Fig. 3(b)). Three different noise scenarios are considered in both figures: the ideal case (blue boxes) and noise variance equal to $10^{-2}$ (green boxes) and $10^{-1}$ (pink boxes). In the ideal case, the optimal values of cavity squeezing and BS reflectivity are found to be $r = 0$ and $R = 0.5$. It can be seen that increasing either cavity squeezing or reflectivity degrades the performance in the absence of noise. The reason for this is that, for the chosen encoding and observables, increasing these two parameters increases linear memory at the expense of quadratic memory, which is also very relevant for the NARMA task. This balance among linear and nonlinear memory is well known in the field of RC [36,64,65].


Fig. 3. Performance on NARMA10 task: box plot of the mean square error (MSE) of the NARMA10 task as a function of (a) squeezing and (b) reflectivity for different values of the noise variance (different colors). For Fig. 3(a) we take the reflectivity equal to $R = 0.5$ and in Fig. 3(b) the cavity squeezing is equal to zero. For a given value of the x-axis, the boxes for each noise scenario are split to avoid overlapping.


Moving to the realistic case of a finite number of measurements, for a noise intensity of $\sigma _{\text {noise}}^{2} = 10^{-2}$, the improvement of the cavity squeezing over the reflectivity is clearly seen. In that noise scenario, increasing both parameters improves the performance up to an optimal value ($r \sim 1$ and $R \sim 0.8$). However, in the passive cavity case, the optimal value has a higher error in comparison to the active cavity and is further away from the ideal case error. For higher values of the noise, $\sigma _{\text {noise}}^{2} = 10^{-1}$, most examples completely fail at attempting to reproduce the NARMA function. Only when the cavity squeezing $r \gtrsim 1$, the MSE drops below 1. This is due to the fact that in most scenarios the noise level makes it impossible for the reservoir to resolve inputs with a delay of 10 or more as its intensity becomes comparable with the value of the system observables, giving a bad signal-to-noise ratio [45].

Even though there is a counterbalance between linear and quadratic memory, and thus increasing the cavity squeezing and the BS reflectivity is detrimental to the performance in ideal scenarios, in the presence of readout noise the active cavities with high squeezing outperform the rest. Indeed, the only case where the performance of the noisy reservoir comes close to the ideal case is when the cavity squeezing is higher than $r = 1$. This provides an interesting example where the role of a given resource needs to be addressed beyond ideal settings, as benefits could arise in the presence of noise.

3.3 Time series prediction of a chaotic signal

One of the main applications of RC is time series forecasting, and thus in this section, we consider the task of forecasting the Mackey-Glass time series [66]. The differential equation that describes the dynamics of the signal is

$$\dot{s}(t) = -0.1 s(t) + \frac{0.2 s(t - \tau)}{1+ s(t - \tau)^{10}} ,$$
where for $\tau = 17$ the time series is chaotic [61,62]. For the input sequence, we have sampled the solutions to Eq. (10) with time resolution $t_{r} = 3$ [60], so that $s_{k} = s(t_{0} + k t_{r})$ (the initial conditions are chosen randomly). The target function for the training will be to predict the next input in the sequence, that is, $\bar {y}_{k} = s_{k+1}$. For this specific task, we use the input encoding $\phi _{k} = \pi s_{k}$, which provides higher nonlinear memory [45]. Once the reservoir has been trained to predict the next value of the signal, we can feed the predicted values as new inputs for the protocol. The reservoir thus, ideally, faithfully reproduces the chaotic signal without the need for new input data. We call this protocol autonomous driving.
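To generate the input sequence, Eq. (10) can be integrated with a simple Euler scheme and sampled every $t_{r} = 3$ time units. The step size and the random initial history below are our own assumptions, not the paper's:

```python
import numpy as np

def mackey_glass(n_samples, tau=17.0, dt=0.1, t_r=3.0, seed=0):
    """Euler integration of Eq. (10), sampled every t_r time units."""
    rng = np.random.default_rng(seed)
    lag = int(tau / dt)                          # delay in integration steps
    stride = int(t_r / dt)                       # sampling stride
    s = list(0.5 + 0.5 * rng.random(lag + 1))    # random initial history
    for _ in range(n_samples * stride):
        s_t, s_lag = s[-1], s[-1 - lag]
        s.append(s_t + dt * (-0.1 * s_t + 0.2 * s_lag / (1.0 + s_lag ** 10)))
    return np.array(s[lag + 1 :: stride])[:n_samples]

s_k = mackey_glass(500)                          # chaotic regime for tau = 17
```

The sampled sequence `s_k` then plays the role of the reservoir inputs, with the one-step-ahead target $\bar{y}_{k} = s_{k+1}$.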

In Fig. 4(a) the autonomously driven signal evolution is plotted for two different values of the cavity squeezing (green curve for $r = 0$ and blue curve for $r = 1.25$) while the noise variance is kept at $\sigma _{\text {noise}}^{2} = 10^{-1}$. The real signal is shown as a black curve for comparison. We see that the active cavity performs a much better prediction than the passive one in the long term, achieving accurate predictions up to 50 time steps. The passive cavity reservoir cannot overcome the effects of noise, and thus the prediction performance drops dramatically. In Figs. 4(b) and 4(c), we compare the Mackey-Glass chaotic attractor, black dots, to $3000$ values from single realizations of trained reservoirs with a passive cavity (Fig. 4(b), green dots) and an active cavity with $r = 1.25$ (blue dots). Also in this case, we see that the reservoir with cavity squeezing is able to approximately reproduce the attractor, while the passive reservoir is not.


Fig. 4. Mackey-Glass time series prediction under additive noise: (a) time series as a function of the reservoir time steps: the black curve shows the real-time series while the green and blue curves show the autonomously driven predictions for $r = 0$ and $r = 1.25$, respectively (averages taken among 100 realizations, shadows depicting the standard deviation). (b-c) chaotic attractor in the phase space $y(t)$-$y(t$-$6)$: the black points provide the real attractor while the green and blue dots provide the results of an autonomously driven realization with $r = 0$ and $r = 1.25$, respectively. In every simulation, the BS reflectivity is set to $R = 0.75$ and the noise variance is $\sigma _{\text {noise}}^{2} = 10^{-1}$.


4. Accessible memory enhancement

In this section, we will explain in detail the reason behind the performance improvement due to the cavity squeezing that we have shown in the previous sections. From a physical point of view, the BS and the cavity crystal can induce competing effects on the reservoir memory. The BS causes a loss of photons in the loop pulse. On the other hand, the nonlinear processes that arise from the interactions inside the crystal can increase the energy of the loop pulse. Concretely, active crystals (those that produce squeezing) increase the total photon number inside the pulse, as opposed to passive crystals, which maintain it constant. This energy enhancement from active crystals counteracts BS losses and helps retain information inside the loop pulse for longer times.

To quantify how these effects contribute constructively to the functioning of the QRC, we study the time evolution of the loop pulse during the protocol. At each round trip, the field quadratures of the loop pulse transform via the symplectic matrix $A = \sqrt {R} S$, where $R$ stands for the BS reflectivity and $S$ is the symplectic matrix modeling the evolution of the pulse inside the cavity crystal (see Apps. A and B for details on the matrices $S$ and $A$, respectively). The memory retention of our reservoir is directly related to the powers of the symplectic matrix $A$ (App. B.1) and can be quantified using the spectral norm of $A^{d}$ (written as $\lVert A^{d} \rVert _{2}$), in which $d$ represents the delay of the input information (Eq. (18) from App. B.2). In that regard, the faster $\lVert A^{d} \rVert _{2}$ decays to zero, the smaller the memory retention of our reservoir will be (and vice-versa). It can be analytically shown that, if the crystal inside the cavity is passive, then $\lVert A^{d} \rVert _{2} = R^{d/2}$ and, if it is active, then $\lVert A^{d} \rVert _{2} \geq R^{d/2}$. From these equations, we infer that active cavity crystals can improve the memory retention of the reservoir (similarly to the direct effect of the BS reflectivity). In Fig. 5 we show the values of $\lVert A^{d} \rVert _{2}$ as a function of the delay for the randomly generated Hamiltonians that we considered in Sect. 3. In Fig. 5(a) the spectral norm of $A$ is plotted for active cavities with different values of cavity squeezing, $r$, while in Fig. 5(b) the same is shown for passive cavities with different values of the BS reflectivity. We see how the negative slope of the spectral norm decreases in magnitude as we increase either the cavity squeezing (Fig. 5(a)) or the BS reflectivity (Fig. 5(b)). This means that the magnitude of the delayed input information decays more slowly, and thus it may be reproduced by our trained reservoir more easily.
In the presence of readout noise, input information that decays more slowly with time translates into more accessible memory and improves the reservoir’s robustness to noise. This is the reason why cavity squeezing improves the performance in every benchmark task shown in Sect. 3. While both $R$ and $r$ contribute to such a memory enhancement, increasing the BS reflectivity has the drawback of reducing the cavity light reaching the detector (lowering the signal-to-noise ratio) and is thus not as effective for increasing the noise robustness.
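The two decay laws above can be checked numerically in a reduced single-mode sketch (our own, hypothetical construction): building $S$ in Bloch-Messiah form from random rotations, the passive case reproduces $\lVert A^{d}\rVert_{2} = R^{d/2}$ exactly, while the active case stays above that bound.

```python
import numpy as np

rng = np.random.default_rng(42)

def rot(theta):
    # 2x2 rotation: orthogonal and symplectic for a single mode.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def random_symplectic(r):
    # Single-mode Bloch-Messiah form S = U D V with random rotations
    # U, V and D = diag(e^{r/2}, e^{-r/2}); r = 0 yields a passive S.
    D = np.diag([np.exp(r / 2), np.exp(-r / 2)])
    return rot(rng.uniform(0, 2 * np.pi)) @ D @ rot(rng.uniform(0, 2 * np.pi))

R = 0.5
def norm_decay(r, d_max=10):
    # Spectral norms ||A^d||_2 for d = 1, ..., d_max with A = sqrt(R) S.
    A = np.sqrt(R) * random_symplectic(r)
    return [np.linalg.norm(np.linalg.matrix_power(A, d), 2)
            for d in range(1, d_max + 1)]

passive_norms = norm_decay(0.0)  # equals R^{d/2} exactly
active_norms = norm_decay(1.0)   # bounded below by R^{d/2}
```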

Fig. 5. Spectral norm of loop matrix dynamics: spectral norm of the loop dynamical matrix $A$ to the power of the delay $d$ for (a) active transformations with $R = 0.5$ and different values of the squeezing, $r$, and (b) passive transformations ($r = 0$) for different values of the reflectivity. Each curve shows the median from 100 different realizations while the shades range from the first to the ninth decile.


5. Conclusion

In light of the significant achievements in classical RC [67], photonic platforms are emerging as promising candidates for quantum implementations, offering commendable features such as fast processing rates and low decoherence at room temperature [19]. Different photonic QRC platforms have already been theoretically explored, showing improvements due to the enlarged Hilbert space [36,37] as well as the capability of real-time processing without the use of external memories [45].

Detrimental effects of readout noise have been discussed both in classical RC [48,49] and in QRC settings [43–45]. In the case of quantum reservoirs, the problem is even more profound, as readout noise is unavoidable in principle due to the stochastic nature of quantum measurements, whose statistical fluctuations can hinder any possible quantum advantage [19,44,45]. In this paper, we demonstrated the performance-enhancing potential of quantum squeezing (active cavity) applied to a vacuum-state quantum memory (loop pulse) to overcome noise in realistic scenarios. This establishes squeezing as a quantum resource for accessing the enlarged space of quantum correlations and entanglement and for improving the performance in relevant benchmark tasks, both predictive and memory-demanding. Even though tuning the BS reflectivity also improves memory retention, as in [45], increasing the cavity squeezing is shown to be a preferred method to improve the reservoir robustness under adverse noise conditions. Interestingly, once measurement noise is accounted for, the effect of squeezing on the QRC performance deviates completely from predictions under ideal conditions.

In summary, state-of-the-art frequency-multiplexed quantum networks [16,54,55,59] represent a powerful setup for near-term experimental implementations of QRC. Our results serve as a guide for the experimental design, laying the foundations for photonic QRC in CV in realistic noisy scenarios while exploiting quantum resources.

A. Symplectic formalism of the crystal dynamics

A CV quantum state is completely determined, in the Heisenberg picture, by the statistics of the quadrature vector $\hat {\mathbf {Q}} = \left ( \hat {x}_{1}, \hat {p}_{1}, \dots, \hat {x}_{N}, \hat {p}_{N} \right )^{\top }$, where $\hat {x}_{i} = \frac {1}{\sqrt {2}} \left ( \hat {a}_{i}^{\dagger } + \hat {a}_{i} \right )$ and $\hat {p}_{i} = \frac {i}{\sqrt {2}} \left ( \hat {a}_{i}^{\dagger } - \hat {a}_{i} \right )$ ($i = 1, \dots, N$) are, respectively, the amplitude and phase quadratures of each mode. For quadratic Hamiltonians as in Eq. (5) the evolution of the quadrature vector can be written as

$$e^{i \hat{H}t} \hat{\mathbf{Q}} e^{{-}i \hat{H}t} = S_{\hat{H}} (t) \hat{\mathbf{Q}} ,$$
where $S_{\hat {H}}(t)$ is a $2N \times 2N$ symplectic matrix [2,3]. For the sake of readability, we will drop the $\hat {H}$ subscript (and the time dependence) and simply write the symplectic matrix as $S$. Every symplectic matrix admits a Bloch-Messiah decomposition [68,69],
$$ S = U \Delta_{\mathbf{\xi}} V , $$
$$ \Delta_{\mathbf{\xi}} = \text{diag}\left( \xi_{1},\xi_{1}^{{-}1}, \xi_{2},\xi_{2}^{{-}1}, \dots, \xi_{N},\xi_{N}^{{-}1} \right) , $$
where $U$ and $V$ are orthogonal symplectic matrices and $\xi _{i}^{-1}$ ($\xi _{i}$) provides the squeezing (anti-squeezing) applied to the $i$-th supermode (we consider $\xi _{i} \geq 1 \ \forall i$). The supermode basis is the one in which the Hamiltonian generates single-mode squeezed states. The squeezing and anti-squeezing values applied to the supermodes correspond to the singular values of $S$, so the transformation is passive if and only if all the singular values of $S$ are equal to 1 ($\Delta _{\mathbf {\xi }}$ is the identity). It is straightforward to show that if $S$ is passive, then all its powers $S^{d}$ are also passive.

For the simulations performed in this manuscript, we have considered, for simplicity, a nonlinear crystal that squeezes every supermode by the same amount, so $\xi _{i}^{-1} \equiv e^{-r/2}$ ($\forall i$), where $r$ denotes the squeezing strength per mode applied by the Hamiltonian. We thus set $\Delta _{\xi } = \bigoplus _{i=1}^{N} \text {diag} \left ( e^{r/2}, e^{-r/2} \right )$ and choose the matrices $U$ and $V$ randomly to generate the symplectic matrix $S$ via Eq. (12). The effect of more complex squeezing strength distributions could also be explored.
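A small numerical sketch of this construction follows. For illustration we restrict $U$ and $V$ to mode-wise random rotations (a restricted choice; the general passive transformations used in the paper also mix modes), with example values $N = 3$ and $r = 0.8$ of our own choosing, and verify that the resulting $S$ is symplectic with singular values $e^{\pm r/2}$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, r = 3, 0.8                        # illustrative values

def blkdiag(blocks):
    # Direct sum of 2x2 blocks in the (x1, p1, ..., xN, pN) ordering.
    out = np.zeros((2 * len(blocks), 2 * len(blocks)))
    for i, b in enumerate(blocks):
        out[2 * i:2 * i + 2, 2 * i:2 * i + 2] = b
    return out

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

U = blkdiag([rot(t) for t in rng.uniform(0, 2 * np.pi, N)])
V = blkdiag([rot(t) for t in rng.uniform(0, 2 * np.pi, N)])
Delta = blkdiag([np.diag([np.exp(r / 2), np.exp(-r / 2)])] * N)
S = U @ Delta @ V                    # Bloch-Messiah form, Eq. (12)

# Symplectic form and checks: S Omega S^T = Omega, singular values e^{±r/2}.
Omega = blkdiag([np.array([[0.0, 1.0], [-1.0, 0.0]])] * N)
is_symplectic = np.allclose(S @ Omega @ S.T, Omega)
singular_values = np.sort(np.linalg.svd(S, compute_uv=False))
```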

B. QRC dynamics

B.1. Recursive equations

In this section, we will work out the equations describing the reservoir dynamics to gain insight into the effect of cavity squeezing. We start by considering the quadrature operators of the pulses after the BS coupling at time step $k$. We write them as

$$ \hat{\mathbf{Q}}_{\text{out}}^{(k)} = \sqrt{R} \hat{\mathbf{Q}}_{\text{in}}^{(k)} - \sqrt{1-R} \hat{\mathbf{Q}}_{\text{loop}}^{(k)} , $$
$$ \hat{\mathbf{Q}}_{\text{loop}}^{(k) \prime} = S \left(\sqrt{R} \hat{\mathbf{Q}}_{\text{loop}}^{(k)} + \sqrt{1-R} \hat{\mathbf{Q}}_{\text{in}}^{(k)} \right) , $$
where $\hat {\mathbf {Q}}_{\text {in}}$ and $\hat {\mathbf {Q}}_{\text {loop}}$ are the quadrature operator vectors of the input and the cavity pulse (respectively) before the coupling, $R$ is the BS reflectivity and $S$ is the symplectic transformation applied by the nonlinear medium. To find an expression for the observables, we compute the covariance matrix, $\Gamma _{\text {loop}}^{(k)} = \left \langle \hat {\mathbf {Q}}_{\text {loop}}^{(k)} \hat {\mathbf {Q}}_{\text {loop}}^{(k) \top } \right \rangle - \left \langle \hat {\mathbf {Q}}_{\text {loop}}^{(k)}\right \rangle \left \langle \hat {\mathbf {Q}}_{\text {loop}}^{(k)}\right \rangle ^{\top }$, where the averages are taken over an ensemble of realizations. As we are working with squeezed vacuum, the covariance matrix and the second-order moments are identical. Moreover, as we are dealing with Gaussian states, the higher-order moments are functions of the first- and second-order moments of the quadratures. It can be shown that the covariance matrix of the cavity pulse at the next time step can be written as
$$\Gamma_{\text{loop}}^{(k+1)} = R S\Gamma_{\text{loop}}^{(k)}S^{\top} + (1-R) S\Gamma_{\text{in}}^{(k)}S^{\top} ,$$
which can be further expanded by recursion to yield
$$\Gamma_{\text{loop}}^{(k+1)} = (1-R) S \left[\sum_{d=0}^{\infty} A^{d} \Gamma_{\text{in}}^{(k-d)} \left(A^{d}\right)^{\top} \right] S^{\top} ,$$
where $A = \sqrt {R} S$. The details of the derivations of Eqs. (16) and (17) can be found in our previous work [45]. In Eq. (17), the index $d$ denotes the delay of the input.
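The equivalence between the one-step recursion and its expanded form can be verified numerically. The sketch below (our own single-mode example with illustrative parameters and a constant vacuum input covariance, not the paper's simulation) iterates Eq. (16) and compares the result against a truncated version of the sum in Eq. (17).

```python
import numpy as np

theta1, theta2, R, r = 0.4, 1.1, 0.5, 0.5   # illustrative parameters

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

S = rot(theta1) @ np.diag([np.exp(r / 2), np.exp(-r / 2)]) @ rot(theta2)
A = np.sqrt(R) * S
Gamma_in = 0.5 * np.eye(2)           # constant (vacuum) input covariance
Gamma = 0.5 * np.eye(2)              # initial loop covariance

# Iterate the one-step recursion, Eq. (16).
for _ in range(300):
    Gamma = R * S @ Gamma @ S.T + (1 - R) * S @ Gamma_in @ S.T

# Truncated expansion, Eq. (17), with the same constant input covariance.
acc = np.zeros((2, 2))
Ad = np.eye(2)
for _ in range(300):
    acc += Ad @ Gamma_in @ Ad.T
    Ad = A @ Ad
Gamma_expanded = (1 - R) * S @ acc @ S.T
```

Here $\rho(A) \leq \sqrt{R}\, e^{r/2} < 1$, so both computations converge to the same steady-state covariance.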

B.2. Input memory decay

From Eq. (17) we can see that the magnitude decay of the input information ($\Gamma _{\text {in}}$) depends on the powers of $A$, $A^{d}$, where $d$ denotes the delay of the encoded input. The accessible memory of the reservoir is determined by how much this input information decays each round trip. To quantify this decay we use the spectral norm or L2-norm, defined as

$$\lVert A \rVert_{2} = \max_{|\mathbf{x}|\neq0} \frac{|A \mathbf{x}|}{|\mathbf{x}|} \equiv \xi_{\text{max}}(A) ,$$
where $|\cdot |$ stands for the usual Euclidean norm of a vector and $\xi _{\text {max}}(A)$ is the largest singular value of $A$ [70]. From the definition of $A$, we have $\lVert A^{d} \rVert _{2} = R^{d/2} \lVert S^{d}\rVert _{2}$. If the cavity is passive, then $\lVert S^{d} \rVert _{2} = \lVert S \rVert _{2} = 1$, and so $\lVert A^{d} \rVert _{2} = R^{d/2}$. If the cavity is active, then $\lVert A^{d} \rVert _{2} \geq R^{d/2}$. For the RC protocol to work, it is important that both the echo state property and the fading memory condition hold [71]. It can be shown that these conditions are fulfilled if $\rho (A) < 1$, where $\rho (\cdot )$ stands for the spectral radius of a matrix [36]. Moreover, if $\rho (A) < 1$, then the following also holds:
$$\lim_{d \to \infty} \lVert A^{d} \rVert_{2} = 0 .$$

This ensures that the energy of the system does not diverge. We checked that the condition $\rho (A) < 1$ holds in every simulation in the manuscript.
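The distinction between the spectral norm and the spectral radius is worth illustrating: with squeezing, $\lVert A \rVert_{2}$ can exceed 1 (a transient amplification) while $\rho(A) < 1$ still guarantees the asymptotic decay of Eq. (19). The single-mode example below (our own, with hypothetical parameters) shows exactly this regime.

```python
import numpy as np

# Active single-mode example: rotation times squeezer.
R, r, theta = 0.5, 1.2, 0.7          # illustrative parameters
c, s = np.cos(theta), np.sin(theta)
S = np.array([[c, -s], [s, c]]) @ np.diag([np.exp(r / 2), np.exp(-r / 2)])
A = np.sqrt(R) * S

rho = max(abs(np.linalg.eigvals(A)))                 # spectral radius
norm_1 = np.linalg.norm(A, 2)                        # > 1: transient growth
norm_40 = np.linalg.norm(np.linalg.matrix_power(A, 40), 2)  # -> 0
```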

Funding

Agencia Estatal de Investigación (CEX2021-001164-Q3-M, PID2019-109094GB-C21, PID2019-109094GB-C22, PID2022-140506NB-C21, PID2022-140506NB-C22).

Acknowledgements

We would like to thank Valentina Parigi for her valuable feedback during the preparation of this article. We acknowledge the Spanish State Research Agency, through the María de Maeztu project CEX2021-001164-M funded by the MCIN/AEI/10.13039/501100011033, through the QUARESC project (PID2019-109094GB-C21 and -C22/ AEI / 10.13039/501100011033) and through the COQUSY project PID2022-140506NB-C21 and -C22 funded by MCIN/AEI/10.13039/501100011033, MINECO through the QUANTUM SPAIN project, and EU through the RTRP - NextGenerationEU within the framework of the Digital Spain 2025 Agenda. J.G-B. is funded by the Conselleria d’Educació, Universitat i Recerca of the Government of the Balearic Islands with grant code FPI/036/2020. G.L.G. is funded by the Spanish MEFP/MiU and co-funded by the University of the Balearic Islands through the Beatriz Galindo program (BG20/00085). The CSIC Interdisciplinary Thematic Platform (PTI) on Quantum Technologies in Spain is also acknowledged.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. Ferraro, S. Olivares, M. G. Paris, et al., Gaussian states in quantum information (Bibliopolis, Napoli, 2005).

2. G. Adesso, S. Ragy, and A. R. Lee, “Continuous variable quantum information: Gaussian states and beyond,” Open Syst. Inf. Dyn. 21(01n02), 1440001 (2014). [CrossRef]  

3. A. Serafini, Quantum continuous variables: a primer of theoretical methods (CRC Press, 2017).

4. M. D. Reid and P. D. Drummond, “Quantum correlations of phase in nondegenerate parametric oscillation,” Phys. Rev. Lett. 60(26), 2731–2733 (1988). [CrossRef]  

5. Z. Y. Ou, S. F. Pereira, H. J. Kimble, et al., “Realization of the Einstein-Podolsky-Rosen paradox for continuous variables,” Phys. Rev. Lett. 68(25), 3663–3666 (1992). [CrossRef]  

6. V. Giovannetti, S. Lloyd, and L. Maccone, “Quantum-enhanced measurements: Beating the standard quantum limit,” Science 306(5700), 1330–1336 (2004). [CrossRef]  

7. A. A. Berni, T. Gehring, B. M. Nielsen, et al., “Ab initio quantum-enhanced optical phase estimation using real-time feedback control,” Nat. Photonics 9(9), 577–581 (2015). [CrossRef]  

8. V. Giovannetti, S. Lloyd, and L. Maccone, “Quantum-enhanced positioning and clock synchronization,” Nature 412(6845), 417–419 (2001). [CrossRef]  

9. J. Aasi, J. Abadie, B. P. Abbott, et al., “Enhanced sensitivity of the LIGO gravitational wave detector by using squeezed states of light,” Nat. Photonics 7(8), 613–619 (2013). [CrossRef]  

10. L. S. Madsen, V. C. Usenko, M. Lassen, et al., “Continuous variable quantum key distribution with modulated entangled states,” Nat. Commun. 3(1), 1083 (2012). [CrossRef]  

11. T. Gehring, V. Händchen, J. Duhme, et al., “Implementation of continuous-variable quantum key distribution with composable and one-sided-device-independent security against coherent attacks,” Nat. Commun. 6(1), 8795 (2015). [CrossRef]  

12. H.-S. Zhong, H. Wang, Y.-H. Deng, et al., “Quantum computational advantage using photons,” Science 370(6523), 1460–1463 (2020). [CrossRef]  

13. L. S. Madsen, F. Laudenbach, M. F. Askarani, et al., “Quantum computational advantage with a programmable photonic processor,” Nature 606(7912), 75–81 (2022). [CrossRef]  

14. N. C. Menicucci, P. van Loock, M. Gu, et al., “Universal quantum computation with continuous-variable cluster states,” Phys. Rev. Lett. 97(11), 110501 (2006). [CrossRef]  

15. J.-I. Yoshikawa, S. Yokoyama, T. Kaji, et al., “Invited Article: Generation of one-million-mode continuous-variable cluster state by unlimited time-domain multiplexing,” APL Photonics 1(6), 060801 (2016). [CrossRef]  

16. M. Chen, N. C. Menicucci, and O. Pfister, “Experimental realization of multipartite entanglement of 60 modes of a quantum optical frequency comb,” Phys. Rev. Lett. 112(12), 120505 (2014). [CrossRef]  

17. W. Asavanant, Y. Shiozawa, S. Yokoyama, et al., “Generation of time-domain-multiplexed two-dimensional cluster state,” Science 366(6463), 373–376 (2019). [CrossRef]  

18. N. Killoran, T. R. Bromley, J. M. Arrazola, et al., “Continuous-variable quantum neural networks,” Phys. Rev. Res. 1(3), 033063 (2019). [CrossRef]  

19. P. Mujal, R. Martínez-Peña, J. Nokkala, et al., “Opportunities in quantum reservoir computing and extreme learning machines,” Adv. Quantum Technol. 4(8), 2100027 (2021). [CrossRef]  

20. H. Jaeger, “The “echo state” approach to analysing and training recurrent neural networks-with an erratum note,” Bonn, Germany: German National Research Center for Information Technology GMD Technical Report 148, 13 (2001).

21. W. Maass, T. Natschläger, and H. Markram, “Real-time computing without stable states: A new framework for neural computation based on perturbations,” Neural Comput. 14(11), 2531–2560 (2002). [CrossRef]  

22. M. Lukoševičius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Comput. Sci. Rev. 3(3), 127–149 (2009). [CrossRef]  

23. F. Wyffels and B. Schrauwen, “A comparative study of reservoir computing strategies for monthly time series prediction,” Neurocomputing 73(10-12), 1958–1964 (2010). [CrossRef]  

24. X. Lin, Z. Yang, and Y. Song, “Short-term stock price prediction based on echo state networks,” Expert Syst. with Appl. 36(3), 7313–7317 (2009). [CrossRef]  

25. I. Ilies, H. Jaeger, O. Kosuchinas, et al., “Stepping forward through echoes of the past: forecasting with echo state networks,” Tech. Rep. (2007).

26. K. Nakajima and I. Fischer, Reservoir Computing: Theory, Physical Implementations, and Applications (Springer, 2021).

27. F. Triefenbach, A. Jalalvand, B. Schrauwen, et al., “Phoneme recognition with large hierarchical reservoirs,” in Advances in Neural Information Processing Systems, vol. 23 J. Lafferty, C. Williams, J. Shawe-Taylor, et al., eds. (Curran Associates, Inc., 2010).

28. L. Wang, Z. Wang, and S. Liu, “An effective multivariate time series classification approach using echo state network and adaptive differential evolution algorithm,” Expert Syst. with Appl. 43, 237–249 (2016). [CrossRef]  

29. G. Tanaka, T. Yamane, J. B. Héroux, et al., “Recent advances in physical reservoir computing: A review,” Neural Netw. 115, 100–123 (2019). [CrossRef]  

30. D. Brunner, M. C. Soriano, C. R. Mirasso, et al., “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4(1), 1364 (2013). [CrossRef]  

31. K. Vandoorne, P. Mechet, T. Van Vaerenbergh, et al., “Experimental demonstration of reservoir computing on a silicon photonics chip,” Nat. Commun. 5(1), 3541 (2014). [CrossRef]  

32. L. Larger, A. Baylón-Fuentes, R. Martinenghi, et al., “High-speed photonic reservoir computing using a time-delay-based architecture: Million words per second classification,” Phys. Rev. X 7(1), 011015 (2017). [CrossRef]  

33. G. Van Der Sande, D. Brunner, and M. C. Soriano, “Advances in photonic reservoir computing,” Nanophotonics 6(3), 561–576 (2017). [CrossRef]  

34. K. Fujii and K. Nakajima, “Harnessing disordered-ensemble quantum dynamics for machine learning,” Phys. Rev. Appl. 8(2), 024030 (2017). [CrossRef]  

35. R. Martínez-Peña, J. Nokkala, G. L. Giorgi, et al., “Information processing capacity of spin-based quantum reservoir computing systems,” Cogn. Comput. 15(5), 1440–1451 (2020). [CrossRef]  

36. J. Nokkala, R. Martínez-Peña, G. L. Giorgi, et al., “Gaussian states of continuous-variable quantum systems provide universal and versatile reservoir computing,” Commun. Phys. 4(1), 53 (2021). [CrossRef]  

37. M. Spagnolo, J. Morris, S. Piacentini, et al., “Experimental photonic quantum memristor,” Nat. Photonics 16(4), 318–323 (2022). [CrossRef]  

38. J. Chen, H. I. Nurdin, and N. Yamamoto, “Temporal information processing on noisy quantum computers,” Phys. Rev. Appl. 14(2), 024065 (2020). [CrossRef]  

39. A. Sannia, R. Martínez-Peña, M. C. Soriano, et al., “Dissipation as a resource for quantum reservoir computing,” arXiv, arXiv:2212.12078 (2022). [CrossRef]  

40. S. Ghosh, A. Opala, M. Matuszewski, et al., “Quantum reservoir processing,” npj Quantum Inf. 5(1), 35 (2019). [CrossRef]  

41. G. Llodrà, C. Charalambous, G. L. Giorgi, et al., “Benchmarking the role of particle statistics in quantum reservoir computing,” Adv. Quantum Technol. 6(1), 2200100 (2023). [CrossRef]  

42. R. Martínez-Peña, G. L. Giorgi, J. Nokkala, et al., “Dynamical phase transitions in quantum reservoir computing,” Phys. Rev. Lett. 127(10), 100502 (2021). [CrossRef]  

43. J. Nokkala, R. Martínez-Peña, R. Zambrini, et al., “High-performance reservoir computing with fluctuations in linear networks,” IEEE Trans. Neural Netw. Learning Syst. 33(6), 2664–2675 (2022). [CrossRef]  

44. P. Mujal, R. Martínez-Peña, G. L. Giorgi, et al., “Time-series quantum reservoir computing with weak and projective measurements,” npj Quantum Inf. 9(1), 16 (2023). [CrossRef]  

45. J. García-Beni, G. L. Giorgi, M. C. Soriano, et al., “Scalable photonic platform for real-time quantum reservoir computing,” Phys. Rev. Appl. 20(1), 014051 (2023). [CrossRef]  

46. F. Hu, G. Angelatos, S. A. Khan, et al., “Tackling sampling noise in physical systems for machine learning applications: Fundamental limits and eigentasks,” arXiv, arXiv:2307.16083 (2023). [CrossRef]  

47. W. D. Kalfus, G. J. Ribeill, G. E. Rowlands, et al., “Hilbert space as a computational resource in reservoir computing,” Phys. Rev. Res. 4(3), 033007 (2022). [CrossRef]  

48. M. C. Soriano, S. Ortín, D. Brunner, et al., “Optoelectronic reservoir computing: tackling noise-induced performance degradation,” Opt. Express 21(1), 12–20 (2013). [CrossRef]  

49. M. Soriano, S. Ortin, L. Keuninckx, et al., “Delay-based reservoir computing: Noise effects in a combined analog and digital implementation,” IEEE Trans. Neural Netw. Learning Syst. 26(2), 388–393 (2015). [CrossRef]  

50. K. Nakajima and I. Fischer, Reservoir Computing: Theory, Physical Implementations, and Applications (Springer Singapore, 2021).

51. L. Grigoryeva and J.-P. Ortega, “Echo state networks are universal,” Neural Netw. 108, 495–508 (2018). [CrossRef]  

52. R. Medeiros de Araújo, J. Roslund, Y. Cai, et al., “Full characterization of a highly multimode entangled state embedded in an optical frequency comb using pulse shaping,” Phys. Rev. A 89(5), 053828 (2014). [CrossRef]  

53. J. Roslund, R. M. de Araújo, S. Jiang, et al., “Wavelength-multiplexed quantum networks with ultrafast frequency combs,” Nat. Photonics 8(2), 109–112 (2014). [CrossRef]  

54. Y. Cai, J. Roslund, G. Ferrini, et al., “Multimode entanglement in reconfigurable graph states using optical frequency combs,” Nat. Commun. 8(1), 15645 (2017). [CrossRef]  

55. T. Kouadou, F. Sansavini, M. Ansquer, et al., “Spectrally shaped and pulse-by-pulse multiplexed multimode squeezed states of light,” APL Photonics 8(8), 086113 (2023). [CrossRef]  

56. L. Butschek, A. Akrout, E. Dimitriadou, et al., “Photonic reservoir computer based on frequency multiplexing,” Opt. Lett. 47(4), 782–785 (2022). [CrossRef]  

57. J. Nokkala, F. Arzani, F. Galve, et al., “Reconfigurable optical implementation of quantum complex networks,” New J. Phys. 20(5), 053024 (2018). [CrossRef]  

58. A. Cabot, F. Galve, V. M. Eguíluz, et al., “Unveiling noiseless clusters in complex quantum networks,” npj Quantum Inf. 4(1), 57 (2018). [CrossRef]  

59. P. Renault, J. Nokkala, G. Roeland, et al., “Experimental optical simulator of reconfigurable and complex quantum environment,” PRX Quantum 4(4), 040310 (2023). [CrossRef]  

60. S. Ortín, M. C. Soriano, L. Pesquera, et al., “A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron,” Sci. Rep. 5(1), 14945 (2015). [CrossRef]  

61. J. D. Farmer and J. J. Sidorowich, “Predicting chaotic time series,” Phys. Rev. Lett. 59(8), 845–848 (1987). [CrossRef]  

62. H. Jaeger and H. Haas, “Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication,” Science 304(5667), 78–80 (2004). [CrossRef]  

63. T. Kubota, H. Takahashi, and K. Nakajima, “Unifying framework for information processing in stochastically driven dynamical systems,” Phys. Rev. Res. 3(4), 043135 (2021). [CrossRef]  

64. J. Dambre, D. Verstraeten, B. Schrauwen, et al., “Information processing capacity of dynamical systems,” Sci. Rep. 2(1), 514 (2012). [CrossRef]  

65. M. Inubushi and K. Yoshimura, “Reservoir computing beyond memory-nonlinearity trade-off,” Sci. Rep. 7(1), 10199 (2017). [CrossRef]  

66. M. C. Mackey and L. Glass, “Oscillation and chaos in physiological control systems,” Science 197(4300), 287–289 (1977). [CrossRef]  

67. D. Brunner, M. C. Soriano, and G. Van der Sande, Photonic reservoir computing (De Gruyter, 2019).

68. S. L. Braunstein, “Squeezing as an irreducible resource,” Phys. Rev. A 71(5), 055801 (2005). [CrossRef]  

69. G. Cariolaro and G. Pierobon, “Reexamination of Bloch-Messiah reduction,” Phys. Rev. A 93(6), 062115 (2016). [CrossRef]  

70. R. A. Horn and C. R. Johnson, Matrix Analysis (Cambridge University Press, 1985).

71. Z. Konkoli, “On reservoir computing: from mathematical foundations to unconventional applications,” in Advances in unconventional computing, (Springer, 2017), pp. 573–607.




Figures (5)

Fig. 1.
Fig. 1. Scheme of the loop-based architecture. At a given time step, an external frequency multiplexed pulse (Source) in a vacuum squeezed state, with a classical input encoded in its phase (top-left inset), is injected into the system. Each input pulse is partially transmitted into the cavity via a beam splitter (BS) with reflectivity $R$ and goes through a nonlinear (NL) crystal that generates entanglement among the frequency modes. Light partially transmitted out of the cavity is monitored through a homodyne detector (top right), being coupled to a local oscillator (LO) through a 50:50 BS.
Fig. 2.
Fig. 2. Linear memory under additive noise: (a-b) linear capacity as a function of the delay for different values of cavity squeezing (green for $r = 0$, orange for $r = 0.75$ and purple for $r = 1.5$) and different reflectivities: $R = 0.75$ in (a) and $R = 0.9$ in (b). In both figures, the noise variance is $\sigma _{\text {noise}}^{2} = 10^{-2}$. The curves are taken from averaging among 100 different random realizations and the shadows depict the standard deviation. (c) delay at which the linear capacity drops below $0.9$ as a function of $\sigma _{\text {noise}}^{2}$, including the ideal case ($\sigma _{\text {noise}}^{2} = 0$) for different values of cavity squeezing (different colors) and different values of the reflectivity (line format).
Fig. 3.
Fig. 3. Performance on NARMA10 task: box plot of the mean square error (MSE) of the NARMA10 task as a function of (a) squeezing and (b) reflectivity for different values of the noise variance (different colors). For Fig. 3(a) we take the reflectivity equal to $R = 0.5$ and in Fig. 3(b) the cavity squeezing is equal to zero. For a given value of the x-axis, the boxes for each noise scenario are split to avoid overlapping.
Fig. 4.
Fig. 4. Mackey-Glass time series prediction under additive noise: (a) time series as a function of the reservoir time steps: the black curve shows the real-time series while the green and blue curves show the autonomously driven predictions for $r = 0$ and $r = 1.25$, respectively (averages taken among 100 realizations, shadows depicting the standard deviation). (b-c) chaotic attractor in the phase space $y(t)$-$y(t$-$6)$: the black points provide the real attractor while the green and blue dots provide the results of an autonomously driven realization with $r = 0$ and $r = 1.25$, respectively. In every simulation, the BS reflectivity is set to $R = 0.75$ and the noise variance is $\sigma _{\text {noise}}^{2} = 10^{-1}$.
