1. Introduction
Reservoir computing (RC) is a machine learning method that is particularly suited to solving dynamical tasks [1]. It was introduced as a way of using recurrent networks for machine learning but circumventing the costly training of the network weights [2]. The main principle underpinning reservoir computing is that the reservoir projects the inputs into a sufficiently high dimensional phase space such that it suffices to linearly sample the response of the reservoir in order to approximate the desired target for a given task. For this to work, the reservoir must fulfil certain criteria: the response to sufficiently different inputs must be linearly separable, the reservoir must be capable of performing nonlinear transforms, and the reservoir must have the fading memory property [2]. However, even when these criteria are fulfilled, the performance depends greatly on the dynamics of the reservoir. Hence, in the past two decades a lot of research in the reservoir computing community has focused on the optimisation of the reservoir parameters [3,4,5,6,7,8,9]. Furthermore, the optimisation of the reservoir is a task-specific problem [1,10,11,12] and a universal reservoir, which performs well in a range of tasks, remains elusive.
In a recent paper [13], the authors aim to eliminate the issue of hyperparameter optimisation altogether by removing the reservoir. Their approach essentially takes the well-known nonlinear vector autoregression (NVAR) method, uses a less parsimonious approach to filling the feature vector, and adds Tikhonov regularisation. However, the method of [13] trades the optimisation of the reservoir hyperparameters for the optimisation of the feature vector elements and it cannot be asserted that the latter is generally less costly. Furthermore, one of the main factors driving research into reservoir computing forward is the possibility for hardware implementation [14,15,16,17,18,19], which is impractical when the reservoir is absent.
In this contribution we demonstrate a new approach that reduces the need for hyperparameter optimisation and is well suited to boosting the performance of physically implemented reservoir computers. Specifically, we show that, by adding a time-delayed version of the input for a given task, the performance of an unoptimised reservoir can be greatly improved. We demonstrate this by using one unaltered reservoir to perform six different time series prediction tasks. In each case the only optimisation parameters are the delay and input strength of the additional delayed input. The aim of this work is not to achieve the best possible performance, but rather to demonstrate that reasonable performance can be achieved for various tasks using the same reservoir and at a very low computational cost.
Using time-delayed input is a common approach for adding memory to feedforward networks [20,21,22,23] and is the basis of statistical forecasting methods [21,24]. However, despite the simplicity of this idea, to the best of our knowledge, time-delayed inputs have not been widely used to optimise the performance of reservoir computers. This may be because the focus has been on constructing reservoirs that have the necessary memory to perform a given task [1]. One study in which time-delayed inputs have been used to improve the performance of a time series prediction task is [25]. However, in [25], the manner in which the time-delayed input was constructed assumed that the memory requirements of the task monotonically decrease with increasing steps into the past and did not allow for the input scaling of the delayed input to be varied as a free parameter.
Our results are of particular relevance to the hardware implementation of reservoir computing: in physical systems one does not always have access to the hyperparameters needed to optimise the task-dependent performance, but it should always be possible to add an additional input.
2. Methods
In the following, we describe the reservoir computing concept, the model for the reservoir that we use, our proposed time-delayed input method, and the benchmarking tasks that are used to test our approach.
2.1. Reservoir Computing
In reservoir computing, the reservoir, which at this point can be treated as a black box, is fed an input and the response of the system is sampled a number of times. The responses are then linearly combined to approximate the desired output (see Figure 1a). The linear output weights are trained via linear regression, typically using Tikhonov regularisation or regularisation by noise [1]. A variant of reservoir computing that is of particular relevance for hardware implementation is time-multiplexed reservoir computing using only one nonlinear element [26]. In this scheme, both the injection of the data into the reservoir and the filling of the state matrix occur sequentially. Typically, a mask is applied to the input data in order to diversify the response of the reservoir to the input. In the training phase, the reservoir is fed a sequence of training data of length $K_T$. A mask of length $N_v$ is applied to each element of the training data, where $N_v$ corresponds to the readout dimension (i.e., the number of virtual nodes). Hence, there are $K_T N_v$ time-multiplexed inputs that are sequentially fed into the reservoir. The corresponding state matrix $S$, which has the dimensions $K_T \times (N_v+1)$, is filled row by row, with an additional bias term of 1 at the end of each row. The training step is then to find the $(N_v+1)$-dimensional weight vector $\mathbf{w}$ that best approximates
$S\,\mathbf{w} = \hat{\mathbf{y}},$ (1)
where $\hat{\mathbf{y}}$ is the vector of target outputs. The solution to this linear problem is given by
$\mathbf{w} = \left(S^{T} S + \lambda\,\mathbb{1}\right)^{-1} S^{T} \hat{\mathbf{y}},$ (2)
where $\lambda$ is the Tikhonov regularisation parameter and $\mathbb{1}$ is the identity matrix.
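The readout training therefore reduces to a single regularised linear regression. The following minimal sketch implements Equation (2) for a generic state matrix; the variable names and the placeholder data are illustrative and are not taken from the original work.

```python
import numpy as np

def train_readout(S, y_target, lam=1e-6):
    """Tikhonov-regularised least squares, Eq. (2): w = (S^T S + lam*1)^(-1) S^T y."""
    return np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ y_target)

# Illustrative usage with random placeholder data
rng = np.random.default_rng(0)
S = np.hstack([rng.random((1000, 40)), np.ones((1000, 1))])  # responses + bias column
y_target = rng.random(1000)                                   # target outputs
w = train_readout(S, y_target)
y_out = S @ w                                                 # reservoir computer output
```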
Error Measure
To quantify the performance of the reservoir computer, we use the normalised root mean squared error (NRMSE), defined as
$\mathrm{NRMSE} = \sqrt{\frac{1}{L\,\mathrm{var}\left(\hat{y}\right)}\sum_{l=1}^{L}\left(\hat{y}_{l} - y^{\mathrm{out}}_{l}\right)^{2}},$ (3)
where $\hat{y}_{l}$ are the target values, $y^{\mathrm{out}}_{l}$ are the outputs produced by the reservoir computer, $L$ is the length of the target vector, and $\mathrm{var}(\hat{y})$ is the variance of the target sequence.
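As a sketch, the error measure of Equation (3) can be evaluated directly from the target and output vectors; the normalisation by the variance of the target sequence follows the definition above.

```python
import numpy as np

def nrmse(y_target, y_out):
    """Normalised root mean squared error, Eq. (3)."""
    y_target, y_out = np.asarray(y_target), np.asarray(y_out)
    return np.sqrt(np.mean((y_target - y_out) ** 2) / np.var(y_target))
```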
2.2. Reservoir Model
To investigate the effect of delayed input on a physically implemented reservoir computer, we model a physical system that is inspired by optical delay line reservoirs [27,28]. Delay line implementations have shown promise due to high throughput speeds [29]. However, complex network connectivity, achieved via the introduction of multiple delays, represents a significant experimental hurdle, or requires opto-electrical conversion of the signal for electronic storage, thereby forgoing the advantages of an all-optical implementation. Recent developments in optical quantum memories with high bandwidth [30] and high capacity [31] allow for the on-demand storage and retrieval of optical pulses and thus the implementation of delays of arbitrary length, limited only by the coherence time of the optical memory, which can reach up to one second [32]. The reservoir model described below represents a physical optical system comprising an optical memory for the reconfigurable and arbitrary coupling of the injected information (modeled as memory cells with input and output coupling), a nonlinear element (modeled as a semiconductor optical amplifier), and a short delay line, whose purpose is not to introduce delay, but to recouple existing information back into the optical system. A sketch of the envisaged setup is shown in Figure 1b. A time-multiplexed input is fed through a nonlinear element and then stored in the memory cells (labelled by the index n) according to a certain input coupling topology. Combinations of the memory cells are partially read out according to the output coupling topology and finally coupled back into the nonlinear element. Since the write and read-out processes repeat in time, it is possible to realise time-varying read and write topologies. We describe this by adding a second index to the coupling matrices, where M is the period within which the coupling sequence repeats. The map describing this process is given as follows: the state of the memory cells at time step k evolves to the next time step according to
(4)
where
(5)
is the function describing the nonlinear element, and the coupling matrices describe the (possibly time-varying) coupling into and out of the memory cells. The value of K describes the fraction of the output that is coupled back into the nonlinear element, and $J_k$ is the time-multiplexed input at step k. The coupling matrices have the dimensions M × N, where N is the number of memory cells. Note that the row index of the coupling matrices depends on the time step k: for each iteration, one row determines which memory cells are written into and which are read out of, with one matrix giving the write sequence and the other the out-coupling sequence. The entries of these matrices take values between zero and one and are chosen such that the memory cells receiving new input are overwritten, while those without new input are updated according to how much was read out.

The model described above allows for arbitrary coupling between the memory cells. For this study, we choose the coupling such that, at every input cycle, one memory cell is overwritten and one is read out. For this choice of coupling, Equations (4) and (5) can be rewritten as
(6)
We then choose the number of memory cells N in relation to the number of virtual nodes $N_v$ that will be used for the reservoir computing tasks. This coupling describes a type of ring coupling akin to delay-based reservoir computers with time-delayed feedback, where T is the input clock-cycle and $\theta = T/N_v$ is the virtual node separation [28,33]; comparing the continuous and discrete cases relates the feedback delay time, the clock-cycle, and the node separation to their discrete counterparts. We choose such a simple coupling scheme as it has been demonstrated that such coupling topologies perform similarly to random coupling topologies [4]. Using Equation (6), the rows of the state matrix $S$ are filled with the sequential responses of the system and the bias term (see Figure 1 for an illustration). For the nonlinearity, we choose
(7)
which describes the input response of a semiconductor optical amplifier [34,35].
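Since the explicit forms of Equations (6) and (7) are not reproduced above, the following sketch only illustrates the ring-like memory-cell structure: at every time-multiplexed step, the nonlinear element is driven by the current masked input plus a fraction K of the cell that is read out, one cell is overwritten with the response, and the responses fill one row of the state matrix. The saturable-gain function f is a generic stand-in for the semiconductor-optical-amplifier response of Equation (7), and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def f(x, g0=2.0, p_sat=1.0):
    # Generic saturable-gain placeholder; NOT the exact SOA response of Eq. (7)
    return g0 * x / (1.0 + x / p_sat)

def run_ring_reservoir(inputs, n_cells=31, n_virt=40, K=0.02):
    """Ring-like memory-cell reservoir sketch.

    inputs : 1D array of time-multiplexed masked inputs (length = n_steps * n_virt)
    returns: state matrix of shape (n_steps, n_virt + 1), last column is the bias term 1
    """
    cells = np.zeros(n_cells)
    n_steps = len(inputs) // n_virt
    S = np.ones((n_steps, n_virt + 1))
    for step in range(n_steps):
        for sigma in range(n_virt):
            k = step * n_virt + sigma
            read = cells[(k + 1) % n_cells]       # cell read out at this step
            response = f(inputs[k] + K * read)    # nonlinear element
            cells[k % n_cells] = response         # one cell is overwritten
            S[step, sigma] = response             # fill one entry of the state-matrix row
    return S
```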
2.3. Input and Mask
The reservoir input is given by a task-dependent time series and a time-delayed version of this time series. Before the data are fed into the reservoir, masks are applied to both input series. The masks consist of values drawn from a uniform distribution between 0 and 1. The final input is then given by
$J_{(j-1)N_v+\sigma} = \gamma\, m_{\sigma}\, u_{j} + \gamma_{d}\, \tilde{m}_{\sigma}\, u_{j-d} + I_{0}, \quad \sigma = 1,\dots,N_v,$ (8)
where $\gamma$ and $\gamma_{d}$ are the input scaling factors, $u_{j}$ is the input time series, d is the input delay, $m_{\sigma}$ and $\tilde{m}_{\sigma}$ are the $\sigma$th entries of the $N_v$-dimensional masking vectors, and $I_{0}$ is a constant offset. A sketch of the masked input sequence is shown in Figure 2.
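A minimal sketch of the input construction of Equation (8): the task input and its time-delayed copy are each multiplied by their own random mask and input scaling and summed together with the constant offset. The zero padding of the first d delayed samples and the array layout are illustrative choices.

```python
import numpy as np

def masked_input(u, d, gamma, gamma_d, I0, n_virt=40, seed=0):
    """Time-multiplexed input of Eq. (8): masked input plus masked delayed input."""
    rng = np.random.default_rng(seed)
    m = rng.uniform(0.0, 1.0, n_virt)        # mask for the non-delayed input
    m_d = rng.uniform(0.0, 1.0, n_virt)      # mask for the delayed input
    u = np.asarray(u)
    u_delayed = np.concatenate([np.zeros(d), u[:len(u) - d]]) if d > 0 else u
    # one masked row of length n_virt per input value, flattened into one sequence
    J = gamma * np.outer(u, m) + gamma_d * np.outer(u_delayed, m_d) + I0
    return J.ravel()
```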
2.4. Time Series Prediction Tasks
2.4.1. Mackey–Glass
The Mackey–Glass equation is a delay differential equation which exhibits chaotic dynamics. The reservoir computing benchmarking task is to predict the time series s steps ahead in the chaotic regime. The Mackey–Glass equation is [36]:
$\dot{x}(t) = \frac{\beta\, x(t-\tau)}{1 + x(t-\tau)^{n}} - \alpha\, x(t).$ (9)
We use the standard parameter values. To create the input sequence $u_{k}$, the time series generated by Equation (9) is sampled with a fixed time step. The corresponding target sequence is then given by $\hat{y}_{k} = u_{k+s}$.
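A sketch of the data generation for this task is given below. The parameter values (β = 0.2, α = 0.1, n = 10, τ = 17), the Euler integration scheme, and the sampling step are assumptions corresponding to values commonly used in the reservoir computing literature, since the specific values are not reproduced above.

```python
import numpy as np

def mackey_glass(n_samples, s=1, beta=0.2, alpha=0.1, n=10, tau=17.0,
                 dt=0.1, subsample=10, x0=1.2):
    """Euler integration of Eq. (9); returns input u_k and target y_k = u_{k+s}."""
    delay = int(round(tau / dt))
    total = n_samples * subsample + delay
    x = np.full(total, x0)                       # constant history as initial condition
    for i in range(delay, total - 1):
        x_tau = x[i - delay]
        x[i + 1] = x[i] + dt * (beta * x_tau / (1.0 + x_tau ** n) - alpha * x[i])
    series = x[delay::subsample][:n_samples]     # subsampled time series
    return series[:-s], series[s:]

u, y = mackey_glass(31_000, s=10)                # ten steps ahead prediction pair
```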
2.4.2. NARMA10
NARMA10 is a commonly used benchmarking task that is defined by the iterative formula
$y_{k+1} = 0.3\, y_{k} + 0.05\, y_{k} \sum_{i=0}^{9} y_{k-i} + 1.5\, u_{k-9}\, u_{k} + 0.1,$ (10)
where $u_{k}$ are independently and identically distributed random numbers drawn from a uniform distribution on the interval [0, 0.5] [37]. The reservoir input sequence is given by the sequence of $u_{k}$ and the target sequence by the corresponding $y_{k}$.
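The NARMA10 sequence can be generated iteratively from Equation (10); the sketch below follows the standard Atiya–Parlos recursion with inputs drawn uniformly from [0, 0.5].

```python
import numpy as np

def narma10(n_samples, seed=0):
    """Generate a NARMA10 input/target pair according to Eq. (10)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, n_samples)
    y = np.zeros(n_samples)
    for k in range(9, n_samples - 1):
        y[k + 1] = (0.3 * y[k]
                    + 0.05 * y[k] * np.sum(y[k - 9:k + 1])
                    + 1.5 * u[k - 9] * u[k]
                    + 0.1)
    return u, y
```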
2.4.3. Lorenz
The Lorenz system [38] is given by
$\dot{x} = \sigma (y - x), \qquad \dot{y} = x(\rho - z) - y, \qquad \dot{z} = xy - \beta z.$ (11)
With the standard parameter choices, this system exhibits chaotic dynamics. We use the x variable, sampled with a fixed step size, as the input for two time series prediction tasks. The first is one step ahead (s = 1) prediction of the x variable. The second is one step ahead (s = 1) cross-prediction of the z variable.
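The two Lorenz tasks can be set up as sketched below. The parameter values (σ = 10, ρ = 28, β = 8/3), the Runge–Kutta integration, the step size, and the discarded transient are assumptions made for illustration, since the specific values are not reproduced above.

```python
import numpy as np

def lorenz_tasks(n_samples, dt=0.02, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                 n_transient=1000):
    """Integrate Eq. (11) with RK4 and return the input (Lorenz x) together with
    the targets for x one step ahead prediction and z one step ahead cross-prediction."""
    def deriv(v):
        x, y, z = v
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    v = np.array([1.0, 1.0, 1.0])
    xs = np.zeros(n_samples + 1)
    zs = np.zeros(n_samples + 1)
    for i in range(n_transient + n_samples + 1):
        if i >= n_transient:                      # discard the initial transient
            xs[i - n_transient], zs[i - n_transient] = v[0], v[2]
        k1 = deriv(v)
        k2 = deriv(v + 0.5 * dt * k1)
        k3 = deriv(v + 0.5 * dt * k2)
        k4 = deriv(v + dt * k3)
        v = v + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return xs[:-1], xs[1:], zs[1:]                # input, x target, z target
```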
2.5. Simulation Conditions
For all tasks, the reservoir is initialised with an input sequence of length 10,000. The system is then trained on 10,000 inputs. This is followed by another buffer of 10,000 inputs, before the performance is tested on a further input sequence, unless stated otherwise. For each task, the reservoir parameters are kept identical and are as given in Table 1. The input scaling of the primary (non-delayed) input and the offset are chosen such that the input range for each task is approximately [0.4, 1.3]. The scaling of the delayed input $\gamma_{d}$ and the input delay d are used as the optimisation parameters. For each task, the performance is averaged over 100 realisations of the random masks and also, in the case of NARMA10, of the random inputs.
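The simulation procedure can be summarised by the sketch below, which ties together the helper functions from the previous sketches (mackey_glass, masked_input, run_ring_reservoir, train_readout, nrmse). The buffer and training lengths follow the text; the test length and the delayed-input parameter values are illustrative assumptions, and the pure-Python reservoir loop is slow for sequences of this length.

```python
# End-to-end sketch for one Mackey-Glass realisation (illustrative values only)
N_BUFFER, N_TRAIN, N_TEST = 10_000, 10_000, 5_000
d, gamma, gamma_d, I0 = 5, 1.0, 0.5, 0.2

u, y = mackey_glass(N_BUFFER + N_TRAIN + N_BUFFER + N_TEST + 1, s=1)
J = masked_input(u, d=d, gamma=gamma, gamma_d=gamma_d, I0=I0)
S = run_ring_reservoir(J)

i0 = N_BUFFER                 # end of the initialisation buffer
i1 = i0 + N_TRAIN             # end of the training section
i2 = i1 + N_BUFFER            # end of the second buffer, start of the test section
w = train_readout(S[i0:i1], y[i0:i1])
err = nrmse(y[i2:i2 + N_TEST], S[i2:i2 + N_TEST] @ w)
print(f"NRMSE = {err:.3f}")
```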
3. Results
The performance of the reservoir with additional delayed input is tested on six tasks; we first consider Mackey–Glass time series prediction for one, three, and ten steps into the future. The results of the Mackey–Glass tasks, and their relation to the delayed-input parameters, are depicted in Figure 3a–c. By inspecting the evolution of the performance error as a function of d and $\gamma_{d}$, an optimal performance and thus an optimal value for d can be identified (brightest yellow region). This value, however, depends on the task and thus changes between the panels. In order to quantify the impact of the delayed-input strength on the performance, we present scans of the delayed-input strength for the optimal input delay d, for each task, in Figure 4a–c. $\gamma_{d} = 0$ corresponds to the system without delayed input and serves as the reference to quantify the performance boost due to the delayed input. For each of the three cases, the delayed input leads to a reduction in the NRMSE, ranging from roughly 20% up to more than a factor of three, depending on the prediction horizon s. The optimal values for the delay and the input scaling vary depending on the number of steps s predicted into the future. In agreement with the results presented in [39], larger input scaling is required as s increases, indicating that nonlinear transforms become increasingly important. In terms of the absolute performance, similar results are achieved compared with other studies [39,40], despite the number of virtual nodes used in this study being significantly lower.
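The two delayed-input parameters can be optimised by a simple scan, as sketched below; the grid ranges and the reuse of the variables from the sketch in Section 2.5 are illustrative assumptions and do not reproduce the exact optimisation procedure or parameter ranges used for Figure 3.

```python
import itertools
import numpy as np

def evaluate(d_val, gamma_d_val):
    """Train and test the reservoir for one choice of the delayed-input parameters."""
    J = masked_input(u, d=d_val, gamma=gamma, gamma_d=gamma_d_val, I0=I0)
    S = run_ring_reservoir(J)
    w = train_readout(S[i0:i1], y[i0:i1])
    return nrmse(y[i2:i2 + N_TEST], S[i2:i2 + N_TEST] @ w)

delays = range(0, 21)
scalings = np.linspace(0.0, 1.0, 11)
best = min((evaluate(dv, gv), dv, gv) for dv, gv in itertools.product(delays, scalings))
print(f"best NRMSE {best[0]:.3f} at d = {best[1]}, gamma_d = {best[2]:.2f}")
```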
The results for the NARMA10 task are shown in Figure 3d and Figure 4d. Without the delayed input, the performance of the reservoir is very poor. This is in contrast to the Mackey–Glass tasks, for which the performance without delayed input is reasonable (Figure 3a). Moreover, this finding supports the general observation that reservoir computers have to be optimised for individual tasks and perform poorly as universal approximators [1,10,11]. The inclusion of delayed input significantly reduces the NARMA10 error, reaching an NRMSE of about 0.3 at the optimal input delay. In absolute terms, an NRMSE of 0.3 is within the range of typically quoted best values (NRMSE = 0.15–0.4) [4,10,41,42,43,44]; however, these values are usually achieved with a much higher output dimension than that used here. The performance achieved in this study came at a very low computational cost. As a comparison, in [44] the authors investigate the influence of combining echo state networks with different timescales and achieve a best NRMSE of just under 0.4, at a greater computational expense.
The remaining two tasks are one step ahead Lorenz x prediction and one step ahead Lorenz z cross-prediction, the results of which are shown in Figure 3e,f and Figure 4e,f. In both cases there is an improvement in the performance with the correct choice of the delayed input. It has been demonstrated that Lorenz x one step ahead prediction requires only the very recent history of the x-variable time series [13], and we find the optimal input delay to be consistent with this prior knowledge. For the Lorenz z cross-prediction task, on the other hand, there is a strong dependence on the history of the Lorenz x variable. In this case, the best performance is achieved when the second input is delayed by several time steps. The optimal delayed-input scaling $\gamma_{d}$ is larger for the Lorenz z task than for the Lorenz x task (as seen by comparing the positions of the minima in Figure 4e,f), indicating that the cross-prediction task requires a greater degree of nonlinearity as well as a longer memory.
In order to demonstrate that the improvement in the performance with delayed input is not specific to the reservoir parameters used for Figure 3, in Figure 5 we show the NRMSE for the Mackey–Glass task as a function of (a) the virtual node coupling strength K and (b) the coupling delay N (i.e., the number of memory cells). These parameters have a strong influence on the properties of the reservoir. In both cases the NRMSE without delayed input (orange dotted line) shows a large variation over the respective parameter ranges and is always larger than the error with optimised delayed input (blue dashed line). With optimised delayed input the variation in the error is comparatively small, demonstrating that the inclusion of the delayed input works well independent of the reservoir properties. The peak in the NRMSE in Figure 5b is a well-known resonance effect that occurs at resonances between the number of virtual nodes and the coupling range N, equivalent to clock-cycle and delay resonances in time continuous systems [45].
To further demonstrate the universality of this method, we show the NARMA10 error with delayed input for a time continuous reservoir in Figure 6. In this case the reservoir is given by the Stuart–Landau equation with time-delayed feedback (see Appendix B). The reservoir parameters have not been optimised for the NARMA10 task, resulting in very poor performance without delayed input. With optimised delayed-input parameters, reasonable performance is achieved, similar to the optimal results for the memory cell reservoir in Figure 3d. For the Stuart–Landau reservoir, the optimal input delay differs from that of the memory cell reservoir. This is because the required input delay depends both on the dynamics of the reservoir and on the memory requirements of the particular task.
4. Discussion
We have shown that, for various time series prediction tasks, including a delayed version of the input can lead to a substantial improvement in the performance of a reservoir. We have demonstrated this using a simple map describing a semiconductor optical amplifier nonlinearity and a ring-like coupling realised via memory cells. With this approach we were able to use one unaltered reservoir to perform well on six different tasks, each with different memory and nonlinear transform requirements. The performance boost due to the delayed input is achieved over a wide range of the reservoir parameters and was also demonstrated for a time continuous system, indicating that our approach is applicable to a wide range of reservoirs.
Our results are significant for a number of reasons. Firstly, we have demonstrated that computationally expensive hyperparameter optimisation can be circumvented by tuning only two input parameters. By including an additional delayed input, reasonable performance can be achieved using an unoptimised reservoir. Nevertheless, we note that, depending on the requirements for a given task, additional hyperparameter optimisation may be necessary. Secondly, to the best of our knowledge, this is the first demonstration of an identical reservoir performing well on such a large range of tasks. Thirdly, the simplicity of our approach means that it is well suited to being applied to physical reservoirs.
This study has raised several questions surrounding delay-based reservoir optimisation that require further investigation. For example, the optimal delayed-input parameters are task dependent, and how these relate to a given task is not fully understood. The NARMA10 results presented in this study indicate that the optimal delayed-input parameters are related both to the reservoir and to the requirements of the task. This means that it may be possible not only to use reservoir computing for real-world time series prediction tasks, but also to gain insights into the dynamical systems being investigated. For example, in tasks such as El Niño prediction, where the underlying dynamical system is very complex and the relevant physical processes are not fully understood [46], investigations of delayed inputs could provide critical insight into the timescales involved. Furthermore, the minimum requirements for a reservoir to yield good performance on a range of tasks by only tuning the delayed-input parameters remain to be determined.
A natural extension of our proposed approach is to include multiple delayed input terms. This would bring the reservoir computing approach closer to classical statistical forecasting methods such as NVAR and could lead to further improved performance, especially for tasks involving multiple disparate timescales. However, the possible performance improvement with added input terms must be weighed against the associated increase in the computational cost, as each added input introduces two new optimisation parameters.
Author Contributions: Conceptualisation, L.J., J.W. and K.L.; methodology, L.J.; software, L.J.; validation, L.J.; formal analysis, L.J.; investigation, L.J. and E.R.; writing—original draft preparation, L.J.; writing—review and editing, L.J., E.R., J.W. and K.L.; visualisation, L.J. and K.L.; funding acquisition, L.J., J.W. and K.L. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Deutsche Forschungsgemeinschaft (DFG), grant number LU 1729/3-1. E.R. acknowledges funding through the Helmholtz Einstein International Berlin Research School in Data Science (HEIBRiDS).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest: The authors declare no conflict of interest.
The following abbreviations are used in this manuscript:
RC | Reservoir computing
NVAR | Nonlinear vector autoregression
NRMSE | Normalised root mean squared error
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 1. (a) Sketch of the reservoir computing concept. The responses of the reservoir to an input are combined in a weighted sum to generate the corresponding output; the read-out weights are trained. (b) Sketch of the memory cell reservoir described in Section 2.2, where a time-multiplexed input is fed into the reservoir (see Figure 2 for the construction of the time-multiplexed input). The index n labels the memory cells (N in total), which are addressed via the coupling matrices. K labels the feedback strength at which the output of the memory cells (given by Equation (5)) is fed back into the nonlinearity. One row of the state matrix corresponds to the vector of responses in (a).
Figure 2. Sketch of the generation of the final time-multiplexed input sequence from the task-dependent input, a delayed version of this input, and the two masks, as described by Equation (8).
Figure 3. NRMSE as a function of the delayed-input parameters d and $\gamma_{d}$ for Mackey–Glass (a) one, (b) three, and (c) ten steps ahead prediction, (d) NARMA10, (e) Lorenz x one step ahead prediction, and (f) Lorenz z one step ahead cross-prediction. Parameters are as stated in Section 2.5, except for (a,e).
Figure 4. NRMSE for the optimal input delay d, as a function of the delayed-input scaling $\gamma_{d}$ for Mackey–Glass (a) one, (b) three, and (c) ten steps ahead prediction, (d) NARMA10, (e) Lorenz x one step ahead prediction, and (f) Lorenz z one step ahead cross-prediction. The error bars indicate the standard deviation. The optimal input delay differs between the panels. The remaining parameters are as stated in Section 2.5, except for (a,e).
Figure 5. NRMSE for Mackey–Glass 10 step ahead prediction as a function of (a) the virtual node coupling strength K and (b) the coupling delay N. The orange dotted (blue dashed) lines show the results without (with) delayed input. Along the blue curve the delayed-input parameters d and $\gamma_{d}$ have been optimised (see Figure A1 in Appendix A for their values). The error bars indicate the standard deviation. All remaining parameters are as stated in Section 2.5.
Figure 6. NRMSE for the NARMA10 task as a function of the delayed-input parameters d and $\gamma_{d}$, using the Stuart–Landau delay-based reservoir computer described in Appendix B.
Table 1. Reservoir and input parameter values.
Parameter | Value | Parameter | Value
--- | --- | --- | ---
– | 40 | – | 1
K | 0.02 | N | 31
– | 30 | – | 5 × 10
– | 1 | – | 0
– | 0.03 | – | 0.85
– | 1.8 | – | 0.4
Appendix A. Optimised Input Parameters
The optimised values of the delayed-input scaling $\gamma_{d}$ and the input delay d corresponding to the results shown in Figure 5 are plotted in Figure A1.
Figure A1. Values of the optimised input parameters $\gamma_{d}$ (orange) and d (blue), corresponding to the Mackey–Glass ten steps ahead prediction results depicted in Figure 5, as a function of (a) the virtual node coupling strength K and (b) the coupling delay N. All remaining parameters are as stated in Section 2.5.
Appendix B. Stuart–Landau Delay-Based Reservoir Computer
The reservoir is given by the Stuart–Landau system with time-delayed feedback. The corresponding reservoir and input parameter values are listed in the table below.
Reservoir and input parameter values for the Stuart–Landau RC.
Parameter | Value | Parameter | Value
--- | --- | --- | ---
– | −0.02 | – | 0
– | −0.1 | K | 0.1
– | 105 | – | 0
– | 30 | – | 1 × 10
– | 0.01 | – | 0
T | 80 | |
References
1. Nakajima, K.; Fischer, I. Reservoir Computing: Theory, Physical Implementations, and Applications; Springer: New York, NY, USA, 2021.
2. Jaeger, H. The ’Echo State’ Approach to Analysing and Training Recurrent Neural Networks; GMD Report 148 GMD—German National Research Institute for Computer Science: Darmstadt, Germany, 2001.
3. Dutoit, X.; Schrauwen, B.; Van Campenhout, J.; Stroobandt, D.; Van Brussel, H.; Nuttin, M. Pruning and regularization in reservoir computing. Neurocomputing; 2009; 72, pp. 1534-1546. [DOI: https://dx.doi.org/10.1016/j.neucom.2008.12.020]
4. Rodan, A.; Tiňo, P. Minimum Complexity Echo State Network. IEEE Trans. Neural Netw.; 2011; 22, pp. 131-144. [DOI: https://dx.doi.org/10.1109/TNN.2010.2089641] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21075721]
5. Grigoryeva, L.; Henriques, J.; Larger, L.; Ortega, J.P. Stochastic nonlinear time series forecasting using time-delay reservoir computers: Performance and universality. Neural Netw.; 2014; 55, 59. [DOI: https://dx.doi.org/10.1016/j.neunet.2014.03.004] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24732236]
6. Nguimdo, R.M.; Verschaffelt, G.; Danckaert, J.; Van der Sande, G. Simultaneous Computation of Two Independent Tasks Using Reservoir Computing Based on a Single Photonic Nonlinear Node With Optical Feedback. IEEE Trans. Neural Netw. Learn. Syst.; 2015; 26, pp. 3301-3307. [DOI: https://dx.doi.org/10.1109/TNNLS.2015.2404346] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25751880]
7. Griffith, A.; Pomerance, A.; Gauthier, D.J. Forecasting chaotic systems with very low connectivity reservoir computers. Chaos; 2019; 29, 123108. [DOI: https://dx.doi.org/10.1063/1.5120710] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31893676]
8. Carroll, T.L. Path length statistics in reservoir computers. Chaos; 2020; 30, 083130. [DOI: https://dx.doi.org/10.1063/5.0014643]
9. Zheng, T.Y.; Yang, W.H.; Sun, J.; Xiong, X.Y.; Li, Z.T.; Zou, X.D. Parameters optimization method for the time-delayed reservoir computing with a nonlinear duffing mechanical oscillator. Sci. Rep.; 2021; 11, 997. [DOI: https://dx.doi.org/10.1038/s41598-020-80339-5]
10. Ortín, S.; Pesquera, L. Reservoir Computing with an Ensemble of Time-Delay Reservoirs. Cogn. Comput.; 2017; 9, pp. 327-336. [DOI: https://dx.doi.org/10.1007/s12559-017-9463-7]
11. Röhm, A.; Lüdge, K. Multiplexed networks: Reservoir computing with virtual and real nodes. J. Phys. Commun.; 2018; 2, 085007. [DOI: https://dx.doi.org/10.1088/2399-6528/aad56d]
12. Brunner, D. Photonic Reservoir Computing, Optical Recurrent Neural Networks; De Gruyter: Berlin, Germany, 2019.
13. Gauthier, D.J.; Bollt, E.M.; Griffith, A.; Barbosa, W.A.S. Next generation reservoir computing. Nat. Commun.; 2021; 12, 5564. [DOI: https://dx.doi.org/10.1038/s41467-021-25801-2] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34548491]
14. Vandoorne, K.; Dambre, J.; Verstraeten, D.; Schrauwen, B.; Bienstman, P. Parallel reservoir computing using optical amplifiers. IEEE Trans. Neural Netw.; 2011; 22, pp. 1469-1481. [DOI: https://dx.doi.org/10.1109/TNN.2011.2161771] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21803686]
15. Duport, F.; Schneider, B.; Smerieri, A.; Haelterman, M.; Massar, S. All-optical reservoir computing. Opt. Express; 2012; 20, pp. 22783-22795. [DOI: https://dx.doi.org/10.1364/OE.20.022783]
16. Tanaka, G.; Yamane, T.; Héroux, J.B.; Nakane, R.; Kanazawa, N.; Takeda, S.; Numata, H.; Nakano, D.; Hirose, A. Recent advances in physical reservoir computing: A review. Neural Netw.; 2019; 115, pp. 100-123. [DOI: https://dx.doi.org/10.1016/j.neunet.2019.03.005] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30981085]
17. Canaday, D.; Griffith, A.; Gauthier, D.J. Rapid time series prediction with a hardware-based reservoir computer. Chaos; 2018; 28, 123119. [DOI: https://dx.doi.org/10.1063/1.5048199]
18. Harkhoe, K.; Verschaffelt, G.; Katumba, A.; Bienstman, P.; Van der Sande, G. Demonstrating delay-based reservoir computing using a compact photonic integrated chip. Opt. Express; 2020; 28, 3086. [DOI: https://dx.doi.org/10.1364/OE.382556]
19. Freiberger, M.; Sackesyn, S.; Ma, C.; Katumba, A.; Bienstman, P.; Dambre, J. Improving Time Series Recognition and Prediction With Networks and Ensembles of Passive Photonic Reservoirs. IEEE J. Sel. Top. Quantum Electron.; 2020; 26, 7700611. [DOI: https://dx.doi.org/10.1109/JSTQE.2019.2929699]
20. Waibel, A.; Hanazawa, T.; Hinton, G.E.; Shikano, K.; Lang, K.J. Phoneme recognition using time-delay neural networks. IEEE Trans. Signal Process.; 1989; 37, pp. 328-339. [DOI: https://dx.doi.org/10.1109/29.21701]
21. Karamouz, M.; Razavi, S.; Araghinejad, S. Long-lead seasonal rainfall forecasting using time-delay recurrent neural networks: A case study. Hydrol. Process.; 2008; 22, pp. 229-241. [DOI: https://dx.doi.org/10.1002/hyp.6571]
22. Han, B.; Han, M. An Adaptive Algorithm of Universal Learning Network for Time Delay System. Proceedings of the 2005 International Conference on Neural Networks and Brain; Beijing, China, 13–15 October 2005; Volume 3, pp. 1739-1744. [DOI: https://dx.doi.org/10.1109/icnnb.2005.1614964]
23. Ranzini, S.M.; Da Ros, F.; Bülow, H.; Zibar, D. Tunable Optoelectronic Chromatic Dispersion Compensation Based on Machine Learning for Short-Reach Transmission. Appl. Sci.; 2019; 9, 4332. [DOI: https://dx.doi.org/10.3390/app9204332]
24. Bardella, P.; Drzewietzki, L.; Krakowski, M.; Krestnikov, I.; Breuer, S. Mode locking in a tapered two-section quantum dot laser: Design and experiment. Opt. Lett.; 2018; 43, pp. 2827-2830. [DOI: https://dx.doi.org/10.1364/OL.43.002827] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29905699]
25. Takano, K.; Sugano, C.; Inubushi, M.; Yoshimura, K.; Sunada, S.; Kanno, K.; Uchida, A. Compact reservoir computing with a photonic integrated circuit. Opt. Express; 2018; 26, pp. 29424-29439. [DOI: https://dx.doi.org/10.1364/OE.26.029424] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30470106]
26. Appeltant, L.; Soriano, M.C.; Van der Sande, G.; Danckaert, J.; Massar, S.; Dambre, J.; Schrauwen, B.; Mirasso, C.R.; Fischer, I. Information processing using a single dynamical node as complex system. Nat. Commun.; 2011; 2, 468. [DOI: https://dx.doi.org/10.1038/ncomms1476]
27. Paquot, Y.; Duport, F.; Smerieri, A.; Dambre, J.; Schrauwen, B.; Haelterman, M.; Massar, S. Optoelectronic Reservoir Computing. Sci. Rep.; 2012; 2, pp. 1-6. [DOI: https://dx.doi.org/10.1038/srep00287] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/22371825]
28. Brunner, D.; Penkovsky, B.; Marquez, B.A.; Jacquot, M.; Fischer, I.; Larger, L. Tutorial: Photonic neural networks in delay systems. J. Appl. Phys.; 2018; 124, 152004. [DOI: https://dx.doi.org/10.1063/1.5042342]
29. Brunner, D.; Soriano, M.C.; Mirasso, C.R.; Fischer, I. Parallel photonic information processing at gigabyte per second data rates using transient states. Nat. Commun.; 2013; 4, 1364. [DOI: https://dx.doi.org/10.1038/ncomms2368] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23322052]
30. Wolters, J.; Buser, G.; Horsley, A.; Béguin, L.; Jöckel, A.; Jahn, J.P.; Warburton, R.J.; Treutlein, P. Simple Atomic Quantum Memory Suitable for Semiconductor Quantum Dot Single Photons. Phys. Rev. Lett.; 2017; 119, 060502. [DOI: https://dx.doi.org/10.1103/PhysRevLett.119.060502]
31. Jiang, N.; Pu, Y.F.; Chang, W.; Li, C.; Zhang, S.; Duan, L.M. Experimental realization of 105-qubit random access quantum memory. NPJ Quantum Inf.; 2019; 5, 28. [DOI: https://dx.doi.org/10.1038/s41534-019-0144-0]
32. Katz, O.; Firstenberg, O. Light storage for one second in room-temperature alkali vapor. Nat. Commun.; 2018; 9, 2074. [DOI: https://dx.doi.org/10.1038/s41467-018-04458-4]
33. Arecchi, F.T.; Giacomelli, G.; Lapucci, A.; Meucci, R. Two-dimensional representation of a delayed dynamical system. Phys. Rev. A; 1992; 45, R4225. [DOI: https://dx.doi.org/10.1103/PhysRevA.45.R4225] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/9907493]
34. Zajnulina, M.; Lingnau, B.; Lüdge, K. Four-wave Mixing in Quantum Dot Semiconductor Optical Amplifiers: A Detailed Analysis of the Nonlinear Effects. IEEE J. Sel. Top. Quantum Electron.; 2017; 23, 3000112. [DOI: https://dx.doi.org/10.1109/JSTQE.2017.2681803]
35. Lingnau, B.; Lüdge, K. Quantum-Dot Semiconductor Optical Amplifiers. Handbook of Optoelectronic Device Modeling and Simulation; Series in Optics and Optoelectronics; Piprek, J. CRC Press: Boca Raton, FL, USA, 2017; Volume 1, Chapter 23 [DOI: https://dx.doi.org/10.1201/9781315152301]
36. Mackey, M.C.; Glass, L. Oscillation and chaos in physiological control systems. Science; 1977; 197, 287. [DOI: https://dx.doi.org/10.1126/science.267326]
37. Atiya, A.F.; Parlos, A.G. New results on recurrent network training: Unifying the algorithms and accelerating convergence. IEEE Trans. Neural Netw.; 2000; 11, pp. 697-709. [DOI: https://dx.doi.org/10.1109/72.846741]
38. Lorenz, E.N. Deterministic nonperiodic flow. J. Atmos. Sci.; 1963; 20, 130. [DOI: https://dx.doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2]
39. Goldmann, M.; Mirasso, C.R.; Fischer, I.; Soriano, M.C. Exploiting transient dynamics of a time-multiplexed reservoir to boost the system performance. Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN); Shenzhen, China, 18–22 July 2021; pp. 1-8. [DOI: https://dx.doi.org/10.1109/ijcnn52387.2021.9534333]
40. Ortín, S.; Soriano, M.C.; Pesquera, L.; Brunner, D.; San-Martín, D.; Fischer, I.; Mirasso, C.R.; Gutierrez, J.M. A Unified Framework for Reservoir Computing and Extreme Learning Machines based on a Single Time-delayed Neuron. Sci. Rep.; 2015; 5, 14945. [DOI: https://dx.doi.org/10.1038/srep14945] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26446303]
41. Köster, F.; Yanchuk, S.; Lüdge, K. Insight into delay based reservoir computing via eigenvalue analysis. J. Phys. Photonics; 2021; 3, 024011. [DOI: https://dx.doi.org/10.1088/2515-7647/abf237]
42. Köster, F.; Ehlert, D.; Lüdge, K. Limitations of the recall capabilities in delay based reservoir computing systems. Cogn. Comput.; 2020; 2020, pp. 1-8. [DOI: https://dx.doi.org/10.1007/s12559-020-09733-5]
43. Röhm, A.; Jaurigue, L.C.; Lüdge, K. Reservoir Computing Using Laser Networks. IEEE J. Sel. Top. Quantum Electron.; 2019; 26, 7700108. [DOI: https://dx.doi.org/10.1109/JSTQE.2019.2927578]
44. Manneschi, L.; Ellis, M.O.A.; Gigante, G.; Lin, A.C.; Del Giudice, P.; Vasilaki, E. Exploiting Multiple Timescales in Hierarchical Echo State Networks. Front. Appl. Math. Stat.; 2021; 6, 76. [DOI: https://dx.doi.org/10.3389/fams.2020.616658]
45. Stelzer, F.; Röhm, A.; Lüdge, K.; Yanchuk, S. Performance boost of time-delay reservoir computing by non-resonant clock cycle. Neural Netw.; 2020; 124, pp. 158-169. [DOI: https://dx.doi.org/10.1016/j.neunet.2020.01.010]
46. Nooteboom, P.D.; Feng, Q.Y.; López, C.; Hernández-García, E.; Dijkstra, H.A. Using network theory and machine learning to predict El Niño. Earth Syst. Dyn.; 2018; 9, pp. 969-983. [DOI: https://dx.doi.org/10.5194/esd-9-969-2018]
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Reservoir computing is a machine learning method that solves tasks using the response of a dynamical system to a certain input. As the training scheme only involves optimising the weights of the responses of the dynamical system, this method is particularly suited for hardware implementation. Furthermore, the inherent memory of dynamical systems that are suitable for use as reservoirs means that this method has the potential to perform well on time series prediction tasks, as well as other tasks with time dependence. However, reservoir computing still requires extensive task-dependent parameter optimisation in order to achieve good performance. We demonstrate that, by including a time-delayed version of the input for various time series prediction tasks, good performance can be achieved with an unoptimised reservoir. Furthermore, we show that, by including the appropriate time-delayed input, one unaltered reservoir can perform well on six different time series prediction tasks at a very low computational expense. Our approach is of particular relevance to hardware-implemented reservoirs, as one does not necessarily have access to pertinent optimisation parameters in physical systems, but the inclusion of an additional input is generally possible.
1 Institute of Theoretical Physics, Technische Universität Berlin, Hardenbergstr. 36, 10623 Berlin, Germany
2 Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR), Institut fur Optische Sensorsysteme, Rutherfordstr. 2, 12489 Berlin, Germany;
3 Institute of Physics, Technische Universität Ilmenau, Weimarer Str. 25, 98693 Ilmenau, Germany;