1 Introduction
1.1 Data assimilation and model error
Data assimilation aims at estimating the state of a physical system from its observation and a numerical dynamical model for it.
It has been successfully applied to numerical weather and ocean prediction, where it often consists in estimating the initial conditions of the state trajectory of chaotic geofluids. This objective is impeded by the deficiencies of the numerical model.
Model errors can take many forms, and accounting for them depends on the chosen data assimilation scheme.
A first class of solutions relies on parametrising model error by, for instance, transforming the problem into a physical parameter estimation problem.
These approaches essentially seek to correct, calibrate, or improve an existing model using observations. Hence, they all primarily make use of data assimilation techniques.
1.2 Data-driven forecast of a physical system
An alternative is to renounce physically based numerical models of the phenomenon of interest and instead to use only observations of that system. Given the huge datasets required, this may seem a far-reaching goal for operational weather and ocean forecasting systems, but recent progress in data-driven methods and convincing applications to geophysical problems of small to intermediate complexity are strong incentives to investigate this bolder approach. In the long run, the prospect of setting numerical models aside has a strong practical appeal, even though such a perspective may generate intense debates.
For instance, forecasting of a physical system can be done by looking up past situations and patterns using the techniques of analogues, which can be combined with present observations using data assimilation, or it can rely on a representation of the physical system based on diffusion maps that look for a spectral representation of the dataset.
1.3 Learning the dynamics of a model from its output
Data-driven techniques that seek to represent the model in a more explicit manner, and therefore with greater interpretability, may use specific classes of nonlinear regression, as advocated in several recent studies. With a view to forecasting dynamical systems, it is possible to design neural networks whose architecture reflects the iterative form of a Runge–Kutta (RK) integration scheme. This goal has been proposed and achieved using classical activation functions, which may however blur the interpretation of the underlying dynamics. A further step used a bilinear residual neural network structured so as to mimic a fourth-order RK scheme (RK4), trained on noise-free data.
Using the Keras tool with the TensorFlow backend, that approach proved to be very effective for the L63 model and, to a lesser extent, for the 40-variable Lorenz model.
1.4 Goal and outline
From this point on, the physical system under scrutiny will be called the reference model. It will be assumed to be known only from observations. We follow a data-driven approach inspired by the aforementioned works, in the sense that we consider an observed physical reference model, which might be generated by a hidden mathematical model or process. This work is focused on one or a combination of the following goals: (i) to build a surrogate model for the dynamics, (ii) to produce forecasts that emulate those of the reference model, and (iii) to identify the underlying dynamics of the reference model as given by a mathematical model. The reference model could be totally unknown or only partially specified. To achieve these goals, we introduce a surrogate model defined by a set of ordinary differential equations (ODEs):
$\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} = \boldsymbol{\phi}(\mathbf{x}), \qquad (1)$
where $\mathbf{x} \in \mathbb{R}^{N_x}$ is the state vector and $\boldsymbol{\phi}$ is a vector field that we shall call the flow rate. For the sake of simplicity, the dynamics in this study are supposed to be autonomous, i.e. they do not explicitly depend on time. Our technique seeks a fit to $\boldsymbol{\phi}$ given observations of the reference model. This is a rather general representation since, for instance, PDEs can be discretised into ODEs. We will restrict ourselves to the case where $\boldsymbol{\phi}$ is at most quadratic in $\mathbf{x}$. The numerical integration of Eq. (1) could be based on any RK scheme, but should additionally rely on the composition of such integration steps. As a result, quite general resolvents of Eq. (1) can be built (the resolvent is the model, i.e. the flow rate, integrated in time over a finite time interval).
Importantly, we will not require any machine learning software tool since the adjoint of the model resolvent can be derived without much effort. As opposed to the contributions mentioned in the previous subsections, we embed the technique in a data assimilation framework. From a data assimilation standpoint, the technique can be seen as a means to deal with model error (with or without some prior on the model), and it naturally accommodates partial and noisy observations. Moreover, we will build representations of the dynamics that are invariant by spatial translation (homogeneous) and/or local (i.e. the flow rate of a variable only depends on neighbouring variables whose perimeter is defined by a stencil). These properties make our technique scalable and thus potentially applicable to high-dimensional systems.
In Sect. , we present model identification as a Bayesian data assimilation problem. We first choose an ODE representation of the dynamics, introduce a nonlinear regressor basis, and define the integration schemes we will work with. We describe the local and homogeneous representations as physically based simplifications of the most general case, and we derive the gradient of the problem's cost function based on these representations. We then introduce the Bayesian problem and the resulting cost function used for joint supervised learning of the optimal representation and estimation of the state trajectory. The latter is the standard goal of data assimilation, while the former is that of machine learning. Our approach blends them together using the formalism of data assimilation.
In Sect. , we discuss several theoretical issues: the prior of the model, the convergence of the training step, the connection with numerical analysis of integration schemes, the connection with deep learning architectures, and, finally, the pros and cons of our approach.
In Sect. , we illustrate the method with several low-order chaotic models (L63, L96, KS, and a two-scale Lorenz model) of various sizes, from a perfectly identifiable model, i.e. where the model used to generate the dataset can be retrieved exactly, to a reduced-order model where the model used to generate the dataset cannot be retrieved exactly, using full or partial, noiseless, or noisy observations. Conclusions are given in Sect. .
2 Model identification as a data assimilation problem
2.1 Ordinary differential equation representation
Our surrogate model is chosen to be represented by an ODE system as described by Eq. (1). We additionally assume that the flow rate can be written as
$\boldsymbol{\phi}_{\mathbf{A}}(\mathbf{x}) = \mathbf{A}\,\mathbf{r}(\mathbf{x}), \qquad (2)$
where $\mathbf{A} \in \mathbb{R}^{N_x \times N_p}$ is a matrix of real coefficients to be estimated and $\mathbf{r}: \mathbb{R}^{N_x} \rightarrow \mathbb{R}^{N_p}$ is a map that defines $N_p$ regressor functions of $\mathbf{x}$. The space $\mathbb{R}^{N_p}$ is the latent space of the regressors, in which the flow rate is linear.
In the absence of any peculiar symmetry, we choose this map to list all the monomials up to second order built on $\mathbf{x}$, i.e. the constant, linear, and bilinear monomials. To enumerate them compactly, we introduce the augmented state vector obtained by appending the constant $1$ to $\mathbf{x}$ (Eq. 3) and consider the distinct pairs of indices of this augmented vector.
As a result, the regressors are compactly defined as the products of the entries of the augmented state vector over these distinct pairs (Eq. 4). We count $N_p = 1 + N_x + N_x(N_x+1)/2$ regressors, i.e. the cardinal of the set of distinct pairs (Eq. 5). For instance, a model with three variables $x_1$, $x_2$, and $x_3$, such as L63, has $N_p = 10$ such regressors (Eq. 6): $1,\ x_1,\ x_2,\ x_3,\ x_1^2,\ x_1 x_2,\ x_1 x_3,\ x_2^2,\ x_2 x_3,\ x_3^2$.
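To make the construction of the regressor vector concrete, here is a minimal Python sketch (not the authors' code) of a map returning the constant, linear, and bilinear monomials of a state vector; the function name `regressors` is illustrative.

```python
import numpy as np

def regressors(x):
    """Monomials up to second order: the constant 1, the x_i, and the products x_i*x_j with i <= j."""
    x = np.asarray(x, dtype=float)
    bias = [1.0]
    linear = list(x)
    bilinear = [x[i] * x[j] for i in range(x.size) for j in range(i, x.size)]
    return np.array(bias + linear + bilinear)

# For a three-variable model such as L63, this yields 1 + 3 + 6 = 10 regressors.
assert regressors([1.0, 2.0, 3.0]).size == 10
```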
Higher-order regressors, as well as regressors of different functional forms, could also be included. However, it is important to keep in mind that we do not seek an expansion of the resolvent of the reference model, but of the flow rate. As a consequence, higher-order products of the state variables are in any case generated by the integration schemes and their composition. It is worth mentioning that nonlinear regressions are not widespread in geophysical data assimilation. We are nonetheless aware of at least one noticeable exception that extends traditional Gaussian-based methods.
2.2 Local and homogeneous representations
At least two useful simplifications for the ODEs could be exploited if the state is assumed to be the discretisation of a spatial field.
2.2.1 Locality
First, we use a locality assumption based on the physical locality of the system: all multivariate monomials in the ODEs have variables that belong to a stencil, i.e. a local arrangement of grid points around a given node. This can significantly reduce the number of bilinear monomials in . We assume that is the stencil around node , the pattern being the same for all nodes except the last one. For the node corresponding to the extra variable , we assume that its stencil consists of all the nodes. We then define as the sub-set of all pairs of variables for which . The set of required monomials can therefore be reduced to
7 Under these conditions, becomes sparse. Indeed, for each node , we assume that , the time derivative of , is impacted only by linear terms such that and quadratic terms such that , , and . However, to keep a dense matrix, we choose to compactly redefine and shrink by eliminating all a priori zero entries due to the locality assumption. The number of columns of is then significantly reduced from to . As a consequence of this redefinition of , the matrix multiplication in between and must be changed accordingly. Nonetheless, the operation that assigns coefficients in to the monomials in remains linear, and we write it as 8
Let us take the example of a one-dimensional extended space as those used in Sect. . The domain is supposed to be periodic (circle) and the nodes are indexed by . Recall that the node of index is associated with the extra . For , the stencil is defined as the set of nodes of index , plus the extra node of index . The stencil consists of all the nodes, i.e. . We assume . In that case as defined by Eq. () has monomials. For instance, there are such regressors for a -variable model defined on a circular domain, such as L96, with : 9
The row of the dense contains the following coefficients for each . First there are regressors built with (the constant and linear regressors). Second, we consider the square monomials with , i.e. whose number is . Then we consider those separated by one space step, whose number is , followed by those separated by two space steps whose number is , and so on until a separation of is reached. Quadratic monomials of greater separation are discarded since they do not belong to a common stencil as per the above definition reflecting the locality assumption. Hence there is a total of coefficients per grid cell.
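As a concrete illustration of this counting, the following sketch (illustrative naming; assuming a one-dimensional periodic domain and a stencil of half-width L, i.e. of size s = 2L + 1) computes the number of regressor coefficients per grid cell under the locality assumption.

```python
def local_regressor_count(half_width):
    """Number of coefficients per grid cell for a stencil of size s = 2*half_width + 1."""
    s = 2 * half_width + 1
    n_bias = 1                                   # the constant regressor
    n_linear = s                                 # one linear regressor per stencil point
    n_bilinear = sum(s - d for d in range(s))    # squares (d = 0) and pairs separated by d grid points
    return n_bias + n_linear + n_bilinear

# For a stencil of width 5 (half-width 2), as used for L96 below:
# 1 + 5 + (5 + 4 + 3 + 2 + 1) = 21 coefficients per grid cell.
assert local_regressor_count(2) == 21
```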
In Appendix , we show in the one-dimensional space case how to compute the reduced form of the product between and , assuming locality. This type of technical parametrisation is required for a parsimonious representation of the control variables, i.e. the coefficients of , and is key for a successful implementation with high-dimensional models.
Note that this locality assumption is hardly restrictive. Indeed, owing to the absence of long-range instantaneous interactions (which are precluded in geophysical fluids), farther distance correlations between state variables can be generated by small stencils in the definition of through time integrations. This would not prevent potential specific long-distance dependencies (such as teleconnections).
2.2.2 Homogeneity
Furthermore, a symmetry hypothesis could optionally be used by assuming translational invariance of the ODEs, called homogeneity in the following. Because our control parameters, i.e. the coefficients of , parametrise the flow rate, the symmetry simply translates into the rows of the dense being the same for all . Hence simply becomes a vector in .
Let us enumerate its coefficients in the case of the L96 model with and assuming both locality and homogeneity. The coefficients are partitioned into for the bias, for the linear sector, and for the bilinear sector. In the linear sector, is the relative position with respect to the current grid point. In the bilinear sector, are the relative positions with respect to the current grid point of the two variables in the product. Proceeding in the same way we counted them, the coefficients of are
10
Note that while both constraints, locality and homogeneity, apply to the ODEs, they do not apply to the states per se. For instance, ODEs for discretised homogeneous two-dimensional turbulence satisfy both constraints and yet generate non-uniform flows.
For realistic geofluids, the forcing fields (solar irradiance, bathymetry, boundary conditions, friction, etc.) are heterogeneous, so that the homogeneity assumption should be dropped. Nonetheless, the fluid dynamics part of the model would remain homogeneous. As a result, a hybrid approach could be enforced.
2.3 Integration scheme and cycling
The reference model will be observed at time steps , indexed by integer . Hence, we need to be able to express the resolvent of the surrogate model from to . We assume that is a multiple of the integration time step of the surrogate model, , where is the integration time step and is the number of integrations. The time steps can be uneven, which is reflected in the dependence of on . Hence, the resolvent of the surrogate model from to can be written as
11 i.e. the integration of Eq. () from to using the representation Eq. ().
We define intermediate state vectors in between : is the state vector defined at time for , as the result of compositions of on : . Figure is a schematic of the composition of the integration steps, along with the state vectors and .
Figure 1. Representation of the data assimilation system as a hidden Markov chain model and of the model resolvents.
The operator is meant to be an explicit numerical integration scheme. In the following, we shall consider an RK scheme applied to $\boldsymbol{\phi}_{\mathbf{A}}$, with $s$ steps. This number of steps coincides with the accuracy of the schemes that we will consider: first order for the Euler scheme, second order for RK2, and fourth order for RK4 ($s = 1$, $2$, and $4$, respectively). Provided the dynamics are autonomous, a general RK step from $\mathbf{x}$ to $\mathbf{x}'$ over a time step $\delta t$ reads as
$\mathbf{x}' = \mathbf{x} + \delta t \sum_{i=1}^{s} \beta_i\, \mathbf{k}_i, \qquad \mathbf{k}_i = \boldsymbol{\phi}_{\mathbf{A}}\!\Big(\mathbf{x} + \delta t \sum_{j=1}^{s} \alpha_{ij}\, \mathbf{k}_j\Big), \qquad (12)$
where the coefficients $\alpha_{ij}$ and $\beta_i$ entirely specify the scheme. Note that the $\alpha_{ij}$ are zero for $j \ge i$, so that the $\mathbf{k}_i$ in Eq. (12) can be computed iteratively from $i = 1$ to $i = s$, followed by the sum in Eq. (12) to get $\mathbf{x}'$.
In the following, the integration time step $\delta t$ will be absorbed into the definition of $\mathbf{A}$ and hence of $\boldsymbol{\phi}_{\mathbf{A}}$, so that we can take $\delta t = 1$ without loss of generality.
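As an illustration of the elementary integration step and of the composition of such steps that builds the resolvent of Eq. (11), here is a minimal Python sketch under assumed naming; the time step is kept explicit here rather than absorbed into the coefficients, and `phi` stands for any flow rate, e.g. the map of Eq. (2).

```python
import numpy as np

def rk4_step(phi, x, h):
    """One explicit RK4 step for dx/dt = phi(x) over a time step h."""
    k1 = phi(x)
    k2 = phi(x + 0.5 * h * k1)
    k3 = phi(x + 0.5 * h * k2)
    k4 = phi(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def resolvent(phi, x, h, n_comp):
    """Composition of n_comp elementary integration steps, i.e. the surrogate resolvent."""
    for _ in range(n_comp):
        x = rk4_step(phi, x, h)
    return x

# With the representation of Eq. (2): phi = lambda x: A @ regressors(x)
```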
2.4 Bayesian analysis
We consider a sequence of observation vectors of the physical system at indexed by . The system state is observed through
13 where is the observation operator at time . The observation error will be assumed to be Gaussian with zero mean and covariance matrix . It is also assumed to be white in time. The flow rate is given by the approximation Eq. (), so that the resolvent of the surrogate model should also be considered an approximation of the reference model's resolvent. Hence, we generalise Eq. () to 14 where are unbiased Gaussian errors of covariance matrices , supposed to be white in time and uncorrelated from the observation errors. Note that, in all generality, the state space of the surrogate model does not have to match that of the reference model. We will nonetheless take them to coincide here merely for simplicity.
With the goal of identifying a model or building a surrogate of the reference one, we are interested in estimating the probability density function (pdf) , where stands for all observations in the window .
To obtain a tractable expression for this conditional likelihood, we need to marginalise over the state variables within the window:
15
An approximate maximum a posteriori for could be obtained by using the Laplace approximation of this integral, which would require finding the maximum of
16
Nonetheless, maximising Eq. () rigorously yields the maximum a posteriori of the joint variables
.
The cost function associated with this joint pdf is by definition .
Because Eq. (14) is Markovian and given the Gaussian form of both model and observational errors, the cost function reads as
$J(\mathbf{A}, \mathbf{x}_{0:K}) = \frac{1}{2}\sum_{k=0}^{K} \big\| \mathbf{y}_k - H_k(\mathbf{x}_k) \big\|^2_{\mathbf{R}_k^{-1}} + \frac{1}{2}\sum_{k=1}^{K} \big\| \mathbf{x}_k - F_k(\mathbf{x}_{k-1}) \big\|^2_{\mathbf{Q}_k^{-1}} - \ln p(\mathbf{x}_0, \mathbf{A}), \qquad (17)$
up to a constant depending on the $\mathbf{R}_k$ and $\mathbf{Q}_k$ only. The vector norm is defined as $\|\mathbf{v}\|^2_{\mathbf{B}} = \mathbf{v}^\top \mathbf{B}\,\mathbf{v}$. This is the cost function of a weak-constraint 4D-Var.
In the case where the reference model is fully and directly observed, i.e. $H_k$ is the identity, and in the absence of observation noise, we have $\mathbf{x}_k = \mathbf{y}_k$ and the cost function simplifies to
$J(\mathbf{A}) \simeq \frac{1}{2}\sum_{k=1}^{K} \big\| \mathbf{y}_k - F_k(\mathbf{y}_{k-1}) \big\|^2_{\mathbf{Q}_k^{-1}}, \qquad (18)$
where $\mathbf{y}_{0:K}$ is the fully and perfectly observed state trajectory of the reference model. This is notably similar to a traditional least-squares cost function used in machine and deep learning regression. This connection between machine learning and data assimilation cost functions had been put forward previously, although in a different form. Reciprocally, when the aforementioned hypotheses of noiseless and complete observations do not hold, Eq. (17) can be seen as a natural data assimilation extension of Eq. (18). Note that Eq. (18) only depends on the observation sequence $\mathbf{y}_{0:K}$. If, in addition, the dependence on the prior is neglected in Eq. (17), then the maximum a posteriori should not depend on a global rescaling of the $\mathbf{Q}_k$.
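A minimal sketch of the evaluation of a cost function of this weak-constraint form follows, with scalar error statistics ($\mathbf{R}_k = r^2\mathbf{I}$, $\mathbf{Q}_k = q^2\mathbf{I}$) and with the prior term omitted, as in most of the experiments below; the names and the simplified interface are assumptions, not the authors' code.

```python
import numpy as np

def cost(traj, obs, H, step, r2=1.0, q2=1.0):
    """Weak-constraint cost of the form of Eq. (17), without the prior term.
    traj: sequence of states x_0..x_K; obs: observations y_0..y_K;
    H: observation operator; step: surrogate resolvent from t_{k-1} to t_k."""
    J = 0.0
    for k in range(len(obs)):
        innov = obs[k] - H(traj[k])
        J += 0.5 * float(innov @ innov) / r2          # observation term
        if k > 0:
            merr = traj[k] - step(traj[k - 1])
            J += 0.5 * float(merr @ merr) / q2        # model-error (weak-constraint) term
    return J
```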
The data assimilation system is represented in Fig. as a hidden Markov chain model. This Bayesian view highlights the choice that must be made for and/or and provides an interpretation in terms of errors. Furthermore, one could implement an objective estimation of these error statistics as in .
2.5 Gradients and adjoint of the representation
To efficiently minimise the cost function Eq. (17) with a gradient-based optimisation tool, we need to analytically derive the gradient of Eq. (17) with respect to both $\mathbf{A}$ and the state trajectory $\mathbf{x}_{0:K}$. As for , we have where for and for ; is the tangent linear operator of the resolvent computed at for ; is the tangent linear operator of the observation operator computed at for . As for , we have
20 assuming is independent of . Hence, a key technical aspect of the problem is to compute the tangent linear and adjoint operators required by these gradients. In this paper, we assume that the adjoints of the tangent linear operators of the observation operators are known, for instance if the latter are linear as in Sect. .
The computations of the gradients and the required adjoints are developed in Appendix . These technicalities can be skipped since they are not required to understand the method. Nonetheless, they are critical to its numerical efficiency.
3 Discussion of the theoretical points
In this section, we discuss the prior pdf , the optimisation of the cost function , and the connections with deep learning techniques.
3.1 Prior information on the reference model
The goal is either to reconstruct an ODE for the reference model, characterised by the coefficients through , or to build a surrogate model of it. The estimation of is then accessory even though factually critical to the estimation of . The alternative would have been to consider the estimation of as the primary problem, under model error of a prescribed ODE form, the estimation of becoming accessory. In both cases, but particularly in the latter one, one may benefit from an informative prior pdf .
The prior pdf can be used to encode any prior knowledge on the reference model, such as pieces of it that would be known. Indeed, it can formally quantify the uncertainty associated with any part of the surrogate model. For instance, assume that the reference model is partially identifiable, which means that part of the reference model could be represented by an up to bilinear flow rate of the form of Eqs. (1) and (2). Moreover, assume that there is one such part of the reference model which is known, i.e. that elements of are actually known, while others need to be estimated. Then, the known coefficients can formally be encoded in with Dirac factors. In practice it could be implemented as a constrained optimisation problem, for instance using an augmented Lagrangian, in order to avoid significantly altering the gradients with respect to . More generally, assigning a non-trivial prior likelihood, such as a Gaussian one for , is certainly appealing but may not be practical.
3.2 Numerical optimisation: issues and solutions
The success of the optimisation of depends above all on the ability to evaluate it robustly.
In particular, it depends on the stability of the numerical integration scheme .
In this paper, we chose to rely on one-step explicit schemes, which are much simpler to describe and efficient to integrate (a family to which the RK schemes of any order belong).
These schemes are zero-stable, which means that the finite-time error growth goes to zero as the integration step goes to zero. However, as a major drawback, they have a limited absolute (or A-) stability domain.
This says that, in the absence of a strong prior , it is safer to start with likely to lie close to . Alternatively, if stability constraints are known about , they could be encoded in . It also says that we should strike an empirical compromise between the number of compositions and the ease of evaluating . On the one hand, the larger , the more the iterates of in the optimisation must be kept confined in . On the other hand, the longer , the broader the class of achievable resolvents and hence the ability to build a good surrogate. Moreover, the higher the stability of the integration scheme, the larger , and the easier the optimisation in spite of an increase in its numerical cost.
As for the sensitivity on , the longer the time window, the more observations are available to constrain the problem. However, the longer , the higher the chances of having a significant instability: the chances of a successful integration typically decrease exponentially with the length .
This stability issue can be somewhat alleviated by normalising the observations by their mean and variance in order to avoid excessively large value ranges of the regressors. This will not change the fundamental stability of the schemes, yet it may delay the occurrence of instabilities due to the nonlinear terms.
Moreover, instabilities can significantly be mitigated by replacing the monomials with smoothed or truncated ones:
21 One can for instance choose , in order to cut off too large values of and hence delay the growth of instabilities. The parameter is roughly chosen as the typical maximum amplitude of as approximately inferred from the observations. If is deemed to be numerically too costly, one can choose instead or more generally , with a small trend.
This latter change of variables is the one implemented for all numerical applications described in Sect. , together with the normalisation. From these experiments, we learned that these tricks often turned out to be critical in the first iterations of the optimisation, when the estimate of progressively migrated towards the -stability domain. After a few iterations, however, the integrations are stabilised and the nonlinear regime of the truncations in Eq. (21) is not tapped into anymore.
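A minimal sketch of such a smooth truncation of the state entering the monomials is given below; the hyperbolic-tangent form and the naming are assumptions for illustration, since the exact functional forms used in Eq. (21) are not reproduced here.

```python
import numpy as np

def saturate(x, eta):
    """Smoothly cap the state amplitude at roughly eta before building the monomials;
    eta is chosen as the typical maximum amplitude inferred from the observations."""
    return eta * np.tanh(np.asarray(x, dtype=float) / eta)

# Example: the flow rate is then evaluated on the truncated state,
# phi = lambda x: A @ regressors(saturate(x, eta))
```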
3.3 Connection and analogies with deep learning architectures
It has recently been advocated that residual deep learning architectures of neural networks can roughly be interpreted as dynamical systems.
By contrast, we have started here from a pure dynamical system standpoint and proposed to use data assimilation techniques. In order to explore complex model resolvents, applied to each interval between observations, we need the composition of integration steps. In particular, this allows the resolvent to exhibit more realistic long-range correlations. Even when using a reasonably small stencil, long-range correlations will arise as a result of the integration steps. Nonetheless, the stencil must not be too small if discretised higher-order differential operators are to be represented. Each application of the elementary integration step can be seen as a layer of the neural network. Moreover, within such a layer, there are sublayers corresponding to the steps of the integration scheme. The larger the number of compositions, the deeper this network, and the richer the class of resolvents to optimise over.
Following this analogy, the analysis step where is optimised can be called the training phase. Backpropagation in the network, as coined in machine learning, corresponds to the computation of the gradient of the network with respect to and of the model adjoint derived in Sect. . This bypasses the need for machine learning software libraries such as TensorFlow or PyTorch (see Appendix C for a brief discussion).
Because of our complete control of the backpropagation, we hope for a gain in efficiency. However, our method does not have the flexibility that deep learning draws from established tools. For instance, the addition of extra parameters, adaptive batch normalisation, and dropout are not readily available in our approach without further developments.
Convolutional layers play the role of localisation in neural network architectures. In our approach, this role is played by the locality assumption and its stencil prescription. Recall that a tight stencil does not prevent longer-range correlations, which are built up through the integration scheme and its compositions. This is similar to stacking several convolutional layers to account for the multiple scales of the reference model that the neural network is meant to learn.
Finally, we note that, as opposed to most practical deep learning strategies with a huge amount of weights to estimate, we have reduced the number of control variables (i.e. ) as much as possible.
4 Numerical illustrations
4.1 Model setup and forecast skill
In this section, we shall consider four low-order chaotic models defined on a physical one-dimensional space, except for L63, which is -dimensional. They will serve as reference models.
-
The L63 model, as defined by the ODEs
$\frac{\mathrm{d}x}{\mathrm{d}t} = \sigma(y - x), \quad \frac{\mathrm{d}y}{\mathrm{d}t} = \rho x - y - xz, \quad \frac{\mathrm{d}z}{\mathrm{d}t} = xy - \beta z, \qquad (22)$
with the canonical values $\sigma = 10$, $\rho = 28$, and $\beta = 8/3$. Its Lyapunov time
(defined as the inverse of the first Lyapunov exponent, i.e. the typical time over which the error grows by a factor $e$)
is about . Besides its intrinsic value, this model is introduced for benchmarking against previous work. It is integrated using an RK4 scheme with as the integration time step.
-
The L96 model, as defined by ODEs over a periodic domain of $N_x = 40$ variables indexed by $n = 0, \ldots, N_x - 1$:
$\frac{\mathrm{d}x_n}{\mathrm{d}t} = (x_{n+1} - x_{n-2})\,x_{n-1} - x_n + F, \qquad (23)$
where the indices apply periodically over the domain and the forcing is set to $F = 8$. This model is an idealised representation of a one-dimensional latitude band of the Earth's atmosphere. Its Lyapunov time is . It is integrated using the RK4 scheme and with . A sketch of this reference model and of the generation of its trajectory is given after this list.
-
The KS (Kuramoto–Sivashinsky) model, as defined by the PDE
$\partial_t u = -u\,\partial_x u - \partial_x^2 u - \partial_x^4 u, \qquad (24)$
over the periodic domain, on which we apply a spectral decomposition with modes. The Lyapunov time of our KS model is time units. This model is of interest because, even though it has dynamical properties comparable to those of L96, it is much stiffer, so that much more stringent numerical integration schemes are required to integrate it efficiently. It is defined by a PDE, not an ODE system. It is integrated using the ETDRK4 scheme and .
-
The two-scale Lorenz model,
denoted L05III, is given by the two-scale ODEs of Eq. (25), which couple slow variables to fast variables. The indices apply periodically over their domain; stands for the integer division of by . We use the original values for the parameters: for the timescale ratio, for the space-scale ratio, for the coupling, and for the forcing. When uncoupled (), the Lyapunov time of the slow-variable sector of the model Eq. (25) is , which will be the key timescale when focusing on the slow variables. This model is of interest because the fast variable is meant to represent unresolved scales, and hence model error, when only considering the slow variables. For this reason, it has been used in data assimilation papers focusing on estimating model error. It is integrated with an RK4 scheme and a smaller integration time step since it is stiffer than the L96 model.
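For concreteness, here is a hedged Python sketch of the L96 reference model of Eq. (23) and of the generation of a truth trajectory with an RK4 scheme; the integration time step, spin-up length, and initial perturbation are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def l96_flow(x, F=8.0):
    """L96 flow rate: dx_n/dt = (x_{n+1} - x_{n-2}) x_{n-1} - x_n + F, with periodic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4(f, x, h):
    k1 = f(x); k2 = f(x + 0.5 * h * k1); k3 = f(x + 0.5 * h * k2); k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def truth_trajectory(n_var=40, n_obs=500, h=0.05, spinup=5000, seed=0):
    """Generate a reference (truth) trajectory after discarding a transient."""
    rng = np.random.default_rng(seed)
    x = 3.0 * np.ones(n_var) + 0.01 * rng.standard_normal(n_var)
    for _ in range(spinup):
        x = rk4(l96_flow, x, h)
    traj = [x.copy()]
    for _ in range(n_obs):
        x = rk4(l96_flow, x, h)
        traj.append(x.copy())
    return np.asarray(traj)
```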
The numerical experiments consist of three main steps. First, the truth is generated, i.e. a trajectory of the reference model is computed. The reference model equations are supposed to be unknown, but the trajectory is observed through Eq. () to generate the observation vector sequence .
Next, estimators of the ODE model and state trajectory are learned by minimising the cost function . We choose to minimise it using an implementation of the quasi-Newton BFGS algorithm , which critically relies on the gradients obtained in Sect. . The default choices for the initial ODE model are and defined as the space-wise linear interpolation of . Note that the minimisation could converge to a local minimum, which may or may not yield satisfactory estimates.
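This training step can be sketched as follows, with SciPy's quasi-Newton optimiser standing in for the BFGS implementation used by the authors; the packed control vector, the `cost_and_grad` callable (assumed to return the cost of Eq. (17) and its analytic gradient), and all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def train(cost_and_grad, A0, traj0):
    """Jointly minimise the cost over (A, x_{0:K}) packed into a single control vector."""
    nA = A0.size
    p0 = np.concatenate([A0.ravel(), traj0.ravel()])

    def fun(p):
        return cost_and_grad(p)       # assumed to return the tuple (J, dJ/dp)

    res = minimize(fun, p0, jac=True, method="L-BFGS-B")
    A_opt = res.x[:nA].reshape(A0.shape)
    traj_opt = res.x[nA:].reshape(traj0.shape)
    return A_opt, traj_opt
```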
Finally, we can make forecasts using the tentative optimal ODE model obtained from the minimisation. With a view to comparing it to the reference model used to generate the data, we will consider a set of forecasts with (approximately) independent initial conditions. Both the reference model and the surrogate one will be forecast from these initial conditions. The departure between their trajectories, as measured by a root mean square error (RMSE) over the observed variables, will be computed for several forecast lead times. The RMSE is then averaged over all the initial conditions. We will also display the state trajectories of the reference and surrogate models starting from one of the initial conditions.
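A minimal sketch of this forecast-skill diagnostic is given below, with the two one-step resolvents passed as callables; the interface is an assumption, and the lead time is expressed in integration steps for simplicity.

```python
import numpy as np

def forecast_rmse(ref_step, sur_step, initial_conditions, n_lead):
    """RMSE between reference and surrogate forecasts, per lead time,
    averaged over the set of initial conditions."""
    errs = np.zeros((len(initial_conditions), n_lead))
    for i, x0 in enumerate(initial_conditions):
        xr, xs = x0.copy(), x0.copy()
        for k in range(n_lead):
            xr, xs = ref_step(xr), sur_step(xs)
            errs[i, k] = np.sqrt(np.mean((xr - xs) ** 2))
    return errs.mean(axis=0)
```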
The integration time step of the truth (reference model) is over the time window . This parameter only matters for the reference model integration since only the training time steps and the output of the model (which may include knowledge of the observation operator) are known to the observer.
The integration time step of the surrogate model within the training time window is . It is assumed to be an integer divisor of the training time step , supposed to be constant, i.e. is a constant integer and the number of compositions , and that is why the index on has been dropped. The integration time step of the surrogate model within the forecast time window is denoted . Note that and can be distinct and that they are critical to the stability of the training and the forecast step, respectively.
The three steps of the numerical experiments are depicted in Fig. . Except when explicitly mentioned, the prior is disregarded, which means that no explicit regularisation on is introduced.
Figure 2. Schematic of the three steps of the experiments, with the associated time steps (see main text). The beginning of the forecast window may or may not coincide with the end of the training window. The lengths of the segments , , and are arbitrary in this schematic.
4.2 Inferring the dynamics from dense and noiseless observations: perfectly identifiable models
In the first couple of experiments, we consider a densely observed
We choose the qualifier densely observed instead of fully observed because there is no way to tell from the observations alone whether the reference model is fully observed.
reference model with noiseless observations. In this case, is the identity operator, i.e. each grid point value is observed, and so that a uniform rescaling of the , all chosen to be , is irrelevant, assuming can be neglected, which is hypothesised here and is generally true for large . Moreover, we use the same numerical scheme with the same integration time step to generate the reference model trajectory as the one used by the surrogate model. In principle, we should be able to retrieve the reference model, since the reference is identifiable, meaning that it belongs to the set of all possible surrogate models.
Let us first experiment with the L63 model, using an RK4 integration scheme, with and (this corresponds to about Lyapunov times). We have and . We choose . A convergence to the highest possible precision is achieved after about iterations. The cost function value reaches to machine precision at . The estimated is given by , because, as mentioned above, the optimised matrix absorbs the time step. The accuracy of is measured by the uniform norm , i.e. the maximum absolute value of the entries of the difference , where is the matrix of the flow rate of L63 (including the zero coefficients). We obtain . To compute the RMSE as a function of the forecast lead time, we average over runs (each one starting from a different initial condition). The RMSE (not shown) starts significantly diverging from after Lyapunov time units and reaches saturation for a lead time of Lyapunov times.
A similar experiment is carried out with the L96 model, using an RK4 integration scheme, with and (this corresponds to about Lyapunov times). We choose here to implement the locality and homogeneity assumptions (see Sect. and ). The stencil has a width of (i.e. the local grid point with two points on its left and two points on its right). We have , , and . We choose . Through the minimisation, the main coefficients of the L96 model (forcing , advection terms, dissipation) are retrieved with a precision of at least .
To compute the RMSE as a function of the forecast lead time, we average over runs. The RMSE starts significantly diverging from after Lyapunov times and reaches saturation for a lead time of Lyapunov times.
4.3 Inferring the dynamics from dense and noiseless observations: non-identifiable models
In this second couple of experiments, we consider again a densely observed reference model with noiseless observations. The reference model trajectory is generated by the L96 model () integrated with the RK4 scheme, with and .
As opposed to the reference model, in these non-identifiable model experiments, the surrogate model is based on the RK2 scheme, with compositions. We choose to implement the locality and homogeneity assumptions, with a stencil of width . We have and . We choose . In all cases, convergence is reached within a few dozen iterations. The error on the coefficients of (i.e. ) is about , but with the dominant contribution from . The RMSE as a function of the forecast lead time is computed for and is shown in Fig. 3. The error is reduced as is increased, but the improvement saturates at about .
Figure 3
Average RMSE of the surrogate model (L96 with an RK2 structure) compared to the reference model (L96 with an RK4 integration scheme) as a function of the forecast lead time (in Lyapunov time unit) for an increasing number of compositions.
Figure 4 shows the trajectories of the reference and surrogate models starting from the same initial condition, as well as their difference, as a function of the forecast lead time. Their divergence becomes significant after 4 Lyapunov times and saturates after 8 Lyapunov times.
Figure 4. Density plot of the L96 reference and surrogate model trajectories, as well as their difference trajectory, as a function of the forecast lead time (in Lyapunov time unit). The observations are noiseless and dense; the model is not identifiable.
Next, the reference model trajectory is generated by the KS model () integrated with the ETDRK4 scheme, with and (this corresponds to about Lyapunov time). We choose to implement the locality and homogeneity assumptions, with a stencil of width . The surrogate model is based on the RK4 scheme, with compositions. Note that in this experiment, the reference and surrogate models and their integration schemes significantly differ. We have and . We choose and . The forecast time step is somewhat smaller than because the KS equations are stiff and so will be the surrogate model. This emphasises once again that we have learned the intrinsic flow rate of the reference model and not a resolvent thereof. Alternatively, we could use a more robust integration scheme than RK4, such as ETDRK4, for the forecast.
Figure 5 shows the trajectories of the reference and surrogate models starting from the same initial condition, as well as their difference, as a function of the forecast lead time, for a stencil of width 9. Their divergence becomes significant after 4 Lyapunov times and saturates after 8 Lyapunov times.
Figure 5. Density plot of the KS reference and surrogate model trajectories, as well as their difference trajectory, as a function of the forecast lead time (in Lyapunov time unit). The observations are noiseless and dense; the model is not identifiable.
To check whether the PDE of the KS model could be retrieved in spite of the differences in the methods of integration and representation, we have computed a Taylor expansion of all monomials in the surrogate ODE flow rate up to order so as to obtain an approximate PDE equivalent. The coefficients of this PDE (up to order in the expansion) are displayed in Fig. 6 and compared to the coefficients of the reference model's PDE. The match is good and the terms , , and are correctly identified as the dominant ones. Nonetheless, there are three non-negligible coefficients for higher-order terms that might have been generated by the Taylor expansion, may originate from a degeneracy among the higher-order operators, or may simply be identified with a shortcoming of our specific ODE representation.
Figure 6. Coefficients of the surrogate PDE model (blue) resulting from the expansion of the surrogate ODEs, compared to the reference PDE's coefficients (orange).
4.4 Inferring the dynamics from partial and noisy observations
We come back to the L96 model, which is densely observed but with noisy observations that are generated using an independent, identically distributed normal noise. The surrogate model is based on an RK4 scheme and a stencil of length , which makes the reference model identifiable.
In this case, the outcome theoretically depends on the choice for and , given that Eq. () is now used instead of Eq. ().
For the sake of simplicity, we have chosen them to be of the scalar forms and .
In these synthetic experiments, is supposed to be known, while is not.
We only have a qualitative view of the potential mismatch between the reference and the surrogate model. Moreover, a Gaussian additive noise might not even be the best statistical model for such error.
Nonetheless, holding to the above Gaussian assumptions for model error, the optimal value of could be determined using an empirical Bayes approach based on, for instance, the expectation–maximisation technique, in order to determine the maximum a posteriori of the conditional density of .
Figure 7 shows the forecast skill of the surrogate model as a function of the forecast lead time and for increasing noise in the observations.
Figure 7
Average RMSE of the surrogate model (L96 with an RK4 structure) compared to the reference model (L96 with an RK4 integration scheme) as a function of the forecast lead time (in Lyapunov time unit) for a range of observation error standard deviations .
Even though, in this configuration, the model is identifiable, the reference value for may not correspond to a minimum of the cost function. The cost function might have several local minima. As a consequence, there is no guarantee, starting from a non-trivial initial value for , that the model will be identified. Indeed, as seen in Fig. 7, the forecast skill degrades significantly as the observation error standard deviation is increased.
This is confirmed by Fig. 8, where the precision in identifying the model, measured by either the spectral norm or the uniform norm , is plotted as a function of the observation error standard deviation.
Figure 8. Gap between the surrogate (L96 with an RK4 structure) and the (identifiable) reference dynamics (L96 with an RK4 integration scheme) as a function of the observation error standard deviation . Note the use of logarithmic scales.
Using the same setup, we have also reduced the number of observations. The observations of grid point values are regularly spaced and shifted by one grid cell at each observation time step. The initial in the optimisation remains , while the initial state is taken as a cubic spline interpolation of the observations over the whole surrogate model grid.
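A hedged sketch of this partial-observation setup is given below: the observed grid points are regularly spaced and shifted by one cell at each observation time, and the initial state estimate is a cubic spline interpolation of the first observation vector onto the whole grid (the periodicity of the domain is ignored in this simplified sketch; names are illustrative).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def observed_indices(k, n_var, spacing):
    """Grid points observed at time index k: regularly spaced, shifted by one cell per step."""
    return (np.arange(0, n_var, spacing) + k) % n_var

def spline_initial_state(y0, idx0, n_var):
    """Interpolate the first observation vector onto the full grid with a cubic spline."""
    order = np.argsort(idx0)
    cs = CubicSpline(idx0[order], y0[order])
    return cs(np.arange(n_var))
```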
If the observations are noiseless, the reference model is easily retrieved to a high precision down to a density of site over . If the observations are noisy, the performance slowly degrades when the density is decreased down to about site over , below which the minimisation, trapped in a deceiving local minimum, yields an improper surrogate model.
We would like to point out that in the case of noiseless observations, the performance depends little on the length of the training window, beyond a relatively short length, typically . However, in the presence of noisy observations, the overall performance improves with longer , as expected since the information content of the observations linearly increases with the length of the window.
Figure 9 displays the values of the coefficients in as a function of the minimisation iteration index for the noiseless and fully observed case. As expected, the coefficients converge to the values specified by the exact L96 flow rate, while the other coefficients collapse to .
Figure 9. L96 is the reference model, which is fully observed without noise: plot of the coefficients of the surrogate model as a function of the minimisation iteration number. The coefficient of the forcing () is in green, the coefficients of the convective terms are in cyan, and the damping coefficient is in magenta.
Figure 10 shows the same but in the significantly noisy case where and with a significantly longer window (about Lyapunov times).
Figure 10. L96 is the reference model, which is fully observed with observation error standard deviation : plot of the coefficients of the surrogate model as a function of the minimisation iteration number. Note that the index axis is in logarithmic scale. The coefficient of the forcing () is in green, the coefficients of the convective terms are in cyan, and the damping coefficient is in magenta.
Figure 11. Density plot of the L05III reference and surrogate model trajectories, as well as their difference trajectory, as a function of the forecast lead time (in Lyapunov time unit). Panel (d) shows a zoom of the difference between times and .
4.5 Inferring reduced dynamics of a multiscale model
In this experiment, we consider the L05III model. With the locality and homogeneity assumptions, the scalability is typically linear with the size of the system, and we actually consider the -fold model where and to demonstrate that no issues are encountered when scaling up the method. The large-scale variable of the reference model is noiselessly and fully observed over a short training window (, which corresponds to about Lyapunov time), i.e. all slow variable values are observed, whereas the small-scale variable is not observed. The surrogate model is based on the RK4 scheme and compositions. We choose to implement the locality and homogeneity assumptions, with a stencil of width . We have and . We choose .
Figure 11 shows the trajectories of the reference and surrogate models starting from the same initial condition, as well as their difference, as a function of time.
The emergence of error, i.e. the divergence from the reference, appears as long darker stripes on the density plot of the difference (close-to-zero difference values appear as white or light colours). We argue that these stripes result from the emergence of sub-scale perturbations that are not properly represented by the surrogate model. Reciprocally, there are long-lasting stripes of low error not yet impacted by sub-scale perturbations. As expected, and similarly to the L96 model, the perturbations are transported eastward, as shown by the upward tilt of the stripes in Fig. 11. Clearly, in this case, a flow rate of the form of Eq. (2) could be insufficient. Adding a stochastic parametrisation with additionally inferred parameters might offer a solution. Because of this mixed performance, the RMSE slowly degrades (compared to the other experiments reported so far) with the increase in the forecast lead time (not shown).
5 Conclusions
We have proposed to infer the dynamics of a reference model from its observation using Bayesian data assimilation, which is a new and original scope for data assimilation. Over a given training time window, the control variables are the state trajectory and the coefficients of an ODE representation for the surrogate model. We have chosen the surrogate model to be the composition of an explicit integration scheme (Runge–Kutta typically) applied to this ODE representation. Time invariance, space homogeneity, and locality of the dynamics can be enforced, making the method suitable for high-dimensional systems. The cost function of the data assimilation problem is minimised using the adjoint of the surrogate resolvent which is explicitly derived. Analogies between the surrogate resolvent and a deep neural network have been discussed as well as the impact of stability issues of the reference and surrogate dynamics.
The method has been applied to densely and noiselessly observed systems with identifiable reference models, yielding a perfect reconstruction close to machine precision (L63 and L96 models). It has also been applied to densely or partially observed, identifiable or non-identifiable models, with or without noise in the observations (L96 and KS models). For moderate noise and sufficiently dense observations, the method is successful in the sense that the forecast is accurate beyond several Lyapunov times. The method has also been used as a way to infer a reduced model for a multi-scale observed system (L05III model). The reduced model was successful in emulating the slow dynamics of the reference model but could not properly account for the impact of the fast unresolved-scale dynamics on the slow ones. A subgrid parametrisation would be required or would have to be inferred.
Two potential obstacles have been left aside on purpose but should later be addressed. First, the model error statistics have not been estimated. This could be achieved using, for instance, an empirical Bayesian analysis built on an ensemble-based stochastic expectation–maximisation technique. This is an especially interesting problem since the potential discrepancy between the reference and the surrogate dynamics is in general non-trivial. Second, we have used relatively short training time windows. Numerically efficient training on longer windows will likely require the use of advanced weak-constraint variational optimisation techniques.
In this paper, only autonomous dynamics have been considered. We could at least partially extend the method to non-autonomous systems by keeping a static part for the pure dynamics and considering time-dependent forcing fields. We have not numerically explored non-homogeneous dynamics, but we have shown how to learn from them using non-homogeneous local representations.
A promising yet challenging path would be to consider implicit or semi-implicit schemes, following for instance ideas from the literature. This idea is known in geophysical data assimilation as the continuous adjoint approach.
If observations keep coming after the training time window, then one can perform data assimilation using the ODE surrogate model of the reference model. This data assimilation scheme could only focus on state estimation or it could continue to update the ODE surrogate model for the forecast.
Data availability
No datasets were used in this article.
Appendix A
Parametrisation of for local representations defined over a circle
In this Appendix, we show how to parametrise assuming locality of the representation, in the case where it is defined over a periodic one-dimensional domain, i.e. a circle. It is of the generic form A1 where is an integer such that . We can treat the bias, linear, and bilinear monomials separately into sectors, , , and , respectively. Let be the indices which span the columns of for each of the three sectors and the indices which span the entries of for each of the three sectors . Then, Eq. () can be more explicitly written as A2 where the dummy index for the linear terms browses the stencil, and the dummy indices for the bilinear monomials browse the stencil, in the same way as we did in Sect. to enumerate them. By enumeration, we find the following.
-
For the bias sector, we have and .
-
For the linear sector, we have and , where means the index in congruent to modulo , in order to respect the periodicity of the domain.
-
Finally, for the bilinear sector, we have and .
We observe that these indices , and do not depend on the site index . They only indicate a relative position with respect to . Hence, if homogeneity is additionally assumed, , and do not depend on anymore and becomes a vector.
Appendix B
Computations of the gradients with respect to and
B1 Differentiation of the RK schemes
It will be useful in the following to consider the variation of each , defined by Eq. (), with respect to either or :
B1 where is evaluated at and is the tangent linear operator with respect to of evaluated at . Equation () can be written compactly in the form B2 where is the matrix of size defined by its blocks , is the vector of size which results from the stacking of the for , is the vector of size which results from the stacking of the for , and is the identity matrix. The important point is that is a lower triangular matrix and describes an iterative construction of the . Moreover, the diagonal entries of are by construction so that is invertible and B3 This will be used to compute the variations of via Eq. ().
B2 Integration step
We first consider the situation when the observation interval corresponds to one integration time step of the surrogate model, i.e. : with . As a result, the time index can be omitted here. We will later consider the composition of several integration schemes (). Equation () is written again but as
B4 where is the matrix of a size tensor product of the vector defined by , i.e. the coefficients of the RK scheme as defined in Eq. (), with the state space identity matrix, and where is the vector of size defined after Eq. (). Looking first at the gradient with respect to the state variable, and using Eq. (), we have B5 which yields the adjoint operator B6 Let us consider an arbitrary vector ; we have B7 To avoid computing explicitly, let us define the vector such that B8 Because is upper triangular with diagonal entries of value , is the solution of a linear system easily solvable iteratively, which stands as an adjoint/dual to the RK iterative construction. Hence, we finally obtain a formula and algorithm to evaluate B9 which is key to computing Eq. (19). Indeed, Eq. (19) now reads as where is the iterative solution of the system for .
Second, let us look at the gradient of with respect to . From Eqs. () and (), and now considering variations with respect to , we obtain B11 which yields, using as defined by Eq. (), B12 For each , let us introduce , and let us denote the subvector of for the th block of the Runge–Kutta scheme. Then we have, for and , B13 This is key to efficiently computing Eq. (), which now reads as B14 where is the solution of . The index of indicates that the operators defined in the entries of are evaluated at .
B3 Composition of integration steps
We now consider a resolvent which is the composition of integration steps over : , where is an alias to . Let us first look at the gradient with respect to the state variable. Within the scope of this section, we define and for : . Hence, . We also define as the tangent linear operator of at . By the Leibniz rule, we obtain
B15 We can now apply Eq. () to each individual integration step and obtain for any B16 where is the solution of B17 Hence, to compute , we define and for : . This finally reads as B18 for , where is the solution of B19 To compute the key terms in the gradients Eq. (19), must be chosen to be where and B20
Second, we look at the gradient with respect to . In this case, the application of the Leibniz rule yields B21 where . But , which focuses on a single integration step, is given by Eq. (), B22 and from Eq. (), B23 As a result, we obtain B24 where is the solution of B25
All of these results, Eqs. (B10), (), (), and (), allow us to efficiently compute the gradients of the cost function with respect to both and . Note, however, that they have been derived under the simplifying assumption that is given by Eq. () with a traditional matrix multiplication between and , but not by the compact Eq. (). When relying on homogeneity and/or locality, the calculation of the gradient with respect to follows the principle described above but requires further adaptations, which can be derived using Eq. (), with the asset of strongly reducing the computational burden.
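As a practical complement to these derivations, a standard sanity check is to compare the analytic gradient with a finite-difference approximation along a few random directions; the sketch below assumes a `cost_and_grad` callable returning the cost and its gradient with respect to the packed control vector (an illustrative interface, not the authors' code).

```python
import numpy as np

def check_gradient(cost_and_grad, p, eps=1e-6, n_dirs=5, seed=0):
    """Compare the analytic directional derivative with a centred finite difference."""
    rng = np.random.default_rng(seed)
    _, g = cost_and_grad(p)
    for _ in range(n_dirs):
        d = rng.standard_normal(p.size)
        d /= np.linalg.norm(d)
        Jp, _ = cost_and_grad(p + eps * d)
        Jm, _ = cost_and_grad(p - eps * d)
        fd = (Jp - Jm) / (2.0 * eps)
        print(f"analytic: {g @ d:+.6e}   finite difference: {fd:+.6e}")
```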
Appendix C
Adjoint differentiation with PyTorch and TensorFlow
As an alternative to the explicit computation of the gradients of Eq. () and the associated adjoint models, we have used PyTorch and TensorFlow as automatic differentiation tools. Only the cost function code needs to be implemented. We made a few tests of the experiments of Sect. that showed that the fastest code is a C
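For illustration, here is a minimal PyTorch sketch (assumed shapes, a single Euler step as the resolvent, and dense linear observations) of how such a cost function can be written so that its gradients with respect to the coefficients and the trajectory are obtained by automatic differentiation rather than by the explicit adjoints of Appendix B.

```python
import torch

def regressors_t(x):
    """Constant, linear, and bilinear monomials of a 1-D tensor x."""
    pairs = torch.stack([x[i] * x[j] for i in range(x.numel()) for j in range(i, x.numel())])
    return torch.cat([torch.ones(1), x, pairs])

def cost_torch(A, traj, obs, h, q2=1.0, r2=1.0):
    """Toy weak-constraint cost with dense observations and a one-step Euler resolvent."""
    J = 0.5 * torch.sum((obs - traj) ** 2) / r2
    for k in range(1, traj.shape[0]):
        xprev = traj[k - 1]
        xpred = xprev + h * (A @ regressors_t(xprev))    # one explicit Euler step
        J = J + 0.5 * torch.sum((traj[k] - xpred) ** 2) / q2
    return J

n_x, n_p, K, h = 3, 10, 5, 0.05
A = torch.zeros(n_x, n_p, requires_grad=True)
traj = torch.randn(K + 1, n_x, requires_grad=True)
obs = torch.randn(K + 1, n_x)
J = cost_torch(A, traj, obs, h)
J.backward()          # gradients are now available in A.grad and traj.grad
```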
Author contributions
MB first developed the theory, implemented, ran, and interpreted the numerical experiments, and wrote the original version of the manuscript. All authors discussed the theory, interpreted the results, and edited the manuscript. The four authors approved the manuscript for publication.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
The authors are grateful to two anonymous reviewers and Olivier Talagrand acting as editor for their comments and suggestions. Marc Bocquet is thankful to Said Ouala and Ronan Fablet for enlightening discussions. CEREA and LOCEAN are members of the Institut Pierre-Simon Laplace (IPSL).
Financial support
This research has been supported by the Norwegian Research Council (project REDDA (grant no. 250711)).
Review statement
This paper was edited by Olivier Talagrand and reviewed by two anonymous referees.
Abstract
Recent progress in machine learning has shown how to forecast and, to some extent, learn the dynamics of a model from its output, resorting in particular to neural networks and deep learning techniques. We will show how the same goal can be directly achieved using data assimilation techniques without relying on machine learning software libraries, with a view to high-dimensional models. The dynamics of a model are learned from its observation, and an ordinary differential equation (ODE) representation of this model is inferred using a recursive nonlinear regression. Because the method is embedded in a Bayesian data assimilation framework, it can learn from partial and noisy observations of a state trajectory of the physical model. Moreover, a space-wise local representation of the ODE system is introduced and is key to coping with high-dimensional models.
It has recently been suggested that neural network architectures could be interpreted as dynamical systems. Reciprocally, we show that our ODE representations are reminiscent of deep learning architectures. Furthermore, numerical analysis considerations of stability shed light on the assets and limitations of the method.
The method is illustrated on several chaotic discrete and continuous models of various dimensions, with or without noisy observations, with the goal of identifying or improving the model dynamics, building a surrogate or reduced model, or producing forecasts solely from observations of the physical model.
1 CEREA, joint laboratory École des Ponts ParisTech and EDF R&D, Université Paris-Est, Champs-sur-Marne, France
2 Sorbonne University, CNRS-IRD-MNHN, LOCEAN, Paris, France; Nansen Environmental and Remote Sensing Center, Bergen, Norway
3 Nansen Environmental and Remote Sensing Center, Bergen, Norway; Geophysical Institute, University of Bergen, Bergen, Norway
4 Nansen Environmental and Remote Sensing Center, Bergen, Norway