

Abstract

Neural networks are able to approximate chaotic dynamical systems when provided with training data that cover all relevant regions of the system's phase space. However, many practical applications diverge from this idealized scenario. Here, we investigate the ability of feed-forward neural networks to (1) learn the behavior of dynamical systems from incomplete training data and (2) learn the influence of an external forcing on the dynamics. Climate science is a real-world example where these questions may be relevant: it is concerned with a non-stationary chaotic system that is subject to external forcing and whose behavior is known only through comparatively short data series. Our analysis is performed on the Lorenz63 and Lorenz95 models. We show that for the Lorenz63 system, neural networks trained on data covering only part of the system's phase space struggle to make skillful short-term forecasts in the regions excluded from the training data. Additionally, when making long series of consecutive forecasts, the networks struggle to reproduce trajectories exploring regions beyond those seen in the training data, except for cases where only small parts are left out during training. We find that this is because the networks learn a localized mapping for each region of the phase space covered by the training data, rather than a single global mapping; different parts of a network effectively learn only particular parts of the phase space. In contrast, for the Lorenz95 system the networks succeed in generalizing to new parts of the phase space not seen in the training data. We also find that the networks are able to learn the influence of an external forcing, but only when trained on a relatively large range of forcing values. These results point to potential limitations of feed-forward neural networks in generalizing a system's behavior given limited initial information. Careful attention must therefore be given to designing appropriate train-test splits for real-world applications.
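To make the experimental setup concrete, the sketch below illustrates one possible version of the Lorenz63 experiment described above: integrate the Lorenz63 equations, train a small feed-forward network on one-step transitions taken from only part of the attractor, and compare short-term forecast errors inside and outside the training region. The network architecture, time step, and the x < 0 split are illustrative assumptions and are not taken from the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor

def lorenz63_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz63 equations."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step of the Lorenz63 system."""
    k1 = lorenz63_rhs(state)
    k2 = lorenz63_rhs(state + 0.5 * dt * k1)
    k3 = lorenz63_rhs(state + 0.5 * dt * k2)
    k4 = lorenz63_rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt, n_spinup, n_steps = 0.01, 1_000, 20_000
state = np.array([1.0, 1.0, 1.0])
for _ in range(n_spinup):          # discard the initial transient
    state = rk4_step(state, dt)
traj = np.empty((n_steps, 3))
traj[0] = state
for i in range(1, n_steps):
    traj[i] = rk4_step(traj[i - 1], dt)

# One-step forecasting task: map the state at time t to the state at t + dt.
X, Y = traj[:-1], traj[1:]

# "Incomplete" training data: keep only states on one wing of the attractor
# (here x < 0, an illustrative choice) and leave the other wing out entirely.
train_mask = X[:, 0] < 0.0
net = MLPRegressor(hidden_layer_sizes=(100, 100), max_iter=500, random_state=0)
net.fit(X[train_mask], Y[train_mask])

# Compare short-term forecast error inside and outside the training region.
err_in = np.mean(np.abs(net.predict(X[train_mask]) - Y[train_mask]))
err_out = np.mean(np.abs(net.predict(X[~train_mask]) - Y[~train_mask]))
print(f"Mean one-step error / trained region: {err_in:.3f}, excluded region: {err_out:.3f}")

A markedly larger error in the excluded region would reflect the kind of limited generalization reported in the abstract; analogous experiments for the Lorenz95 system or for an external forcing would follow the same pattern with a different right-hand side or an additional forcing input.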

Details

Title
Generalization properties of feed-forward neural networks trained on Lorenz systems
Author
Scher, Sebastian 1; Messori, Gabriele 2

1 Department of Meteorology and Bolin Centre for Climate Research, Stockholm University, Stockholm, Sweden
2 Department of Meteorology and Bolin Centre for Climate Research, Stockholm University, Stockholm, Sweden; Department of Earth Sciences, Uppsala University, Uppsala, Sweden
Pages
381-399
Publication year
2019
Publication date
2019
Publisher
Copernicus GmbH
ISSN
1023-5809
e-ISSN
1607-7946
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2311858768
Copyright
© 2019. This work is published under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.