Abstract

Canonical transformations play a fundamental role in simplifying and solving classical Hamiltonian systems. Intriguingly, they have a natural correspondence to normalizing flows with a symplectic constraint. Building on this key insight, we design a neural canonical transformation approach to automatically identify independent slow collective variables in general physical systems and natural datasets. We present an efficient implementation of symplectic neural coordinate transformations and two ways to train the model, based on either the Hamiltonian function or phase-space samples. The learned model maps physical variables onto an independent representation in which collective modes with different frequencies are separated, which is useful for downstream tasks such as compression, prediction, control, and sampling. We demonstrate the ability of this method first on toy problems and then on real-world examples, such as identifying and interpolating slow collective modes of the alanine dipeptide molecule and of MNIST database images.
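As a concrete illustration (a minimal sketch in PyTorch, not the authors' implementation; all names here are hypothetical): one simple symplectic building block is the gradient shear p -> p - dV/dq with q left unchanged, where V is any smooth scalar function, for instance a small neural network. Because the Hessian of V is symmetric, the layer's Jacobian S automatically satisfies the symplectic condition S^T J S = J, which the snippet verifies numerically.

import torch

torch.manual_seed(0)
n = 3  # degrees of freedom; phase space is 2n-dimensional

# Hypothetical tiny scalar "potential" network V(q); any smooth scalar works.
W1, b1, w2 = torch.randn(n, 8), torch.randn(8), torch.randn(8)

def V(q):
    return torch.tanh(q @ W1 + b1) @ w2

def layer(z):
    # One symplectic flow layer: q stays fixed, p is sheared by -dV/dq.
    q, p = z[:n], z[n:]
    (g,) = torch.autograd.grad(V(q), q, create_graph=True)
    return torch.cat([q, p - g])

z = torch.randn(2 * n)
S = torch.autograd.functional.jacobian(layer, z)  # full (2n, 2n) Jacobian

# Symplectic condition: S^T J S = J with J = [[0, I], [-I, 0]].
I = torch.eye(n)
J = torch.cat([torch.cat([torch.zeros(n, n), I], dim=1),
               torch.cat([-I, torch.zeros(n, n)], dim=1)])
print(torch.allclose(S.T @ J @ S, J, atol=1e-5))  # expect: True

Compositions of symplectic maps are again symplectic, so stacking such layers keeps the whole flow exactly canonical by construction.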

Alternate abstract

Plain Language Summary

For centuries, physicists and astronomers have tackled the dynamics of complex interacting systems (such as the Sun-Earth-Moon system) with a mathematical tool known as a canonical transformation. Such a transformation changes the coordinates of the Hamiltonian equations (which describe the time evolution of the system) to simplify computation while preserving the form of the equations. However, despite its conceptual depth, the technique's wider application has been limited by cumbersome manual inspection and manipulation. By exploiting the inherent connection between canonical transformations and a modern machine-learning method known as the normalizing flow, we have constructed a neural canonical transformation that can be trained automatically from the Hamiltonian function or from data.

Normalizing flows are adaptive transformations, often implemented as deep neural networks, that find many real-world applications such as speech synthesis and image generation. In essence, a normalizing flow is an invertible change of variables that deforms a complex probability distribution into a simpler one. Canonical transformations are normalizing flows, albeit with two crucial twists. First, they are flows in phase space, which contains both coordinates and momenta. Second, these flows satisfy the symplectic condition, a mathematical property that underlies many of the most intriguing features of classical mechanics.
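To make the condition explicit (a standard statement, with notation of our own choosing): writing a point of phase space as z = (q, p) and the transformation as z -> T(z) with Jacobian matrix M, the symplectic condition reads

\[
M^{\mathsf{T}} J M = J,
\qquad
J = \begin{pmatrix} 0 & \mathbb{1} \\ -\mathbb{1} & 0 \end{pmatrix},
\qquad
M_{ij} = \frac{\partial T_i(z)}{\partial z_j}.
\]

It follows that det M = 1, so a symplectic flow preserves phase-space volume (Liouville's theorem) and contributes a unit Jacobian determinant in the change-of-variables formula, making the corresponding normalizing flow volume preserving.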

An immediate application of the neural canonical transformation is to simplify complex dynamics into independent nonlinear modes, thereby allowing one to identify the small number of slow modes that are essential for applications such as molecular dynamics and dynamical control. Our work also stands as an example of building physical principles into the design of deep neural networks for better modeling of natural data.
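For intuition about what "separated by frequency" means (a textbook normal-form picture, not necessarily the paper's exact latent parametrization): suppose the transformed Hamiltonian decouples into independent harmonic modes,

\[
K(Q, P) = \sum_{k=1}^{n} \frac{\omega_k}{2} \left( Q_k^{2} + P_k^{2} \right),
\qquad
Q_k(t) = A_k \cos\!\left( \omega_k t + \phi_k \right).
\]

Each mode then oscillates independently at its own frequency, and the modes with the smallest \(\omega_k\) are the slow collective variables; retaining only those modes compresses the dynamics while preserving its essential long-time behavior.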

Details

Title
Neural Canonical Transformation with Symplectic Flows
Author
Li, Shuo-Hui; Dong, Chen-Xiao; Zhang, Linfeng; Wang, Lei
Publication year
2020
Publication date
Apr-Jun 2020
Publisher
American Physical Society
e-ISSN
2160-3308
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2550636158
Copyright
© 2020. This work is licensed under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.