計算與建模科學研究所 Institute of Computational and Modeling Science                     
 

2026-04-24 Speaker: Prof. 呂秉澤 (Department of Mathematics, National Chung Cheng University): Numerical Artifacts from Using Linear Multistep Methods and Explicit Runge-Kutta Methods in Learning Dynamical Systems and Linear Stochastic Differential Equations

【Title】Numerical Artifacts from Using Linear Multistep Methods and Explicit Runge-Kutta Methods in Learning Dynamical Systems and Linear Stochastic Differential Equations

【Time】Friday, April 24, 2026, 1:30 PM

【Venue】Room A813, 8F, Block B, General Building II, National Tsing Hua University (main campus)

【Abstract】

Learning dynamical systems from trajectory data has become a cornerstone of modern scientific machine learning, with Neural ODEs offering a flexible and powerful framework. Yet a subtle and underappreciated danger lurks inside every training loop: the numerical integrator used to simulate candidate trajectories does not merely approximate the dynamics — it actively shapes what can be learned. In the first part of this talk, we show that the geometry of the integrator's stability region on the complex plane rigidly constrains the learned eigenstructure. Concretely, integrators whose stability region overlaps the right half-plane — including Backward Euler, RK4, and common linear multistep methods — can cause a dissipative system to be identified as expansive, or produce a negative diffusion coefficient in a convection-diffusion PDE, even when training loss is small. We establish this theoretically for both one-step and linear multistep methods, and confirm it across Neural ODE experiments on the Lotka–Volterra system and a damped nonlinear pendulum. The implicit midpoint rule emerges as a principled default, being the unique common integrator whose stability region coincides exactly with the left half-plane.

In the second part, we extend this analysis to the stochastic setting. When the underlying system is a stochastic differential equation (SDE), the Euler–Maruyama scheme introduces analogous stability artifacts. We characterize the stability properties of Euler–Maruyama in the learning context and derive a stochastic counterpart to our deterministic findings, showing that the same geometric constraints on the drift eigenvalue persist in the presence of diffusion noise. A brief numerical result illustrates how the deterministic and stochastic artifact regimes align, and points toward integrator recommendations for score-based and Neural SDE learning frameworks.
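The geometric point in the abstract can be checked directly from the standard stability functions of the methods involved. The sketch below (not from the talk; a minimal illustration using well-known formulas) evaluates the stability function of classical RK4, which protrudes slightly into the right half-plane, and of the implicit midpoint rule, whose stability region is exactly the open left half-plane. It also defines the standard one-step mean-square amplification factor of Euler–Maruyama on the scalar linear test SDE, showing the analogous step-size-dependent misclassification in the stochastic case.

```python
import numpy as np

def R_rk4(z):
    """Stability function of classical RK4: R(z) = sum_{k=0}^{4} z^k / k!."""
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

def R_midpoint(z):
    """Stability function of the implicit midpoint rule: (1 + z/2) / (1 - z/2)."""
    return (1 + z / 2) / (1 - z / 2)

# RK4's stability region spills over the imaginary axis: this point has
# Re(z) > 0 (a genuinely expansive mode), yet RK4 damps it.
z = 0.05 + 2.7j
assert abs(R_rk4(z)) < 1        # RK4 treats the expansive mode as stable
assert abs(R_midpoint(z)) > 1   # implicit midpoint does not

# Implicit midpoint's stability region is exactly the open left half-plane:
# |R(z)| < 1 iff Re(z) < 0 (checked on a grid that avoids Re(z) = 0).
for x in np.linspace(-2.95, 2.95, 60):
    for y in np.linspace(-2.95, 2.95, 60):
        assert (abs(R_midpoint(complex(x, y))) < 1) == (x < 0)

# Stochastic analogue: Euler-Maruyama on the linear test SDE
#   dX = lam*X dt + sigma*X dW
# has one-step mean-square amplification (1 + h*lam)^2 + h*sigma^2.
def em_ms_factor(lam, sigma, h):
    return (1 + h * lam) ** 2 + h * sigma ** 2

# The true SDE here is mean-square stable (2*lam + sigma^2 < 0), but
# Euler-Maruyama flags it as unstable once the step size is too large.
assert em_ms_factor(-1.0, 0.5, 0.1) < 1  # small step: correctly stable
assert em_ms_factor(-1.0, 0.5, 2.0) > 1  # large step: spurious instability
```

The point `0.05 + 2.7j` is one concrete witness that an explicit Runge-Kutta method can stably integrate (and hence a training loop can fit) a mode the true dynamics treat as expansive, which is the artifact mechanism the talk analyzes.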
