    Comparison of POD reduced order strategies for the nonlinear 2D Shallow Water Equations

    This paper introduces tensorial calculus techniques in the framework of Proper Orthogonal Decomposition (POD) to reduce the computational complexity of the reduced nonlinear terms. The resulting method, named tensorial POD, can be applied to polynomial nonlinearities of any degree p. Such nonlinear terms have an on-line complexity of O(k^{p+1}), where k is the dimension of the POD basis, and are therefore independent of the full space dimension. However, the method is efficient only for quadratic nonlinear terms, since for higher-order nonlinearities standard POD proves less time consuming once the POD basis dimension k is increased. Numerical experiments are carried out with a two-dimensional shallow water equations (SWE) test problem to compare the performance of tensorial POD, standard POD, and the POD/Discrete Empirical Interpolation Method (DEIM). Numerical results show that tensorial POD decreases the computational cost of the on-line stage of standard POD by a factor of 76 for configurations using more than 300,000 model variables. The tensorial POD SWE model was only 2-8 times slower than the POD/DEIM SWE model, but the implementation effort is considerably increased. Tensorial calculus was again employed to construct a new algorithm allowing the POD/DEIM shallow water equation model to compute its off-line stage faster than the standard and tensorial POD approaches. Comment: 23 pages, 8 figures, 5 tables
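The quadratic case described above can be sketched in a few lines: for a toy nonlinearity f(u) = u ∘ u, the reduced term Φᵀ(Φa ∘ Φa) is precomputed offline as a k × k × k tensor, so the online contraction costs O(k³) independent of the full dimension n. This is a minimal illustration of the idea, not the paper's implementation; all names and sizes below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 5                      # full and reduced dimensions (illustrative)
Phi = np.linalg.qr(rng.standard_normal((n, k)))[0]  # orthonormal POD basis
a = rng.standard_normal(k)          # reduced state

# Offline stage: precompute T[i,j,l] = sum_m Phi[m,i] Phi[m,j] Phi[m,l]
T = np.einsum('mi,mj,ml->ijl', Phi, Phi, Phi)

# Online stage: O(k^3) contraction, no n-dimensional work
reduced_quadratic = np.einsum('ijl,j,l->i', T, a, a)

# Check against the standard (full-space) evaluation Phi^T (Phi a ∘ Phi a)
u = Phi @ a
reference = Phi.T @ (u * u)
assert np.allclose(reduced_quadratic, reference)
```

For a degree-p polynomial nonlinearity the same construction yields a tensor with k^{p+1} entries, which is why the approach pays off only for low-degree (quadratic) terms.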

    A nonintrusive hybrid neural-physics modeling of incomplete dynamical systems: Lorenz equations

    This work presents a hybrid modeling approach to data-driven learning and representation of unknown physical processes and closure parameterizations. These hybrid models are suitable for situations where the mechanistic description of the dynamics of some variables is unknown, but reasonably accurate observational data can be obtained for the evolution of the state of the system. In this work, we propose machine learning to account for missing physics and then data assimilation to correct the prediction. In particular, we devise an effective methodology based on a recurrent neural network to model the unknown dynamics. A long short-term memory (LSTM) based correction term is added to the predictive model in order to take hidden physics into account. Since the LSTM introduces a black-box approach for the unknown part of the model, we investigate whether the proposed hybrid neural-physics model can be further corrected through a sequential data assimilation step. We apply this framework to the weakly nonlinear Lorenz model that displays quasiperiodic oscillations, the highly nonlinear chaotic Lorenz model, and the two-scale Lorenz model. The hybrid neural-physics model yields accurate results for the weakly nonlinear Lorenz model, with the predicted state close to the true Lorenz model trajectory. For the highly nonlinear Lorenz model and the two-scale Lorenz model, the hybrid neural-physics model deviates from the true state due to the accumulation of prediction error from one time step to the next. The ensemble Kalman filter approach takes the prediction error into account and updates the diverged prediction using available observations in order to provide a more accurate state estimate for the highly nonlinear chaotic Lorenz model and the two-scale Lorenz system. The successful synergistic integration of a neural network and data assimilation for low-dimensional systems shows the potential benefits of the proposed hybrid neural-physics model for complex dynamical systems.
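The sequential correction step described above can be illustrated with a minimal stochastic ensemble Kalman filter analysis using perturbed observations. This is a generic textbook sketch, not the paper's configuration; the dimensions, observation operator, and values below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 3, 2, 50                  # state dim, obs dim, ensemble size
H = np.array([[1., 0., 0.],
              [0., 1., 0.]])        # observe the first two state components
R = 0.1 * np.eye(m)                 # observation error covariance

# Forecast ensemble whose mean has drifted away from the observation
ens = rng.standard_normal((n, N)) + np.array([[5.0], [5.0], [5.0]])
y = np.array([1.0, 2.0])            # observation

# Forecast covariance estimated from ensemble anomalies
A = ens - ens.mean(axis=1, keepdims=True)
Pf = A @ A.T / (N - 1)

# Kalman gain and perturbed-observation update
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
y_pert = y[:, None] + rng.multivariate_normal(np.zeros(m), R, N).T
ens_a = ens + K @ (y_pert - H @ ens)

# The analysis mean moves toward the observation
err_f = np.linalg.norm(H @ ens.mean(axis=1) - y)
err_a = np.linalg.norm(H @ ens_a.mean(axis=1) - y)
assert err_a < err_f
```

The same update is what pulls a diverged model prediction back toward the observed trajectory; in the paper's setting the forecast ensemble would come from the hybrid neural-physics model rather than random draws.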

    Long short-term memory embedded nudging schemes for nonlinear data assimilation of geophysical flows

    Reduced rank nonlinear filters are increasingly utilized in data assimilation of geophysical flows but often require a set of ensemble forward simulations to estimate the forecast covariance. On the other hand, predictor–corrector type nudging approaches remain attractive due to their simplicity of implementation when more complex methods need to be avoided. However, obtaining an optimal estimate of the nudging gain matrix can be cumbersome. In this paper, we put forth a fully nonintrusive recurrent neural network approach based on a long short-term memory (LSTM) embedding architecture to estimate the nudging term, which not only forces the state trajectories toward the observations but also acts as a stabilizer. Furthermore, our approach relies on archival data, and the trained model can be retrained effectively through transfer learning. In order to verify the feasibility of the proposed approach, we perform twin experiments using the Lorenz 96 system. Our results demonstrate that the proposed LSTM nudging approach yields more accurate estimates than both the extended Kalman filter (EKF) and the ensemble Kalman filter (EnKF) when only sparse observations are available. With the availability of emerging artificial-intelligence-friendly and modular hardware technologies and heterogeneous computing platforms, we articulate that our simple nudging framework turns out to be computationally more efficient than either the EKF or EnKF approach. Published in Physics of Fluids (AIP Publishing); available at https://doi.org/10.1063/5.001285

    Memory embedded non-intrusive reduced order modeling of non-ergodic flows

    Generating a digital twin of any complex system requires modeling and computational approaches that are efficient, accurate, and modular. Traditional reduced order modeling techniques target only the first two; the novel non-intrusive approach presented in this study attempts to account for all three. Based on dimensionality reduction using proper orthogonal decomposition (POD), we introduce a long short-term memory (LSTM) neural network architecture together with a principal interval decomposition (PID) framework as an enabler to account for localized modal deformation, which is a key element in accurate reduced order modeling of convective flows. Our applications to convection-dominated systems governed by the Burgers, Navier-Stokes, and Boussinesq equations demonstrate that the proposed approach yields significantly more accurate predictions than the POD-Galerkin method, and could be a key enabler towards near real-time predictions of unsteady flows.
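The POD dimensionality-reduction step this approach builds on can be sketched with an SVD of a snapshot matrix: collect solution snapshots as columns, take the SVD, and keep the leading modes. The synthetic data and mode count below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 400, 60, 4                       # grid size, snapshots, retained modes

# Synthetic snapshot matrix with low-rank structure plus small noise
xgrid = np.linspace(0, 1, n)[:, None]
t = np.linspace(0, 1, m)[None, :]
S = np.sin(2 * np.pi * xgrid) * np.cos(4 * np.pi * t) \
    + 0.5 * np.cos(6 * np.pi * xgrid) * t
S += 1e-3 * rng.standard_normal((n, m))

U, sigma, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :k]                             # POD basis: leading k modes
a = Phi.T @ S                              # reduced (modal) coefficients
S_hat = Phi @ a                            # rank-k reconstruction

rel_err = np.linalg.norm(S - S_hat) / np.linalg.norm(S)
assert rel_err < 0.05                      # few modes capture most of the energy
```

In the non-intrusive setting, the temporal evolution of the coefficients `a` is then learned (here, by an LSTM over principal intervals) instead of being obtained from a Galerkin projection of the governing equations.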

    Data Assimilation for Numerical Weather Prediction: A Review
