On October 5, Han Bao successfully defended his PhD dissertation, Development of a Data-driven Framework for Mesh-Model Optimization in System-level Thermal-Hydraulic Simulation. Han's committee consisted of his advisor, Nam Dinh, and members Jeffrey Lane, Maria Avramova, Igor Bolotnov, and Hong Luo.
Abstract
BAO, HAN. Development of a Data-driven Framework for Mesh-Model Optimization in System-level Thermal-Hydraulic Simulation. (Under the direction of Dr. Nam T. Dinh).
Over the past decades, several computer codes have been developed for the simulation and analysis of the thermal-hydraulics of reactor cores, reactor coolant systems, and containment behaviors in nuclear reactors under operating, abnormal transient, and accident conditions. However, simulation errors and uncertainties inevitably remain even though these codes have been extensively assessed and used to support design, licensing, and safety analysis of the plants. The main difficulty stems from the complexity of the multi-phase physical phenomena in these transients, the inevitable sources of simulation error, and user effects.
In this work, a data-driven framework (Optimal Mesh/Model Information System, OMIS) for the optimization of mesh and model in system-level thermal-hydraulic simulation is formulated and demonstrated. This framework is developed to estimate simulation error and suggest the optimal selection of coarse mesh size and models for low-fidelity simulation, such as coarse-mesh Computational Fluid Dynamics-like (CFD-like) codes, to achieve computationally efficient accuracy comparable to that of high-fidelity simulation, such as high-resolution CFD. It takes advantage of the computational efficiency of coarse-mesh simulation codes and the regression capability of machine learning algorithms. Instead of expensive fine-mesh computation as in CFD methods, a cluster of case runs with different coarse mesh sizes is performed to build an error database between low-fidelity simulations and high-fidelity data. The error database is used to train a machine learning model that captures the essential relationship between local simulation error and local physical features, thereby generating insight and helping correct low-fidelity simulations under similar physical conditions. Based on the idea of Total Data-Model Integration (TDMI), the specific closure models, local mesh sizes, and numerical solvers are treated as an integrated model. Data obtained from this integrated model are used to construct a library that identifies and stores the local similarities across different physical conditions. This library is self-improving and automatically updated as new qualified data become available. The OMIS framework is implemented as a six-step procedure; each step is independent and accomplished with state-of-the-art methods and algorithms. A mixed convection case study was designed and performed to illustrate the entire framework.
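The core idea — learn the mapping from local physical features to local coarse-mesh simulation error, then apply that learned correction to new low-fidelity results — can be illustrated with a minimal sketch. This is not the dissertation's actual implementation; the feature vectors, error values, and the k-nearest-neighbor regressor below are all hypothetical stand-ins for whatever features and machine learning algorithm OMIS actually uses.

```python
import math

def knn_predict_error(train_features, train_errors, query, k=3):
    """Predict the local simulation error for a query feature vector by
    averaging the errors of its k nearest neighbors in the error database
    (a stand-in for any trained ML regressor)."""
    ranked = sorted(
        (math.dist(f, query), e) for f, e in zip(train_features, train_errors)
    )
    nearest = ranked[:k]
    return sum(e for _, e in nearest) / len(nearest)

# Toy error database: local physical features (e.g. normalized local Reynolds
# number, gradient magnitude) paired with the observed discrepancy between a
# coarse-mesh simulation and high-fidelity data at that location.
features = [(0.1, 0.2), (0.4, 0.1), (0.8, 0.9), (0.9, 0.7)]
errors = [0.01, 0.02, 0.10, 0.08]

# Correct a new low-fidelity prediction by subtracting the predicted error.
coarse_value = 5.0
predicted_err = knn_predict_error(features, errors, (0.85, 0.8))
corrected_value = coarse_value - predicted_err
```

Because the regression is done in the space of local physical features rather than global case parameters, the same learned correction can, in principle, be reused for any new case whose local physics resembles the training data.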
This work also provides insight into the development of a data-driven, scale-invariant approach to scaling issues. Based on the identification of global physics and local physics, four Physics Coverage Conditions (PCCs) are classified: Global Interpolation through Local Interpolation (GILI), Global Interpolation through Local Extrapolation (GILE), Global Extrapolation through Local Interpolation (GELI), and Global Extrapolation through Local Extrapolation (GELE). The underlying local physics is assumed to be represented by a set of physical features. The GELI condition denotes the situation in which the global physical condition of a new case is identified as an extrapolation of existing cases, but the local physics is similar. Exploring the local physics with advanced machine learning techniques makes it possible to bridge this global scale gap. Targeting the GELI condition, the OMIS framework treats multi-scale data and machine learning techniques in a formalized manner. Different GELI conditions, such as the extrapolation of global parameters, geometry, boundary conditions, and dimension, are discussed based on the mixed convection case study. The similarity between the training data and testing data is quantified and visualized through a defined extrapolation distance and the Local Physics Coverage Mapping (LPCM) approach. The results show that predictions by the well-trained data-driven model become more accurate as the similarity between the training and testing data increases.
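The notion of an extrapolation distance — how far a test case's local physics lies from the coverage of the training data — can be sketched as a nearest-neighbor calculation in feature space. The metric and the feature vectors below are illustrative assumptions, not the dissertation's actual definition: the point is only that a small distance corresponds to GELI-like coverage (local physics well represented) and a large one to GELE-like extrapolation.

```python
import math

def extrapolation_distance(train_features, test_features):
    """Mean distance from each test feature vector to its nearest
    neighbor in the training set; smaller values indicate that the
    local physics of the test case is better covered by training data."""
    return sum(
        min(math.dist(t, f) for f in train_features) for t in test_features
    ) / len(test_features)

# Toy local-physics feature vectors.
train = [(0.1, 0.2), (0.4, 0.1), (0.8, 0.9)]
test_covered = [(0.15, 0.25), (0.75, 0.85)]  # local physics covered (GELI-like)
test_far = [(5.0, 5.0)]                      # local extrapolation (GELE-like)

d_covered = extrapolation_distance(train, test_covered)
d_far = extrapolation_distance(train, test_far)
```

Under this toy metric, a model trained on `train` would be expected to predict `test_covered` more accurately than `test_far`, mirroring the abstract's observation that prediction accuracy rises with training/testing similarity.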