Progressive, Extrapolative Machine Learning for Turbulence Modeling
Conventional physics-based turbulence modeling is progressive: one begins by modeling a simple flow and works progressively toward more complex ones. The outcome is a series of models, each accounting for additional physics beyond its less complex predecessor. The more complex model respects the less complex one in that it degenerates to it when the additional physics is absent. These models generalize because they capture physics that is universal.

This has not been the philosophy of data-enabled turbulence modeling. Data-enabled modeling is one-stop: one trains against a data set containing simple, more complex, and potentially realistic engineering flows. The resulting model is a compromise between the less complex and the more complex flows and does not fully respect any one flow in the training set. Like other machine learning products, data-enabled models often generalize poorly. The dominance of physics-based models in computational fluid dynamics has left data-enabled models open to criticism: machine-learned models do not fully respect, e.g., the law of the wall (among other empiricisms), and they do not generalize to, e.g., high Reynolds numbers (among other conditions)---both of which are trivial for physics-based modeling.

The purpose of this talk is to resolve these criticisms. We show that the conventional progressive approach is compatible with data-enabled modeling. Furthermore, we show that, when properly designed, a data-enabled model fully respects known empiricisms such as the law of the wall. Finally, we show that, when properly designed, a data-enabled model generalizes as well as, if not better than, a physics-based model.
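The degeneration property described above can be illustrated with a minimal sketch. The function names and the multiplicative-correction form below are illustrative assumptions, not the talk's actual model: a learned correction augments a baseline closure (here, the log-law of the wall with standard constants) and is constructed so that it vanishes, recovering the baseline exactly, when the indicator of the additional physics is zero.

```python
import math

KAPPA, B = 0.41, 5.2  # standard log-law constants

def base_model(y_plus):
    """Less complex model: the log-law of the wall, u+ = (1/kappa) ln(y+) + B."""
    return math.log(y_plus) / KAPPA + B

def correction(extra_physics):
    """Stand-in for a trained network (hypothetical); built to vanish
    when the additional-physics indicator is zero."""
    return 0.1 * extra_physics

def progressive_model(y_plus, extra_physics=0.0):
    """More complex model: baseline times a learned factor. With
    extra_physics == 0 the factor is 1 and the baseline is recovered."""
    return base_model(y_plus) * (1.0 + correction(extra_physics))
```

The design choice is that consistency with the simpler model is enforced by construction rather than hoped for from training, which is one way a data-enabled model can respect empiricisms like the law of the wall exactly.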