Article: Bayesian uncertainty analysis with applications to turbulence modeling

Title: Bayesian uncertainty analysis with applications to turbulence modeling
Authors: Cheung, Sai Hung; Oliver, Todd A.; Prudencio, Ernesto E.; Prudhomme, Serge; Moser, Robert D.
Keywords: Model validation under uncertainty; Model inadequacy representations; Forward propagation of uncertainty; Stochastic model classes; Bayesian analysis; Turbulence modeling
Issue Date: 2011
Citation: Reliability Engineering and System Safety, 2011, v. 96, n. 9, p. 1137-1149
Abstract: In this paper, we apply Bayesian uncertainty quantification techniques to the processes of calibrating complex mathematical models and predicting quantities of interest (QoI's) with such models. These techniques also enable the systematic comparison of competing model classes. The processes of calibration and comparison constitute the building blocks of a larger validation process, the goal of which is to accept or reject a given mathematical model for the prediction of a particular QoI for a particular scenario. In this work, we take the first step in this process by applying the methodology to the analysis of the Spalart-Allmaras turbulence model in the context of incompressible, boundary layer flows. Three competing model classes based on the Spalart-Allmaras model are formulated, calibrated against experimental data, and used to issue a prediction with quantified uncertainty. The model classes are compared in terms of their posterior probabilities and their prediction of QoI's. The model posterior probability represents the relative plausibility of a model class given the data. Thus, it incorporates the model's ability to fit experimental observations. Alternatively, comparing models using the predicted QoI connects the process to the needs of decision makers that use the results of the model. We show that by using both the model plausibility and predicted QoI, one has the opportunity to reject some model classes after calibration, before subjecting the remaining classes to additional validation challenges. © 2011 Elsevier Ltd. All rights reserved.
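The abstract's central mechanism, ranking competing model classes by their posterior plausibility given data, can be illustrated with a toy Bayesian sketch. The Gaussian models, priors, and synthetic data below are hypothetical stand-ins for illustration only; they are not the paper's Spalart-Allmaras model classes or experimental data.

```python
import numpy as np

# Synthetic "observations" standing in for experimental data (hypothetical).
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=0.5, size=20)

def log_evidence(data, prior_mean, prior_std, noise_std=0.5, n_grid=2001):
    """Log of P(data | model class): integrate the Gaussian likelihood
    over a Gaussian prior on the mean parameter, by simple quadrature."""
    theta = np.linspace(prior_mean - 8 * prior_std,
                        prior_mean + 8 * prior_std, n_grid)
    dtheta = theta[1] - theta[0]
    # Log-likelihood of the data at each grid value of the mean parameter.
    log_like = (-0.5 * ((data - theta[:, None]) ** 2).sum(axis=1) / noise_std**2
                - len(data) * np.log(noise_std * np.sqrt(2 * np.pi)))
    log_prior = (-0.5 * ((theta - prior_mean) / prior_std) ** 2
                 - np.log(prior_std * np.sqrt(2 * np.pi)))
    integrand = log_like + log_prior
    m = integrand.max()  # log-sum-exp trick for numerical stability
    return m + np.log(np.sum(np.exp(integrand - m)) * dtheta)

# Two competing "model classes": priors centered near vs. far from the data.
log_ev = np.array([log_evidence(data, prior_mean=0.0, prior_std=1.0),
                   log_evidence(data, prior_mean=5.0, prior_std=1.0)])

# Posterior plausibilities P(M_j | data), assuming equal prior model weights.
post = np.exp(log_ev - log_ev.max())
post /= post.sum()
print(post.round(4))  # the class whose prior is consistent with the data dominates
```

The same bookkeeping, with the evidence computed by far more sophisticated means, underlies the model-class comparison the abstract describes: a class whose prior assumptions conflict with the observations receives low plausibility and can be rejected after calibration.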
Persistent Identifier: http://hdl.handle.net/10722/296069
ISSN: 0951-8320
2023 Impact Factor: 9.4
2023 SCImago Journal Rankings: 2.028
ISI Accession Number: WOS:000293107900013
DC Field: Value
dc.contributor.author: Cheung, Sai Hung
dc.contributor.author: Oliver, Todd A.
dc.contributor.author: Prudencio, Ernesto E.
dc.contributor.author: Prudhomme, Serge
dc.contributor.author: Moser, Robert D.
dc.date.accessioned: 2021-02-11T04:52:46Z
dc.date.available: 2021-02-11T04:52:46Z
dc.date.issued: 2011
dc.identifier.citation: Reliability Engineering and System Safety, 2011, v. 96, n. 9, p. 1137-1149
dc.identifier.issn: 0951-8320
dc.identifier.uri: http://hdl.handle.net/10722/296069
dc.description.abstract: (abstract as above)
dc.language: eng
dc.relation.ispartof: Reliability Engineering and System Safety
dc.subject: Model validation under uncertainty
dc.subject: Model inadequacy representations
dc.subject: Forward propagation of uncertainty
dc.subject: Stochastic model classes
dc.subject: Bayesian analysis
dc.subject: Turbulence modeling
dc.title: Bayesian uncertainty analysis with applications to turbulence modeling
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.ress.2010.09.013
dc.identifier.scopus: eid_2-s2.0-79959591356
dc.identifier.volume: 96
dc.identifier.issue: 9
dc.identifier.spage: 1137
dc.identifier.epage: 1149
dc.identifier.isi: WOS:000293107900013
dc.identifier.issnl: 0951-8320
