
Fang, Yuan; Reissmann, Maximilian; Pacciani, Roberto; Zhao, Yaomin; Ooi, Andrew S.H.; Marconcini, Michele; Akolekar, Harshal D.; Sandberg, Richard D. Accelerating CFD-driven training of transition and turbulence models for turbine flows by one-shot and real-time transformer integration. In: Computers & Fluids, ISSN 0045-7930, vol. 306 (2026), art. no. 106927. [DOI: 10.1016/j.compfluid.2025.106927]

Accelerating CFD-driven training of transition and turbulence models for turbine flows by one-shot and real-time transformer integration

Pacciani, Roberto; Marconcini, Michele
2026

Abstract

Recent studies have demonstrated the effectiveness of computational fluid dynamics (CFD)-driven symbolic machine learning (ML) frameworks in developing explicit physical models within Reynolds-averaged Navier-Stokes (RANS) solvers, particularly for modeling transition, turbulence, and heat flux. These approaches can yield improved flow predictions with only a marginal increase in computational cost compared to baseline models. Nevertheless, a key limitation lies in the substantial computational expense of the training phase, which often requires thousands of RANS evaluations. This challenge becomes particularly severe when training models for complex industrial applications, where each RANS run is computationally intensive, and is further exacerbated when attempting to develop more generalizable, coupled models across multiple product designs. Taking the development of general transition and turbulence model corrections for both low- and high-pressure turbines as the study case, this work introduces two transformer-assisted strategies to accelerate model training. In the first, previously trained models are stored and used as inputs to the transformer, which generates new models informed by prior knowledge to partially replace randomly initialized models at the first training iteration. Results show that leveraging prior knowledge obtained from different turbine configurations effectively guides the search toward more promising regions of the solution space, thereby accelerating the training process. In the second scenario, where no prior knowledge is available, the transformer is integrated into the training loop to dynamically generate candidate models from the low-error models of the previous training iteration while discarding high-error models. Results indicate that more frequent transformer updates, such as after every training iteration, further enhance the acceleration effect.
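The two strategies described above can be sketched as variations of a population-based symbolic model search. The following is a minimal illustrative sketch, not the authors' implementation: all names (`transformer_propose`, `evaluate_with_rans`, `POP_SIZE`, etc.) are hypothetical stand-ins, and the transformer and RANS solver are mocked with placeholders.

```python
import random

POP_SIZE = 20

def random_model():
    """Stand-in for a randomly initialized symbolic model expression."""
    return {"expr": f"rand_{random.randrange(10**6)}"}

def evaluate_with_rans(model):
    """Stand-in for a costly RANS evaluation returning a training error."""
    return random.random()

def transformer_propose(seed_models, n):
    """Stand-in for the transformer generating n candidate models
    conditioned on the seed models (prior knowledge or low-error survivors)."""
    return [{"expr": f"tf_from_{m['expr']}"} for m in random.choices(seed_models, k=n)]

def train(prior_models=None, iterations=5, update_every=1):
    # Strategy 1 (one-shot): if prior knowledge exists, transformer proposals
    # partially replace the randomly initialized population at iteration one.
    population = [random_model() for _ in range(POP_SIZE)]
    if prior_models:
        seeded = transformer_propose(prior_models, POP_SIZE // 2)
        population[:len(seeded)] = seeded

    for it in range(iterations):
        # Rank candidates by (mock) RANS training error.
        scored = sorted(population, key=evaluate_with_rans)
        survivors = scored[:POP_SIZE // 2]  # keep low-error models
        # Strategy 2 (real-time): every `update_every` iterations, the
        # transformer refills the population from the low-error survivors,
        # discarding the high-error half.
        if it % update_every == 0:
            offspring = transformer_propose(survivors, POP_SIZE - len(survivors))
        else:
            offspring = [random_model() for _ in range(POP_SIZE - len(survivors))]
        population = survivors + offspring
    return population

final_population = train(prior_models=[{"expr": "prior_LPT_model"}])
print(len(final_population))  # population size is preserved across iterations
```

Setting `update_every=1` corresponds to the abstract's finding that transformer updates after every training iteration give the strongest acceleration; in a real setting each `evaluate_with_rans` call would be a full RANS run, which is why reducing the number of iterations needed matters.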
Goal 7: Affordable and clean energy
Files in this record:

File: 1-s2.0-S0045793025003871-main.pdf
Access: Closed
Type: Publisher's PDF (Version of record)
License: Creative Commons
Size: 12.03 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1442012