
Numerical Design of Experiments for Repeating Low-Pressure Turbine Stages Part 1: Computational Opportunities and Methodology / Rosenzweig M, Kozul M, Sandberg RD, Giannini G, Pacciani R, Marconcini M, Arnone A. - ELECTRONIC. - (In press), pp. 0-0. (Paper presented at the ASME Turbo Expo 2025 Turbomachinery Technical Conference and Exposition, held in Memphis, TN, USA, June 16–20, 2025).

Numerical Design of Experiments for Repeating Low-Pressure Turbine Stages Part 1: Computational Opportunities and Methodology

Giannini G; Pacciani R; Marconcini M; Arnone A
In press

Abstract

The trend towards more compact and efficient low-pressure turbine (LPT) designs can benefit substantially from advanced numerical predictive tools. The complex transitional and turbulent nature of the unsteady flows seen in LPTs often demands high-fidelity methods such as Large Eddy Simulation (LES) for accurate prediction of turbine efficiency and loss generation. Integrating high-fidelity simulations into design cycles, which are predominantly driven by rapid Unsteady Reynolds-Averaged Navier–Stokes (URANS) calculations, requires cutting-edge numerical tools able to leverage modern high-performance computing architectures. In Part I of this paper, a multi-fidelity simulation framework is presented that maximizes the computational output of the latest high-performance architectures by fully occupying the hardware with concurrent LES and URANS simulations. The continuous rise in the computational power of supercomputing facilities has primarily been driven by advances in Graphics Processing Units (GPUs). A single GPU has thousands of parallel computing threads, offering substantial parallelism and enabling the rapid processing of large datasets. With dedicated software, GPUs are ideally suited to performing LES, which requires extensive grid counts for adequate resolution. In contrast, modern Central Processing Units (CPUs) typically have between 32 and 64 cores, offering lower throughput than GPUs, and are better suited to less compute-intensive tasks, such as Reynolds-Averaged Navier–Stokes calculations. A typical supercomputer node consists of one or two multi-core CPUs paired with up to eight GPUs. While previous studies have focused on numerical methods optimized for either CPU or GPU architectures, this paper introduces a novel multi-fidelity approach that uses both the CPUs and GPUs on the same node concurrently, increasing the utilization of modern supercomputers.
In this framework, LES cases are executed on the GPUs, with each GPU requiring a dedicated CPU core to handle host operations such as reading and writing data. Typically, the remaining CPU cores on the supercomputer node would sit idle, wasting computational resources. In contrast, the proposed computational workflow allows multiple rapid URANS simulations, or an additional low-Reynolds-number LES case, to run on the otherwise-idle CPU cores of the same node, in parallel with the GPU-based LES. As a result, this approach maximizes the efficiency and computational output of the entire node, delivering high-fidelity and low-fidelity results simultaneously.
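The node-level resource split described in the abstract can be sketched as a simple core-allocation scheme. The function below is purely illustrative and not the paper's actual implementation; the node configuration (128 cores, 8 GPUs) and all names (`plan_node`, `NodePlan`) are assumptions chosen for the example.

```python
from dataclasses import dataclass


@dataclass
class NodePlan:
    les_host_cores: list        # one dedicated host core per GPU-resident LES rank
    urans_core_groups: list     # remaining cores, partitioned across URANS runs


def plan_node(total_cores: int, n_gpus: int, n_urans_jobs: int) -> NodePlan:
    """Reserve one host core per GPU for the LES ranks, then split the
    otherwise-idle cores evenly across concurrent CPU-based URANS cases."""
    if total_cores < n_gpus + n_urans_jobs:
        raise ValueError("not enough cores for the requested jobs")
    les_cores = list(range(n_gpus))            # cores 0..n_gpus-1 drive the GPUs
    idle = list(range(n_gpus, total_cores))    # cores that would otherwise sit idle
    per_job = len(idle) // n_urans_jobs
    groups = [idle[i * per_job:(i + 1) * per_job] for i in range(n_urans_jobs)]
    return NodePlan(les_cores, groups)


# Example: a hypothetical node with two 64-core CPUs and 8 GPUs, running
# one GPU-based LES alongside 4 concurrent URANS cases on the idle cores.
plan = plan_node(total_cores=128, n_gpus=8, n_urans_jobs=4)
print(len(plan.les_host_cores), [len(g) for g in plan.urans_core_groups])
```

Under these assumed numbers, 8 host cores serve the GPUs and the remaining 120 cores are split into four 30-core URANS partitions, so the whole node stays busy.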
In press
Proceedings of the ASME Turbo Expo 2025: Turbomachinery Technical Conference and Exposition.
ASME Turbo Expo 2025 Turbomachinery Technical Conference and Exposition
Memphis, TN, USA
June 16–20, 2025
Goal 7: Affordable and clean energy
Rosenzweig M, Kozul M, Sandberg RD, Giannini G, Pacciani R, Marconcini M, Arnone A
Files in this item:
There are no files associated with this item.

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2158/1415941
Citations
  • Scopus ND