Various Approaches to Supercomputer Modeling of Multi-Scale Astrophysics Challenges
Kulikov Igor
It is known that the simulation of an individual filament several tens of megaparsecs in size, resolved down to the scale of a normal galaxy (about ten kiloparsecs), requires a computational grid of the order of 10K^3 cells and is feasible today [1]. However, the gap between a detailed description of galaxies and the large-scale structure of the Universe is unlikely to be closed in the near future. At the Mind the Gap 2013 conference (Cambridge, UK), Professor Volker Springel noted in his talk that about 60 years would be needed to reach the resolution of a single star in cosmological simulations, provided the extensive growth of computing capacity continued. In the twelve years since, however, exascale performance has been achieved with a five-year delay, and zettascale computing will apparently become available closer to 2040. The problem is complicated by the lack of codes capable of using exascale supercomputers. In 2013, within the framework of the conference “Exascale Computing in Astrophysics” (Ascona, Switzerland), the challenges and a road map for developing core exascale codes were formulated. The fact that these problems have still not been fully resolved keeps exascale astrophysics conferences relevant in 2024.

Parallel programming technology is one of the main problems. In recent years, graphics accelerators [2], other accelerators [3], and vectorization of computations [4] have come into active use. Nevertheless, their use during development, and especially during maintenance and extension of a code, involves significant difficulties. Coarray Fortran technology [5] has become a true alternative to traditional parallel programming tools, allowing scalable parallel code with a complex architecture to be developed in a fairly short time (a minimal illustrative sketch is given below).

In addition to the extensive development of numerical modeling, which is associated mainly with parallel programming technologies, the simultaneous intensive development of computational codes through physics-based multi-scale solutions is equally important. Note that the “Mind the Gap” problem is characteristic not only of cosmological modeling but also arises in modeling the star formation process, where it is addressed: by a special choice of computational grids [5], as in the white dwarf detonation mechanism; by constructing a subgrid function for the development of turbulent combustion of white dwarf material [6], as in scenarios of carbon combustion during an SNe Ia explosion; and by constructing approximate functions for the changes in isotope concentrations [7]. The approaches of various authors to solving the computational aspects of the “Mind the Gap” problem will be described in the report.
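To illustrate the Coarray Fortran programming model mentioned above, a minimal sketch (not taken from the codes cited in the report; the grid size, variable names, and 1D decomposition are illustrative assumptions) of a ghost-cell exchange between neighboring images, the basic communication pattern of a distributed grid-based hydrodynamics solver, might look as follows:

    ! Minimal illustrative sketch: 1D ghost-cell (halo) exchange in Coarray Fortran.
    ! Each image owns a slab u(1:n) plus two ghost cells; sizes and names are hypothetical.
    program halo_exchange
      implicit none
      integer, parameter :: n = 1024        ! local cells per image (illustrative)
      real :: u(0:n+1)[*]                   ! coarray: one slab with ghost cells per image
      integer :: me, np
      me = this_image()
      np = num_images()
      u(1:n) = real(me)                     ! dummy initialization of the interior
      sync all                              ! all images finish initializing before remote reads
      if (me > 1)  u(0)   = u(n)[me-1]      ! pull the right edge of the left neighbor
      if (me < np) u(n+1) = u(1)[me+1]      ! pull the left edge of the right neighbor
      sync all
      if (me == 1) print *, 'halo exchange completed on', np, 'images'
    end program halo_exchange

Such one-sided coindexed reads replace explicit send/receive pairs, which is one reason a Coarray code can remain compact even as the communication pattern of the solver grows more complex.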
This work was supported by the Russian Science Foundation (project 23-11-00014).
[1] Federrath Ch. The turbulent formation of stars // Physics Today. – 2018. – V. 71 (6). – P. 38-42.
[2] Kulikov I. GPUPEGAS: a new GPU-accelerated hydrodynamic code for numerical simulations of interacting galaxies // The Astrophysical Journal Supplement Series. – 2014. – V. 214. – Article Number 12.
[3] Kulikov I., Chernykh I., Snytnikov A., Glinskiy B., Tutukov A. AstroPhi: A code for complex simulation of the dynamics of astrophysical objects using hybrid supercomputers // Computer Physics Communications. – 2015. – V. 186. – P. 71-80.
[4] Kulikov I., Chernykh I., Tutukov A. A New Hydrodynamic Code with Explicit Vectorization Instructions Optimizations that Is Dedicated to the Numerical Simulation of Astrophysical Gas Flow // The Astrophysical Journal Supplement Series. – 2019. – V. 243. – Article Number 4.