Please use this identifier to cite or link to this item:
http://hdl.handle.net/10995/92459
Title: Hydrodynamical Simulation of Astrophysical Flows: High-Performance GPU Implementation
Authors: Akimova, E.; Misilov, V.; Kulikov, I.; Chernykh, I.
Issue Date: 2019
Publisher: Institute of Physics Publishing
Citation: Akimova E. Hydrodynamical Simulation of Astrophysical Flows: High-Performance GPU Implementation / E. Akimova, V. Misilov, I. Kulikov, I. Chernykh. — DOI 10.1088/1742-6596/1336/1/012014 // Journal of Physics: Conference Series. — 2019. — Vol. 1336. — Iss. 1. — 012014.
Abstract: We present a new hydrodynamical code, GPUPEGAS 2.0, for 3D simulation of astrophysical flows on GPUs. The code is an extension of the GPUPEGAS code developed in 2014 for simulating interacting galaxies. GPUPEGAS 2.0 is based on the authors' numerical method, which has a high order of accuracy for smooth solutions and low dissipation at discontinuities. The high order of accuracy and low dissipation are achieved by using a piecewise-linear representation of the physical variables in each dimension. The Rusanov flux makes the solution of the Riemann problem easy to vectorize. The code was implemented for the cluster supercomputers NKS-30T (Siberian Supercomputer Center, SB RAS) and Uran (Institute of Mathematics and Mechanics, UrB RAS) using the hybrid MPI+CUDA approach. To avoid compute-capability-specific implementations of reduction routines, the Thrust library was used. Optimal kernel launch parameters were found for the three-dimensional computational grid. The Sedov point blast problem was used as the main test problem. A numerical experiment was performed to simulate the hydrodynamics of a type II supernova explosion on a 256³ grid. A set of experiments was performed to study the performance and scalability of the developed code. A performance of 25 GFLOPS was achieved on a single Tesla M2090 GPU, and a threefold speedup was achieved on a node with 4 GPUs. With 16 GPUs, 70% scaling efficiency was achieved. © 2019 IOP Publishing Ltd. All rights reserved.
Keywords: ASTROPHYSICS; GRAPHICS PROCESSING UNIT; NUMERICAL METHODS; NUMERICAL MODELS; PIECEWISE LINEAR TECHNIQUES; PROGRAM PROCESSORS; SCALABILITY; SUPERCOMPUTERS; SUPERNOVAE; ASTROPHYSICAL FLOWS; GPU IMPLEMENTATION; NUMERICAL EXPERIMENTS; PERFORMANCE AND SCALABILITIES; PHYSICAL VARIABLES; PIECEWISE LINEAR REPRESENTATION; SUPERNOVA EXPLOSION; THREE-DIMENSIONAL COMPUTATIONS; MAGNETOHYDRODYNAMICS
URI: http://hdl.handle.net/10995/92459
Access: info:eu-repo/semantics/openAccess
SCOPUS ID: 85076218260
PURE ID: 11444387
ISSN: 1742-6588
DOI: 10.1088/1742-6596/1336/1/012014
Sponsorship: The work of Igor Kulikov and Igor Chernykh was supported by the Russian Science Foundation (project no. 18-11-00044).
RSCF project card: 18-11-00044
Appears in Collections: Scientific publications indexed in Scopus and WoS CC
Files in This Item:
File | Size | Format
---|---|---
10.1088-1742-6596-1336-1-012014.pdf | 2.38 MB | Adobe PDF