Parallel multigrid on hierarchical hybrid grids: a performance study on current high performance computing clusters
Author(s): Gmeiner B, Köstler H, Stürmer M, Rüde U
Publication year: 2014
Journal issue: 1
Pages range: 217-240
This article studies the performance and scalability of a geometric multigrid solver implemented within the hierarchical hybrid grids (HHG) software package on current high performance computing clusters with up to nearly 300,000 cores. HHG is based on unstructured tetrahedral finite elements that are regularly refined to obtain a block-structured computational grid. One challenge is the parallel mesh generation from an unstructured input grid that roughly approximates a human head within a 3D magnetic resonance imaging data set. This grid is then regularly refined to create the HHG grid hierarchy. As test platforms, a BlueGene/P cluster located at the Jülich Supercomputing Centre and an Intel Xeon 5650 cluster located at the local computing center in Erlangen are chosen. To estimate the quality of our implementation and to predict the runtime of the multigrid solver, a detailed performance and communication model is developed and used to evaluate the measured single-node performance, as well as weak and strong scaling experiments on both clusters. Thus, for a given problem size, one can predict the number of compute nodes that minimizes the overall runtime of the multigrid solver. Overall, HHG scales up to the full machines, where the largest linear system solved on Jugene had more than one trillion unknowns. Copyright © 2012 John Wiley & Sons, Ltd.
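The solver studied in the article is a geometric multigrid method: smoothing on the fine grid, restriction of the residual to coarser grids, a coarse-grid solve, and interpolation of the correction back up. As an illustrative sketch only, and not the HHG implementation (which operates in parallel on regularly refined tetrahedral meshes), a minimal serial V-cycle for a 1D Poisson problem with weighted Jacobi smoothing might look like:

```python
import numpy as np

def smooth(u, f, h, iters=2, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing for the 1D Poisson stencil (-1, 2, -1)/h^2."""
    for _ in range(iters):
        # NumPy evaluates the right-hand side fully before assigning,
        # so this is a true Jacobi (not Gauss-Seidel) sweep.
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    """r = f - A u with homogeneous Dirichlet boundaries."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction to the next coarser grid."""
    rc = np.zeros((r.size - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec, n_fine):
    """Linear interpolation of the coarse correction to the finer grid."""
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    if u.size <= 3:                        # coarsest grid: solve directly
        u[1] = 0.5 * (h * h * f[1] + u[0] + u[2])
        return u
    u = smooth(u, f, h)                    # pre-smoothing
    r = residual(u, f, h)
    ec = v_cycle(np.zeros((r.size - 1) // 2 + 1), restrict(r), 2 * h)
    u += prolong(ec, u.size)               # coarse-grid correction
    return smooth(u, f, h)                 # post-smoothing

if __name__ == "__main__":
    # Solve -u'' = pi^2 sin(pi x) on [0,1], u(0)=u(1)=0; exact u = sin(pi x).
    n = 65
    x = np.linspace(0.0, 1.0, n)
    h = 1.0 / (n - 1)
    f = np.pi ** 2 * np.sin(np.pi * x)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print(np.max(np.abs(u - np.sin(np.pi * x))))
```

The key property the paper exploits at scale is that each V-cycle reduces the algebraic error by a roughly constant factor independent of the grid size, so the work per unknown stays bounded as the problem grows.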
How to cite
APA: Gmeiner, B., Köstler, H., Stürmer, M., & Rüde, U. (2014). Parallel multigrid on hierarchical hybrid grids: a performance study on current high performance computing clusters. Concurrency and Computation: Practice and Experience, 26(1), 217-240. https://dx.doi.org/10.1002/cpe.2968
MLA: Gmeiner, Björn, et al. "Parallel multigrid on hierarchical hybrid grids: a performance study on current high performance computing clusters." Concurrency and Computation: Practice and Experience 26.1 (2014): 217-240.