Recording of a plenary presentation during the PASC15 Conference. www.pasc15.org
Abstract
Algorithmic adaptations are required to use anticipated exascale hardware near its potential, since the existing code base has been engineered to squeeze out flops. Instead, algorithms must now squeeze out synchronizations, memory, and data transfers, while extra flops on locally cacheable data represent small costs in time and energy. Today's scalable solvers, in particular, rely on frequent global synchronizations. After decades of programming-model stability with bulk synchronous parallelism (BSP), new programming models and new algorithmic capabilities (to take advantage, e.g., of forays into data assimilation, inverse problems, and uncertainty quantification) must be co-designed with the hardware. We briefly recap the architectural constraints, highlight some work at KAUST, and outline future directions.
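As a minimal sketch of the kind of synchronization-hiding adaptation the abstract alludes to (not code from the talk), the following C fragment uses MPI-3's nonblocking collective MPI_Iallreduce to start the global reduction of a dot product and overlap it with independent local work, deferring the synchronization point until the reduced value is actually needed; the data and the local loop are placeholders.

/* Sketch: hide the latency of a global reduction behind local work. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Local contribution to a global dot product (placeholder data). */
    double local_dot = (double)(rank + 1);
    double global_dot = 0.0;

    /* Start the global reduction, but do not wait for it yet. */
    MPI_Request req;
    MPI_Iallreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* Independent local work (e.g., a sparse matrix-vector product on
       cached data) proceeds while the reduction is in flight. */
    double local_work = 0.0;
    for (int i = 0; i < 1000000; ++i)
        local_work += 1e-6 * i;

    /* Synchronize only when the reduced value is actually needed. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("global dot = %g (local work = %g)\n", global_dot, local_work);

    MPI_Finalize();
    return 0;
}

Pipelined Krylov methods apply the same idea inside the solver iteration, trading a few extra local flops for fewer exposed global synchronizations.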
Biography
David Keyes is Director of the Extreme Computing Research Center at KAUST. He earned a BSE in aerospace and mechanical sciences from Princeton in 1978 and a PhD in applied mathematics from Harvard in 1984. Keyes works at the interface between parallel computing and the numerical analysis of PDEs, with a focus on scalable implicit solvers. Newton-Krylov-Schwarz (NKS), Additive Schwarz Preconditioned Inexact Newton (ASPIN), and Algebraic Fast Multipole (AFM) are methods he helped name and popularize. Before joining KAUST as a founding dean in 2009, he led scalable solver software projects in the ASCI and SciDAC programs of the US Department of Energy, ran university collaboration programs at NASA's ICASE and LLNL's ISCR, and taught at Columbia, Old Dominion, and Yale Universities. He is a Fellow of SIAM and AMS.
Chair: Olaf Schenk (Università della Svizzera italiana, Switzerland)