Large-scale simulation on high-performance computers is a major field of research at the interface between mathematics and computer science, motivated by the great challenges arising in domains such as structural analysis, fluid dynamics, electromagnetism, chemistry and geophysics. Enhancing the quality of numerical libraries is a major step toward the efficient use of current and next-generation supercomputers and is therefore essential to the analysis of large-scale problems with ever-increasing accuracy. This demands the development of novel algorithms capable of efficiently exploiting the architectural features of modern supercomputers and, at the same time, of programming models and techniques that can conveniently map these algorithms onto the underlying architecture. One research activity of the APO team is dedicated to the development of high-performance, parallel solvers for sparse linear systems, since this is often the most time-consuming computation in many large-scale simulations. While solving linear systems with a few million unknowns was a challenging task ten years ago, the target problems arising in many of the previously mentioned applications can today lead to linear systems with more than a billion unknowns.
Within this framework, the APO team is actively involved in the MUMPS (MUltifrontal Massively Parallel Solver) project in collaboration with CERFACS (Toulouse), ENS-Lyon (LIP) and Université de Bordeaux. MUMPS performs Gaussian elimination to compute the solution of sparse linear systems on distributed-memory machines. The MUMPS project started in 1996 within PARASOL, an Esprit IV European project; the solver was so named because it targets massive parallelism using the MPI message-passing standard. The first features of MUMPS were developed to meet the requirements of the industrial partners, most of whom worked with finite-element models. Since then, MUMPS has evolved into an efficient, general-purpose and reliable tool providing a rich set of unique features. Today MUMPS is both a software platform and a research project, providing solid ground for the development of many new and innovative techniques for the parallel solution of large sparse linear systems. MUMPS is freely distributed under the CeCILL-C license (compatible with the LGPL) at the following addresses: http://mumps.enseeiht.fr/ and http://graal.ens-lyon.fr/MUMPS/. The software is downloaded more than 1000 times per year, installed in many national and international supercomputing centers and included in the software repositories of the most common Linux distributions.
The team is the main developer of qr_mumps, a direct solver for sparse linear systems based on the multifrontal QR factorization. qr_mumps is a parallel, multithreaded software package based on the OpenMP standard and is specifically designed for multicore architectures. Parallelism is achieved by dividing the workload into fine-grained tasks that are arranged in a Directed Acyclic Graph (DAG). The execution of these tasks is driven by an asynchronous, dynamic data-flow programming model which provides high efficiency and scalability. The package is distributed under the LGPL license.
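The data-flow execution model described above can be illustrated with a minimal sketch: a task becomes ready, and may run, as soon as all of its predecessors in the DAG have completed, with no global synchronization between phases. This is a conceptual illustration only, in Python with a hypothetical tiny DAG; it is not qr_mumps's actual runtime, which is built on the OpenMP standard.

```python
# Minimal data-flow scheduler: a task runs as soon as all of its
# predecessors in the DAG have completed. The DAG below is hypothetical;
# in a multifrontal QR factorization the tasks would be fine-grained
# kernels (panel factorizations, updates, assemblies) on frontal matrices.
from collections import deque

def dataflow_execute(dag):
    """dag maps task -> set of tasks it depends on; returns execution order."""
    remaining = {task: set(deps) for task, deps in dag.items()}
    # Reverse edges: which tasks are waiting on each task.
    waiters = {task: [] for task in dag}
    for task, deps in dag.items():
        for d in deps:
            waiters[d].append(task)
    ready = deque(t for t, deps in remaining.items() if not deps)
    order = []
    while ready:
        t = ready.popleft()   # in a parallel runtime, any ready task may run
        order.append(t)
        for w in waiters[t]:  # completing t may unlock its successors
            remaining[w].discard(t)
            if not remaining[w]:
                ready.append(w)
    return order

# Hypothetical DAG: two independent "factor" tasks feeding an "update",
# which feeds a final "assemble" task.
dag = {"factor1": set(), "factor2": set(),
       "update": {"factor1", "factor2"},
       "assemble": {"update"}}
order = dataflow_execute(dag)
print(order)
```

Note that the two independent factor tasks could execute concurrently: the scheduler only enforces the DAG edges, which is what allows the asynchronous, dynamic execution mentioned above.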
APO is also pursuing the study and development of novel, hybrid solution techniques for large-scale sparse linear systems, specifically the Block-Cimmino method, an iterative row-projection method in which the original linear system is divided into subsystems. At every iteration, the method computes one projection per subsystem and uses these projections to construct an approximation to the solution of the linear system. The Block-Cimmino method is a linear stationary iterative method with a symmetric positive definite (SPD) iteration matrix; its rate of convergence can therefore be accelerated by the Block Conjugate Gradient (Block-CG) method. The main target of this research is the implementation of a parallel distributed Block-Cimmino method in which the Cimmino iteration matrix is used as a preconditioner for Block-CG.
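As a concrete illustration of the projection step, the basic (unaccelerated) Cimmino iteration with one row per block can be sketched in a few lines of plain Python. Each iteration sums the projections of the residual onto the hyperplanes defined by the rows, then takes an averaged, relaxed step. The tiny 2x2 system and the relaxation parameter below are illustrative assumptions, not part of the parallel Block-Cimmino solver itself, which works with general row blocks and Block-CG acceleration.

```python
# Cimmino iteration with single-row blocks (illustrative sketch):
#   x_{k+1} = x_k + (omega/m) * sum_i ((b_i - a_i.x_k) / ||a_i||^2) * a_i
# Each summand projects the current residual onto the hyperplane of row i.

def cimmino(A, b, omega=1.0, iters=200):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    norms2 = [sum(a * a for a in row) for row in A]  # ||a_i||^2 per row
    for _ in range(iters):
        step = [0.0] * n
        for row, bi, nrm2 in zip(A, b, norms2):
            ri = bi - sum(a * xj for a, xj in zip(row, x))  # row residual
            for j in range(n):
                step[j] += (ri / nrm2) * row[j]             # projection
        x = [xj + (omega / m) * sj for xj, sj in zip(x, step)]
    return x

# Tiny consistent system with exact solution (1, 1) -- an assumed example.
A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 4.0]
x = cimmino(A, b)
print(x)  # converges toward [1.0, 1.0]
```

Because the resulting iteration matrix is SPD, this slow stationary scheme is exactly the kind of iteration that Block-CG can accelerate, which motivates using the Cimmino iteration matrix as a preconditioner as described above.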
The APO team is also deeply involved in GridTLSE (http://gridtlse.org), an expertise platform which aims to provide access to a range of sparse direct solvers and assists users in selecting the most appropriate solver for their problem. GridTLSE uses grid-computing technology to perform the related computations: determining the most appropriate values for the input parameters of a specific sparse linear solver is quite complex and combinatorial by nature, which makes grid computing very attractive. The platform was designed using the Distributed Interactive Engineering Toolbox (DIET) grid middleware to manage the various sparse solvers and tools installed over heterogeneous computers. GridTLSE has been supported by ANR-SOLSTICE (ANR-06-CIS6-01, 2007-2010), LEGO (ANR-05-CIGC-11, 2005-2009) and ANR Cosinus COOP (ANR-09-COSI-001-04, 2009-2011) and is now used in production environments.
All of the above activities are conducted with several national and international collaborators: CERFACS (in the context of the IRIT-CERFACS joint laboratory), INRIA - ENS-Lyon, Université de Bordeaux, Lawrence Berkeley National Laboratory, the SEISCOPE consortium, Università di Roma “Tor Vergata”, University of Padua and University of Strathclyde. In addition to the support of public programs (France-Berkeley, European Esprit IV, Egide-Aurora, France-Israël and French ANR), our research was made possible thanks to the collaboration or support of industrial partners (CNES, CEDRAT, EADS, EDF, ESI Group, Free Field Technologies, Samtech S.A., TOTAL, VIBRATEC). Finally, we thank the institutions that have provided access to their parallel machines: Centre Informatique National de l'Enseignement Supérieur (CINES), CERFACS, CICT-CALMIP (Centre Interuniversitaire de Calcul de Toulouse), Institut du Développement et des Ressources en Informatique Scientifique (IDRIS), Lawrence Berkeley National Laboratory, Laboratoire de l'Informatique du Parallélisme, INRIA Rhône-Alpes, INRIA Bordeaux - Sud-Ouest and PARALLAB.