From: A view of programming scalable data analysis: from clouds to exascale (Journal of Cloud Computing: Advances, Systems and Applications)
Programming Models | Languages | Libraries/APIs | Pros and Fallacies |
---|---|---|---|
Distributed memory | Charm++, Legion, High Performance Fortran (HPF), ECL, PaRSEC | MPI, BSP, Pig Latin, AllScale | Distributed-memory languages/APIs map very closely onto the exascale hardware model. Systems in this class explicitly account for communication latency; nevertheless, data-exchange costs remain the main source of overhead. Except for AllScale and some MPI versions, systems in this class do not manage network and CPU failures. |
Shared memory | TBB, Cilk++ | OpenMP, OmpSs | Shared-memory models do not map efficiently onto exascale systems; extensions have been proposed to better handle synchronization and network failures, but no single convincing solution exists so far. |
Partitioned memory | UPC, Chapel, X10, CAF | GA, SHMEM, DASH, OpenSHMEM, GASPI | The local-memory model is very useful, but combining it with global/shared-memory mechanisms introduces too much overhead. GASPI is the only system in this class that enables applications to recover from failures. |
Hybrid models | UPC + MPI, C++/MPI | MPI + OpenMP, Spark-MPI, FLUX, EMPI4Re, DPLASMA | Hybrid models ease the mapping onto hardware architectures; however, the runtimes of the combined programming systems compete for resources, making it hard to control concurrency and contention. Resilience mechanisms are harder to implement because of the mix of different constructs and data models. |