When is enough, enough? With so many Parallel Programming Technologies, is it Time to Focus on Consolidating them?
Programming Models & Languages
Scientific Software Development
Time: Wednesday, June 19th, 4pm - 5pm
Description: When it comes to parallel programming technologies, most people in the HPC community agree that the most popular ones are not ideal; however, that’s about all we can agree on! Whether it be classical HPC technologies such as MPI and OpenMP, those built on explicit parallel models such as OmpSs, Legion, GPI-Space, UPC++, Charm++, HPX, Chapel, and GASPI, those targeting accelerators such as OpenACC, OpenCL, and CUDA, or domain-specific languages, there are very many choices when it comes to writing parallel code. But all of these require significant investment from an application programmer: not just the effort to learn them, but also the risks associated with adopting them for their application. So perhaps it is unsurprising that, even though there are very many programming options, developers still frequently opt for the lowest common denominator of basic OpenMP or MPI v1.

As the saying goes, better the devil you know: even though classical parallel technologies might not be perfect, at least their ubiquity means that they are well supported, their future is assured, and programmers to some extent know what to expect. There is no single silver-bullet technology, and whilst it can be argued that a choice of parallel programming models is advantageous, crucially this approach spreads the community’s effort rather thinly.

This panel will focus on whether we should be looking more closely at consolidating and combining existing parallel programming technologies, on standardisation to enable better interoperability, and on what sort of parallel programming technologies we as a community should be getting behind.