General Chairs

Sridutt Bhalachandra, Lawrence Berkeley National Laboratory, USA
Sunita Chandrasekaran, University of Delaware, USA

Guido Juckeland, HZDR, Germany

Program Chairs

Christopher Daley, Lawrence Berkeley National Laboratory, USA
Jose M Monsalve Diaz, Argonne National Laboratory, USA
Verónica G. Melesse Vergara, Oak Ridge National Laboratory, USA

COVID-19 Update: SC22 Planning for an In-person Event

SC22 is planning for an in-person event. Our workshop is scheduled for Friday, 18 November 2022, from 8:30 am to 12 pm CST.

About the Workshop

The inclusion of accelerators in HPC systems is well established, and there are many examples of successful deployments today. Looking forward, we expect this trend to continue: accelerators will become even more widely used, a larger fraction of system compute capability will be delivered by accelerators, and components within a compute node will be even more tightly coupled. This change in the HPC system landscape has been enabled by the increasing capability and usability of accelerators such as GPUs. Technology enablers include higher-bandwidth memory technologies with larger capacities that fit more of an application’s working set, hardware-managed caches, and the ability to access CPU data without explicit data management. As a result, scientific software developers are offered a rich platform for exploiting the multiple levels of parallelism in their applications.

In today’s HPC environment, systems with heterogeneous node architectures providing multiple levels of parallelism are omnipresent. Further, the next generation of systems may feature GPU-like accelerators combined with other accelerators to provide improved performance for a wider variety of application kernels. This would introduce further complexity for application programmers, because different programming languages and frameworks (e.g., CUDA and HIP) may be required for each architectural component in a compute node. This type of specialization complicates maintenance and portability to other systems. Thus, the importance of programming approaches that can provide performance, scalability, and portability while exploiting the maximum available parallelism is increasing. It is highly desirable that programmers be able to keep a single code base to ease maintenance and avoid the need to debug and optimize multiple versions of the same code.

Exploiting the maximum available parallelism from such systems necessitates refactoring applications and using a programming approach that can make use of the accelerators. Historically, the favored portable approaches, and the sole focus of our earlier workshops, were OpenMP offloading and OpenACC, both based on directives. Today, we recognize the evolution of other options for adapting to heterogeneity: starting in 2021, we extended the workshop to include the use of standard Fortran/C++, SYCL, DPC++, Kokkos, and RAJA, which are among several alternatives that can provide scalable as well as portable solutions without compromising performance. Programmers expect the software community to deliver solutions that allow maintenance of a single code base whenever possible, avoiding duplicate effort across programming models and architectures.

Software abstraction-based programming models such as OpenMP and OpenACC have been serving this purpose over the past several years and are likely to represent one path forward. These programming models address the ‘X’ component in a hybrid MPI+X programming approach by providing programmers with high-level directives and delegating some of the burden to the compiler. With the increased importance of the other programming models considered in this workshop (e.g., SYCL, DPC++, Kokkos, and RAJA), there may be further challenges and opportunities in efficiently distributing computations across multiple nodes.

Our intent is to share methods and case studies demonstrating programmability, performance, and performance portability across architectures for a multitude of distributed HPC, data, and AI workloads.

Workshop Important Deadlines (Tentative)

  • Paper Submission Deadline: August 26, 2022
  • Author notification: September 30, 2022
  • Workshop Ready Deadline: October 31, 2022
  • Camera Ready Deadline: December 9, 2022

Workshop Format

WACCPD is a workshop centered on peer-reviewed, published technical papers. The workshop will open with an invited talk in the morning; a second invited talk will start the afternoon session, and a panel will be held towards the end of the workshop. If the workshop is reduced to a half day, we will skip the afternoon session, i.e., the second invited talk and the panel, to leave enough room for paper presentations.

Impressions from the 2019 Workshop

