Publication
HPCC/SmartCity/DSS 2013
Conference paper

Data decomposition for code parallelization in practice: What do the experts need?

Abstract

Parallelizing serial software systems so that they can run in a High Performance Computing (HPC) environment presents many challenges to developers. In particular, the extant literature suggests that the task of decomposing large-scale data applications is especially complex and time-consuming. To take stock of the state of practice of data decomposition in HPC, we conducted a two-phase study. First, using focus-group methodology, we conducted an exploratory study at a software laboratory with an established track record in HPC. Based on the findings of this first phase, we designed a survey to assess the state of practice among experts in this field around the world. Our study shows that approximately 75% of parallelized applications use some form of data decomposition. Furthermore, data decomposition was found to be the most challenging phase in the parallelization process, consuming approximately 40% of the total time. A key finding of our study is that experts do not use any of the available tools and formal representations and are, in fact, not aware of them. We discuss why existing tools have not been adopted in industry and, based on our findings, provide a number of recommendations for future tool support. © 2013 IEEE.
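The study centers on data decomposition: splitting a large data set into chunks that workers process independently before combining partial results. As a minimal illustrative sketch (not taken from the paper), a block decomposition of an array sum in Python might look like:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker processes only its own slice of the data.
    return sum(chunk)

def decomposed_sum(data, n_workers=4):
    # Block decomposition: split the input into roughly equal
    # contiguous chunks, one per worker, then reduce the partial sums.
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(decomposed_sum(list(range(100))))  # 4950, same as sum(range(100))
```

Block decomposition is only one choice; cyclic or irregular decompositions trade load balance against communication cost, which is part of why the paper reports the phase as so time-consuming.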

Date

13 Nov 2013
