
Enabling Large-scale Scientific Workflows on Petascale Resources Using MPI Master/Worker

Mats Rynge, Gideon Juve, Karan Vahi, Scott Callaghan, Gaurang Mehta, Philip J. Maechling, & Ewa Deelman

Published July 2012, SCEC Contribution #1636

Computational scientists often need to execute large, loosely coupled parallel applications such as workflows and bags of tasks in order to do their research. These applications are typically composed of many short-running, serial tasks, but frequently demand large amounts of computation and storage. In order to produce results in a reasonable time, scientists would like to execute these applications using petascale resources. In the past this has been a challenge because petascale systems are not designed to execute such workloads efficiently. In this paper we describe a new approach to executing large, fine-grained workflows on distributed petascale systems. Our solution involves partitioning the workflow into independent subgraphs, and then submitting each subgraph as a self-contained MPI job to remote resources. We describe how the partitioning and job management have been implemented in the Pegasus Workflow Management System. We also explain how this approach provides an end-to-end solution for challenges related to system architecture, queue policies and priorities, and application reuse and development. Finally, we describe how the system is being used to enable the execution of a very large seismic hazard analysis application on XSEDE resources.
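
The following is a minimal sketch of the MPI master/worker pattern the abstract refers to: rank 0 hands out serial task indices and the remaining ranks execute them and report back. The task count and the run_task() body are illustrative placeholders, not the actual Pegasus implementation (pegasus-mpi-cluster), which handles subgraph parsing, I/O, and fault tolerance.

    /* Sketch: MPI master/worker executing many short serial tasks in one job. */
    #include <mpi.h>
    #include <stdio.h>

    #define NUM_TASKS 100   /* placeholder: tasks in one workflow subgraph */
    #define TAG_WORK  1
    #define TAG_DONE  2

    static void run_task(int task_id) {
        /* Placeholder for executing one short-running serial task. */
        printf("executing task %d\n", task_id);
    }

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                       /* master */
            int next = 0, active = 0, result;
            MPI_Status status;
            /* Seed every worker with an initial task (or -1 if none remain). */
            for (int w = 1; w < size; w++) {
                int task = (next < NUM_TASKS) ? next++ : -1;
                MPI_Send(&task, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                if (task >= 0) active++;
            }
            /* Hand out remaining tasks as workers finish. */
            while (active > 0) {
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, TAG_DONE,
                         MPI_COMM_WORLD, &status);
                active--;
                int task = (next < NUM_TASKS) ? next++ : -1;
                MPI_Send(&task, 1, MPI_INT, status.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
                if (task >= 0) active++;
            }
        } else {                               /* worker */
            int task;
            while (1) {
                MPI_Recv(&task, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                if (task < 0) break;           /* -1 signals no more work */
                run_task(task);
                MPI_Send(&task, 1, MPI_INT, 0, TAG_DONE, MPI_COMM_WORLD);
            }
        }

        MPI_Finalize();
        return 0;
    }

To the batch scheduler this appears as a single large MPI job, which is what makes the fine-grained workload acceptable on petascale systems with policies that favor large parallel jobs.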

Citation
Rynge, M., Juve, G., Vahi, K., Callaghan, S., Mehta, G., Maechling, P. J., & Deelman, E. (2012, July). Enabling Large-scale Scientific Workflows on Petascale Resources Using MPI Master/Worker. Oral Presentation at XSEDE Conference 2012. doi: 10.1145/2335755.2335846.