Efficient Primitives for Creating and Scheduling Parallel Computations


We give a brief overview of our ongoing work on developing expressive and efficient abstractions for parallel programming on multicore platforms. We propose a programming interface for expressing a parallel computation dynamically, as a directed acyclic graph (DAG) consisting of tasks and the dependencies between them. Because our interface lets the DAG take shape as the computation unfolds, the programmer can describe a variety of computations, including those expressible with existing parallel-computing paradigms, such as fork-join, spawn-sync, and parallel futures. In some applications, such as parallel load-balancing garbage collectors and graph-connectivity algorithms, performance can be improved by reducing the cost of synchronizing parallel tasks. Our interface gives the programmer a few critical building blocks to aid in reducing such synchronization costs. In particular, through the interface, the programmer can specify, on a per-task basis, the strategy that should be employed for detecting when tasks become ready. We have implemented our interface and a number of strategies in a C++ scheduler.


Umut A. Acar, Arthur Charguéraud, and Mike Rainey
DAMP: Workshop on Declarative Aspects and Applications of Multicore Programming, January 2012