Fast Optimal Task Graph Scheduling by Means of an Optimized Parallel A*-Algorithm

The development of high-speed local networks and inexpensive yet powerful PCs has led to the extensive use of PCs as building blocks in modern parallel computer systems. To exploit the available resources as well as possible, a program has to be split into tasks that can be executed in parallel, and these tasks have to be scheduled onto the available processing elements. The need for data communication between these tasks leads to dependencies, which strongly affect the schedule. In this paper, we consider task graphs that take both computation and communication costs into account. For a completely meshed, homogeneous computing system with a fixed number of processing elements, we compute schedules of minimum length. Our contribution is the parallelization of an informed search algorithm for computing optimal schedules, based on the IDA*-algorithm, a memory-saving derivative of the well-known A*-algorithm. Due to its memory requirements, the A*-algorithm is restricted to task graph scheduling problems with a rather small number of tasks. In contrast, the IDA*-algorithm can compute optimal schedules for considerably more complex task graphs.
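
To illustrate why IDA* avoids the memory blow-up of A*, the following is a minimal, generic sketch of the IDA* control loop (not the authors' parallel implementation): it performs repeated depth-first searches bounded by f = g + h, so only the current path is kept in memory. The functions `h`, `successors`, and `is_goal` are placeholders for a problem-specific heuristic, state expansion, and goal test (e.g., a complete schedule in the task graph setting).

```python
import math

def ida_star(start, h, successors, is_goal):
    """Iterative-deepening A*: repeated depth-first searches bounded by f = g + h.

    `h` must be an admissible heuristic for the returned path to be optimal.
    `successors(node)` yields (child, cost) pairs; `is_goal(node)` tests for a solution.
    """
    bound = h(start)
    path = [start]

    def search(g):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                 # exceeds current bound; candidate for the next bound
        if is_goal(node):
            return "FOUND"
        minimum = math.inf
        for child, cost in successors(node):
            if child in path:        # avoid trivial cycles along the current path
                continue
            path.append(child)
            result = search(g + cost)
            if result == "FOUND":
                return "FOUND"
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = search(0)
        if result == "FOUND":
            return path              # optimal solution path (for an admissible h)
        if result == math.inf:
            return None              # search space exhausted, no solution
        bound = result               # raise the bound to the smallest exceeded f-value
```

Because each iteration stores only the path from the root to the current node, the memory footprint grows linearly with the search depth, whereas A* keeps an open list of all generated but unexpanded states.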