StarPU Handbook
Bitmap | This section describes the bitmap facilities provided by StarPU |
Codelet And Tasks | This section describes the interface to manipulate codelets and tasks |
CUDA Extensions | |
Data Interfaces | |
Data Management | This section describes the data management facilities provided by StarPU. We show how to use existing data interfaces in Data Interfaces, but developers can design their own data interfaces if required |
Out Of Core | |
Data Partition | |
Expert Mode | |
Explicit Dependencies | |
FFT Support | |
FxT Support | |
Implicit Data Dependencies | In this section, we describe how StarPU makes it possible to insert implicit task dependencies in order to enforce sequential data consistency. When this data consistency is enabled on a specific data handle, any data access will appear as sequentially consistent from the application. For instance, if the application submits two tasks that access the same piece of data in read-only mode, and then a third task that accesses it in write mode, dependencies will be added between the first two tasks and the third one. Implicit data dependencies are also inserted in the case of data accesses from the application. A minimal sketch is given after this index
Initialization and Termination | |
Insert_Task | |
Theoretical Lower Bound on Execution Time | Compute a theoretical upper bound on computation efficiency, that is, a lower bound on execution time, corresponding to some actual execution. A short sketch is given after this index
MIC Extensions | |
Miscellaneous Helpers | |
Modularized Scheduler Interface | |
MPI Support | |
Multiformat Data Interface | |
OpenCL Extensions | |
OpenMP Runtime Support | This section describes the interface provided for implementing OpenMP runtimes on top of StarPU |
Parallel Tasks | |
Performance Model | |
Profiling | |
Running Drivers | |
SCC Extensions | |
Scheduling Contexts | StarPU permits, on the one hand, grouping workers into combined workers in order to execute a parallel task and, on the other hand, grouping tasks into bundles that will be executed by a single specified worker. In contrast, when workers are grouped into a scheduling context, StarPU tasks are submitted to that context and scheduled with the policy assigned to it. Scheduling contexts can be created, deleted and modified dynamically; a short creation sketch is given after this index
Scheduling Policy | While StarPU comes with a variety of scheduling policies (see Task Scheduling Policy), it may sometimes be desirable to implement custom policies to address specific problems. The API described below allows users to write their own scheduling policies
Standard Memory Library | |
Task Bundles | |
Task Lists | |
Threads | This section describes the thread facilities provided by StarPU. The thread functions are implemented either on top of the pthread library or on top of the SimGrid library when the simulated performance mode is enabled (SimGrid Support)
Toolbox | The following macros make GCC extensions portable, so that the code can be compiled with any C compiler
StarPU-Top Interface | |
Tree | This section describes the tree facilities provided by StarPU |
Versioning | |
Workers’ Properties | |
Scheduling Context Hypervisor - Building a new resizing policy | |
Scheduling Context Hypervisor - Regular usage |
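
Illustrating the Implicit Data Dependencies entry above: the sketch below submits two read-only tasks followed by a write task on the same handle and relies on sequential data consistency to obtain the dependencies described there. The codelets cl_read and cl_write and the helper submit_chain() are hypothetical; the StarPU calls themselves (starpu_data_set_sequential_consistency_flag(), starpu_task_insert(), starpu_task_wait_for_all()) are standard, but this is a sketch rather than handbook code.

    /* Two read-only accesses to the same handle, then a write access:
     * with sequential data consistency enabled, StarPU automatically adds
     * dependencies from both read tasks to the write task. */
    #include <starpu.h>

    extern struct starpu_codelet cl_read;   /* hypothetical codelet, .modes[0] = STARPU_R */
    extern struct starpu_codelet cl_write;  /* hypothetical codelet, .modes[0] = STARPU_W */

    static void submit_chain(starpu_data_handle_t handle)
    {
        /* Sequential consistency is enabled by default; made explicit here. */
        starpu_data_set_sequential_consistency_flag(handle, 1);

        /* The two read tasks may run concurrently... */
        starpu_task_insert(&cl_read, STARPU_R, handle, 0);
        starpu_task_insert(&cl_read, STARPU_R, handle, 0);

        /* ...but the write task implicitly waits for both of them. */
        starpu_task_insert(&cl_write, STARPU_W, handle, 0);

        starpu_task_wait_for_all();
    }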
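
Illustrating the Theoretical Lower Bound on Execution Time entry above: the sketch below records the tasks of an actual run and emits the corresponding linear program, which an external solver such as lp_solve can turn into the bound. submit_application_tasks() stands for the application's own submission code and is hypothetical; starpu_bound_start(), starpu_bound_stop() and starpu_bound_print_lp() come from starpu_bound.h.

    /* Record the tasks of an actual execution and emit the linear program
     * whose optimum is the theoretical lower bound on execution time
     * (equivalently, the upper bound on computation efficiency). */
    #include <stdio.h>
    #include <starpu.h>
    #include <starpu_bound.h>

    extern void submit_application_tasks(void);   /* hypothetical application code */

    void run_and_bound(void)
    {
        /* Start recording; passing 1 for the two flags would also record
         * task dependencies and priorities for a tighter (but more
         * expensive to compute) bound. */
        starpu_bound_start(0, 0);

        submit_application_tasks();
        starpu_task_wait_for_all();

        starpu_bound_stop();

        /* Write the linear program; feed the resulting file to lp_solve. */
        FILE *out = fopen("bound.lp", "w");
        if (!out)
            return;
        starpu_bound_print_lp(out);
        fclose(out);
    }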
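
Illustrating the Scheduling Contexts entry above: the sketch below creates a context over two workers with its own scheduling policy, submits a task to it, then deletes it. It assumes the variadic form of starpu_sched_ctx_create() found in recent StarPU releases, together with starpu_task_submit_to_ctx(); the worker identifiers, the "dmda" policy name and the codelet cl are placeholders.

    /* Create a scheduling context restricted to workers 0 and 1, driven by
     * its own scheduling policy, submit one task to it, then delete it. */
    #include <starpu.h>

    extern struct starpu_codelet cl;   /* hypothetical codelet */

    void use_context(void)
    {
        int workers[2] = { 0, 1 };     /* placeholder worker identifiers */

        unsigned ctx = starpu_sched_ctx_create(workers, 2, "my_ctx",
                                               STARPU_SCHED_CTX_POLICY_NAME, "dmda",
                                               0);

        struct starpu_task *task = starpu_task_create();
        task->cl = &cl;

        /* Schedule this task inside the context instead of the default one. */
        starpu_task_submit_to_ctx(task, ctx);

        starpu_task_wait_for_all();
        starpu_sched_ctx_delete(ctx);
    }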