yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects(objects, njobs=0, storage=None, barrier=True, dynamic=False)

This function dispatches components of an iterable to different processors.
The parallel_objects function accepts an iterable, objects, and, based on the number of jobs requested and the number of available processors, decides how to dispatch individual objects to processors or sets of processors. This can implicitly include multi-level parallelism, such that the processor group assigned to each object can be composed of several or even hundreds of processors. A storage dictionary can also be supplied to collate results at the end of the iteration loop.
Calls to this function can be nested (see the last example below).
This should not be used to iterate over datasets; DatasetSeries provides a much nicer interface for that.
Parameters

objects : iterable
    The set of objects to dispatch to individual processors or processor groups.
njobs : int
    The number of jobs to launch; by default (njobs=0), one job is dispatched per available processor.
storage : dict
    If supplied, this dictionary is filled during the iteration with whatever each task assigns to its result attribute, keyed by object index, so that results can be collated at the end of the loop.
barrier : bool
    Whether to place an MPI barrier at the end of the iteration.
dynamic : bool
    Whether to use dynamic load balancing, in which one processor acts as a task server and hands out objects as workers become free.
Examples
Here is a simple example of iterating over a set of centers and making slice plots centered at each.
>>> for c in parallel_objects(centers):
...     SlicePlot(ds, "x", "Density", center=c).save()
...
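For context, here is a minimal sketch of what a complete script around the example above might look like. The dataset path and center coordinates are placeholders; the yt.enable_parallelism() call (which requires mpi4py) is what activates parallel dispatch when the script is launched under MPI, e.g. with something like mpirun -np 8 python script.py.
>>> import yt
>>> yt.enable_parallelism()  # no-op in a serial run; enables MPI parallelism otherwise
>>> ds = yt.load("galaxy0030/galaxy0030")  # placeholder dataset path
>>> centers = [[0.5, 0.5, 0.5], [0.25, 0.25, 0.25], [0.75, 0.75, 0.75]]
>>> for c in yt.parallel_objects(centers):
...     # each center is handled by a different processor (or processor group)
...     yt.SlicePlot(ds, "x", "Density", center=c).save()
...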
Here is an example of calculating the angular momentum vector of a set of spheres, this time using four jobs, each composed of multiple processors. Note that we also store the results.
>>> storage = {}
>>> for sto, c in parallel_objects(centers, njobs=4, storage=storage):
...     sp = ds.sphere(c, (100, "kpc"))
...     sto.result = sp.quantities.angular_momentum_vector()
...
>>> for sphere_id, L in sorted(storage.items()):
...     print(centers[sphere_id], L)
...
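As noted above, calls to parallel_objects can be nested, so that the processor group assigned to each outer object is itself subdivided over an inner iterable. Below is a hedged sketch of that pattern; the radii list is a placeholder, and the exact split of processors between the two levels will depend on how many are available.
>>> radii = [(50, "kpc"), (100, "kpc"), (200, "kpc")]  # placeholder radii
>>> for c in parallel_objects(centers, njobs=4):
...     # the processors assigned to this center split again over the radii
...     for r in parallel_objects(radii):
...         sp = ds.sphere(c, r)
...         L = sp.quantities.angular_momentum_vector()
...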