# MPI

## Collective Behavior
openPMD-api is designed to support both serial and parallel I/O. The latter is implemented through the Message Passing Interface (MPI).

A collective operation needs to be executed by all MPI ranks of the MPI communicator that was passed to `openPMD::Series`. In contrast, independent operations can also be called by a subset of these MPI ranks.

For more information, please see the MPI standard documents, for example MPI-3.1, “Section 2.4 - Semantic Terms”.
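As a minimal sketch of the collective contract (the file name and backend are placeholders), constructing a `Series` with a communicator must happen on every rank of that communicator:

```cpp
#include <openPMD/openPMD.hpp>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    {
        // Collective: every rank of MPI_COMM_WORLD must reach this line.
        openPMD::Series series(
            "data_%T.h5", openPMD::Access::CREATE, MPI_COMM_WORLD);
        // ... independent and collective operations on the series ...
    } // the Series closes at end of scope; closing is collective as well

    MPI_Finalize();
    return 0;
}
```

Skipping the construction (or destruction) on some ranks of the communicator deadlocks the remaining ranks, since they wait inside the collective call.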
Functionality | Behavior | Description
---|---|---
`Series` | collective | open and close
`::flush()` | collective | read and write
`Iteration` [1] | independent | declare and open
`::open()` [3] | collective | explicit open
`Mesh` [1] | independent | declare, open, write
`ParticleSpecies` [1] | independent | declare, open, write
`::setAttribute` [2] | backend-specific | declare, write
`::getAttribute` | independent | open, reading
`::storeChunk` [1] | independent | write
`::loadChunk` | independent | read
1. Individual backends, e.g. HDF5, will only support independent operations if the default, non-collective behavior is kept. (Otherwise these operations are collective.)
2. HDF5 only supports collective attribute definitions/writes; ADIOS1 and ADIOS2 attributes can be written independently. If you want to support all backends equally, treat this as a collective operation.
3. We usually open iterations delayed, on first access. This first access is usually the `flush()` call after a `storeChunk`/`loadChunk` operation. If the first access is non-collective, an explicit, collective `Iteration::open()` can be used to have the files already open.
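For illustration, a sketch that combines these behaviors in a write workflow. The mesh name `E`/`x`, the file name, and the per-rank extent of 10 elements are made-up example values; each rank writes its own non-overlapping slab of a shared one-dimensional dataset:

```cpp
#include <openPMD/openPMD.hpp>
#include <mpi.h>
#include <vector>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    {
        // collective: open
        openPMD::Series series(
            "data_%T.h5", openPMD::Access::CREATE, MPI_COMM_WORLD);

        auto it = series.iterations[0]; // independent: declare
        it.open();                      // collective: explicit open

        auto E_x = it.meshes["E"]["x"]; // independent: declare
        std::uint64_t const localExtent = 10u;
        E_x.resetDataset(
            {openPMD::Datatype::DOUBLE, {size * localExtent}});

        // independent: each rank writes its own non-overlapping chunk
        std::vector<double> local(localExtent, double(rank));
        E_x.storeChunk(local, {rank * localExtent}, {localExtent});

        series.flush(); // collective: the actual I/O happens here
    } // collective: close

    MPI_Finalize();
    return 0;
}
```

Note that the buffer passed to `storeChunk` must stay alive until the `flush()` call, which is where the data is actually consumed.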
Tip

Just because an operation is independent does not mean it is allowed to be inconsistent. For example, undefined behavior will occur if ranks pass differing values to `::setAttribute` or try to use differing names to describe the same mesh.
## Efficient Parallel I/O Patterns

Note

This section is a stub. We will improve it in future versions.

- Write as large data set chunks as possible in `::storeChunk` operations.
- Read in large, non-overlapping subsets of the stored data (`::loadChunk`). Ideally, read the same chunk extents as were written, e.g. through `ParticlePatches` (example to-do).
- See the implemented I/O backends for individual tuning options.
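A read-side sketch of the non-overlapping pattern above. The mesh name and file name are the same made-up examples as before; each rank computes a disjoint slab of the global extent and schedules one large `loadChunk` for it:

```cpp
#include <openPMD/openPMD.hpp>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    {
        // collective: open
        openPMD::Series series(
            "data_%T.h5", openPMD::Access::READ_ONLY, MPI_COMM_WORLD);

        auto E_x = series.iterations[0].meshes["E"]["x"];
        auto const globalExtent = E_x.getExtent().at(0);

        // split the global extent into large, non-overlapping per-rank slabs;
        // the last rank picks up the remainder
        auto const chunk = globalExtent / size;
        openPMD::Offset offset = {rank * chunk};
        openPMD::Extent count = {
            rank == size - 1 ? globalExtent - rank * chunk : chunk};

        // independent: schedule the read ...
        auto data = E_x.loadChunk<double>(offset, count);
        // ... the buffer is only filled after the (collective) flush
        series.flush();
    } // collective: close

    MPI_Finalize();
    return 0;
}
```

Reading one large contiguous slab per rank, rather than many small requests, lets the backend issue fewer and bigger I/O operations.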