I am trying to understand how to handle race conditions between MPI processes in problems with time stepping. I couldn't find any example that does anything explicit in this regard, for example using a barrier after the error calculation or any other synchronization. Is this handled under the hood? I would love to learn more about this!
The MPI synchronization in DOLFINx happens in a few places. The most common are:
After assembly (as there is some scattering/gathering of values across processes)
After solving the linear system.
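To see why no explicit barrier is usually needed after an error computation: the global error is typically obtained with a collective reduction (an allreduce over the per-rank contributions), and a collective cannot complete until every rank has reached it, so it already acts as a synchronization point. A minimal pure-Python sketch of that idea, with the ranks simulated in a list rather than real MPI processes (the per-rank values are made up for illustration):

```python
import math

def allreduce_sum(local_values):
    """Simulate MPI allreduce(op=SUM): every 'rank' contributes its
    local value and every 'rank' receives the same global sum.
    No rank can obtain the result before all have contributed,
    which is why the collective itself synchronizes the processes."""
    total = sum(local_values)
    return [total] * len(local_values)  # each rank gets the global sum

# Each rank holds the squared L2 norm of its local part of the error.
local_sq_norms = [0.04, 0.01, 0.09]          # hypothetical per-rank values
global_sq = allreduce_sum(local_sq_norms)
global_error = [math.sqrt(s) for s in global_sq]
print(global_error[0])  # same value on every rank
```

In real code this is the familiar `comm.allreduce(local_norm_sq, op=MPI.SUM)` pattern; the point is that the reduction, not a barrier, is what keeps the ranks in step.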
A few places where you will see explicit barriers in DOLFINx are when reading in file data (before distributing it with create_mesh), for instance when reading in Gmsh data.
You can see this with a search through the DOLFINx repo: Code search results · GitHub
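The pattern in those readers is: one rank reads the file, and the data is then made available to the others before create_mesh distributes it. The broadcast step is itself collective, so no rank can run ahead with missing data. A minimal pure-Python sketch of that read-then-broadcast shape, with ranks simulated in a list (the mesh data here is a made-up placeholder, not gmshio output):

```python
NUM_RANKS = 3

def read_mesh_file(rank):
    """Only 'rank 0' touches the file system, mirroring the reader pattern."""
    if rank == 0:
        return {"points": [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]],
                "cells": [[0, 1, 2]]}
    return None  # other ranks have no data yet

def bcast(values, root=0):
    """Simulate MPI bcast: every rank receives the root's data.
    Being collective, it also forces all ranks to reach this point
    before any of them continues."""
    return [values[root] for _ in values]

local = [read_mesh_file(r) for r in range(NUM_RANKS)]
local = bcast(local)  # after this collective, all ranks hold the same data
```

After this point every rank holds identical input, which is what a routine like create_mesh needs before it can partition and distribute the mesh.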
Other synchronization can be seen when scatter_reverse, scatter_forward, or PETSc's GhostUpdate function is used.
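For context, scatter_forward pushes the owner's values out to the ghost copies on neighbouring ranks, while scatter_reverse accumulates ghost contributions back onto the owners (the same pattern PETSc's ghost update implements). A minimal pure-Python sketch of that owner/ghost pattern, with two ranks simulated as plain dictionaries rather than real MPI processes (all index maps and values below are invented for illustration):

```python
# Each simulated rank owns some entries and keeps ghost copies of
# entries owned by another rank.
owned = {0: [1.0, 2.0], 1: [3.0, 4.0]}   # owner values per rank
ghosts = {0: [0.0], 1: [0.0]}            # ghost slots per rank
# ghost_map[rank] = list of (owner rank, owner-local index) per ghost slot:
ghost_map = {0: [(1, 0)], 1: [(0, 1)]}   # rank 0 ghosts rank 1's entry 0, etc.

def scatter_forward():
    """Copy owner values into the ghost slots (owner -> ghost)."""
    for rank, links in ghost_map.items():
        for i, (owner, idx) in enumerate(links):
            ghosts[rank][i] = owned[owner][idx]

def scatter_reverse_add():
    """Accumulate ghost contributions onto the owners (ghost -> owner),
    as done after assembly, then zero the ghost slots."""
    for rank, links in ghost_map.items():
        for i, (owner, idx) in enumerate(links):
            owned[owner][idx] += ghosts[rank][i]
            ghosts[rank][i] = 0.0

# After local assembly, each rank has contributions sitting in ghost
# slots that belong to entries owned by the other rank:
ghosts[0][0] = 0.5    # rank 0's contribution to rank 1's entry 0
ghosts[1][0] = 0.25   # rank 1's contribution to rank 0's entry 1
scatter_reverse_add() # owners accumulate the contributions
scatter_forward()     # ghost copies refreshed with the final owner values
```

In real DOLFINx/PETSc these exchanges are point-to-point communication between neighbouring ranks, and completing them is what keeps owned data and ghost data consistent between time steps, without any user-level barrier.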