Race conditions in MPI for time-stepping problems

Hey everyone

I am trying to understand how to handle race conditions between MPI processes in problems with time stepping. I couldn't find any example that does anything explicit in this regard, for example using a barrier after the error calculation, or any other synchronization. Is this handled under the hood? I would love to learn more about this!

Thanks

MPI synchronization in DOLFINx happens in a few places. The most common are:

  1. After assembly (as there is some scattering/gathering of values across processes)
  2. After solving the linear system.
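To illustrate point 1 in the context of the original question: the error check in a time-stepping loop is typically computed with a collective reduction, which is itself a synchronization point. Here is a minimal serial simulation of that idea (the per-rank values and the `allreduce_max` helper are made up for illustration; real code would call `comm.allreduce(local_error, op=MPI.MAX)` from mpi4py):

```python
# Serial simulation of why an explicit MPI_Barrier after the error check
# is unnecessary: the error norm is reduced with a collective operation
# (like MPI_Allreduce with MPI_MAX), every rank leaves that call holding
# the same global value, and so every rank takes the same branch.

def allreduce_max(local_values):
    """Each 'rank' contributes its local error; all receive the global max."""
    global_max = max(local_values)
    return [global_max] * len(local_values)   # one identical copy per rank

local_errors = [1e-3, 4e-2, 7e-5, 2e-3]       # per-rank error norms (made up)
global_errors = allreduce_max(local_errors)

# Identical data -> identical decision on every rank, no extra barrier needed
stop = [err < 1e-1 for err in global_errors]
print(stop)   # [True, True, True, True]
```

Because the collective call cannot complete until every rank has contributed its local value, it acts as the synchronization you were looking for.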

A few places where you will see explicit barriers in DOLFINx are when reading in file data (before distributing it with create_mesh), for instance when reading in gmsh data.
You can see this with a code search for barriers through the DOLFINx repository on GitHub.
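For intuition, here is a serial sketch of the pattern those barriers guard: one rank reads the file, then a collective broadcast (itself a synchronization point) distributes the data before the mesh is built. The function names and mesh data below are made up for illustration; real code would use `comm.bcast(..., root=0)` followed by create_mesh:

```python
def read_gmsh_file(path):
    """Stand-in for parsing a gmsh file on rank 0 (hypothetical)."""
    return {"points": [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
            "cells": [(0, 1, 2)]}

def bcast(root_data, n_ranks):
    """Every 'rank' leaves the broadcast holding the root's data."""
    return [root_data] * n_ranks

n_ranks = 4
mesh_data = read_gmsh_file("mesh.msh")     # executed only on rank 0
on_each_rank = bcast(mesh_data, n_ranks)   # collective: now everyone has it
```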

Other synchronization can be seen when scatter_reverse, scatter_forward, or the PETSc GhostUpdate function is used.
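Conceptually, those ghost updates move data between the rank that owns a degree of freedom and the ranks that hold ghost copies of it. A minimal serial sketch of the semantics (the dicts and function names are illustrative, not the DOLFINx API):

```python
# dof -> value as stored on its owning rank
owned = {0: 5.0, 1: 2.0}
# ghost copies of the same dofs held on a neighbouring rank
ghost = {0: 1.5, 1: 0.5}   # partial sums assembled locally on the neighbour

def scatter_reverse_add(owned, ghost):
    """Send ghost contributions to the owner and accumulate (ghost -> owner)."""
    for dof, contribution in ghost.items():
        owned[dof] += contribution
        ghost[dof] = 0.0           # contributions have been consumed

def scatter_forward(owned, ghost):
    """Overwrite ghost copies with the owner's current values (owner -> ghost)."""
    for dof in ghost:
        ghost[dof] = owned[dof]

# Complete the "assembly": add remote contributions onto the owners...
scatter_reverse_add(owned, ghost)
# ...then make the ghost copies consistent again
scatter_forward(owned, ghost)
print(owned, ghost)   # owned == ghost == {0: 6.5, 1: 2.5}
```

In real code these operations involve point-to-point communication between neighbouring ranks, so they also act as synchronization points between the ranks that share dofs.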


Thanks! That clears things up

It might be helpful if this was mentioned in the demos, since I imagine other new users could wonder about the same thing.