MPI FEniCS Hangs on File output for Rank 0

I am solving the same problem on multiple processes with varying initial conditions. On each iteration I would like to collect the average of the solutions on rank 0 and then write it to file, also on rank 0.

However, when I try to perform certain actions inside the if mpi_rank == 0: block, FEniCS hangs and I have to force-quit the code.

Here is an MWE that shows my problem:

from dolfin import *

mpi_rank = MPI.rank(MPI.comm_world)
mpi_size = MPI.size(MPI.comm_world)

# Each process gets its own copy of the mesh
mesh = UnitSquareMesh(MPI.comm_self, 2, 2)

P1 = FiniteElement("P", mesh.ufl_cell(), 1)

W = FunctionSpace(mesh, P1)

N = Function(W)
m = Function(W)

# Stand-in for the per-process solution
m.vector()[:] = mpi_rank

# Sum the scaled local vectors onto rank 0 to form the average
mm = MPI.comm_world.reduce(m.vector()[:] / mpi_size, root=0)

if mpi_rank == 0:
    N.vector()[:] = mm
    File("average.pvd") << N

If I remove the File("average.pvd") << N line, the code completes, and if I print out the values of N it correctly holds the average on rank 0 only; with that line it hangs.

Of course the code runs if I write the file on every process, but that is obviously not efficient with a large number of processes.

You can specify a communicator in the File constructor:

File(MPI.comm_self, "average.pvd") << N

(Otherwise, it defaults to MPI.comm_world, which makes the write a collective operation that all ranks must join, so calling it from rank 0 alone deadlocks.) I'll also mention an apparent typo in the MWE, where mpi_size is set using MPI.rank instead of MPI.size.
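Putting this together, here is a minimal sketch of the corrected MWE (assuming legacy DOLFIN, where MPI.comm_world is an mpi4py communicator as in the code above):

from dolfin import *

mpi_rank = MPI.rank(MPI.comm_world)
mpi_size = MPI.size(MPI.comm_world)

# Rank-local mesh, as in the MWE
mesh = UnitSquareMesh(MPI.comm_self, 2, 2)
W = FunctionSpace(mesh, FiniteElement("P", mesh.ufl_cell(), 1))

N = Function(W)
m = Function(W)
m.vector()[:] = mpi_rank  # stand-in for the per-process solution

# Average the local vectors onto rank 0
mm = MPI.comm_world.reduce(m.vector()[:] / mpi_size, root=0)

if mpi_rank == 0:
    N.vector()[:] = mm
    # comm_self keeps the write local to rank 0, so no collective
    # call over comm_world is left waiting on the other ranks
    File(MPI.comm_self, "average.pvd") << N

Run it with, for example, mpirun -n 4 python3 average_mwe.py (the script name is just a placeholder); only rank 0 touches the file and the remaining ranks continue without blocking.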
