Failed to read XDMF mesh in parallel with dolfinx 0.7.1

Hi, I am using dolfinx 0.7.1 and find that I can't read a mesh from an XDMF file in parallel, although it works fine in serial. Is there something wrong with the mesh partitioning function, or am I making a mistake with the updated API? Here is a minimal example:

from dolfinx import io, mesh
from mpi4py import MPI

msh = mesh.create_rectangle(comm=MPI.COMM_WORLD,
                            points=((0.0, 0.0), (2.0, 1.0)), n=(32, 16))

with io.XDMFFile(msh.comm, "out_mesh/mesh.xdmf", "w") as file:
    file.write_mesh(msh)

with io.XDMFFile(MPI.COMM_WORLD, "out_mesh/mesh.xdmf", "r") as file:
    read_mesh = file.read_mesh(name="mesh")

When I run this code with mpirun, the error is as follows:

(0): ERROR: SCOTCH_dgraphInit: Scotch compiled with SCOTCH_PTHREAD and program not launched with MPI_THREAD_MULTIPLE
Traceback (most recent call last):
  File "/data/nas_rxtd1/Lingyue/dolfinx_0.7/", line 4, in <module>
    msh = mesh.create_rectangle(comm=MPI.COMM_WORLD,
  File "/home/shenlingyue/anaconda3/envs/fenicsx-0.7/lib/python3.11/site-packages/dolfinx/", line 541, in create_rectangle
    mesh = _cpp.mesh.create_rectangle_float64(comm, points, n, cell_type, partitioner, diagonal)
RuntimeError: ParMETIS_V3_PartKway failed. Error code: -4

I thought this issue was fixed with: Import mpi4py early for MPI initialisation by garth-wells · Pull Request #2826 · FEniCS/dolfinx · GitHub
@jackhale did we backport this?

Python dolfinx does not initialise MPI by design. Instead, this is delegated to mpi4py, which has a properly designed interface for this:

0.7.1 modifies all demos and Python files to import mpi4py first. Users should copy this approach in their own code.
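Applied to the snippet above, that means a sketch like the following, where the only change from the original code is the import order (the `name="mesh"` argument assumes the default name dolfinx gives a written mesh):

    # Import mpi4py BEFORE dolfinx so MPI is initialised by mpi4py with the
    # threading support that the PT-Scotch/ParMETIS partitioners expect.
    from mpi4py import MPI

    from dolfinx import io, mesh

    # Create and partition a rectangle mesh across all ranks.
    msh = mesh.create_rectangle(comm=MPI.COMM_WORLD,
                                points=((0.0, 0.0), (2.0, 1.0)), n=(32, 16))

    # Write the mesh to XDMF, then read it back in parallel.
    with io.XDMFFile(msh.comm, "out_mesh/mesh.xdmf", "w") as file:
        file.write_mesh(msh)

    with io.XDMFFile(MPI.COMM_WORLD, "out_mesh/mesh.xdmf", "r") as file:
        read_mesh = file.read_mesh(name="mesh")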


Thanks, the code works now.