Creating XDMF files in parallel

Hello everyone,

I want to save solution fields obtained from dolfinx in an XDMF file using the dolfinx.io.XDMFFile class.

I execute the following MWE on two cores in parallel with the usual command mpirun -n 2 /home/miniconda3/envs/fenicsx-bis/bin/python test.py:

from mpi4py import MPI
import dolfinx.io as io

comm = MPI.COMM_WORLD

filename = "test.xdmf"
if comm.rank == 0:
    ffile = io.XDMFFile(comm, filename, 'w')
    ffile.close()
    

However, it raises this error:

Abort(679551630) on node 1 (rank 1 in comm 496): Fatal error in internal_Allreduce_c: Message truncated, error stack:
internal_Allreduce_c(4173)...................: MPI_Allreduce_c(sendbuf=0x7ffc320b53e4, recvbuf=0x7ffc320b4f58, count=1, MPI_INT, MPI_MAX, comm=0x84000001) failed
MPID_Allreduce(475)..........................: 
MPIDI_Allreduce_allcomm_composition_json(391): 
MPIDI_Allreduce_intra_composition_gamma(599).: 
MPIDI_POSIX_mpi_allreduce(313)...............: 
MPIR_Allreduce_impl(4813)....................: 
MPIR_Allreduce_allcomm_auto(4726)............: 
MPIC_Sendrecv(309)...........................: 
MPIC_Wait(91)................................: 
MPIR_Wait(785)...............................: 
MPIDIG_recv_type_init(77)....................: Message from rank 0 and tag 2 truncated; 4 bytes received but buffer size is 128

I set the environment variable HDF5_USE_FILE_LOCKING=FALSE as suggested on other forums.

I installed dolfinx, mpi4py, and mpich from conda-forge. The fenics-bis conda environment on Ubuntu 24.04.1 has the following versions:

  • fenics-dolfinx 0.9.0
  • mpi4py 4.0.1
  • mpich 4.2.3

Thanks in advance

You can’t use COMM_WORLD and then create the file on only one rank when running in parallel: opening the file is a collective operation, so every rank in the communicator must take part.
You would need to call

filename = "test.xdmf"
ffile = io.XDMFFile(comm, filename, 'w')
ffile.close()
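
For reference, here is a minimal sketch of the same collective pattern writing an actual mesh; create_unit_square and the file name mesh.xdmf are only used here for illustration:

from mpi4py import MPI
from dolfinx import mesh, io

comm = MPI.COMM_WORLD

# Every rank in COMM_WORLD participates in creating the file and writing the mesh
domain = mesh.create_unit_square(comm, 8, 8)
with io.XDMFFile(comm, "mesh.xdmf", "w") as xdmf:
    xdmf.write_mesh(domain)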

Problem solved. Thank you @dokken! I mixed up the COMM_SELF and COMM_WORLD communicators…
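
In case it helps anyone else: if the file really should be created by a single process only, a rank-local communicator can be passed instead. A minimal sketch, assuming COMM_SELF was the intent:

from mpi4py import MPI
import dolfinx.io as io

comm = MPI.COMM_WORLD

filename = "test.xdmf"
if comm.rank == 0:
    # COMM_SELF keeps the I/O local to this rank, so the other ranks
    # are not expected to join any collective call
    ffile = io.XDMFFile(MPI.COMM_SELF, filename, "w")
    ffile.close()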