Write XDMF file in parallel using comm_self

Hi everyone,
I have a (hopefully) quick question.
Writing a function in parallel using XDMFFile with comm_world works fine, but for my project I need to define the mesh using comm_self, and in that case I get an error. Here is a minimal example reproducing the issue:

import fenics

comm = fenics.MPI.comm_self

file_name = "runtime/test_xdmf_write/data/foo.xdmf"
out_file = fenics.XDMFFile(comm, file_name)

mesh = fenics.UnitSquareMesh(comm, 50, 50)
V = fenics.FunctionSpace(mesh, "CG", 1)
foo = fenics.project(fenics.Constant(1.), V)

out_file.write(foo, 0)

And here is the error I get:

$ mpirun -n 2 python3 test_xdmf_write.py 

HDF5-DIAG: Error detected in HDF5 (1.10.0-patch1) MPI-process 0:
  #000: ../../../src/H5F.c line 491 in H5Fcreate(): unable to create file
    major: File accessibilty
    minor: Unable to open file
  #001: ../../../src/H5Fint.c line 1168 in H5F_open(): unable to lock the file or initialize file structure
    major: File accessibilty
    minor: Unable to open file
  #002: ../../../src/H5FD.c line 1821 in H5FD_lock(): driver lock request failed
    major: Virtual File Layer
    minor: Can't update object
  #003: ../../../src/H5FDsec2.c line 939 in H5FD_sec2_lock(): unable to flock file, errno = 11, error message = 'Resource temporarily unavailable'
    major: File accessibilty
    minor: Bad file ID accessed

I guess this is due to multiple processes trying to write to the same file at once. Does anyone know the correct way to do this?

Replace your filename with:
file_name = "runtime/test_xdmf_write/data/foo_{0:d}.xdmf".format(fenics.MPI.comm_world.rank)
This will then write a file for each process.
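For reference, here is a minimal sketch of the full script with that change applied (same setup as the original example, just with the rank-suffixed filename):

import fenics

# Mesh and function space live on comm_self, so each process has its own copy
comm = fenics.MPI.comm_self

# Suffix the filename with the global rank so each process writes its own file
rank = fenics.MPI.comm_world.rank
file_name = "runtime/test_xdmf_write/data/foo_{0:d}.xdmf".format(rank)
out_file = fenics.XDMFFile(comm, file_name)

mesh = fenics.UnitSquareMesh(comm, 50, 50)
V = fenics.FunctionSpace(mesh, "CG", 1)
foo = fenics.project(fenics.Constant(1.), V)

out_file.write(foo, 0)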

Indeed, it works! Thank you very much.
Just out of curiosity: I noticed that this modification actually creates only one file:

$ ls runtime/test_xdmf_write/data
foo_0.h5  foo_0.xdmf

I was expecting it to create one file per process. Why is that?

When I ran this using

mpirun -n 2 python3 code.py

it created two files, one for each process.
To me it seems like you are only running it in serial (such that fenics.MPI.comm_world.rank) is only 0. You should add print statements to your code printing MPI.comm_world.rank and MPI.comm_world.size to verify that your code is using multiple processes.

Yes, sorry, it was just a silly mistake on my end. I now get the same result as you.

Thank you again for your help!