Parallel XDMF read/write of mesh

Hi,

I am trying to write a mesh to an XDMF file so that I can read it back later.
Everything works smoothly in serial. However, in parallel, the mesh partition changes once I read the mesh back. See the picture below (a strange partition, by the way).

Do you know if this is a limitation of XDMF? How could I get the mesh with its original partition?

I also include the code to reproduce it below. It is enough to run it with 2 MPI processes and to load the VTU files generated by each process separately in ParaView.

from dolfinx import mesh, io
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Create a small quadrilateral mesh (named msh to avoid shadowing the dolfinx.mesh module)
cell_type = mesh.CellType.quadrilateral
msh = mesh.create_unit_square(comm=comm, nx=3, ny=3, cell_type=cell_type)

# Write the mesh to XDMF and read it back in
with io.XDMFFile(comm, "mesh.xdmf", "w") as xdmf:
    xdmf.write_mesh(msh)
with io.XDMFFile(comm, "mesh.xdmf", "r") as xdmf:
    msh2 = xdmf.read_mesh()

# Write both meshes to VTK files to compare their partitions in ParaView
with io.VTKFile(comm, "mesh.pvd", "w") as vtk:
    vtk.write_mesh(msh)
with io.VTKFile(comm, "mesh2.pvd", "w") as vtk:
    vtk.write_mesh(msh2)

Thanks!

That’s a known limitation. adios4dolfinx by @dokken has recently added a feature to read/write partitions in Add read from partition by jorgensd · Pull Request #81 · jorgensd/adios4dolfinx · GitHub, which may be helpful.

Thanks for your prompt answer, @francesco-ballarin.
However, reading that pull request, it seems it is not going to work for me, as it states:

The local order of the cells can be re-ordered.

My motivation is not only to write/read a particular mesh: I want to write/read a fem.Function
defined over a (potentially high-degree) finite element space on a degree-1 mesh. Beyond the fact that the DOLFINx writers do not support a degree mismatch between the space and the mesh, I cannot find any functionality for reading fem.Functions from files.

So, my workaround strategy is to write/read the degree-1 mesh using existing functionality and to write/read the rest of the information separately. For instance, I save the vector of coefficients using PETSc viewers. However, changes in the mesh partition (or the possible element reordering) break the correspondence between the mesh (and the space built on top of it) and the vector of coefficients.
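
For reference, this is roughly what the PETSc-viewer part of the workaround looks like, as a minimal sketch (the file name is a placeholder, and the spellings fem.functionspace / u.x.petsc_vec assume a recent DOLFINx; older releases use fem.FunctionSpace and u.vector):

from mpi4py import MPI
from petsc4py import PETSc
from dolfinx import mesh, fem

comm = MPI.COMM_WORLD
msh = mesh.create_unit_square(comm, 3, 3, cell_type=mesh.CellType.quadrilateral)
V = fem.functionspace(msh, ("Lagrange", 3))  # high-degree space on a degree-1 mesh
u = fem.Function(V)
u.interpolate(lambda x: x[0] + x[1] ** 2)

# Write the coefficient vector to a PETSc binary file ("coeffs.dat" is a placeholder name)
viewer = PETSc.Viewer().createBinary("coeffs.dat", mode="w", comm=comm)
u.x.petsc_vec.view(viewer)
viewer.destroy()

# Read the coefficients back into a Function on the same space.
# This only matches dof-for-dof if the mesh partition and dof ordering
# are identical to when the vector was written, which is exactly the problem.
u2 = fem.Function(V)
viewer = PETSc.Viewer().createBinary("coeffs.dat", mode="r", comm=comm)
u2.x.petsc_vec.load(viewer)
u2.x.scatter_forward()
viewer.destroy()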

Any suggestion is welcome.
Thanks!

adios4dolfinx does exactly this: it reads functions defined in any function space supported by DOLFINx back into DOLFINx at a later stage.

I’ve got several documented examples on how to read/write meshes and functions with adios4dolfinx at: ADIOS4DOLFINx - A framework for checkpointing in DOLFINx — ADIOS2Wrappers
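
Roughly, the write/read pattern looks like this, just as a sketch (the filename-first argument order assumes a recent adios4dolfinx release; older versions take the mesh/function as the first argument, so check the linked examples for the exact signatures):

from pathlib import Path
from mpi4py import MPI
import adios4dolfinx
import dolfinx

filename = Path("checkpoint.bp")

# Write mesh and function to an ADIOS2 (BP) checkpoint
msh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 3, 3)
V = dolfinx.fem.functionspace(msh, ("Lagrange", 3))
u = dolfinx.fem.Function(V)
u.interpolate(lambda x: x[0] + x[1])
adios4dolfinx.write_mesh(filename, msh)
adios4dolfinx.write_function(filename, u, name="u")

# Read them back, possibly on a different number of processes
msh_in = adios4dolfinx.read_mesh(filename, MPI.COMM_WORLD)
V_in = dolfinx.fem.functionspace(msh_in, ("Lagrange", 3))
u_in = dolfinx.fem.Function(V_in)
adios4dolfinx.read_function(filename, u_in, name="u")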

Many thanks, @dokken. I wasn’t aware of this project, and it seems exactly what I need.
Just one question: can I trust that the ordering of the function space dofs will be preserved?
Together with the functions, I am storing associated PETSc matrices whose ordering is linked to the function spaces, and a reordering would break this link.

No, I would not guarantee that.
Is there a reason for storing the matrices? If you store the relevant coefficients and meshes, they are easily reconstructed.

I guess the closest you would get to preserving the ordering is
http://jsdokken.com/adios4dolfinx/docs/original_checkpoint.html
but I am not sure the ghost ordering will be persistent across runs.

However, from what I understand, in the example Writing a function checkpoint — ADIOS2Wrappers, the check

np.testing.assert_allclose(u_ref.x.array, u_in.x.array, atol=1e-14)

requires that the dof ordering remains invariant before and after writing/reading.

Those matrices are basis projections from FEM to spline discretizations and are expensive to compute, so I would like to store them for future reuse.

Thanks for your help, @dokken. Much appreciated.

No, as the mesh is read in again before making u_ref, and thus both cells and degrees of freedom are re-ordered.

In general, you cannot expect the ordering to be consistent in parallel, as "parallel" does not specify the number of processes you want to use. Say you create the matrices with 5 processes and would then like to read them in on 3 processes: as the mesh will have a different distribution on 3 processes, you would have to move data from one process to another to transfer that information.

Of course, all of the information about transferring the dof values is available in the internals of adios4dolfinx, and it should be possible to extract it to map the current matrices from their output format to the new distributed layout.