I am trying to write a mesh to an XDMF file so that I can read it back later.
Everything works smoothly in serial. However, in parallel, the mesh partition changes once I read the mesh back. See the picture below (a strange partition, by the way).
Do you know if this is a limitation of XDMF? How could I recover the mesh with its original partition?
I also include the code to reproduce it below. It is enough to run it with 2 MPI processes and to load the vtu files generated by each process separately in ParaView.
from dolfinx import mesh, io
from mpi4py import MPI

comm = MPI.COMM_WORLD
cell_type = mesh.CellType.quadrilateral
msh = mesh.create_unit_square(comm=comm, nx=3, ny=3, cell_type=cell_type)

# Write the mesh to XDMF, then read it back in
with io.XDMFFile(comm, "mesh.xdmf", "w") as xdmf:
    xdmf.write_mesh(msh)
with io.XDMFFile(comm, "mesh.xdmf", "r") as xdmf:
    msh2 = xdmf.read_mesh()

# Write both meshes to VTK to compare the partitions in ParaView
with io.VTKFile(comm, "mesh.pvd", "w") as vtk:
    vtk.write_mesh(msh)
with io.VTKFile(comm, "mesh2.pvd", "w") as vtk:
    vtk.write_mesh(msh2)
Thanks for your prompt answer, @francesco-ballarin.
However, reading that pull request, it seems it is not going to work for me, as it states:
"The local order of the cells can be re-ordered."
My motivation is not only to write/read a particular mesh: I want to write/read a fem.Function defined over a (potentially high-degree) finite element space built on a degree 1 mesh. Beyond the fact that the degree mismatch between the space and the mesh is something the DOLFINx writers do not support, I cannot find any functionality for reading fem.Functions back from files.
So, my workaround strategy is to write/read the degree 1 mesh using existing functionality and to write/read the rest of the information separately. For instance, I save the vector of coefficients using PETSc viewers. However, changes in the mesh partition (or the possible reordering of elements) break the correspondence between the mesh (and the space built on top of it) and the vector of coefficients.
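To make this concrete, here is a minimal sketch of the workaround I mean. The space, the interpolated expression and the file name "coeffs.dat" are just examples, and depending on the DOLFINx version the PETSc wrapper of the coefficient vector is u.x.petsc_vec or u.vector:

from mpi4py import MPI
from petsc4py import PETSc
from dolfinx import mesh, fem

comm = MPI.COMM_WORLD
msh = mesh.create_unit_square(comm, 3, 3, cell_type=mesh.CellType.quadrilateral)
V = fem.functionspace(msh, ("Lagrange", 3))  # fem.FunctionSpace in older DOLFINx
u = fem.Function(V)
u.interpolate(lambda x: x[0] ** 2 + x[1])

# Dump the coefficient vector with a PETSc binary viewer
viewer = PETSc.Viewer().createBinary("coeffs.dat", mode="w", comm=comm)
u.x.petsc_vec.view(viewer)
viewer.destroy()

# Later: rebuild the space (here on the same mesh) and load the coefficients back
u2 = fem.Function(V)
viewer = PETSc.Viewer().createBinary("coeffs.dat", mode="r", comm=comm)
u2.x.petsc_vec.load(viewer)
viewer.destroy()

This only works if the dof layout on each process is identical when reading back, which is exactly what the repartitioning (or reordering) breaks once the mesh is re-read from XDMF.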
Many thanks, @dokken. I wasn't aware of this project, and it seems to be exactly what I need.
Just one question: can I trust that the ordering of the function space dofs will be preserved?
Together with the functions, I am storing associated PETSc matrices whose ordering is linked to the function spaces, and a reordering would break this link.
No, I would not guarantee that.
Is there a reason for storing the matrices? If you store the relevant coefficients and meshes, they are easily reconstructed.
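For instance, a minimal write/read round trip with adios4dolfinx looks roughly like the sketch below. This assumes the 0.7-style API (mesh/function argument first); the argument order has changed between adios4dolfinx releases, so check the documentation of the version you are using:

import adios4dolfinx
from mpi4py import MPI
from dolfinx import mesh, fem

comm = MPI.COMM_WORLD
msh = mesh.create_unit_square(comm, 3, 3)
V = fem.functionspace(msh, ("Lagrange", 3))
u = fem.Function(V)
u.interpolate(lambda x: x[0] ** 2 + x[1])

# Write mesh and function to a checkpoint
adios4dolfinx.write_mesh(msh, "checkpoint.bp", engine="BP4")
adios4dolfinx.write_function(u, "checkpoint.bp", engine="BP4")

# Read them back, possibly on a different number of processes
msh2 = adios4dolfinx.read_mesh(comm, "checkpoint.bp", engine="BP4",
                               ghost_mode=mesh.GhostMode.shared_facet)
V2 = fem.functionspace(msh2, ("Lagrange", 3))
u2 = fem.Function(V2)
adios4dolfinx.read_function(u2, "checkpoint.bp", engine="BP4")

The point is that adios4dolfinx moves the dof values to wherever the cells end up, so the function is reconstructed correctly even though the partition and local ordering differ.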
No, as the mesh is read in again before making u_ref, and thus both cells and degrees of freedom are re-ordered.
In general, you cannot expect the ordering to be consistent in parallel, as "parallel" does not fix the number of processes you want to use. Say you create the matrices with 5 processes; you would then like to be able to read them in on 3 processes. However, as the mesh will have a different distribution on 3 processes, you would have to move data from one process to another to transfer that information.
Of course, all of the information needed to transfer the dof values is available in the internals of adios4dolfinx, and it should be possible to extract it in order to move the current matrices from the stored format to the new distributed layout.