Reading back an XDMF file with time stamps in dolfinx

I’ve saved an XDMF file containing a mesh and a time-stamped function defined at its points. Now I’d like to read that file back and retrieve the values of the function at each saved time. Is this possible? I’m using dolfinx.

The code for writing the file goes as follows:

import numpy as np
from mpi4py import MPI
from dolfinx import fem
from dolfinx.io import XDMFFile

# Function space
V = fem.FunctionSpace(mesh, ("Lagrange", 1))

# times is the array of time stamps at which the solution SOL is defined;
# SOL holds one column of nodal values per time step, so len(times) == np.shape(SOL)[1]
assert len(times) == np.shape(SOL)[1]

# Function used to hold the values at each time step
f = fem.Function(V)

# Write the mesh once, then one time-stamped function per step
with XDMFFile(MPI.COMM_WORLD, "surf_fib.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)

    for i in range(len(times)):
        f.x.array[:] = SOL[:, i]          # solution at time times[i]
        xdmf.write_function(f, times[i])  # save the time-stamped snapshot

Thanks in advance for any help.

I needed something similar for a nonlinear solid mechanics problem (I only show the relevant lines below):

from mpi4py import MPI
from dolfinx.io import XDMFFile

u_file = XDMFFile(MPI.COMM_WORLD, "displacement.xdmf", "w")
u_file.write_mesh(msh)

for n in range(1, 10):
    num_its, converged = solver.solve(u)  # nonlinear solve at load/time step n
    u_file.write_function(u, n)           # write u with time stamp n

u_file.close()

I would suggest using PETSc directly: see I/O from XDMF/HDF5 files in dolfin-x - #23 by hawkspar
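For completeness, here is a rough sketch of what using PETSc directly can look like: dump the dof vector of f to a PETSc binary file at each step and load it back later. This reuses V, f, SOL and times from the first post; the per-step file names and the use of petsc4py's binary viewer are my own choices, not necessarily what the linked post does.

from mpi4py import MPI
from petsc4py import PETSc
from dolfinx import fem

# Write: one PETSc binary file per time step
for i in range(len(times)):
    f.x.array[:] = SOL[:, i]
    viewer = PETSc.Viewer().createBinary(f"sol_{i:04d}.dat", "w", comm=MPI.COMM_WORLD)
    f.vector.view(viewer)
    viewer.destroy()

# Read back into a function built on the same function space
g = fem.Function(V)
for i in range(len(times)):
    viewer = PETSc.Viewer().createBinary(f"sol_{i:04d}.dat", "r", comm=MPI.COMM_WORLD)
    g.vector.load(viewer)
    g.x.scatter_forward()  # update ghost values after the load
    viewer.destroy()

Note that the load only works if the dof layout, and hence the mesh partition, is the same when reading as it was when writing, which is exactly the pitfall discussed further down the thread.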

Thanks for the help. :smiley:

I found that meshio could do the trick (I had to make sure points were mapped properly from the original mesh when passing the data back to dolfinx, though).
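In case it helps someone else, the read-back with meshio's time-series reader looks roughly like this (a minimal sketch, assuming the surf_fib.xdmf file written in the first post; the mapping from meshio's point ordering back to the dolfinx dof ordering still has to be done afterwards, as noted above):

import meshio

# Read the mesh and the time-stamped point data back with meshio
with meshio.xdmf.TimeSeriesReader("surf_fib.xdmf") as reader:
    points, cells = reader.read_points_cells()
    for k in range(reader.num_steps):
        t, point_data, cell_data = reader.read_data(k)
        # point_data maps the function name to its nodal values at time t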

Yeah, I just ran across a similar issue; dolfinx is sometimes inconsistent with mesh partitioning. What I observed on my end:

  1. Read a .xxx file (using meshio)
  2. Convert it to .xdmf (meshio)
  3. Read it back (dolfinx)
  4. Save its partition, i.e. which part of the mesh is owned by which processor (dolfinx)
  5. Do stuff, save data, end the first program (PetscBinaryIO)
  6. Begin a second program, read the .xdmf mesh (dolfinx)
  7. Save the second partition (dolfinx)
  8. Attempt to read the data back (PetscBinaryIO)

Running this process twice in parallel works when the original file is a .msh, but not when it is a .xmf. dolfinx is inconsistent between its first and second partition: it splits the mesh the same way, but changes which processor owns what, which changes the local vector sizes and breaks PetscBinaryIO in turn (a sketch of the save/read pattern follows).
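For reference, steps 5 and 8 look roughly like this. This is only a minimal sketch: PetscBinaryIO.py ships with PETSc (under lib/petsc/bin) and has to be on the PYTHONPATH, u stands for whichever dolfinx Function holds the data, and the one-file-per-rank naming is my own simplification.

import PetscBinaryIO
from mpi4py.MPI import COMM_WORLD as comm

io = PetscBinaryIO.PetscBinaryIO()

# Step 5 (first program): each rank dumps its local dof values to its own file
io.writeBinaryFile(f"data_rank{comm.rank}.dat",
                   [u.x.array.view(PetscBinaryIO.Vec)])

# Step 8 (second program): read them back on the same rank. This only works if
# the second run gives each rank the same dofs as the first run did.
(vec,) = io.readBinaryFile(f"data_rank{comm.rank}.dat")
u.x.array[:] = vec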

Long story short: be careful which original mesh format you feed to meshio, it matters.


An even simpler case of inconsistency can be seen here:

import ufl
from dolfinx.io import XDMFFile
from dolfinx.fem import FunctionSpace, Function
from mpi4py.MPI import COMM_WORLD as comm

for i in range(2):
    # Re-read the same mesh and colour each cell with the rank that owns it
    with XDMFFile(comm, "mesh.xdmf", "r") as file:
        mesh = file.read_mesh(name="Grid")

    FE_constant = ufl.FiniteElement("DG", mesh.ufl_cell(), 0)
    W = FunctionSpace(mesh, FE_constant)

    partition = Function(W)
    partition.x.array[:] = comm.rank

    with XDMFFile(comm, f"partition{i}.xdmf", "w") as xdmf:
        xdmf.write_mesh(mesh)
        xdmf.write_function(partition)


As you can see by comparing partition0.xdmf and partition1.xdmf, the rank that owns a given cell is not reproducible between the two reads of the same mesh, which poses I/O problems.

The simplest workaround I could find is to read the mesh only once per program.
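In code, that amounts to hoisting the mesh read out of the loop in the snippet above, so every function space is built on the same mesh object and therefore sees the same partition (same imports as above):

# Read the mesh a single time...
with XDMFFile(comm, "mesh.xdmf", "r") as file:
    mesh = file.read_mesh(name="Grid")

# ...and build everything on that one mesh object
FE_constant = ufl.FiniteElement("DG", mesh.ufl_cell(), 0)
W = FunctionSpace(mesh, FE_constant)

for i in range(2):
    partition = Function(W)
    partition.x.array[:] = comm.rank

    with XDMFFile(comm, f"partition{i}.xdmf", "w") as xdmf:
        xdmf.write_mesh(mesh)
        xdmf.write_function(partition)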

Hello,
I am trying to read back the XDMF file like this:

import PetscBinaryIO

io = PetscBinaryIO.PetscBinaryIO(complexscalars=True)
f = io.readBinaryFile("mesh.xdmf")

But I get the error:
    objecttype = self._classid[header]
                 ~~~~~~~~~~~~~^^^^^^^^
KeyError: 1010792557

During handling of the above exception, another exception occurred:

OSError: Invalid PetscObject CLASSID or object not implemented for python

Make a minimal reproducible example in a new topic.