Write_checkpoint rewrite mesh

I am solving an unsteady fluid mechanics problem on a large mesh (almost 9 million cells). I am using two different methods to save the time-series data: one through write and the other through write_checkpoint.

I am storing the time-series data in two formats, primarily for visualization and post-processing. My understanding is that write only allows for visualization and cannot be used to read the data back into FEniCS for further post-processing after the simulation ends. With write_checkpoint I can read the data back into FEniCS, but I am running into a storage issue because the mesh is rewritten at every stored timestep. For example, for my case with 9 million cells, a 20,000-timestep run with data saved every 100 timesteps generates nearly 350 GB, i.e. about 1.75 GB for each of the 200 stored snapshots.

Is there a way to avoid the mesh rewriting in write_checkpoint and save on storage, or to use the data from write for post-processing?

See for instance How to write a XDMF file which has mesh static over time?

Hi Dokken,

I tried that, but write_checkpoint ignores the rewrite_function_mesh=False flag; rewrite_function_mesh works only with write, not with write_checkpoint.

I am post-processing the data to calculate Wall Shear Stress and other metrics after the end of the simulation. I use read_checkpoint to read the data in the post-processing script.

You are right, with write_checkpoint you cannot reuse the mesh. Have you tried using HDF5File? It cannot be opened in ParaView, but it can do checkpointing, and browsing the test suite it seems to reuse the mesh: https://bitbucket.org/fenics-project/dolfin/raw/946dbd3e268dc20c64778eb5b734941ca5c343e5/python/test/unit/io/test_HDF5_series.py
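A minimal sketch of that pattern, based on the linked tests; the toy mesh and the dataset names "mesh" and "u" are illustrative assumptions:

from dolfin import (MPI, HDF5File, Mesh, UnitCubeMesh,
                    FunctionSpace, Function)

mesh = UnitCubeMesh(8, 8, 8)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)

# Write: the mesh is stored once; each timestamped write of u appends
# only the degree-of-freedom vector for that step.
f = HDF5File(MPI.comm_world, "u_series.h5", "w")
f.write(mesh, "mesh")
for step, t in enumerate((0.0, 0.5, 1.0)):
    u.vector()[:] = t  # stand-in for the solver update
    f.write(u, "u", t)
f.close()

# Read back: the vectors are stored as /u/vector_0, /u/vector_1, ...
mesh2 = Mesh()
g = HDF5File(MPI.comm_world, "u_series.h5", "r")
g.read(mesh2, "mesh", False)
V2 = FunctionSpace(mesh2, "CG", 1)
u2 = Function(V2)
g.read(u2, "/u/vector_2")  # the last stored step
g.close()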


Here’s a minimal working example of the problem at hand.

from dolfin import MPI, XDMFFile

# One writer per stored field; file paths match the post-processing snippet below
writer = {}
for key, value in {"u0": "u0.xdmf", "u1": "u1.xdmf",
                   "u2": "u2.xdmf", "p": "p.xdmf"}.items():
    writer[key] = XDMFFile(MPI.comm_world, value)
    writer[key].parameters["flush_output"] = True
    writer[key].parameters["functions_share_mesh"] = True
    writer[key].parameters["rewrite_function_mesh"] = False

# Inside the time loop: u_ holds the velocity components, p_ the pressure
components = {"u0": u_[0], "u1": u_[1], "u2": u_[2], "p": p_}
for key in components:
    if tstep == store_data:  # first stored step: create the file
        writer[key].write_checkpoint(components[key], components[key].name(), tstep,
                                     XDMFFile.Encoding.HDF5, append=False)
    else:  # later steps: append
        writer[key].write_checkpoint(components[key], components[key].name(), tstep,
                                     XDMFFile.Encoding.HDF5, append=True)

Even after specifying these parameters, the .h5 files contain the mesh information for every timestep.
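One quick way to confirm this is to list the contents of one of the companion files; a sketch, assuming h5py is available and the file is named u0.h5:

import h5py

# Print every group and dataset in the checkpoint file; with the mesh
# being rewritten, geometry/topology arrays appear once per stored step.
with h5py.File("u0.h5", "r") as h5:
    h5.visit(print)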

In the post-processing step, I use:

file_path_x = case_path / "u0.xdmf"
file_path_y = case_path / "u1.xdmf"
file_path_z = case_path / "u2.xdmf"
mesh_path = case_path / "mesh.xdmf"

Followed by

f_0.read_checkpoint(u0, "u0", file_counter)
f_1.read_checkpoint(u1, "u1", file_counter)
f_2.read_checkpoint(u2, "u2", file_counter)
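For reference, the glue between those two snippets (reading the mesh and constructing the functions and readers) would look roughly like this; a sketch assuming the components live in a scalar CG-1 space, which must match the space used when writing:

from dolfin import MPI, Mesh, XDMFFile, FunctionSpace, Function

# Read the mesh once from its own file
mesh = Mesh()
f_mesh = XDMFFile(MPI.comm_world, str(mesh_path))
f_mesh.read(mesh)
f_mesh.close()

# Rebuild the function space and the component functions
V = FunctionSpace(mesh, "CG", 1)
u0, u1, u2 = Function(V), Function(V), Function(V)

# One reader per component file
f_0 = XDMFFile(MPI.comm_world, str(file_path_x))
f_1 = XDMFFile(MPI.comm_world, str(file_path_y))
f_2 = XDMFFile(MPI.comm_world, str(file_path_z))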

I haven’t tried using HDF5File; I can try it out. Is it possible to read an HDF5File back into FEniCS for the post-processing part?

Yes, see the link in the previous post.

Okay, I will try it out and get back to you. Thank you, dokken!!