The tests now contain tons of examples of how to write different functions to file, and I believe it is thoroughly tested (500 test cases, covering variations of function spaces, polynomial degrees, real/complex mode, and the precision of the mesh geometry/function).
I still have some problems understanding the need to re-read the mesh though.
So suppose I have a field to read that was written by one process in some preprocessing step (e.g. fiber directions), and subsequently another field that was written from a parallel layout (e.g. a pre-deformation state from another simulation).
In both cases, the mesh indices/node orders would be different, right? So the first field would get invalidated and the functions would need to be re-defined, right? Or am I missing something here?
I see the point that data needs to be stored together with some information about the mesh/ordering, yes. But wouldn’t it be beneficial to also have a reader that can map its re-ordered data to the base mesh? Otherwise it won’t be possible to read functions from different sources, even though they have self-contained information (relating a node to a value).
So I currently have a workaround: I query igi = msh.geometry.input_global_indices and then do the mapping myself, which at least works as long as I don't change the order of the mesh internally. In the end, I will always have to deal with data to read that comes from different sources (possibly even another program that provides nodal values, etc.).
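For illustration, a minimal sketch of that workaround, assuming `msh.geometry.input_global_indices[i]` gives, for local geometry node `i`, its index in the original input (file) ordering. The lists below simulate the mapping with plain Python instead of a real dolfinx mesh:

```python
# Nodal values as stored in the file, in the original input ordering
file_values = [10.0, 20.0, 30.0, 40.0, 50.0]

# Simulated igi (stand-in for msh.geometry.input_global_indices):
# local node i on this process corresponds to input (file) node igi[i]
igi = [3, 0, 4, 1, 2]

# Reorder the file data into the local node ordering
local_values = [file_values[g] for g in igi]
print(local_values)  # [40.0, 10.0, 50.0, 20.0, 30.0]
```

In parallel each process holds only part of `igi`, so every rank ends up gathering from the same global file array, which is exactly the kind of global operation mentioned below.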
How would one do that? It would mean a global search to figure out which nodes are in each cell and match them (as the global ordering differs depending on the number of processes used).
Why not just keep expanding the checkpoint with more information? I.e., if you need more data, read in the mesh and data from the checkpoint, create the new data, and save everything to a new checkpoint.
If you want to read data depending on nodal values from other programs, you would have to implement a method to match nodes by comparing coordinates.
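A minimal sketch of such coordinate matching (all names here are illustrative, not a dolfinx API): coordinates are rounded to an assumed tolerance so floating-point noise does not prevent a match, and the rounded tuples are used as dictionary keys.

```python
TOL_DECIMALS = 8  # assumed matching tolerance of ~1e-8

def coord_key(xyz):
    """Round a coordinate triple so nearly-equal points hash equally."""
    return tuple(round(c, TOL_DECIMALS) for c in xyz)

# Nodal values from the external program, keyed by node coordinates
source_coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
source_values = [1.5, 2.5, 3.5]
lookup = {coord_key(x): v for x, v in zip(source_coords, source_values)}

# Target mesh nodes in a different order (with tiny floating-point noise)
target_coords = [(1.0 + 1e-12, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 0.0)]
target_values = [lookup[coord_key(x)] for x in target_coords]
print(target_values)  # [2.5, 3.5, 1.5]
```

Note that rounding-based hashing can still split two points that straddle a rounding boundary; a robust implementation would fall back to a spatial search (e.g. a KD-tree) for unmatched nodes.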
That is out of scope for the checkpointing. I would probably use nonmatching mesh interpolation to map data between the meshes.
@marchirschvogel I’ve released a new version of adios4dolfinx, v0.7.2 that features multi-function and multi-timestep capabilities, along with MeshTags-support and major performance improvements
@dokken Thanks, I’ll have a look! I guess this certainly will help to store input data in a compact way.
With regard to the other aspect of different functions stored by different processes/programs, I guess the desire would be to at least be able to read any field that is stored with the same ordering logic as the base mesh. So any nodal data I write has the same ordering as the mesh I provide, and hence the same logic for re-distributing/re-ordering the mesh upon read-in can be applied to the field provided (as an .xdmf/.h5 file). I don't know how feasible that would be; it just sounds natural to me, especially when it comes to exchanging input data.
Store a solution f to file; let us call its mesh mesh_1.
Read data from another file (based on mesh_0) and use it with f from mesh_1.
You would have to store more information than I currently do. You would have to store the original cell index and the original geometry indices.
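For illustration, a sketch (not the actual adios4dolfinx format) of the extra arrays this would require: the original input-file index of each local cell, and the original input-file index of each local geometry node. With these, a reader can reconstruct connectivity in the input ordering:

```python
# Local cells after partitioning/reordering, as node indices into the
# local geometry array (hypothetical toy data)
local_cells = [[0, 1, 2], [1, 3, 2]]

# Extra stored data: for each local cell, its index in the input file ...
original_cell_index = [5, 2]
# ... and, for each local geometry node, its index in the input file
original_geometry_index = [7, 4, 9, 1]

# A reader can then reconstruct each cell's connectivity in input ordering
cells_in_input_ordering = {
    original_cell_index[c]: [original_geometry_index[v] for v in cell]
    for c, cell in enumerate(local_cells)
}
print(cells_in_input_ordering)  # {5: [7, 4, 9], 2: [4, 1, 9]}
```

Storing and applying these arrays is what would add the extra cost noted below, since assembling them on read-in means touching a global array on every process.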
It is not infeasible, but it would slow down writing (and reading) checkpoints a bit, as one of the operations would have to deal with this reordering (reading a global array on every process).
I think I could make this work if there is a single MeshFunction in the file. As you have currently written it (with mesh and boundaries in the same file), the XDMFFile is not really well defined, as it looks like: