Hi everyone,
I am working with legacy FEniCS on a cluster with a distributed file system. I'd like to save Functions and Meshes on the distributed file system and then collect and gather all the data later.
To be clearer, I’d like to do something like:
"""
Distributed write
"""
import fenics
comm = fenics.MPI.comm_world
rank = comm.Get_rank()
# define Mesh in parallel
mesh = fenics.UnitSquareMesh(comm, 100, 100)
V = fenics.FunctionSpace(mesh, "CG", 1)
exp = fenics.Expression("x[0] < 0.5 ? 1. : 0.", degree=1)
# define Function in parallel
u = fenics.interpolate(exp, V)
# Define a file for each process; pass MPI.comm_self so each
# process writes only its local data instead of doing a collective write
mesh_xdmf = fenics.XDMFFile(fenics.MPI.comm_self, f"p{rank}_mesh.xdmf")
u_xdmf = fenics.XDMFFile(fenics.MPI.comm_self, f"p{rank}_u.xdmf")
# Distributed write
mesh_xdmf.write(mesh)
u_xdmf.write_checkpoint(u, "u", 0, fenics.XDMFFile.Encoding.HDF5, False)
And then, later, be able to collect everything and write the unfragmented data to a different location.
Is it possible?
I don't understand what you want to do here. You say you want to create a file for each process, but should each file contain all the information from all processes?
Or are you trying to write only the data local to each process to a unique file defined by the rank?
If you do not need the data for checkpointing reasons, I guess File (with a .pvd extension) should give you what you want in legacy dolfin by default: in parallel it writes one .vtu file per partition, tied together by .pvtu/.pvd index files that ParaView can open as a single dataset.