Writing and reading a MeshFunction and solution in parallel

Dear community,
I am trying to write and read a mesh function and a solution through the HDF5 interface. The writing step works fine in both serial and parallel, but when I read the mesh function and the solution back from the written "mesh_mark_and_solution.h5" file, the cell ids change and the solution does not get assigned the same values it was written with.
The issue goes away if I do both the reading and the writing in serial, or use the same number of processors for both:

mpirun -np 9 python3 write_h5_file_solution.py 
mpirun -np 9 python3 read_h5_file_solution.py 

The code for writing the data is the following:

from dolfin import *


mesh_cube = UnitCubeMesh(5,5,5)

mesh_and_solution = HDF5File(mesh_cube.mpi_comm(),'mesh_mark_and_solution.h5', 'w')


# Mark the cells with x[0] in (0, 0.5) and extract them as a MeshView
mesh_function = MeshFunction("size_t", mesh_cube, mesh_cube.topology().dim(), 0)
domain = AutoSubDomain(lambda x, on_boundary: between(x[0], (0, 0.5)))

domain.mark(mesh_function, 1)
mesh = MeshView.create(mesh_function, 1)

result_file_mesh = XDMFFile('mesh_to_write.xdmf')
result_file_mesh.write(mesh)

Sol_space = FunctionSpace(mesh, "Lagrange", 1) 
sol = Function(Sol_space, name="Sol")

expression = Expression("100*x[2]*x[2]*x[2]", degree=3)  # cubic in x[2], so degree=3
sol.assign(interpolate(expression, Sol_space))

mesh_and_solution.write(mesh_function,"/mesh_function/mesh_function_0") 
mesh_and_solution.write(sol, "/solution/sol_0")

result_file = XDMFFile('solution_to_write.xdmf')
result_file.write(sol, t=float(0))

mesh_and_solution.close()

The separate script for reading the data is the following:

from dolfin import *

mesh_cube = UnitCubeMesh(5,5,5)

mesh_and_solution = HDF5File(mesh_cube.mpi_comm(),'mesh_mark_and_solution.h5', 'r')

mesh_function = MeshFunction("size_t", mesh_cube, mesh_cube.topology().dim(), 0)
mesh_and_solution.read(mesh_function,"/mesh_function/mesh_function_0")

mesh = MeshView.create(mesh_function, 1)

result_file_mesh = XDMFFile('mesh_after_reading.xdmf')
result_file_mesh.write(mesh)

Sol_space = FunctionSpace(mesh, "Lagrange", 1) 
sol = Function(Sol_space, name="Sol")

mesh_and_solution.read(sol,"/solution/sol_0")


result_file = XDMFFile('solution_after_reading.xdmf')
result_file.write(sol, t=float(0))

mesh_and_solution.close()

What I want is to obtain the same results every time I write and, especially, read the data, regardless of whether I run in serial or with any number of processors in parallel. Thanks in advance!

I would suggest you read in your mesh as well, so that you work with the cell ordering stored in the file rather than that of a freshly created UnitCubeMesh, i.e. add:

mesh_and_solution.write(mesh_cube, "/my_mesh")

to the write-out script, and the following to the read script:

mesh_cube = Mesh()
mesh_and_solution = HDF5File(
    mesh_cube.mpi_comm(), 'mesh_mark_and_solution.h5', 'r')
mesh_and_solution.read(mesh_cube, "/my_mesh", False)

Thanks @dokken. So instead of creating the cube mesh in my reading code, I now read it from the HDF5File as you suggested, but the issue is the same. The updated writing script is:

from dolfin import *


mesh_cube = UnitCubeMesh(5,5,5)

mesh_and_solution = HDF5File(mesh_cube.mpi_comm(),'mesh_mark_and_solution.h5', 'w')

mesh_and_solution.write(mesh_cube, "/my_mesh")

mesh_function = MeshFunction("size_t", mesh_cube, mesh_cube.topology().dim(), 0)
domain = AutoSubDomain(lambda x, on_boundary: between(x[0], (0, 0.5)))

domain.mark(mesh_function,1)
mesh = MeshView.create(mesh_function, 1)

result_file_mesh = XDMFFile('mesh_to_write.xdmf')
result_file_mesh.write(mesh)

Sol_space = FunctionSpace(mesh, "Lagrange", 1) 
sol = Function(Sol_space, name="Sol")

expression = Expression("100*x[2]*x[2]*x[2]", degree=3)
sol.assign(interpolate(expression, Sol_space))

mesh_and_solution.write(mesh_function,"/mesh_function/mesh_function_0") 
mesh_and_solution.write(sol, "/solution/sol_0")

result_file = XDMFFile('solution_to_write.xdmf')
result_file.write(sol, t=float(0))

mesh_and_solution.close()
and the reading script:

from dolfin import *


mesh_cube = Mesh()


mesh_and_solution = HDF5File(
    mesh_cube.mpi_comm(), 'mesh_mark_and_solution.h5', 'r')

mesh_and_solution.read(mesh_cube, "/my_mesh", False)

mesh_function = MeshFunction("size_t", mesh_cube, mesh_cube.topology().dim(), 0)
mesh_and_solution.read(mesh_function,"/mesh_function/mesh_function_0")

mesh = MeshView.create(mesh_function, 1)

result_file_mesh = XDMFFile('mesh_after_reading.xdmf')
result_file_mesh.write(mesh)

Sol_space = FunctionSpace(mesh, "Lagrange", 1) 
sol = Function(Sol_space, name="Sol")

mesh_and_solution.read(sol,"/solution/sol_0")


result_file = XDMFFile('solution_after_reading.xdmf')
result_file.write(sol, t=float(0))

mesh_and_solution.close()

The issue might be that you are using MeshView: the code I proposed works if the function lives on the full mesh (replace mesh with mesh_cube in the definition of the function space), for instance as in the sketch below. @cdaversin might be able to shed some light on this.
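A minimal sketch of that full-mesh variant (the file name full_mesh_solution.h5 is illustrative):

from dolfin import *

mesh_cube = UnitCubeMesh(5, 5, 5)

out = HDF5File(mesh_cube.mpi_comm(), 'full_mesh_solution.h5', 'w')
out.write(mesh_cube, "/my_mesh")

# Function space on the full mesh instead of on the MeshView
V = FunctionSpace(mesh_cube, "Lagrange", 1)
sol = Function(V, name="Sol")
sol.assign(interpolate(Expression("100*x[2]*x[2]*x[2]", degree=3), V))

out.write(sol, "/solution/sol_0")
out.close()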

Thanks @dokken for the suggestions. I would be grateful for some direction from @cdaversin.

Hi! Would there be any alternative way to achieve the same objective, or maybe something to use in addition to MeshView?

Other than MeshView, I have also tried SubMesh, i.e. mesh = SubMesh(mesh_cube, mesh_function, 1), but SubMesh is not compatible with MPI. To be specific, even if only writing in serial and then reading with MPI could be made to work, that would still be enough to solve my issue (a sketch of that pattern, using the full-mesh approach, follows the traceback below):

Traceback (most recent call last):
  File "read_h5_file_solution.py", line 16, in <module>
    mesh = SubMesh(mesh_cube, mesh_function, 1)
RuntimeError: 

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
***     fenics-support@googlegroups.com
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error:   Unable to perform operation in parallel.
*** Reason:  SubMesh::init is not yet working in parallel.
***          Consider filing a bug report at https://bitbucket.org/fenics-project/dolfin/issues.
*** Where:   This error was encountered inside log.cpp.
*** Process: 9
*** 
*** DOLFIN version: 2019.2.0.dev0
*** Git changeset:  3eacdb46ed4e6dcdcbfb3b2f5eefa73f32e1b8a8
*** -------------------------------------------------------------------------
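For reference, a minimal sketch of the serial-write / parallel-read pattern using the full-mesh approach suggested above; it assumes the file written by the earlier sketch, and the integral is just an illustrative, partition-independent quantity for comparing runs:

from dolfin import *

# Read back the mesh and solution written by the full-mesh sketch above.
# This can be run with any number of processes, e.g.
#   mpirun -np 4 python3 read_full_mesh_solution.py
mesh_in = Mesh()
h5 = HDF5File(mesh_in.mpi_comm(), 'full_mesh_solution.h5', 'r')
h5.read(mesh_in, "/my_mesh", False)

V_in = FunctionSpace(mesh_in, "Lagrange", 1)
sol_in = Function(V_in, name="Sol")
h5.read(sol_in, "/solution/sol_0")
h5.close()

# A global quantity such as an integral does not depend on the MPI
# partition, so it gives a quick consistency check across process counts.
total = assemble(sol_in * dx)
if MPI.rank(mesh_in.mpi_comm()) == 0:
    print("integral of sol:", total)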