I’m running a reaction-diffusion simulation with FEniCS (C++) on a cluster with 24 MPI processes. The mesh is created with the following Python script and stored in a local folder:
from dolfin import RectangleMesh, Point, HDF5File, MPI

def rect_mesh(w, b, res):
    # Build a w x b rectangle mesh with res x res cells (split into triangles)
    mesh = RectangleMesh(Point(0, 0), Point(w, b), res, res, diagonal="right")
    path = "../mesh/rect-" + str(w) + "on" + str(b) + "-res-" + str(res) + ".h5"
    file = HDF5File(MPI.comm_world, path, 'w')
    file.write(mesh, "/mesh")
    file.close()  # make sure the HDF5 file is flushed to disk
When I run my code, the hdf5.read(…) call never returns once the mesh exceeds a certain size. With a 10 x 10 mesh and 170 cells in each direction (about 60k elements, 120k DOFs) everything runs normally. But with a 10 x 10 mesh and 244 cells in each direction (about 120k elements, 240k DOFs), hdf5.read(…) never returns and the program hangs:
std::shared_ptr<dolfin::Mesh> mesh = std::make_shared<dolfin::Mesh>();
auto hdf5 = dolfin::HDF5File(MPI_COMM_WORLD, pathToMesh, std::string("r"));
hdf5.read(*mesh, "/mesh", false);
Is this a bug, or does anyone know the reason for this or a workaround?