This functionality is meant for problems where you have already saved your mesh with
HDF5File in parallel, and the next time you run the program you want to reuse the same partitioning (to save time). This also means that if you create your mesh in serial, there is no partitioning.
You can observe this with the following minimal example:
from dolfin import *

filename = "mesh.h5"

# Write the mesh to file (in parallel, HDF5File also stores the partitioning)
mesh0 = UnitSquareMesh(20, 20)
mesh_file = HDF5File(mesh0.mpi_comm(), filename, "w")
mesh_file.write(mesh0, "/my_mesh")
mesh_file.close()

# Read the mesh back, reusing the stored partition (third argument True)
mesh1 = Mesh()
mesh_file = HDF5File(mesh0.mpi_comm(), filename, "r")
mesh_file.read(mesh1, "/my_mesh", True)
mesh_file.close()

# Inspect the partition attribute stored with the cell topology
import h5py
from mpi4py import MPI

if MPI.COMM_WORLD.rank == 0:
    with h5py.File(filename, "r") as infile:
        mesh = infile["/my_mesh"]
        partition = mesh["topology"].attrs["partition"]
        print(partition)
If you run this with one process, you obtain
[0]
and with three processes:
[ 0 265 536]
This means that the first rank owns cells [0, 265), the second rank cells [265, 536), and the third rank cells [536, 800).
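To make the arithmetic explicit, here is a small sketch (assuming the three-process output above and the 800 cells of the 20x20 mesh) of how these half-open ranges follow from the stored offsets:

import numpy as np

partition = np.array([0, 265, 536])  # offsets read from the file above
num_cells = 800                      # total number of cells in UnitSquareMesh(20, 20)

# Each rank owns the cells from its offset up to the next offset (or the end)
starts = partition
ends = np.append(partition[1:], num_cells)
for rank, (start, end) in enumerate(zip(starts, ends)):
    print(f"rank {rank}: cells [{start}, {end})")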
If you want to add such a partitioning scheme to a mesh created in serial, you can use
h5py to specify which range of the cell topology belongs to each process, as sketched below.
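As a minimal sketch (assuming the mesh was stored in serial under /my_mesh in mesh.h5 as above, that num_processes is the number of MPI ranks you intend to run with, and that a contiguous, roughly even split of the cells is acceptable), this could look like:

import h5py
import numpy as np

filename = "mesh.h5"
num_processes = 3  # number of MPI ranks you intend to run with

with h5py.File(filename, "r+") as infile:
    topology = infile["/my_mesh/topology"]
    num_cells = topology.shape[0]
    # Start offset of each rank's contiguous range of cells, in the same
    # layout as the "partition" attribute shown in the output above
    offsets = np.array(
        [rank * num_cells // num_processes for rank in range(num_processes)],
        dtype=np.int64,
    )
    topology.attrs["partition"] = offsets

Reading the file afterwards with HDF5File, with the third argument of read set to True, should then distribute the cells over the processes according to these offsets. Note that such an even split by cell index is only a naive choice; a graph partitioner would normally pick the ranges to reduce communication between processes.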