Create Mesh object or xdmf file in parallel

Dear all,

I have my own parallel (MPI-based) mesh generator in C++/Python, and I am wondering whether it is possible to directly create a FEniCS/DOLFIN mesh object from the data held in each process and then solve on it. I tried the MWE below:

import os, sys
import numpy as np
import fenics as fe
import my_own_mesh_generator as pqm
pqm.MPI_initialize(len(sys.argv), sys.argv) # initiate MPI
fem = pqm.FEM2D()                           # create mesh generator object
fem.mesh_generator(sys.argv)                # generate mesh
pqm.MPI_barrier()                           # end parallel mesh generation
# Now I have the nodes and elements of each subdomain in each process.
# I hope saving whole mesh into one single file in xdmf format.
comm = fe.MPI.comm_world
rank = fe.MPI.rank(comm)
size = fe.MPI.size(comm)
nodes, cells, total_node, total_cell, Local2Global_node_id = fem.mesh_generator.mesh.info
# The only way I know for saving mesh as xdmf is creating a fenics mesh then writing as below
mesh = fe.Mesh(comm)
editor = fe.MeshEditor()
editor.open(mesh, type="triangle", tdim=2, gdim=2)
editor.init_vertices_global(nodes.shape[0], total_node)
editor.init_cells_global(cells.shape[0], total_cell)
for k, pt in enumerate(nodes):
    editor.add_vertex_global(k, Local2Global_node_id[k], pt)
for k, cell in enumerate(cells):
    editor.add_cell(k, cell)
editor.close()
with fe.XDMFFile(comm, 'test_mesh.xdmf') as outfile:
    outfile.write(mesh)

But I got this error:

*** Error:   Unable to get shared mesh entities.
*** Reason:  Shared mesh entities have not been computed for dim 0.
*** Where:   This error was encountered inside MeshTopology.cpp.
*** Process: 0
*** 
*** DOLFIN version: 2019.1.0
*** Git changeset:  77c758717da1cb8017bf95109363bb1a91a4f8e5

I read an issue report similar to my question; is there any update on it, or is this feature simply not possible with FEniCS?

Thank you.

Hi @chris, do you have any suggestions?
I would like to run the whole simulation, from mesh generation to solution, in one parallel program. If that is not possible, I would also be satisfied with writing the xdmf file in parallel.
I found the slides titled Parallel I/O and Parallel Refinement, which seem similar to what I want. If possible, may I have the source code of cnr12/python/bigmesh.py? I tried to find it but could not.
Thanks.
Ming

I would suggest writing your mesh to xdmf, as it is quite wasteful to regenerate the mesh for every run (especially for large problems). This is also a simple way of handling the parallel I/O, as the mesh can be written to xdmf in serial and read back in in parallel.

Thanks @dokken. Is there a method/example for writing a mesh to xdmf in parallel?

I would say the simplest is to generate the mesh in serial, and load it in parallel.
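For reference, a minimal sketch of that workflow might look like the following (UnitSquareMesh is just a stand-in for your own mesh generator; the file name serial_mesh.xdmf is arbitrary):

# write_mesh.py  (run in serial: python3 write_mesh.py)
import fenics as fe
mesh = fe.UnitSquareMesh(2, 2)              # stand-in for the real mesh generator
with fe.XDMFFile('serial_mesh.xdmf') as outfile:
    outfile.write(mesh)

# read_mesh.py  (run in parallel: mpirun -n 4 python3 read_mesh.py)
import fenics as fe
mesh = fe.Mesh()
with fe.XDMFFile('serial_mesh.xdmf') as infile:
    infile.read(mesh)
# DOLFIN partitions the mesh across the MPI processes when reading it back.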

For anyone with a similar need, I figured out a way to write the h5 and xdmf files in parallel using mpi4py and h5py.

import os, sys
import numpy as np
import mpi4py, h5py
comm = mpi4py.MPI.COMM_WORLD
rank = comm.rank
model_name = 'my_test'
file = h5py.File(model_name + '.h5', 'w', driver='mpio', comm=comm)
MeshGroup = file.create_group('Mesh')
meshGroup = MeshGroup.create_group('mesh')
geom = meshGroup.create_dataset('geometry', (9, 2), dtype='float64')
if rank == 0:
    geom[0] = [0, 0]
    geom[1] = [0.5, 0]
    geom[2] = [1, 0]
elif rank == 1:
    geom[3] = [0, 0.5]
    geom[4] = [0.5, 0.5]
elif rank == 2:
    geom[5] = [1, 0.5]
    geom[6] = [0, 1]
elif rank == 3:
    geom[7] = [0.5, 1]
    geom[8] = [1, 1]
# I think the 'partition' attribute is not necessary
# geom_num_partition = np.array([rank*3], 'uint')
# geom_all_partition = np.zeros(size, 'uint')
# comm.Allgather(geom_num_partition, geom_all_partition)
# geom.attrs['partition'] = geom_all_partition
topo = meshGroup.create_dataset('topology', (8, 3), dtype='i')
if rank == 0:
    topo[0] = [3, 4, 7]
    topo[1] = [3, 6, 7]
elif rank == 1:
    topo[[2, 3]] = [[0, 1, 4], [0, 3, 4]]
elif rank == 2:
    topo[4:6, :] = [[1, 2, 5], [1, 4, 5]]
elif rank == 3:
    topo[6:8, :] = [[4, 5, 8], [4, 7, 8]]
# Likewise, this 'partition' attribute is not necessary
# topo_num_partition = np.array([rank*2], 'uint')
# topo_all_partition = np.zeros(size, 'uint')
# comm.Allgather(topo_num_partition, topo_all_partition)
# topo.attrs['partition'] = topo_all_partition
if rank == 0:
    outfile = open(model_name + '.xdmf', 'w')
    outfile.write('<?xml version="1.0"?>\n<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>\n<Xdmf Version="3.0" xmlns:xi="http://www.w3.org/2001/XInclude">\n')
    outfile.write('  <Domain>\n    <Grid Name="mesh" GridType="Uniform">\n')
    outfile.write('      <Topology NumberOfElements="' + str(topo.shape[0]) + '" TopologyType="Triangle" NodesPerElement="3">\n')
    outfile.write('        <DataItem Dimensions="' + str(topo.shape[0]) + ' 3" NumberType="UInt" Format="HDF">' + model_name + '.h5:/Mesh/mesh/topology</DataItem>\n')
    outfile.write('      </Topology>\n      <Geometry GeometryType="XY">\n')
    outfile.write('        <DataItem Dimensions="' + str(geom.shape[0]) + ' 2" Format="HDF">' + model_name + '.h5:/Mesh/mesh/geometry</DataItem>\n')
    outfile.write('      </Geometry>\n    </Grid>\n  </Domain>\n</Xdmf>\n')
    outfile.close()
file.close()

You need h5py built with parallel (MPI) support for this to work. Since the ranks 0 to 3 are hard-coded, run the script with four processes (e.g. mpirun -n 4); it then generates xdmf and h5 files containing the same mesh as mesh = fe.UnitSquareMesh(comm, 2, 2).
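As a sanity check, here is a sketch (assuming the files above were written as my_test.xdmf/my_test.h5) that reads the result back into FEniCS in parallel and compares it against the built-in mesh:

# check_mesh.py  (run with: mpirun -n 4 python3 check_mesh.py)
import fenics as fe
comm = fe.MPI.comm_world
mesh = fe.Mesh(comm)
with fe.XDMFFile(comm, 'my_test.xdmf') as infile:
    infile.read(mesh)
ref = fe.UnitSquareMesh(comm, 2, 2)
# Global entity counts should agree: 9 vertices and 8 triangles.
assert mesh.num_entities_global(0) == ref.num_entities_global(0)
assert mesh.num_entities_global(2) == ref.num_entities_global(2)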