Hello,
I have run into a strange issue when assembling a scalar over an interior facet measure (dS):
MWE script:
from mpi4py import MPI
from dolfinx import io, fem, __version__
import ufl

if MPI.COMM_WORLD.rank == 0:
    print(__version__)

path = 'path/to/mesh.msh'
domain, cell_tags, facet_tags = io.gmshio.read_from_msh(path, MPI.COMM_WORLD, 0, gdim=3)
dS = ufl.Measure("dS", domain=domain, subdomain_data=facet_tags)

# Each rank prints its own (rank-local) value of the assembled scalar
scalar_value = fem.assemble_scalar(fem.form(8 * dS(2)))
print(scalar_value)
When I run this with mpirun -n 1 python3 issue_dS.py, I get the following output, which looks correct:
0.8.0
Info : Reading 'meshes/urbanek/mesh.msh'...
Info : 75 entities
Info : 224103 nodes
Info : 1255543 elements
Info : Done reading 'meshes/urbanek/mesh.msh'
0.8593967035684853
When I run it with mpirun -n 2 python3 issue_dS.py, I get this output (one rank-local value per process), which also looks correct:
0.8.0
Info : Reading 'meshes/urbanek/mesh.msh'...
Info : 75 entities
Info : 224103 nodes
Info : 1255543 elements
Info : Done reading 'meshes/urbanek/mesh.msh'
0.47619510276556726
0.38313327447585555
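As I understand it, fem.assemble_scalar returns only the rank-local contribution, which is why each process prints its own value. A minimal sketch of how I combine them into a single number, assuming the same domain and dS as in the MWE above:

# Sum the rank-local contributions across all processes
local_value = fem.assemble_scalar(fem.form(8 * dS(2)))
global_value = domain.comm.allreduce(local_value, op=MPI.SUM)
if domain.comm.rank == 0:
    print(global_value)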
But when I increase the number of processes further, e.g. mpirun -n 20 python3 issue_dS.py, the program hangs in this state and never finishes:
0.8.0
Info : Reading 'meshes/urbanek/mesh.msh'...
Info : 75 entities
Info : 224103 nodes
Info : 1255543 elements
Info : Done reading 'meshes/urbanek/mesh.msh'
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
I do not know whether this is an issue with the mesh or with dolfinx. Am I doing something wrong? The problem arises when I increase the number of processes above 3. I suspect it happens when the mesh is partitioned in such a way that some processes have no facets with tag 2.
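A quick way to check this hypothesis, using the facet_tags from the MWE (just a sketch, I have not confirmed that this explains the hang):

# Count the facets with tag 2 that are local to each rank
n_tagged = len(facet_tags.find(2))
print(f"rank {domain.comm.rank}: {n_tagged} facets tagged 2")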
The mesh was generated with gmsh from a STEP file. Here is a link to the mesh (58 MB).
Is this an issue in dolfinx? If so, I will report it on GitHub.
Thank you for any reply.