Refined mesh and "dS" term in parallel: RuntimeError: Cannot compute interior facet integral over interprocess facet

Please help me understand and resolve this error message:
RuntimeError: Cannot compute interior facet integral over interprocess facet. Please use ghost mode shared facet when creating the mesh

The error

  • arises when running the code in parallel
  • does not arise when running on one processor (including with refinement), BUT the solution is compromised, as can be seen in the nicely tiled, nonphysical contours drawn by matplotlib
  • does not arise when the refinement is commented out (in both serial and parallel runs)

I may be missing some communication on the mesh after refining.

Please see the MWE below (the weak form is reduced to a very basic linear, nonphysical, but working equation, just to reproduce the error):

import numpy as np
from mpi4py import MPI
from petsc4py import PETSc
from ufl import TestFunction, TrialFunction, dx, ds, dS, inner, grad, jump, avg
import dolfinx
from dolfinx.fem import functionspace, Function
from dolfinx.fem.petsc import LinearProblem
from dolfinx.mesh import create_unit_square, refine, compute_incident_entities

comm = MPI.COMM_WORLD

################
### MESH
################
Nx, Ny = 12, 12 # Mesh size
mesh = create_unit_square(comm, Nx, Ny, ghost_mode = dolfinx.cpp.mesh.GhostMode.shared_facet)

################
### REFINE
################
mesh.topology.create_entities(1)
#mark cells
cells = np.arange(mesh.topology.index_map(mesh.topology.dim).size_local, dtype=np.int32) # all cells
edges = compute_incident_entities(mesh.topology, cells, 2, 1)
mesh, _, _ = refine(mesh, edges)
#mesh, _, _ = refine(mesh) #  all cells as well

################
V = functionspace(mesh, ("Lagrange", 1)) 
#V = functionspace(mesh, ("DG", 1)) 
u = TrialFunction(V)
v = TestFunction(V)

################
### WEAK FORM
################
a = u*v*dx
L = avg(v)*dS
problem = LinearProblem(a, L)
uh = problem.solve()

##########
#uh.x.scatter_forward()
#
## Add values from ghost regions and accumulate them on the owning process
#uh.x.scatter_reverse(dolfinx.la.InsertMode.add)

and the actual error message:

mpirun -n 2 python MWE.py 
  File "/data/FEM/fenics/dolfinx_0.10.0/MPI/refine_dS/MWE.py", line 40, in <module>
    problem = LinearProblem(a, L)
  File "/home/jnk/.local/lib/python3.13/site-packages/dolfinx/fem/petsc.py", line 839, in __init__
    self._L = _create_form(
              ~~~~~~~~~~~~^
        L,
        ^^
    ...<2 lines>...
        jit_options=jit_options,
        ^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/home/jnk/.local/lib/python3.13/site-packages/dolfinx/fem/forms.py", line 335, in form
    return _create_form(form)
  File "/home/jnk/.local/lib/python3.13/site-packages/dolfinx/fem/forms.py", line 329, in _create_form
    return _form(form)
  File "/home/jnk/.local/lib/python3.13/site-packages/dolfinx/fem/forms.py", line 304, in _form
    f = ftype(
        module.ffi.cast("uintptr_t", module.ffi.addressof(ufcx_form)),
    ...<5 lines>...
        mesh,
    )
RuntimeError: Cannot compute interior facet integral over interprocess facet. Please use ghost mode shared facet when creating the mesh
Exception ignored in: <function LinearProblem.__del__ at 0x74cbf5522d40>
Traceback (most recent call last):
  File "/home/jnk/.local/lib/python3.13/site-packages/dolfinx/fem/petsc.py", line 881, in __del__
Exception ignored in: <function LinearProblem.__del__ at 0x7036448d6d40>
Traceback (most recent call last):
  File "/home/jnk/.local/lib/python3.13/site-packages/dolfinx/fem/petsc.py", line 881, in __del__
    self._solver.destroy()
AttributeError: 'LinearProblem' object has no attribute '_solver'
    self._solver.destroy()
AttributeError: 'LinearProblem' object has no attribute '_solver'
--------------------------------------------------------------------------
prterun detected that one or more processes exited with non-zero status,
thus causing the job to be terminated. The first process to do so was:

   Process name: [prterun-archlinux-595914@1,0]
   Exit code:    1
--------------------------------------------------------------------------

DOLFINx version is 0.10.0.
Thank you!

You should use:

mesh, _, _ = refine(mesh, edges, partitioner=dolfinx.mesh.create_cell_partitioner(dolfinx.mesh.GhostMode.shared_facet))

Thank you, this resolves the MPI “shared facet” error as requested.

The other problem regarding the refinement remains (it is unrelated to MPI): the solution is corrupted, or at least different, when using an initial square mesh of size 24x24 (1152 cells) vs. a 12x12 mesh refined into 24x24 (1152 cells), as illustrated by the picture in my original post.
Regardless of the meaningfulness of the weak form, I would expect those meshes and the corresponding function spaces to behave the same.

I am not sure what you are trying to compute with L = avg(v) * dS, as the meshes are different.
Consider:

import numpy as np
from mpi4py import MPI
from ufl import TestFunction, TrialFunction, dx, SpatialCoordinate, cos, avg, dS
import dolfinx
from dolfinx.fem import functionspace
from dolfinx.fem.petsc import LinearProblem
from dolfinx.mesh import create_unit_square, refine, compute_incident_entities

refine_mesh = False
comm = MPI.COMM_WORLD

################
### MESH
################
if refine_mesh:
    Nx, Ny = 12, 12  # Mesh size
else:
    Nx, Ny = 24, 24
mesh = create_unit_square(
    comm, Nx, Ny, ghost_mode=dolfinx.cpp.mesh.GhostMode.shared_facet
)

################
### REFINE
################
if refine_mesh:
    mesh.topology.create_entities(1)
    # mark cells
    cells = np.arange(
        mesh.topology.index_map(mesh.topology.dim).size_local, dtype=np.int32
    )  # all cells
    edges = compute_incident_entities(mesh.topology, cells, 2, 1)
    mesh, _, _ = refine(
        mesh,
        edges,
        partitioner=dolfinx.mesh.create_cell_partitioner(
            dolfinx.mesh.GhostMode.shared_facet
        ),
    )

################
V = functionspace(mesh, ("Lagrange", 1))
u = TrialFunction(V)
v = TestFunction(V)

################
### WEAK FORM
################
a = u * v * dx
x = SpatialCoordinate(mesh)
f = x[0] + 2 * cos(x[1])
L = avg(v) * dS
# L = f * v * dx
problem = LinearProblem(
    a,
    L,
    petsc_options={
        "ksp_type": "preonly",
        "pc_type": "lu",
        "pc_factor_mat_solver_type": "mumps",
        "ksp_error_if_not_converged": True,
    },
)
uh = problem.solve()

with dolfinx.io.XDMFFile(MPI.COMM_WORLD, "u.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    xdmf.write_function(uh)

Running the code with refine_mesh=False you get:

[figure: mesh and solution with refine_mesh=False]

while with refine_mesh=True you get:

[figure: mesh and solution with refine_mesh=True]

which is a different mesh.
Using a smooth function, say

x = SpatialCoordinate(mesh)
f = x[0] + 2 * cos(x[1])
L = f * v * dx

you get a smooth solution, regardless of the option, so this is down to the RHS of your form.
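
As a side note, here is a minimal sketch (not part of the original snippet) of one way to check this quantitatively, assuming the smooth RHS L = f * v * dx is used so that uh is the L2 projection of f; the projection error should be small and essentially identical for both values of refine_mesh:

import numpy as np
from mpi4py import MPI
import dolfinx.fem as fem

# Assumes uh, f, dx and comm from the script above (real scalar type).
err_form = fem.form((uh - f) ** 2 * dx)    # L2 projection error functional
err_local = fem.assemble_scalar(err_form)  # local (per-rank) contribution
err = np.sqrt(comm.allreduce(err_local, op=MPI.SUM))
if comm.rank == 0:
    print(f"L2 projection error: {err:.3e}")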
What are you trying to compute?

Right, the smooth RHS was working as expected (a projection of the expression onto the solution function); I just replaced it with L = avg(v) * dS to reproduce the error I faced when trying to reformulate a quite complex, nonlinear system into a discontinuous Galerkin form inspired by demo_navier-stokes, including jumps and averages.
The error was independent of the P1/DG function spaces, which is probably not the case for other aspects of the problem.

I wouldn’t like to bother anybody here with providing guidance on the mathematics.
I was just worried about the different behavior of two supposedly identical meshes, so if the meshes are in fact different, the question is answered.

If you further modify the mesh diagonal to left_right:

mesh = create_unit_square(
    comm, Nx, Ny,
    ghost_mode=dolfinx.cpp.mesh.GhostMode.shared_facet,
    diagonal=dolfinx.cpp.mesh.DiagonalType.left_right,
)

you indeed get similar behavior (at least in the portion where the meshes overlap), so there is no concern about the mesh anymore.
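
For completeness, a minimal sketch (not from the original discussion; it reuses the mesh and comm variables from the snippet above) of one way to check that two meshes carry the same dS measure: the total interior facet length is exactly the kind of quantity the avg(v) * dS right-hand side is sensitive to.

import dolfinx
import dolfinx.fem as fem
from mpi4py import MPI
from ufl import avg, dS

# Assumes mesh and comm from the snippet above.
one = fem.Constant(mesh, dolfinx.default_scalar_type(1.0))
facet_form = fem.form(avg(one) * dS)            # integrates 1 over all interior facets
length_local = fem.assemble_scalar(facet_form)  # local (per-rank) contribution
length = comm.allreduce(length_local, op=MPI.SUM)
if comm.rank == 0:
    print(f"total interior facet length: {length:.6f}")

Running this for the refined 12x12 mesh and for the 24x24 left_right mesh should give matching numbers if the meshes really coincide.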
Really appreciate your prompt help!

For the same error, but when a mesh is imported with gmshio.read_from_msh, i.e.:

gmshio.read_from_msh(filename+'.msh', MPI.COMM_WORLD, gdim=3)

How should GhostMode be specified in that case?

As seen in the API documentation (dolfinx.io.gmshio — DOLFINx 0.9.0 documentation), you can send in a partitioner, as is done for refine, i.e. pass

dolfinx.mesh.create_cell_partitioner(dolfinx.mesh.GhostMode.shared_facet)

to read_from_msh.
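
For example, a minimal sketch (my addition; "mesh.msh" is a placeholder filename, and the exact return value of read_from_msh differs between DOLFINx versions, so unpack it accordingly):

from mpi4py import MPI
import dolfinx
from dolfinx.io import gmshio

partitioner = dolfinx.mesh.create_cell_partitioner(dolfinx.mesh.GhostMode.shared_facet)
# Recent versions return a MeshData-like object; older versions return a
# (mesh, cell_tags, facet_tags) tuple.
mesh_data = gmshio.read_from_msh(
    "mesh.msh", MPI.COMM_WORLD, gdim=3, partitioner=partitioner
)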