I do 2D Poisson/drift-diffusion semiconductor device modeling using legacy dolfin, as implemented in Simudo. I have had a devil of a bug that I’ve spent months tracking down, and I have narrowed it down to the following simple MWE. The real problem I care about is a nonlinear problem on a mixed space; it occurs consistently when I use meshes that I generate with pygmsh/gmsh and does not occur when I use meshes that we make manually (by specifying explicit points/triangles).
With that said, here’s the issue. I make a very simple mesh, divided into three regions. In this case, I have only 4 triangles per region, and the figure shows the numbering of the cells.
I make a `dx` measure restricted to only some of the regions (I use 1 and 3 in the example below). If I add a term involving `dS` to the weak form, one or more cells from one region behave as though they were in a different region.
My simple mesh is available here:
mesh.xdmf mesh.h5
And here is an MWE:
import dolfin
import numpy as np
mesh = dolfin.Mesh()
with dolfin.XDMFFile("mesh.xdmf") as xdmf:
    xdmf.read(mesh)
    cell_markers = dolfin.MeshFunction('size_t', mesh, mesh.topology().dim())
    xdmf.read(cell_markers, "cv")
# Define integration measures on subdomains
dx = dolfin.dx(domain=mesh, subdomain_data=cell_markers)
dx_middle = dx(2)
dx_else = dx((1,3))
dS = dolfin.dS(domain=mesh)  # internal facet measure
# Set up the DG0 space
DG0_space = dolfin.FunctionSpace(mesh, "DG", 0)
u = dolfin.Function(DG0_space)
v = dolfin.TestFunction(DG0_space)
# Here is our weak form. It's singular in this case,
# so we're not going to try to solve it but just look at the assembled matrix.
F = u * v * dx_else
# The next term should have no effect at all. Not only is it
# multiplied by zero, but it contains no trial functions, so even
# if the constant were nonzero, it would still only contribute to the rhs vector b.
# But if it is included, the assembled matrix changes.
include_dS = True
if include_dS:
    F += dolfin.Constant(0.0) * v("+") * dS
#This is a linear problem, but let's treat it as a nonlinear one:
J = dolfin.derivative(F,u)
bcs = []
assembler = dolfin.fem.assembling.SystemAssembler(J, F, bcs)
A = dolfin.PETScMatrix()
assembler.assemble(A)
A_array = A.array()
#Find which dofs are in the middle region:
middle_region = [0.1,0.2]
def region_dofs(space):
    dofmap = space.dofmap()
    region_dofs = []
    for cell in dolfin.cells(mesh):
        cell_dofs = dofmap.cell_dofs(cell.index())
        cell_coords = cell.get_vertex_coordinates()
        cell_coords = np.reshape(cell_coords, (-1, 2))
        if np.all(middle_region[0] <= cell_coords[:, 0]) and np.all(cell_coords[:, 0] <= middle_region[1]):
            region_dofs.extend(cell_dofs)
    return list(set(region_dofs))
dofs = DG0_space.dofmap().dofs() # len 12
middle_dofs = region_dofs(DG0_space) # len 4
# Rows of the A matrix correspond to test functions v.
# The test functions corresponding to the middle region (marked with a *) should
# have no nonzeros. When include_dS is False, that happens correctly.
# When include_dS is True, however, two cells seem to swap places, and one cell in the middle
# region (#3) gets a nonzero while one cell in the outer region (#7) gets a zero
# Note that for other setups of this problem, the swapping has occurred between cells 2 and 7.
print(f"Include_dS: {include_dS}")
print(f"* marks cell in the middle region.")
for ind, i in enumerate(dofs):
    nonzero_row = np.nonzero(A_array[i, :])[0]
    extra = "*" if i in middle_dofs else ""
    print(f"Row corresponding to dof {i}{extra} has {len(nonzero_row)} nonzeros. Should be {0 if i in middle_dofs else 1}.")
When `include_dS` is `False`, the output is:
Include_dS: False
* marks cell in the middle region.
Row corresponding to dof 0* has 0 nonzeros. Should be 0.
Row corresponding to dof 1* has 0 nonzeros. Should be 0.
Row corresponding to dof 2* has 0 nonzeros. Should be 0.
Row corresponding to dof 3* has 0 nonzeros. Should be 0.
Row corresponding to dof 4 has 1 nonzeros. Should be 1.
Row corresponding to dof 5 has 1 nonzeros. Should be 1.
Row corresponding to dof 6 has 1 nonzeros. Should be 1.
Row corresponding to dof 7 has 1 nonzeros. Should be 1.
Row corresponding to dof 8 has 1 nonzeros. Should be 1.
Row corresponding to dof 9 has 1 nonzeros. Should be 1.
Row corresponding to dof 10 has 1 nonzeros. Should be 1.
Row corresponding to dof 11 has 1 nonzeros. Should be 1.
As it should be.
When `include_dS` is `True`, however, the output should be identical but in fact is:
Include_dS: True
* marks cell in the middle region.
Row corresponding to dof 0* has 0 nonzeros. Should be 0.
Row corresponding to dof 1* has 0 nonzeros. Should be 0.
Row corresponding to dof 2* has 1 nonzeros. Should be 0.
Row corresponding to dof 3* has 0 nonzeros. Should be 0.
Row corresponding to dof 4 has 1 nonzeros. Should be 1.
Row corresponding to dof 5 has 1 nonzeros. Should be 1.
Row corresponding to dof 6 has 1 nonzeros. Should be 1.
Row corresponding to dof 7 has 0 nonzeros. Should be 1.
Row corresponding to dof 8 has 1 nonzeros. Should be 1.
Row corresponding to dof 9 has 1 nonzeros. Should be 1.
Row corresponding to dof 10 has 1 nonzeros. Should be 1.
Row corresponding to dof 11 has 1 nonzeros. Should be 1.
You can see that rows 2 and 7 have exchanged. In this case, the `A` matrix is only 12x12, so one can inspect the whole thing and see that only the diagonal values are nonzero; they are just in the wrong rows.
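To confirm that the discrepancy is exactly a row transposition, one can diff the good and bad assembled matrices directly. Here is a numpy sketch on stand-in 12x12 arrays mimicking the output above (the values are hypothetical placeholders, not taken from the real assembly):

```python
import numpy as np

n = 12
# Stand-in for the correct matrix: diagonal entries for the eight
# outer-region cells (dofs 4..11), zero rows for the four middle-region cells.
A_good = np.zeros((n, n))
for i in range(4, n):
    A_good[i, i] = 1.0

# Stand-in for the faulty matrix: rows 2 and 7 exchanged, as in the output above.
A_bad = A_good.copy()
A_bad[[2, 7], :] = A_bad[[7, 2], :]

# Find the rows where the two matrices differ.
differing_rows = np.where(np.any(A_good != A_bad, axis=1))[0]
print(differing_rows)  # [2 7]

# Verify the difference is exactly a transposition of those two rows.
A_fixed = A_bad.copy()
A_fixed[[2, 7], :] = A_fixed[[7, 2], :]
print(np.array_equal(A_fixed, A_good))  # True
```

On the real matrices (`A_array` with `include_dS` True vs. False), the same comparison shows only rows 2 and 7 differing.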
I am mystified. Is there a problem with this mesh that you can see? Or something that goes wrong when using both `dS` and a domain-restricted `dx` in a weak form, but only with meshes from pygmsh/gmsh? I know that dolfinx is preferred at this point, but we have a large amount of code relying on legacy dolfin, so the switch would not be easy.
This problem also occurs without using `dolfin.derivative`. For example, letting
u0 = dolfin.TrialFunction(DG0_space)
F0 = u0 * v * dx_else
F0 += dolfin.Constant(0.0) * v("+") * dS
assembler = dolfin.fem.assembling.SystemAssembler(dolfin.lhs(F0), dolfin.rhs(F0), bcs)
A0 = dolfin.PETScMatrix()
assembler.assemble(A0)
A0_array = A0.array()
gives the same problem.
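One cross-check that may help localize this: the coordinate-based cell selection in `region_dofs` should agree with selecting cells directly from `cell_markers.array()`; if the two disagree on the real mesh, the marker data read from the XDMF file is misaligned with dolfin's cell indexing. A minimal numpy sketch of that comparison, using mock per-cell data in place of the dolfin objects (the specific values are made up for illustration):

```python
import numpy as np

# Mock stand-ins for the dolfin objects: each cell's x-coordinate range,
# and the subdomain marker assigned to that cell (marker 2 = middle region).
cell_x_ranges = np.array([[0.10, 0.20], [0.12, 0.18], [0.00, 0.08], [0.25, 0.30]])
cell_markers_array = np.array([2, 2, 1, 3])

# Coordinate-based selection, mirroring the logic of region_dofs() above.
middle_region = [0.1, 0.2]
coord_based = {i for i, (lo, hi) in enumerate(cell_x_ranges)
               if middle_region[0] <= lo and hi <= middle_region[1]}

# Marker-based selection: trust the subdomain markers instead.
marker_based = set(np.where(cell_markers_array == 2)[0])

# On the real mesh one would compare the dofs of these two cell sets;
# any mismatch would point at marker/cell-index misalignment.
print(coord_based == marker_based)  # True for this mock data
```

In the real script, the marker-based version would map cells with `cell_markers.array() == 2` through `DG0_space.dofmap().cell_dofs(...)` and compare against `middle_dofs`.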
I am running dolfin 2019.2.0.dev0 installed with apt in an arm64 Ubuntu jammy docker container on Apple silicon. I have also reproduced this issue in a similar docker container on mantic running dolfin 2019.2.0.13.dev0.
Any help greatly appreciated!