Inconsistent results in parallel computation

Hi all, I am getting inconsistent results when I run my code in parallel. Here is a minimal example for 2D elasticity.

import dolfinx
import numpy as np
from mpi4py import MPI
from dolfinx.cpp.mesh import CellType
import ufl

# design domain
Lx = 2 # length of the domain in x-direction
Ly = 1 # length of the domain in y-direction
Nelex = 2 # number of elements in x-direction
Neley = 2 # number of elements in y-direction

# material parameters
E = 1 # elastic modulus
nu = 0.3 # Poisson's ratio

mesh = dolfinx.RectangleMesh(MPI.COMM_WORLD, [np.array([0, 0, 0]), np.array([Lx, Ly, 0])], \
    [Nelex, Neley], CellType.quadrilateral)
V = dolfinx.VectorFunctionSpace(mesh, ("CG", 1))

# supports
def clamped_boundary(x):
    return np.isclose(x[0], 0)
fdim = mesh.topology.dim - 1
boundary_facets = dolfinx.mesh.locate_entities_boundary(mesh, fdim, clamped_boundary)
u_D = dolfinx.Function(V)
with u_D.vector.localForm() as loc:
    loc.set(0)
bc = dolfinx.DirichletBC(u_D, dolfinx.fem.locate_dofs_topological(V, fdim, boundary_facets))

# loads
T = dolfinx.Constant(mesh, (0, 1)) # traction
f = dolfinx.Constant(mesh, (0, 0)) # body force
# identifying the facets contained in each boundary
boundaries = [(1, lambda x: np.isclose(x[0], 0)),
              (2, lambda x: np.isclose(x[0], Lx)),
              (3, lambda x: np.isclose(x[1], 0)),
              (4, lambda x: np.isclose(x[1], Ly))]
# loop through all the boundary conditions to identify the facets
facet_indices, facet_markers = [], []
for (marker, locator) in boundaries:
    facets = dolfinx.mesh.locate_entities(mesh, fdim, locator)
    facet_indices.append(facets)
    facet_markers.append(np.full(len(facets), marker))
facet_tag = dolfinx.MeshTags(mesh, fdim, np.array(np.hstack(facet_indices),dtype=np.int32), \
    np.array(np.hstack(facet_markers),dtype=np.int32))
ds = ufl.Measure("ds", domain=mesh, subdomain_data=facet_tag)

lambda_ = E*nu / (1+nu) / (1-2*nu) # first Lamé parameter
mu = E / (2*(1+nu)) # second Lamé parameter (shear modulus)
def epsilon(u):
    return ufl.sym(ufl.grad(u))
def sigma(u):
    return lambda_ * ufl.nabla_div(u) * ufl.Identity(u.geometric_dimension()) + 2*mu*epsilon(u)

u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)
a = ufl.inner(sigma(u), epsilon(v)) * ufl.dx
L = ufl.dot(f, v) * ufl.dx + ufl.dot(T, v) * ds(2)
problem = dolfinx.fem.LinearProblem(a, L, bcs=[bc], petsc_options={"ksp_type": "preonly", "pc_type": "lu"})

uh = problem.solve()
print(uh.compute_point_values().round(3))

When I run the code on a single processor, it gives the correct result:

[[ 0.    -0.   ]
 [-0.     0.   ]
 [-0.     7.8  ]
 [-5.492  8.384]
 [-0.     0.   ]
 [ 5.492  8.384]
 [ 7.541 23.749]
 [-0.    23.702]
 [-7.541 23.749]]

However, running it in parallel on two processors gives wrong results (they appear to be exactly twice the correct values above):

[[  0.     -0.   ]
 [ -0.      0.   ]
 [ -0.     15.6  ]
 [-10.984  16.769]
 [ -0.      0.   ]
 [ 10.984  16.769]
 [ 15.082  47.497]
 [ -0.     47.404]
 [-15.082  47.497]]
[[-15.082  47.497]
 [ -0.     47.404]
 [ 15.082  47.497]
 [-10.984  16.769]
 [ -0.     15.6  ]
 [ 10.984  16.769]
 [ -0.      0.   ]
 [ -0.      0.   ]
 [  0.     -0.   ]]

It seems the error comes from the terms involving the traction T. If we suppress the traction and apply a body force instead:

# loads
T = dolfinx.Constant(mesh, (0, 0)) # traction
f = dolfinx.Constant(mesh, (0, 1)) # body force

we now obtain consistent results.
Single processor:

[[ 0.    -0.   ]
 [-0.     0.   ]
 [-0.     8.233]
 [-4.561  8.689]
 [-0.     0.   ]
 [ 4.561  8.689]
 [ 5.632 19.931]
 [-0.    19.978]
 [-5.632 19.931]]

Two processors:

[[ 0.    -0.   ]
 [-0.     0.   ]
 [-0.     8.233]
 [-4.561  8.689]
 [-0.     0.   ]
 [ 4.561  8.689]
 [ 5.632 19.931]
 [-0.    19.978]
 [-5.632 19.931]]
[[-5.632 19.931]
 [-0.    19.978]
 [ 5.632 19.931]
 [-4.561  8.689]
 [-0.     8.233]
 [ 4.561  8.689]
 [-0.     0.   ]
 [-0.     0.   ]
 [ 0.    -0.   ]]

So could anyone please tell me where the bug is? Thanks!

I think the issue is in the creation of your mesh tag: the indices have to be sorted, since MeshTags assumes sorted input. See: Hyperelasticity — FEniCSx tutorial
I.e.,

marked_facets = np.hstack([left_facets, right_facets])
marked_values = np.hstack([np.full(len(left_facets), 1, dtype=np.int32),
                           np.full(len(right_facets), 2, dtype=np.int32)])
sorted_facets = np.argsort(marked_facets)  # MeshTags expects sorted indices
facet_tag = dolfinx.MeshTags(mesh, mesh.topology.dim - 1,
                             marked_facets[sorted_facets], marked_values[sorted_facets])
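
Applied to the hstacked arrays in your code, a minimal sketch of the same fix (reusing your facet_indices, facet_markers and fdim) would be:

facet_indices = np.hstack(facet_indices).astype(np.int32)
facet_markers = np.hstack(facet_markers).astype(np.int32)
sorted_facets = np.argsort(facet_indices)  # sort so MeshTags receives ordered indices
facet_tag = dolfinx.MeshTags(mesh, fdim, facet_indices[sorted_facets],
                             facet_markers[sorted_facets])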

Hi Dokken, sorting the facets and values works! In addition, I found that replacing dolfinx.mesh.locate_entities with dolfinx.mesh.locate_entities_boundary also works.
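That is, the only change is the locate call in the facet loop, everything else unchanged:

for (marker, locator) in boundaries:
    # restrict the search to exterior boundary facets
    facets = dolfinx.mesh.locate_entities_boundary(mesh, fdim, locator)
    facet_indices.append(facets)
    facet_markers.append(np.full(len(facets), marker))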

Initially, I used dolfinx.mesh.locate_entities without sorting the facets and values, following the tutorial Setting multiple Dirichlet, Neumann, and Robin conditions — FEniCSx tutorial (jorgensd.github.io), so I wonder whether something is also missing in that tutorial.

The tutorial should probably use sorting as well. I’ll fix that.
