I’m building my load vector from several independently assembled contributions, and applying lifting and boundary conditions afterwards. To get this to work in parallel, the ghost values need to be updated appropriately, but I’m failing to see the logic behind when to do this.
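My mental model of the reverse scatter, which may well be where I go wrong, is sketched below; this is my own illustration, not taken from the demo:

```python
# Sketch of my (possibly wrong) mental model: during assembly each process
# accumulates contributions into both its owned and its ghost entries, and
# ghostUpdate(ADD, REVERSE) then ships the ghost contributions back to the
# owning rank, where they are added onto the owned entries.
from mpi4py import MPI
from petsc4py import PETSc
from dolfinx import fem, mesh

msh = mesh.create_unit_square(MPI.COMM_WORLD, 4, 4)
V = fem.functionspace(msh, ("Lagrange", 1))
w = fem.Function(V)
with w.x.petsc_vec.localForm() as loc:
    loc.set(1.0)  # each rank writes 1 into its owned *and* ghost entries
w.x.petsc_vec.ghostUpdate(addv=PETSc.InsertMode.ADD,
                          mode=PETSc.ScatterMode.REVERSE)
# An owned entry should now hold 1 + (number of ranks that ghost it);
# with mpirun -n 1 everything simply stays at 1.
```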
I’ve taken the Poisson demo to illustrate the issue:

```python
from mpi4py import MPI
from petsc4py.PETSc import ScalarType # type: ignore
from petsc4py import PETSc
import numpy as np
import ufl
from dolfinx import fem, mesh
from dolfinx.fem.petsc import assemble_matrix, assemble_vector, apply_lifting, set_bc
# Function space and mesh
msh = mesh.create_unit_square(MPI.COMM_WORLD, 16, 16)
V = fem.functionspace(msh, ("Lagrange", 1))
# Boundary conditions
marker = lambda x: np.isclose(x[0], 0.0) | np.isclose(x[0], 2.0)
facets = mesh.locate_entities_boundary(msh, dim=msh.topology.dim - 1, marker=marker)
dofs = fem.locate_dofs_topological(V=V, entity_dim=msh.topology.dim - 1, entities=facets)
bc = fem.dirichletbc(value=ScalarType(0), dofs=dofs, V=V)
# Variational problem
u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)
x = ufl.SpatialCoordinate(msh)
f = 10 * ufl.exp(-((x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2) / 0.02)
g = ufl.sin(5 * x[0])
a = ufl.inner(ufl.grad(u), ufl.grad(v)) * ufl.dx
L0 = ufl.inner(f, v) * ufl.dx
L1 = ufl.inner(g, v) * ufl.ds
# Build ksp solver
A = assemble_matrix(fem.form(a), bcs=[bc])
A.assemble()
ksp = PETSc.KSP().create(MPI.COMM_WORLD)
ksp.setOperators(A)
opts = PETSc.Options()
opts["ksp_type"] = "preonly"
opts["pc_type"] = "lu"
opts["pc_factor_mat_solver_type"] = "mumps"
ksp.setFromOptions()
# Assemble RHS
b0 = assemble_vector(fem.form(L0))
# b0.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE) ## GHOST B0
b1 = assemble_vector(fem.form(L1))
# b1.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE) ## GHOST B1
b = b0  # note: b aliases b0, so the axpy below modifies b0 in place
b.axpy(1.0, b1)  # b <- b + 1.0 * b1
b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE) ## GHOST B-prelift
# b = assemble_vector(fem.form(L0 + L1)) # THIS DOES WORK
apply_lifting(b, [fem.form(a)], bcs=[[bc]])
b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE) ## GHOST B-postlift
set_bc(b, [bc])
# Solve problem
uh = fem.Function(V)
ksp.solve(b, uh.x.petsc_vec)
uh.x.scatter_forward()
checksum = msh.comm.allreduce(fem.assemble_scalar(fem.form(uh * uh * ufl.dx)), op=MPI.SUM)
if MPI.COMM_WORLD.rank == 0:
    print(checksum)  # Should be invariant to mpirun -n [nprocs]
    assert np.isclose(checksum, 0.16199852713788038), "Result does not match mpirun -n 1 reference value"
```
Cases for which this does work:

- `mpirun -n 1` (duh).
- Defining `b = assemble_vector(fem.form(L0 + L1))` and only updating the ghosts after lifting. This is the ‘conventional’ way (spelled out in the sketch after this list).
- Updating the ghost values only on `b1` (not `b0`) and updating the ghosts of `b` after applying the lifting (i.e., by uncommenting the line `## GHOST B1`). This does not make sense to me: why update twice? And why only `b1`?
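For completeness, this is what I mean by the ‘conventional’ way in the second bullet (a sketch reusing `a`, `L0`, `L1`, and `bc` from the script above); this variant does reproduce the serial checksum:

```python
# The 'conventional' way: assemble the summed form into one vector, apply
# the lifting, do a single reverse ghost update, then insert the BC values.
b = assemble_vector(fem.form(L0 + L1))
apply_lifting(b, [fem.form(a)], bcs=[[bc]])
b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE)
set_bc(b, [bc])
```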
Cases for which this confusingly does not work:

- Only updating the ghost values of `b`, and only after lifting. This is what I would have expected to work: the process-local contributions to `b` are all finished, and only then do we cross-communicate. Why does this not work?
- Updating the ghost values on `b0` and updating the ghosts of `b` after applying the lifting (i.e., by uncommenting the line `## GHOST B0`). Why does it work when updating only `b1`, but not when updating only `b0`?
- Any other combination of ghost updating.
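For what it’s worth, this is the little helper I’ve been using to compare owned and ghost entries between the combinations above; `inspect_ghosts` is my own throwaway function, not a dolfinx API, and it assumes `V` and a ghosted vector like `b` from the script:

```python
# Throwaway debugging helper (my own, not dolfinx API): print the sums of
# the owned and ghost slices of a ghosted PETSc vector.
def inspect_ghosts(vec, V, label):
    n_owned = V.dofmap.index_map.size_local * V.dofmap.index_map_bs
    with vec.localForm() as loc:
        arr = loc.getArray()  # owned entries first, then ghosts
        owned_sum, ghost_sum = arr[:n_owned].sum(), arr[n_owned:].sum()
    print(f"rank {MPI.COMM_WORLD.rank} [{label}]: "
          f"owned sum = {owned_sum:.6f}, ghost sum = {ghost_sum:.6f}")

inspect_ghosts(b, V, "after set_bc")
```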