Thanks for the answers! I finally solved it by doing two things before entering the Newton loop:
- Initialize the vector `u` with the boundary conditions.
- Homogenize the boundary conditions.

With these changes, I get the same residuals as the NonlinearVariationalSolver (although to get the same values, I also need to initialize `u` before using the default solver). The code looks like this:
```python
import dolfin as d
from petsc4py import PETSc

def NewtonSolver(G, u, bcs, dG):
    tol = 1e-10
    error = 1.
    it = 0
    # Put the boundary values into u, then homogenize the BCs so the
    # Newton correction is zero on the Dirichlet dofs
    for bc in bcs:
        bc.apply(u.vector())
        bc.homogenize()
    while error > tol and it < 10:
        # Assemble tangent and residual with symmetric BC application
        dg, g = d.assemble_system(dG, G, bcs)
        # PETSc solver
        dGp = d.as_backend_type(dg).mat()
        x, b = dGp.getVecs()
        b.setValues(range(g.size()), g)
        ksp = PETSc.KSP().create()
        ksp.setType('cg')
        ksp.setOperators(dGp)
        ksp.solve(b, x)
        error = g.norm('l2')
        print('Iteration:', str(it), '; Error: %3.3e' % error)
        if error < tol:
            break
        # Update: u <- u - delta_u (convert the PETSc Vec to a numpy array)
        u.vector()[:] = u.vector()[:] - x.getArray()
        it += 1
```
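To check my understanding of why the two steps above matter, here is a toy numpy analogue of the loop (the problem, function names, and dof numbering are made up, not part of FEniCS): a componentwise equation `u**3 = c` with one Dirichlet-like dof held fixed. Initializing `u` with the boundary value and homogenizing (zero residual, identity row on the BC dof) keeps that dof fixed through every Newton update.

```python
import numpy as np

def newton(u, c, bc_dofs, bc_vals, tol=1e-10, maxit=10):
    # Step 1: put the boundary values into u before iterating
    u[bc_dofs] = bc_vals
    for it in range(maxit):
        g = u**3 - c                    # residual
        J = np.diag(3.0 * u**2)         # tangent (Jacobian)
        # Homogenized BCs: zero residual and identity row on BC dofs,
        # so the Newton correction is zero there and u[bc] never drifts
        g[bc_dofs] = 0.0
        J[bc_dofs, :] = 0.0
        J[bc_dofs, bc_dofs] = 1.0
        error = np.linalg.norm(g)
        if error < tol:
            break
        u -= np.linalg.solve(J, g)      # update, as in the loop above
    return u, error

u0 = np.array([1.0, 1.0, 1.0])
u, err = newton(u0, c=np.array([8.0, 8.0, 27.0]),
                bc_dofs=[0], bc_vals=[2.0])
print(u, err)   # u[0] stays at 2.0; the free dofs converge to 2 and 3
```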
I still have the question about the difference between `assemble_system` and `assemble` + `apply`, and how `assemble_system` diagonalizes the matrix.
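Here is my current understanding as a toy numpy sketch (please correct me if this is wrong; the matrices are made up): `apply` only zeroes the BC row and puts 1 on the diagonal, which breaks symmetry, while `assemble_system` eliminates the row and the column, moving the known values to the right-hand side, so the matrix stays symmetric (and CG, as used above, remains valid).

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 1.0, 1.0])
i, g = 0, 2.0          # Dirichlet dof and its value

# assemble + apply: row-only elimination (nonsymmetric result)
A1, b1 = A.copy(), b.copy()
A1[i, :] = 0.0; A1[i, i] = 1.0; b1[i] = g

# assemble_system: symmetric elimination of row AND column
A2, b2 = A.copy(), b.copy()
b2 -= A2[:, i] * g     # move the known column to the right-hand side
A2[i, :] = 0.0; A2[:, i] = 0.0
A2[i, i] = 1.0; b2[i] = g

print(np.allclose(A1, A1.T))   # False: symmetry lost
print(np.allclose(A2, A2.T))   # True: symmetry kept
# Both formulations give the same solution:
print(np.allclose(np.linalg.solve(A1, b1), np.linalg.solve(A2, b2)))
```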
@nate what I need is to add certain values to the assembled tangent matrix and residual vector, so the example you mention does not work for me. I already achieve this using petsc4py. I don't know what the underlying SNES is; could you explain that? Maybe there is a better way of doing what I am doing.