I would implement the nonlinear solver loop myself; in dolfinx style, something like:
```python
import sys

from petsc4py import PETSc
# in newer dolfinx versions these live in dolfinx.fem.petsc
from dolfinx.fem.petsc import apply_lifting, assemble_matrix, assemble_vector, set_bc

# assumed to be defined elsewhere: comm, varform (residual form), jac (Jacobian form),
# dbcs (Dirichlet BCs), u and del_u (dolfinx Functions), maxiter, tolres, tolinc

# create solver
ksp = PETSc.KSP().create(comm)
ksp.setType("preonly")
ksp.getPC().setType("lu")
ksp.getPC().setFactorSolverType("superlu_dist")

it = 0
while it < maxiter:
    # assemble rhs vector
    r = assemble_vector(varform)
    apply_lifting(r, [jac], [dbcs], x0=[u.vector], scale=-1.0)
    r.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE)
    set_bc(r, dbcs, x0=u.vector, scale=-1.0)
    # assemble system matrix
    K = assemble_matrix(jac, dbcs)
    K.assemble()
    # solve linear system
    ksp.setOperators(K)
    ksp.solve(-r, del_u.vector)
    # get residual and increment norm
    struct_res_norm = r.norm()
    struct_inc_norm = del_u.vector.norm()
    # update solution
    u.vector.axpy(1.0, del_u.vector)
    u.vector.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)
    if comm.rank == 0:
        print("%i %.4e %.4e" % (it, struct_res_norm, struct_inc_norm))
        sys.stdout.flush()
    # increase iteration index
    it += 1
    # check if converged
    if struct_res_norm <= tolres and struct_inc_norm <= tolinc:
        break
else:
    # while-else: only runs if the loop ended without hitting break
    if comm.rank == 0:
        print("Newton did not converge after %i iterations!" % (it))
        sys.stdout.flush()
```
and then I would do a “pseudo-transient continuation”-like regularization (PTC) on the system matrix, i.e. K += alpha * I, which in PETSc is done with `K.shift(alpha)`. You initialize `alpha` with some scalar value and update it after each iteration with `alpha *= struct_res_norm/struct_res_norm_last`. This way, the closer you get to the solution, the more the method behaves like a standard Newton scheme.
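To illustrate the idea outside of PETSc/dolfinx, here is a minimal NumPy sketch of Newton with the PTC shift `K + alpha * I` and the residual-based `alpha` update on a toy 2x2 nonlinear system (the `residual`/`jacobian` functions just stand in for the assembled FE residual and Jacobian; all names are illustrative):

```python
import numpy as np

def residual(u):
    # toy nonlinear test problem, stand-in for the assembled FE residual
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**2 - 5.0])

def jacobian(u):
    return np.array([[2.0 * u[0], 1.0],
                     [1.0, 2.0 * u[1]]])

def newton_ptc(u0, alpha0=1.0, tol=1e-10, maxiter=50):
    u = u0.copy()
    alpha = alpha0
    res_norm_last = None
    for it in range(maxiter):
        r = residual(u)
        res_norm = np.linalg.norm(r)
        if res_norm <= tol:
            return u, it
        # PTC regularization: K += alpha * I (in PETSc: K.shift(alpha))
        K_reg = jacobian(u) + alpha * np.eye(len(u))
        du = np.linalg.solve(K_reg, -r)
        u += du
        # alpha *= res_norm / res_norm_last: the shift shrinks as the
        # residual drops, so the method approaches plain Newton
        if res_norm_last is not None:
            alpha *= res_norm / res_norm_last
        res_norm_last = res_norm
    return u, maxiter

u, iters = newton_ptc(np.array([1.0, 1.0]))
```

The solution of this toy system is (1, 2), and the shifted iteration reaches it in a handful of steps.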
I also use an adaptive scheme where `alpha` is only set to a non-zero value if the residual norm becomes NaN (can be checked with numpy’s `isnan(struct_res_norm)`). Especially for hyperelasticity, this literally always helps me.
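A hedged sketch of how that adaptive activation could look as a small helper (the function name and the starting shift value are my own illustrative choices, not from the original post):

```python
import numpy as np

ALPHA_START = 100.0  # assumed initial shift once PTC kicks in; problem-dependent

def update_alpha(alpha, res_norm, res_norm_last):
    """Keep alpha at zero (plain Newton) and only switch PTC on when
    the residual norm turns NaN; otherwise apply the ratio update."""
    if np.isnan(res_norm):
        # Newton blew up: activate (or keep) a nonzero shift
        return ALPHA_START if alpha == 0.0 else alpha
    if res_norm_last is not None and res_norm_last > 0.0:
        return alpha * res_norm / res_norm_last
    return alpha
```

When the NaN branch fires you would typically also reset the solution to the last good iterate before retrying the step with the regularized matrix.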
Best,
Marc