 # Custom Newton Solver problem with Dirichlet conditions

Hello everyone,

For some further applications, I need access to the assembled system at each Newton iteration of a nonlinear hyperelasticity problem. Therefore I built my own Newton solver using `petsc4py`:

```python
import dolfin as d
from petsc4py import PETSc

def NewtonSolver(G, u, bcs, dG):
    tol = 1e-10
    error = 1.
    it = 0
    while error > tol and it < 10:
        # Assemble system (applies the BCs symmetrically)
        dg, g = d.assemble_system(dG, G, bcs)

        # Manual BC (alternative I tried):
        # g = d.assemble(G)
        # dg = d.assemble(dG)
        # for bc in bcs:
        #     bc.apply(g)
        #     bc.apply(dg)

        error = g.norm('l2')
        print('Iteration:', str(it), '; Error: %3.3e' % error)
        if error < tol:
            break

        # PETSc solver
        dGp = d.as_backend_type(dg).mat()
        x, b = dGp.getVecs()
        b.setValues(range(g.size()), g)
        ksp = PETSc.KSP().create()
        ksp.setType('cg')
        ksp.setOperators(dGp)
        ksp.solve(b, x)

        # Update
        u.vector()[:] = u.vector()[:] - x
        it += 1
```

And I used it to solve a simple example of a square with zero displacement at the left boundary and a traction or a prescribed displacement at the right boundary (the following code is for the second example, but it can be easily changed to the first one):

```python
#%% Mesh and material
E = 250.
nu = 0.3

mu = E/(2.0*(1.0 + nu))
lmbda = E*nu/((1.0 + nu)*(1.0 - 2.0*nu))

mesh = d.UnitSquareMesh(3, 3)

class C2(d.SubDomain):  # right boundary
    def inside(self, x, on_boundary):
        return d.near(x[0], 1.)

class C4(d.SubDomain):  # left boundary
    def inside(self, x, on_boundary):
        return d.near(x[0], 0.)

c2 = C2()
c4 = C4()

#%% Function spaces and functions
V = d.VectorFunctionSpace(mesh, 'CG', 1)

u = d.Function(V)
du = d.TrialFunction(V)
delta_u = d.TestFunction(V)

#%% Boundary conditions
boundary_markers = d.MeshFunction("size_t", mesh, mesh.topology().dim() - 1)

c2.mark(boundary_markers, 2)
c4.mark(boundary_markers, 4)

d.File('bc.pvd') << boundary_markers

u0 = d.Constant((0., 0.))
u1 = d.Constant((0.1, 0.))
bc4 = d.DirichletBC(V, u0, boundary_markers, 4)
bc2 = d.DirichletBC(V, u1, boundary_markers, 2)

ds = d.Measure("ds", domain=mesh, subdomain_data=boundary_markers)

bcs = [bc2, bc4]

#%% Nonlinear problem (compressible neo-Hookean)
dim = len(u)

I = d.Identity(dim)            # identity tensor
F = I + d.grad(u)              # deformation gradient
J = d.det(F)
FiT = d.inv(F.T)

PK1 = mu*F + (lmbda*d.ln(J) - mu)*FiT   # first Piola-Kirchhoff stress

T = d.Constant((30., 0.))
G = d.inner(PK1, d.grad(delta_u))*d.dx  # - d.dot(T, delta_u)*ds(2)

dG = d.derivative(G, u, du)
```
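As a quick sanity check on the constitutive law, here is the same PK1 expression in plain NumPy (my own re-implementation, not dolfin code); the reference configuration F = I should be stress-free:

```python
import numpy as np

E, nu = 250., 0.3
mu = E/(2.0*(1.0 + nu))
lmbda = E*nu/((1.0 + nu)*(1.0 - 2.0*nu))

def PK1(F):
    """First Piola-Kirchhoff stress of the compressible neo-Hookean model above."""
    J = np.linalg.det(F)
    FiT = np.linalg.inv(F.T)
    return mu*F + (lmbda*np.log(J) - mu)*FiT

# Sanity check: at F = I we get mu*I + (lmbda*ln(1) - mu)*I = 0.
P0 = PK1(np.eye(2))
print(np.allclose(P0, 0.0))  # True
```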

I am comparing the results with the NonlinearVariationalSolver:

```python
#%% Solve
u.rename('u', 'u')

# FEniCS solver
problem = d.NonlinearVariationalProblem(G, u, bcs, dG)
solver = d.NonlinearVariationalSolver(problem)
solver.solve()
d.File('u0.pvd') << u

# Custom solver
NewtonSolver(G, u, bcs, dG)
d.File('u1.pvd') << u
```

The problem with traction at the right end works very well, although the residual norms in the last steps are not exactly the same as in the FEniCS solver:

NonlinearVariationalSolver:

```
Solving nonlinear variational problem.
Newton iteration 0: r (abs) = 1.581e+01 (tol = 1.000e-10) r (rel) = 1.000e+00 (tol = 1.000e-09)
Newton iteration 1: r (abs) = 1.563e+00 (tol = 1.000e-10) r (rel) = 9.883e-02 (tol = 1.000e-09)
Newton iteration 2: r (abs) = 1.169e-02 (tol = 1.000e-10) r (rel) = 7.395e-04 (tol = 1.000e-09)
Newton iteration 3: r (abs) = 1.215e-06 (tol = 1.000e-10) r (rel) = 7.685e-08 (tol = 1.000e-09)
Newton iteration 4: r (abs) = 7.951e-14 (tol = 1.000e-10) r (rel) = 5.028e-15 (tol = 1.000e-09)
Newton solver finished in 4 iterations and 4 linear solver iterations.
```

Custom Solver:

```
Iteration: 0 ; Error: 1.581e+01
Iteration: 1 ; Error: 1.563e+00
Iteration: 2 ; Error: 1.169e-02
Iteration: 3 ; Error: 1.235e-06
Iteration: 4 ; Error: 6.880e-12
```

Now, my problem appears when I put a non-zero Dirichlet condition at the right end. The default FEniCS solver works perfectly:

```
Solving nonlinear variational problem.
Newton iteration 0: r (abs) = 5.418e+01 (tol = 1.000e-10) r (rel) = 1.000e+00 (tol = 1.000e-09)
Newton iteration 1: r (abs) = 4.638e-01 (tol = 1.000e-10) r (rel) = 8.560e-03 (tol = 1.000e-09)
Newton iteration 2: r (abs) = 5.118e-04 (tol = 1.000e-10) r (rel) = 9.445e-06 (tol = 1.000e-09)
Newton iteration 3: r (abs) = 7.032e-10 (tol = 1.000e-10) r (rel) = 1.298e-11 (tol = 1.000e-09)
Newton solver finished in 3 iterations and 3 linear solver iterations.
```

But my custom solver fails:

```
Iteration: 0 ; Error: 5.418e+01
Iteration: 1 ; Error: 6.535e+01
Iteration: 2 ; Error: 8.172e+01
Iteration: 3 ; Error: 1.067e+02
Iteration: 4 ; Error: 1.474e+02
Iteration: 5 ; Error: 2.192e+02
```

I think my solver fails because I am missing something about how the boundary conditions should be handled. In fact, something is wrong in the update: at each iteration the vector `x` equals `0.1` at the prescribed dofs of the right end, which makes `u` bigger and bigger at each iteration. Before, I was using

```python
g = d.assemble(G)
dg = d.assemble(dG)

# Apply BC
for bc in bcs:
    bc.apply(g)
    bc.apply(dg)
```

instead of `dg, g = d.assemble_system(dG, G, bcs)`, but that is even worse:

```
Iteration: 0 ; Error: 2.000e-01
Iteration: 1 ; Error: nan
```

So, I have two questions. The first is whether someone has an idea of what I am doing wrong and how to fix it.

The second is why using `assemble_system` differs from using `assemble` followed by `apply` in this case. I have been doing some digging into this, and it seems `assemble_system` keeps the assembled matrix symmetric. Can someone explain to me how `assemble_system` does that?
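To make the second question concrete, here is how I understand the two approaches, as a small NumPy sketch (my own toy illustration of the linear algebra, not dolfin internals): row-only elimination zeroes just the row of a constrained dof, while symmetric elimination first lifts the known column to the right-hand side and then zeroes both the row and the column:

```python
import numpy as np

# Toy 3-dof SPD system K u = f with Dirichlet value g at dof 0.
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
f = np.array([0., 0., 1.])
g = 0.1  # prescribed value at dof 0

# --- "assemble then bc.apply" style: zero the ROW only ---
K_row = K.copy(); f_row = f.copy()
K_row[0, :] = 0.; K_row[0, 0] = 1.; f_row[0] = g
# K_row is no longer symmetric (column 0 still holds -1 in row 1).

# --- "assemble_system" style: symmetric elimination ---
K_sym = K.copy(); f_sym = f.copy()
f_sym -= K[:, 0] * g                     # lift the known column to the RHS
K_sym[0, :] = 0.; K_sym[:, 0] = 0.; K_sym[0, 0] = 1.
f_sym[0] = g

u_row = np.linalg.solve(K_row, f_row)
u_sym = np.linalg.solve(K_sym, f_sym)
print(np.allclose(u_row, u_sym))    # same solution either way
print(np.allclose(K_sym, K_sym.T))  # but only K_sym stays symmetric
```

The symmetric version matters here because a CG Krylov solver, as used in my custom solver, assumes a symmetric operator.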

Thank you in advance and sorry for the long post, but there were a lot of things.

Hello,
when using a Newton method with a non-zero prescribed DirichletBC, you should compute the first iterate with your bcs and, for the next ones, use the same DirichletBC but with zero value, so that you do not accumulate the imposed value at each iteration.
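To illustrate the accumulation, here is a toy two-dof Newton iteration in plain NumPy (a hypothetical residual of my own, not FEniCS code). With the BC row of the linear system homogenized, the increment at the constrained dof is zero; without homogenization, the prescribed value gets subtracted again at every update:

```python
import numpy as np

g_bc = 0.1  # prescribed value at the constrained dof (stand-in for the 0.1 displacement)

def residual(u):
    # Residual of a hypothetical 2-dof nonlinear system; dof 0 is Dirichlet-constrained.
    return np.array([0.0, u[1]**3 + u[1] - u[0]])

def jacobian(u):
    return np.array([[1.0, 0.0],
                     [-1.0, 3.0*u[1]**2 + 1.0]])

def newton(bc_row_value):
    u = np.array([g_bc, 0.0])      # initialize u with the BC already applied
    for _ in range(20):
        r = residual(u)
        J = jacobian(u)
        r[0] = bc_row_value        # BC row of the linearized system J*du = r
        du = np.linalg.solve(J, r)
        u = u - du                 # update u <- u - du, as in the custom solver
        if np.linalg.norm(residual(u)) < 1e-12 and abs(u[0] - g_bc) < 1e-12:
            break
    return u

u_good = newton(0.0)   # homogenized BC: increment at the constrained dof is zero
u_bad = newton(g_bc)   # non-homogenized: prescribed value subtracted every iteration
print(u_good[0])       # stays at the prescribed 0.1
print(u_bad[0])        # drifts further from 0.1 at every iteration
```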

Also, you might want to take a look at https://fenicsproject.org/qa/536/newton-method-programmed-manually/ to see how to get a correct residual with inhomogeneous boundary conditions.

Or use this with a `PETScSNESSolver`, where you can get access to the underlying SNES (which I assume is what you want).

Thanks for the answers! I finally solved it by doing two things before entering the Newton loop:

1. Initialize the vector `u` with the boundary conditions.
2. Homogenize the boundary conditions.

With these changes, I get the same residuals as the NonlinearVariationalSolver (although to get exactly the same values, I need to initialize `u` the same way before using the default solver). The code looks like this:

```python
def NewtonSolver(G, u, bcs, dG):
    tol = 1e-10
    error = 1.
    it = 0

    # 1. Initialize u with the boundary conditions
    # 2. Homogenize the boundary conditions
    for bc in bcs:
        bc.apply(u.vector())
        bc.homogenize()

    while error > tol and it < 10:
        # Assemble
        dg, g = d.assemble_system(dG, G, bcs)

        # PETSc solver
        dGp = d.as_backend_type(dg).mat()
        x, b = dGp.getVecs()
        b.setValues(range(g.size()), g)
        ksp = PETSc.KSP().create()
        ksp.setType('cg')
        ksp.setOperators(dGp)
        ksp.solve(b, x)

        error = g.norm('l2')
        print('Iteration:', str(it), '; Error: %3.3e' % error)
        if error < tol:
            break

        # Update
        u.vector()[:] = u.vector()[:] - x
        it += 1
```

I still have the question about the difference between `assemble_system` and `assemble` followed by `apply`, and how `assemble_system` diagonalizes the matrix.

@nate what I need is to add certain values to the assembled tangent matrix and residual vector, so the example you mention does not work for me. I already achieve this using `petsc4py`. I don't know what the underlying SNES is; could you explain that? Maybe there is a better way of doing what I am doing.

The underlying `SNES` is the underlying PETSc nonlinear solver.