Nonlinear solver fails to converge. Is there a rule of thumb to avoid this in the future?

Hello there, I’ve been trying to solve a non-linear (scalar) problem with FEniCS.

The first thing I did was follow the demo tutorial.

In my script the compiler complained that there was no Jacobian matrix defined, so I followed the original post that helped me and the reference from a response. Now my code runs but fails to converge. I will share a minimal description. The script is based ~90% on the demo file ft09_reaction_system.py;
I just changed the reaction function to a non-linear one and deleted the advection terms.

# Define variational problem as follows F = a(u,v) - f
F = ((u_1 - u_n1) / dt)*v_1*dx + Dr*dot(grad(u_1), grad(v_1))*dx + (u_1 / (u_1 - C)) * e * nu *  u_2*v_1*dx  \
  + ((u_2 - u_n2) / dt)*v_2*dx + Dn*dot(grad(u_2), grad(v_2))*dx + (u_1 / (u_1 - C)) * nu *  u_2*v_2*dx

J       = derivative(F, u)
problem = NonlinearVariationalProblem(F, u, [], J)
solver  = NonlinearVariationalSolver(problem)
prm     = solver.parameters

prm['nonlinear_solver']                                     = 'newton'
prm['newton_solver']['linear_solver']                       = 'gmres'
prm['newton_solver']['preconditioner']                      = 'jacobi'
prm['newton_solver']['relaxation_parameter']                = 1.0
prm['newton_solver']['krylov_solver']['maximum_iterations'] = int(1e3)
prm['newton_solver']['krylov_solver']['absolute_tolerance'] = 1e-20

solver.solve()

After solver.solve() I receive diverging results and a failure. Does anybody have an idea of how to tackle the problem?

Just to test my script, the code snippet shared above is located before the temporal loop, so the nonlinear problem takes its data from the initial condition.


Nonlinear convergence difficulties can require some trial-and-error to resolve. The first thing to try is probably a line search strategy. See, e.g., this discussion for some code examples showing how to access PETSc’s line search implementation through FEniCS’s solver API. For unsteady problems, it can also help to decrease the time step size.
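To illustrate the idea behind a line search (this is plain Python, not FEniCS or PETSc; a bare-bones sketch of the concept): a backtracking line search damps the Newton step whenever the full step would not decrease the residual, which is roughly what PETSc’s `bt` line search does inside SNES.

```python
import math

def newton_linesearch(f, df, x0, tol=1e-10, max_it=50):
    """Damped Newton iteration for a scalar equation f(x) = 0.

    A simple backtracking line search halves the step until the
    residual magnitude decreases; undamped Newton is the special
    case where the full step (alpha = 1) is always accepted.
    """
    x = x0
    for it in range(max_it):
        r = f(x)
        if abs(r) < tol:
            return x, it
        step = -r / df(x)          # full Newton step
        alpha = 1.0
        # Backtrack: damp the step until the residual shrinks.
        while abs(f(x + alpha * step)) >= abs(r) and alpha > 1e-8:
            alpha *= 0.5
        x += alpha * step
    raise RuntimeError("Newton iteration did not converge")

# Toy problem: x * exp(x) = 1, root near 0.567; started far away
# at x0 = 5, the line search guards against any overshoot.
root, its = newton_linesearch(lambda x: x * math.exp(x) - 1.0,
                              lambda x: math.exp(x) * (x + 1.0),
                              x0=5.0)
```

In the actual FEniCS/PETSc setting, the same damping decision is made on the norm of the assembled residual vector rather than on a scalar.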

Thanks for the advice. I tried reducing the time step, and, contrary to natural intuition, reducing it made things worse; if I increase the time step, the solver converges.

With your line-search advice the solver converges faster; let me show you the output of both strategies.

This is the output with the line search:

Solving nonlinear variational problem.
  0 SNES Function norm 4.616734455949e+06 
  1 SNES Function norm 9.713210005112e+07 
  2 SNES Function norm 5.420165017350e+07 
  3 SNES Function norm 1.013955540511e+09 
  4 SNES Function norm 1.675752085915e+09 
  5 SNES Function norm 8.621986552209e+09 
  6 SNES Function norm 9.257841083178e+09 
  7 SNES Function norm 5.577522639036e+09 
  8 SNES Function norm 3.668841564451e+09 
  9 SNES Function norm 3.721764540547e+05 
 10 SNES Function norm 2.210550339590e-01 
 11 SNES Function norm 5.075221396003e-07 
  PETSc SNES solver converged in 11 iterations with convergence reason CONVERGED_FNORM_RELATIVE.

This is the output of the other (default) solver:

Solving nonlinear variational problem.
  Newton iteration 0: r (abs) = 4.617e+06 (tol = 1.000e-10) r (rel) = 1.000e+00 (tol = 1.000e-09)
  Newton iteration 1: r (abs) = 1.705e+15 (tol = 1.000e-10) r (rel) = 3.692e+08 (tol = 1.000e-09)
  Newton iteration 2: r (abs) = 8.592e+14 (tol = 1.000e-10) r (rel) = 1.861e+08 (tol = 1.000e-09)
  Newton iteration 3: r (abs) = 4.228e+14 (tol = 1.000e-10) r (rel) = 9.159e+07 (tol = 1.000e-09)
  Newton iteration 4: r (abs) = 1.987e+14 (tol = 1.000e-10) r (rel) = 4.304e+07 (tol = 1.000e-09)
  Newton iteration 5: r (abs) = 8.988e+13 (tol = 1.000e-10) r (rel) = 1.947e+07 (tol = 1.000e-09)
  Newton iteration 6: r (abs) = 3.669e+13 (tol = 1.000e-10) r (rel) = 7.948e+06 (tol = 1.000e-09)
  Newton iteration 7: r (abs) = 1.183e+13 (tol = 1.000e-10) r (rel) = 2.561e+06 (tol = 1.000e-09)
  Newton iteration 8: r (abs) = 2.181e+12 (tol = 1.000e-10) r (rel) = 4.724e+05 (tol = 1.000e-09)
  Newton iteration 9: r (abs) = 1.061e+11 (tol = 1.000e-10) r (rel) = 2.299e+04 (tol = 1.000e-09)
  Newton iteration 10: r (abs) = 2.782e+08 (tol = 1.000e-10) r (rel) = 6.026e+01 (tol = 1.000e-09)
  Newton iteration 11: r (abs) = 1.056e+04 (tol = 1.000e-10) r (rel) = 2.287e-03 (tol = 1.000e-09)
  Newton iteration 12: r (abs) = 2.057e-02 (tol = 1.000e-10) r (rel) = 4.455e-09 (tol = 1.000e-09)
  Newton iteration 13: r (abs) = 2.051e-07 (tol = 1.000e-10) r (rel) = 4.442e-14 (tol = 1.000e-09)
  Newton solver finished in 13 iterations and 583 linear solver iterations.

What do you think of those convergence histories? (big, big, big, then suddenly small…)
I will keep the post open for now, in case the solution turns out to be non-physical.

Hi,

The first thing I noticed is that you use an iterative linear solver. That could affect the convergence of your nonlinear solver. I would suggest trying a direct solver (e.g. MUMPS) first, and switching to an iterative one only once you get good convergence. It is also worth mentioning that if you use an iterative solver, you should check that the correct initial guess is used:

parameters['krylov_solver']['nonzero_initial_guess'] = True
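For the OP’s NonlinearVariationalSolver specifically, a minimal sketch of this suggestion (assuming the `solver` object from the original post; this is a config fragment, not a standalone script) might look like:

```python
prm = solver.parameters
# Use the MUMPS direct solver for the linearized problems while
# debugging Newton convergence; it removes the inner Krylov
# iteration as a source of error.
prm['newton_solver']['linear_solver'] = 'mumps'
# If you later switch back to GMRES, reuse the previous Newton
# iterate as the initial guess for each linear solve:
prm['newton_solver']['krylov_solver']['nonzero_initial_guess'] = True
```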

Next, since you have not specified the tolerances and the number of iterations for the nonlinear solver, I assume it uses the defaults. This could also affect convergence. Try specifying the absolute and relative tolerances of the nonlinear solver as follows:

solver.parameters['newton_solver']['relative_tolerance'] = 1e-5
solver.parameters['newton_solver']['absolute_tolerance'] = 1e-5
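As a concrete illustration of how these two tolerances interact (plain Python, not FEniCS; it mirrors the usual residual-based stopping test, where r0 is the residual at the first Newton iteration):

```python
def converged(r, r0, atol=1e-5, rtol=1e-5):
    """Residual-based Newton stopping test.

    Accept when the absolute residual is small, OR when it has
    dropped far enough relative to the initial residual r0.
    """
    return abs(r) < atol or abs(r) / abs(r0) < rtol

# Numbers taken from the Newton log earlier in the thread:
r0 = 4.617e6                 # residual at iteration 0
still_running = converged(1.056e4, r0)   # iteration 11
done = converged(2.051e-7, r0)           # iteration 13
```

Loosening these tolerances (as suggested above) makes the solver declare convergence earlier; the trade-off is a less accurate solve at each time step.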

Furthermore, besides tweaking the tolerances or using a direct linear solver, I would suggest switching to the affine-covariant, error-oriented line search (see more about this method here and here). For my problem (drift-diffusion equations coupled with a Poisson equation), I got an improvement in convergence when I started using it. You can switch to it by adding the following lines to the code:

PETScOptions.set("snes_linesearch_type", "nleqerr")
solver.set_from_options()

I am not certain whether this improves convergence in general, but I think it is worth trying.

Finally, I would suggest checking out this post about linear and nonlinear solvers, and this one about designing your own Newton solver, both of which might be useful to you. More about the convergence of nonlinear solvers and of iterative linear solvers can be found here and here, respectively.


Dear @bmf, thanks for the great piece of advice, very illustrative indeed. I will give the options a try.

And also thanks @kamensky for your input, very valuable…

I know I’ve marked the question as answered. What follows is more of an appendix, and I am sure it will be useful for the community.

I have rewritten the variational equation, with the intention of avoiding NaNs (division by zero):

F = (u_1 - u_n1)*v_1*dx + dt * Dr*dot(grad(u_1), grad(v_1))*dx - dt * (u_1 / (u_1 - C)) * e * nu *  u_2*v_1*dx  \
  + (u_2 - u_n2)*v_2*dx + dt * Dn*dot(grad(u_2), grad(v_2))*dx + dt * (u_1 / (u_1 - C)) * nu *  u_2*v_2*dx

Now dt is a factor, no longer part of a quotient. I would appreciate comments regarding implicit integration: as far as I can tell, the tutorial file ft09_reaction_system.py performs explicit time integration. Suppose I want to apply implicit Euler; how should I tweak the script?

Thanks.-

I would appreciate comments regarding implicit integration: as far as I can tell, the tutorial file ft09_reaction_system.py performs explicit time integration. Suppose I want to apply implicit Euler; how should I tweak the script?

Assuming u_1 and u_2 are the current time step’s unknowns, and u_n1 and u_n2 are the previous time step’s solution, then the given residual is already the implicit Euler method. In explicit Euler, the terms other than the time derivative would use u_n1 and u_n2, and the problem to solve at each time step would be linear (viz., inverting a mass matrix). That is one way to avoid nonlinear convergence issues, but, for a parabolic diffusion problem like this, implicit methods are typically preferred, since an explicit scheme would be subject to a harsh stability condition, where time step would need to scale like element diameter squared, and computational cost would grow rapidly with refinement.
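To make the stability point concrete, here is a plain-Python (non-FEniCS) sketch of explicit vs. implicit Euler for the 1D heat equation u_t = u_xx with homogeneous Dirichlet boundaries. With dt = h² (twice the explicit limit dt ≤ h²/2), the explicit scheme blows up while implicit Euler decays smoothly:

```python
def explicit_step(u, dt, h):
    # One explicit Euler step of u_t = u_xx (Dirichlet BCs): just a
    # matrix-vector product with the discrete Laplacian.
    n = len(u)
    lam = dt / h**2
    return [0.0] + [u[i] + lam * (u[i-1] - 2 * u[i] + u[i+1])
                    for i in range(1, n - 1)] + [0.0]

def implicit_step(u, dt, h):
    # One implicit Euler step: solve the tridiagonal system
    # (I - dt * Laplacian) u_new = u with the Thomas algorithm.
    n = len(u)
    lam = dt / h**2
    a = [-lam] * n                 # sub-diagonal
    b = [1 + 2 * lam] * n          # diagonal
    c = [-lam] * n                 # super-diagonal
    b[0] = b[-1] = 1.0             # Dirichlet rows act as identity
    c[0] = 0.0
    a[-1] = 0.0
    d = list(u)
    d[0] = d[-1] = 0.0
    for i in range(1, n):          # forward elimination
        w = a[i] / b[i-1]
        b[i] -= w * c[i-1]
        d[i] -= w * d[i-1]
    x = [0.0] * n                  # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i+1]) / b[i]
    return x

n = 21
h = 1.0 / (n - 1)
dt = h**2                  # twice the explicit stability limit h**2 / 2
u0 = [0.0] * n
u0[n // 2] = 1.0           # a spike excites all spatial frequencies
ue, ui = u0, u0
for _ in range(100):
    ue = explicit_step(ue, dt, h)
    ui = implicit_step(ui, dt, h)
```

After 100 steps the explicit solution `ue` has exploded (the highest-frequency mode is amplified by roughly a factor of 3 per step), while the implicit solution `ui` has decayed, illustrating why implicit methods pay off for parabolic problems despite requiring a (possibly nonlinear) solve per step.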


Dear @kamensky, thanks for confirming that the transient part is being solved implicitly.