Preconditioner not working or bug?

I am running the Stokes equation demo from https://fenicsproject.org/olddocs/dolfin/1.6.0/python/demo/documented/stokes-iterative/python/documentation.html

My question is: I get exactly the same convergence output with or without the preconditioner matrix P.
Did I do something wrong? How can I fix it?
Thanks in advance!

I made some simplifications to the source code from that demo, shown below.

from dolfin import *
import matplotlib.pyplot as plt


# Load mesh
mesh = UnitCubeMesh(4, 4, 4)

# Define function spaces
P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2, dim=3)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
TH = P2 * P1
W = FunctionSpace(mesh, TH)

# Boundaries
def right(x, on_boundary): return x[0] > (1.0 - DOLFIN_EPS)
def left(x, on_boundary): return x[0] < DOLFIN_EPS
def top_bottom(x, on_boundary):
    return x[1] > 1.0 - DOLFIN_EPS or x[1] < DOLFIN_EPS

# No-slip boundary condition for velocity
noslip = Constant((0.0, 0.0, 0.0))
bc0 = DirichletBC(W.sub(0), noslip, top_bottom)

# Inflow boundary condition for velocity
inflow = Expression(("-sin(x[1]*pi)", "0.0", "0.0"), degree=2)
bc1 = DirichletBC(W.sub(0), inflow, right)

# Collect boundary conditions
bcs = [bc0, bc1]

# Define variational problem
(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)
f = Constant((0.0, 0.0, 0.0))
a = inner(grad(u), grad(v))*dx + div(v)*p*dx + q*div(u)*dx
L = inner(f, v)*dx

# Form for use in constructing preconditioner matrix
b = inner(grad(u), grad(v))*dx + p*q*dx

# Assemble system
A,  bb = assemble_system(a, L, bcs)

# Assemble preconditioner system
P, btmp = assemble_system(b, L, bcs)

# Create Krylov solver and AMG preconditioner
solver = KrylovSolver("gmres", "hypre_amg")
solver.parameters["relative_tolerance"] = 1.0E-10
solver.parameters["absolute_tolerance"] = 1.0E-12
solver.parameters["error_on_nonconvergence"] = True
solver.parameters["monitor_convergence"] = True

# Associate operator (A) and preconditioner matrix (P)
solver.set_operators(A, P)  # <-- I get exactly the same result as with the line below
# solver.set_operator(A)    # <-- i.e. without the preconditioner matrix P

# Solve
U = Function(W)
solver.solve(U.vector(), bb)

You want to set up the solver e.g. as follows: solver = KrylovSolver("gmres", "hypre_amg"). Here HYPRE's algebraic multigrid will be used to compute the approximate action of the inverse of the matrix you are passing in as P.
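In code, using the matrices A and P and the right-hand side vector bb assembled above, the relevant lines would look roughly like this sketch:

# GMRES with HYPRE BoomerAMG acting on the preconditioner matrix P
solver = KrylovSolver("gmres", "hypre_amg")
solver.set_operators(A, P)  # A defines the system, P is used to build the preconditioner
U = Function(W)
solver.solve(U.vector(), bb)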


Thanks @MiroK.
I added "hypre_amg" and control the convergence tolerances as below:

solver.parameters["relative_tolerance"] = 1.0E-10
solver.parameters["absolute_tolerance"] = 1.0E-12
solver.parameters["error_on_nonconvergence"] = True
solver.parameters["monitor_convergence"] = True

But I got the weird results below.

  1. Using solver.set_operators(A, P):

     [iteration output omitted]

     Why would this iteration stop where it does? I set the relative tolerance to 1E-10 and the absolute tolerance to 1E-12.

  2. Using solver.set_operator(A):

     [iteration output omitted]

     This iteration is even stranger; I cannot understand it at all.

The results are “correct” since FEniCS / PETSc uses the preconditioned residual to check the convergence criterion. The PETSc FAQ also notes that this discrepancy can happen, e.g., for saddle point problems: https://www.mcs.anl.gov/petsc/documentation/faq.html#kspdiverged

You could circumvent this by applying the preconditioner from the right, since the convergence check is then always based on the unpreconditioned norm. A MWE could look like this:

from fenics import *
from petsc4py import PETSc

opts = PETScOptions
# clear the options, in case something was set before
opts.clear()

opts.set('ksp_type', 'gmres')
opts.set('pc_type', 'hypre')
opts.set('pc_hypre_type', 'boomeramg')
opts.set('ksp_pc_side', 'right')

# Here, you define your variational problem, assemble the system matrix A,
# the right-hand side b and the preconditioner matrix P, apply the bcs, etc.

# set up the solver
ksp = PETSc.KSP().create()
ksp.setFromOptions()
A = as_backend_type(A).mat()
P = as_backend_type(P).mat()
b = as_backend_type(b).vec()
# W is the FunctionSpace (createSeq assumes a serial run)
x = PETSc.Vec().createSeq(W.dim())
# The preconditioner matrix P is optional; without it, A is used to build the preconditioner
ksp.setOperators(A, P)
ksp.setUp()
ksp.solve(b, x)
# check for convergence
if ksp.getConvergedReason() < 0:
    raise SystemExit('Krylov solver did not converge. Reason: '
                     + str(ksp.getConvergedReason()))

# U is a Function(W) holding the solution
U.vector()[:] = x[:]

I'm not 100% sure how FEniCS handles things in the background, but it should be similar to this, and I didn't experience much overhead, so this should be fine.
You can find more info on the PETScOptions here: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html
(these are command line options for PETSc)
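One such option that is useful here, assuming the KSP setup above, is ksp_monitor_true_residual, which prints the true (unpreconditioned) residual next to the preconditioned one at every iteration, so you can see the discrepancy discussed above directly:

# set this before ksp.setFromOptions() so it is picked up by the solver
opts.set('ksp_monitor_true_residual')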


Hello all!

To set up preconditioners properly, should we always assemble a preconditioner matrix, as is done with the form b in this demo?

# Form for use in constructing preconditioner matrix
b = inner(grad(u), grad(v))*dx + p*q*dx

Am I being naive to think that PETSc does this by itself, just by choosing one of the available preconditioners?

Sorry about my lack of knowledge, I just started studying Krylov subspace methods and preconditioners.

You do have to both assemble the preconditioner form into a matrix and specify that this matrix should be used as the preconditioner by the solver, which is done with the command

solver.set_operators(A, P)

where A is your system matrix, P the preconditioner matrix, and solver is a KrylovSolver or PETScKrylovSolver. (Or you can set it as I did in the previous post, using petsc4py directly.)

If you do not specify the preconditioner matrix (assembling it alone is not sufficient), then FEniCS will use the system matrix to build the preconditioner. To check that the correct matrix is used as the preconditioner, you can set

PETScOptions.set('ksp_view')

which should tell you exactly what PETSc does.
One small remark: this 'ksp_view' option might only take effect if you use a KrylovSolver or PETScKrylovSolver and call

solver.set_from_options()

before the solve.
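Putting this together, a minimal sketch (assuming the matrices A and P and the right-hand side bb from the demo above) could look like this:

# make sure P is actually used as the preconditioner matrix, and let PETSc report its setup
PETScOptions.set('ksp_view')
solver = PETScKrylovSolver('gmres', 'hypre_amg')
solver.set_from_options()        # pick up the PETScOptions set above
solver.set_operators(A, P)       # A: system matrix, P: preconditioner matrix
U = Function(W)
solver.solve(U.vector(), bb)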
