I am testing the SOR preconditioner with the GMRES linear solver on a 6 by 6 block matrix (from solving 6 physical variables). For SOR, if \omega = 1, the method reduces to Gauss-Seidel. I am wondering: when using SOR in FEniCS, are we using the block Gauss-Seidel or the usual (pointwise) Gauss-Seidel preconditioner?
If it is the usual Gauss-Seidel that is used as the preconditioning matrix, is there a way to use a block Gauss-Seidel preconditioner for solving this block matrix?
Thank you in advance!
As PETSc is the default linear algebra backend, I recommend having a look at: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCSOR.html and https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCBJACOBI.html#PCBJACOBI
Thank you. I have read this many times, but I am still not 100% sure that I understand it correctly.
Is the point-block SOR mentioned in the PCSOR documentation just block Gauss-Seidel?
If so, I guess that for matrix type seqaij, PETSc uses the usual Gauss-Seidel by default and allows us to change the parameter \omega, while for matrix type seqbaij it uses block Gauss-Seidel, which does not allow changing \omega.
In FEniCS, I see that the matrix type is seqaij. So I guess that if I want to use a block Gauss-Seidel preconditioner, I need to change the matrix type to seqbaij? Do you happen to know a way of converting the matrix type in FEniCS?
Thank you for your time in advance!
I guess you can interface directly with PETSc as shown below:
from dolfin import *
from petsc4py import PETSc
import numpy as np
mesh = UnitSquareMesh(10, 10)
V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)
x = SpatialCoordinate(mesh)
f = x[0]*x[1]
a = inner(u,v)*dx
l = f*v*dx
A = assemble(a)
b = assemble(l)
uh = Function(V)
uh_petsc = uh.vector().vec()
A_petsc = as_backend_type(A).mat()
b_petsc = as_backend_type(b).vec()
ksp = PETSc.KSP().create(mesh.mpi_comm())
ksp.setOperators(A_petsc)
ksp.setType("cg")
ksp.getPC().setType("sor")
ksp.solve(b_petsc, uh_petsc)
ksp.view()
which gives you the following output:
KSP Object: 1 MPI processes
type: cg
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
type: sor
type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=121, cols=121
total: nonzeros=761, allocated nonzeros=761
total number of mallocs used during MatSetValues calls=0
not using I-node routines
As you would like to convert your matrix, I suggest doing the following (works in serial):
uh2 = Function(V)
uh_petsc2 = uh2.vector().vec()
B = PETSc.Mat().create(comm=PETSc.COMM_SELF)
B.setType(PETSc.Mat.Type.SBAIJ)
B.createSBAIJ(A_petsc.size, A_petsc.block_size, csr=A_petsc.getValuesCSR())
ksp = PETSc.KSP().create(mesh.mpi_comm())
ksp.setOperators(B)
ksp.setType("cg")
ksp.getPC().setType("sor")
ksp.solve(b_petsc, uh_petsc2)
ksp.view()
assert np.allclose(uh2.vector().get_local(), uh.vector().get_local())
Thank you. The matrix type was changed properly. However, I don't see any difference in the linear iterations with SOR after changing the type, so I might be wrong in saying that we need to change the matrix type to use block Gauss-Seidel.