Preconditioner for indefinite linear system

I am trying to solve a matrix system of the form

\begin{bmatrix} A & B \\ -B & A \end{bmatrix}\begin{Bmatrix} u \\ v \end{Bmatrix} = \begin{Bmatrix} 0 \\ b \end{Bmatrix}

that comes from a discretization of the vector Helmholtz (curl-curl) equation in three dimensions using curl-conforming (Nédélec) tetrahedral elements. The block A is symmetric and indefinite; it arises from discretising the equation

\nabla\times\nabla\times\mathbf{E} - k_0^2 \mathbf{E} = 0

subject to the impedance/excitation boundary condition
\mathbf{n}\times\nabla\times\mathbf{E} = -jk_0\eta_0(2\,\mathbf{n}\times\mathbf{H}^{source}) - jk_0(\mathbf{n}\times\mathbf{n}\times\mathbf{E}),

which gives rise to the sparse block B.
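For completeness, the 2×2 real block structure is just the real/imaginary splitting of the underlying complex system. With a sign convention assumed here purely for illustration (complex operator A - jB, unknown u + jv, purely imaginary excitation jb):

(A - jB)(u + jv) = jb \quad\Longleftrightarrow\quad \begin{bmatrix} A & B \\ -B & A \end{bmatrix}\begin{Bmatrix} u \\ v \end{Bmatrix} = \begin{Bmatrix} 0 \\ b \end{Bmatrix}.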

The MUMPS direct solver produces the correct solution for a number of geometries, as long as the number of DoFs is small enough that the memory of my computer is not exhausted. I have tried the GMRES solver on the same problems, but without success: the solver runs without error but never converges, and the iteration error stagnates. I tried the ILU, SOR, Jacobi, Hypre AMG, etc. preconditioners, but no luck.

I have two questions:

  1. Is there a preconditioner that I can import into the solver that will speed up convergence for this type of problem?
  2. Or, can I use a coarse solution (from a coarser mesh or lower-order interpolation, solved with a direct method) as a preconditioner for a finer solution (using GMRES or another suitable iterative solver)? Is there an example of this I could use as a template?

As far as I’m aware, domain decomposition has been used with some success for the time-harmonic Maxwell system. And, as you’ve observed, the typically used “black box” methods do not converge.

You could also look into the AMS (auxiliary-space Maxwell solver) preconditioner in PETSc/HYPRE, but I doubt it will help with your formulation. DOLFIN demo here.
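For reference, here is only a rough sketch (not the demo itself) of how the AMS pieces might be wired up from legacy DOLFIN through petsc4py. It assumes mesh, a real-valued N1curl space V, and an already-assembled matrix A and right-hand side b; also keep in mind that AMS targets the definite curl-curl (eddy-current) case, which is part of why I doubt it helps for an indefinite problem:

from dolfin import *
from petsc4py import PETSc

# Assumed to exist already: mesh, V = FunctionSpace(mesh, "N1curl", 1),
# and an assembled (real-valued) matrix A and vector b.
P1 = FunctionSpace(mesh, "Lagrange", 1)
G = DiscreteOperators.build_gradient(V, P1)     # discrete gradient, required by AMS

ksp = PETSc.KSP().create(mesh.mpi_comm())
ksp.setOperators(as_backend_type(A).mat())
ksp.setType("gmres")
pc = ksp.getPC()
pc.setType("hypre")
pc.setHYPREType("ams")
pc.setHYPREDiscreteGradient(as_backend_type(G).mat())
# AMS also needs nodal coordinates (or edge constant vectors); see the demo.
pc.setCoordinates(P1.tabulate_dof_coordinates().reshape((-1, mesh.geometry().dim())))
ksp.setFromOptions()

x = as_backend_type(b).vec().duplicate()
ksp.solve(as_backend_type(b).vec(), x)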

In terms of literature, check out the works by Ralf Hiptmair and Jinchao Xu.

Thanks for the references. I had a look, and it looks like a very challenging problem. The usual variational approach gives rise to a saddle-point problem that is difficult to solve using iterative methods.

I am taking a look at least-squares formulations of Maxwell’s equations, which produce symmetric positive-definite discrete operators at the expense of doubling the number of unknowns. The usual preconditioned CG solver should work in that case. I’ll let you know how it goes.


The least-squares approach seems to be worth considering for these types of problems. I decided to start with the simpler scalar Helmholtz problem (acoustic wave equation).

\nabla^2 p + k^2 p = 0

We pose the problem as a system of two first-order PDEs
\begin{array}{l} \nabla\cdot\mathbf{u} - jkp = 0\\ \nabla p - jk\mathbf{u} = 0\end{array}
and then form the residual
R = \|\nabla\cdot\mathbf{u} - jkp\|^2_\Omega + \|\nabla p - jk\mathbf{u}\|^2_\Omega + \|\mathbf{n}\cdot\mathbf{u}\|^2_{\partial\Omega_h} + \|\mathbf{n}\cdot\mathbf{u} - (p - p_{inc})\|^2_{\partial\Omega_{in}} + \|\mathbf{n}\cdot\mathbf{u} - p\|^2_{\partial\Omega_{out}},
where \Omega is the 3-D problem domain, \partial\Omega_h is the part of the boundary where a hard wall (zero normal velocity) is specified, \partial\Omega_{in} is where an incoming wave is impressed, and \partial\Omega_{out} carries an outgoing-wave condition corresponding to a Robin boundary condition on the pressure variable p in the usual Galerkin formulation.

The resulting system of equations is symmetric and positive definite. Performing the usual Ritz-like minimization yields a system for which the CG algorithm (with Jacobi preconditioning) works as it should. The price we pay is the need to solve for more DoFs.
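To make the link to the code explicit: splitting \mathbf{u} and p into real and imaginary parts and setting the first variation of R to zero gives a bilinear form whose interior part is

a\big((\mathbf{u},p),(\mathbf{v},q)\big) = \big(\nabla\cdot\mathbf{u}_r + k p_i,\ \nabla\cdot\mathbf{v}_r + k q_i\big)_\Omega + \big(\nabla\cdot\mathbf{u}_i - k p_r,\ \nabla\cdot\mathbf{v}_i - k q_r\big)_\Omega + \big(\nabla p_r + k\mathbf{u}_i,\ \nabla q_r + k\mathbf{v}_i\big)_\Omega + \big(\nabla p_i - k\mathbf{u}_r,\ \nabla q_i - k\mathbf{v}_r\big)_\Omega,

with the boundary terms treated the same way; this is exactly the form a assembled in the script below.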

For those who are interested, the FEniCS code is found at this link.

Will update as I move ahead. (I’m doing this whenever I get a spare moment). Cheers!


Got the least-squares approach to work for the acoustic wave equation, including the inlet and outlet radiation conditions. There is no need for special saddle-point preconditioners, since the resulting discrete operator is symmetric and positive-definite. The conjugate gradient solver with Jacobi preconditioning works reliably. Solved the horn problem below with 1.8 million DoFs, using a mixed element where pressure is approximated with Lagrange functions and velocity with Raviart-Thomas functions.

[Image: HornVel]

import meshio

# Convert the Gmsh mesh to XDMF for DOLFIN (dict-style cells API of meshio < 4;
# newer meshio versions would use msh.cells_dict["tetra"] instead)
msh = meshio.read("AcousticRadiation3.msh")
meshio.write("mesh.xdmf", meshio.Mesh(points=msh.points, cells={"tetra": msh.cells["tetra"]}))
from dolfin import *
import numpy as np
import matplotlib.pyplot as plt
import os, sys, traceback

k0 = 4.0      # wavenumber
tol = 1.0e-6  # geometric tolerance used when locating boundary facets

class Hard(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary

class InputBC(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary and near(x[2], -3.0, tol)

class OutputBC(SubDomain):
    def inside(self, x, on_boundary):
        rr = sqrt(x[0] * x[0] + x[1] * x[1] + x[2] * x[2])
        return on_boundary and near(rr, 2.0, 5.0e-2) and (x[2] >= 0.0)


mesh = Mesh()
with XDMFFile("mesh.xdmf") as infile:
    infile.read(mesh)
mvc = MeshValueCollection("size_t", mesh, 3)

info(mesh)
#plot(mesh)
#plt.show()

# Mark boundaries
sub_domains = MeshFunction("size_t", mesh, mesh.topology().dim() - 1)
sub_domains.set_all(3)
hard = Hard()
hard.mark(sub_domains, 0)
in_port = InputBC()
in_port.mark(sub_domains, 1)
out_port = OutputBC()
out_port.mark(sub_domains, 2)
File("BoxSubDomains.pvd").write(sub_domains)
# Set up function spaces
# For low order problem
cell = tetrahedron

VelocityFn = FiniteElement('RT', cell, 2)        # Raviart-Thomas element for velocity
PressureFn = FiniteElement('Lagrange', cell, 2)  # Lagrange element for pressure
# Mixed space ordering: (Re u, Im u, Re p, Im p)
element = FunctionSpace(mesh, MixedElement([VelocityFn, VelocityFn, PressureFn, PressureFn]))
u_r, u_i, p_r, p_i = TrialFunctions(element)
v_r, v_i, q_r, q_i = TestFunctions(element)

#surface integral definitions from boundaries
ds = Measure('ds', domain = mesh, subdomain_data = sub_domains)
# with source and sink terms
u0 = Constant((0.0, 0.0, 0.0)) #Hard V definition
p_src = Constant((1.0))

#Boundary condition dictionary
boundary_conditions = {0: {'Hard' : u0},
                       1: {'InputBC': p_src},
                       2: {'OutputBC': u0}}

n = FacetNormal(mesh)

#Build Hard boundary conditions for real and imaginary parts
bcs = []
integral_source = []
integrals_load =[]
for i in boundary_conditions:
    if 'Hard' in boundary_conditions[i]:
        bb1 = (inner(dot(n, v_r), dot(n, u_r)) + inner(dot(n, v_i), dot(n, u_i))) * ds(i)
        integrals_load.append(bb1)

# Build input BC source term and loading term
for i in boundary_conditions:
    if 'InputBC' in boundary_conditions[i]:
        r = boundary_conditions[i]['InputBC']
        bb1 = inner(dot(n, v_i) - q_i, r)  * ds(i)
        integral_source.append(bb1)
        bb1 = (inner(dot(n, v_r) - q_r, dot(n, u_r) - p_r) + inner(dot(n, v_i) - q_i, dot(n, u_i) -  p_i)) * ds(i)
        integrals_load.append(bb1)

for i in boundary_conditions:
    if 'OutputBC' in boundary_conditions[i]:
        r = boundary_conditions[i]['OutputBC']
        bb2 = (inner(dot(n, v_r) - q_r, dot(n, u_r) - p_r) + inner(dot(n, v_i) - q_i, dot(n, u_i) - p_i))  * ds(i)
        integrals_load.append(bb2)

a = (inner(div(v_r)+k0*q_i, div(u_r)+k0*p_i) + inner(div(v_i) - k0*q_r, div(u_i) - k0*p_r) + inner(grad(q_r) + k0*v_i, grad(p_r) + k0*u_i) +  inner(grad(q_i) - k0*v_r, grad(p_i) - k0*u_r)) * dx + sum(integrals_load)
L = sum(integral_source)

u1 = Function(element)
#solve(a == L, u1, bcs, solver_parameters = {'linear_solver' : 'mumps'}) 

vdim = u1.vector().size()
print ('Solution vector size = ',vdim)
u1.vector()[:] = np.random.uniform(-1, 1, vdim)  # random fill (not used as a start vector: nonzero_initial_guess is False below)
A, b = assemble_system(a, L, bcs)
# CG with Jacobi preconditioning: the least-squares operator is symmetric positive-definite
solver = PETScKrylovSolver("cg","jacobi")
solver.parameters['absolute_tolerance'] = 1e-7
solver.parameters['relative_tolerance'] = 1e-12
solver.parameters['maximum_iterations'] = 10000
solver.parameters['monitor_convergence'] = True
solver.parameters['nonzero_initial_guess'] = False
solver.parameters['report'] = True
solver.set_operators(A, A)
solver.solve(u1.vector(),b)

w_r, w_i, z_r, z_i = u1.split(True)

fp = File("Preal.pvd")
fp << z_r
fp = File("Pimag.pvd")
fp << z_i
fp = File("Ureal.pvd")
fp << z_r
fp = File("Uimag.pvd")
fp << w_i
ut = z_r.copy(deepcopy=True)
vt = w_r.copy(deepcopy=True)

# Animate one period of the time-harmonic pressure and velocity fields
fp = File('PressureWaveFile.pvd')
for i in range(50):
    ut.vector().zero()
    ut.vector().axpy(sin(pi * i / 25.0), z_i.vector())
    ut.vector().axpy(cos(pi * i / 25.0), z_r.vector())
    fp << (ut, i)
fp = File('VelocityWaveFile.pvd')
for i in range(50):
    vt.vector().zero()
    vt.vector().axpy(sin(pi * i / 25.0), w_i.vector())
    vt.vector().axpy(cos(pi * i / 25.0), w_r.vector())
    fp << (vt, i)

sys.exit(0)

Working on the electromagnetic version of the time-harmonic solver. I am using the first-order Maxwell curl equations in an impedance-normalized form (i.e. \eta_0 = \sqrt{\mu_0 / \epsilon_0} = 1). I consider the case of inhomogeneous dielectrics only (hence the \epsilon_r in the following equations). The wavenumber (spatial frequency) is k_0.

\nabla\times\mathbf{E} = -jk_0 \mathbf{H}
\nabla\times\mathbf{H} = jk_0\epsilon_r\mathbf{E}

As in the acoustic case, I form the least-squares residual from the canonical first-order system and the relevant boundary conditions:
R = \|\nabla\times\mathbf{E} + jk_0\mathbf{H}\|^2_\Omega + \|\nabla\times\mathbf{H} - jk_0\epsilon_r\mathbf{E}\|^2_\Omega + \|\mathbf{n}\times\mathbf{E}\|^2_{\partial\Omega_{PEC}} + \|\mathbf{n}\cdot\mathbf{H}\|^2_{\partial\Omega_{PEC}} + \|\mathbf{n}\times((\mathbf{E} - \mathbf{E}^{inc})\times\mathbf{n}) - Z_b\,\mathbf{n}\times\mathbf{H}\|^2_{\partial\Omega_{InOut}}

Z_b is a problem-dependent boundary impedance (normalized to \eta_0) used to absorb outgoing waves. I am using curl-conforming Nédélec functions as the finite element interpolation space. Preliminary results look good, but I need to add more memory to my laptop to verify convergence! :wink: The boundary conditions are applied quite differently than in the usual Galerkin method. Moreover, low-order interpolation tends to exhibit a non-physical loss; at least order-2 interpolation is required for decent performance. Order 3 is best, but generates very large (and denser) matrices.
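No complete script to share yet, but as a rough sketch of where this is going (illustrative only; it assumes mesh, k0 and the permittivity eps_r already exist, and omits all boundary terms): using order-2 N1curl elements for both E and H and the same real/imaginary splitting as in the acoustic script above, the interior part of the least-squares form would look something like

from dolfin import *

Efn = FiniteElement('N1curl', tetrahedron, 2)   # Nedelec edge elements for E
Hfn = FiniteElement('N1curl', tetrahedron, 2)   # and for H
V = FunctionSpace(mesh, MixedElement([Efn, Efn, Hfn, Hfn]))
E_r, E_i, H_r, H_i = TrialFunctions(V)
F_r, F_i, G_r, G_i = TestFunctions(V)

# || curl E + j k0 H ||^2 + || curl H - j k0 eps_r E ||^2, split into real/imaginary parts
a_vol = (inner(curl(F_r) - k0 * G_i, curl(E_r) - k0 * H_i)
         + inner(curl(F_i) + k0 * G_r, curl(E_i) + k0 * H_r)
         + inner(curl(G_r) + k0 * eps_r * F_i, curl(H_r) + k0 * eps_r * E_i)
         + inner(curl(G_i) - k0 * eps_r * F_r, curl(H_i) - k0 * eps_r * E_r)) * dx

The PEC and impedance boundary terms would be added as surface integrals in the same way as the integrals_load list in the acoustic code.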

Until next time!
