Error goes down while condition number goes up

I am solving a very basic Poisson equation. If I refine the mesh, the error goes down as expected. However, the condition number goes up:

N: 2
error: 0.22149394535822542
cond: 15.56322662071126
N: 4
error: 0.03074190236918026
cond: 52.48333130760118
N: 8
error: 0.003932420860391938
cond: 178.02965569598211
N: 16
error: 0.0004943154804889275
cond: 636.1133651331177
N: 32
error: 6.187547361026483e-05
cond: 2381.7702791780166
N: 64
error: 7.737120543412375e-06
cond: 9192.78524325424
N: 128
error: 9.67223970900399e-07
cond: 36094.94754399094
from fenics import *
import numpy as np

def horner(*coeffs):
    """Return a C++ expression string for the polynomial sum_i coeffs[i]*x^i
    in Horner form, suitable for a FEniCS Expression."""
    N = len(coeffs)
    if N == 0:
        return "0.0"
    elif N == 1:
        return str(coeffs[0])
    else:
        a0, rest = coeffs[0], coeffs[1:]
        return "{} + x[0]*({})".format(a0, horner(*rest))


class Solver:
    def __init__(self, coeffs, N,
                 element=FiniteElement("P", interval, 2)):
        self.N = N
        self.coeffs = coeffs
        self.mesh = IntervalMesh(N, -1, 1)
    
        self.V  = FunctionSpace(self.mesh, element)
        v_phi = TestFunction(self.V)
        phi = TrialFunction(self.V)
    
        # Manufactured solution: phi = a0 + a1*x + a2*x^2 + a3*x^3 + a4*x^4,
        # so the right-hand side is rho = -phi'' = -2*a2 - 6*a3*x - 12*a4*x^2.
        a0, a1, a2, a3, a4 = coeffs
        self.phi_exact = Expression(horner(a0, a1, a2, a3, a4), degree=4)
        rho = Expression(horner(-2*a2, -6*a3, -12*a4), degree=4)
        self.bc = DirichletBC(self.V, self.phi_exact, "on_boundary")
        
        # Weak form of -phi'' = rho
        self.A = inner(grad(v_phi), grad(phi))*dx
        self.f = inner(rho, v_phi)*dx
    
    def solve(self):
    
        phi = Function(self.V)
        solve(self.A == self.f, phi, bcs=self.bc)
        self.phi_sol = phi
        return self.phi_sol, self.phi_exact
    
    def plot(self):
        plot(self.phi_sol)
        plot(self.phi_exact, mesh=self.mesh, marker=".")
    
    def error(self):
        error = (self.phi_exact - self.phi_sol)**2*dx
        return sqrt(abs(assemble(error)))
    
    def cond(self):
        # Dense 2-norm condition number of the stiffness matrix with the BC applied
        M = assemble(self.A)
        self.bc.apply(M)
        return np.linalg.cond(M.array())

coeffs = np.random.randn(5)
for N in [2**i for i in range(1,8)]:
    s = Solver(coeffs, N)
    s.solve()
    print("N: {}".format(N))
    print("error: {}".format(s.error()))
    print("cond: {}".format(s.cond()))

Is it expected that the condition number goes up? Is there a better way to compute the condition number, one that does not require converting the matrix to a dense array?
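To make the second question concrete: this is roughly what I have in mind for avoiding the dense conversion (just a sketch, untested; it assumes scipy and petsc4py are available, and eigsh plus the PETSc-to-CSR conversion are my guesses at the right tools):

import scipy.sparse as sp
import scipy.sparse.linalg as spla
from fenics import assemble_system, as_backend_type

def cond_estimate(solver):
    # assemble_system applies the BC symmetrically, so the matrix stays SPD
    K, _ = assemble_system(solver.A, solver.f, solver.bc)
    petsc_mat = as_backend_type(K).mat()
    K_csr = sp.csr_matrix(petsc_mat.getValuesCSR()[::-1], shape=petsc_mat.size)
    # extreme eigenvalues only: largest magnitude, and closest to zero (shift-invert)
    lmax = spla.eigsh(K_csr, k=1, which="LM", return_eigenvectors=False)[0]
    lmin = spla.eigsh(K_csr, k=1, sigma=0, which="LM", return_eigenvectors=False)[0]
    return lmax / lmin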

The condition number of the discretised FEM Poisson operator should grow like \mathcal{O}(h^{-2}), where h is a measure of the mesh cell size; in terms of the number of unknowns n this is \mathcal{O}(n^{2/d}), with d the geometric dimension. Looks like you've got a reasonable result: in your 1D code the condition number roughly quadruples every time h is halved. Additionally, the error in the L_2 norm should be \mathcal{O}(h^{p+1}), where p is the piecewise polynomial approximation order.
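For instance, the observed rates can be read off the numbers you posted; a quick check in plain numpy (using your printed values, and recalling that each refinement halves h):

import numpy as np
# values copied from the output above; each step halves h
err  = np.array([0.22149394535822542, 0.03074190236918026, 0.003932420860391938,
                 0.0004943154804889275, 6.187547361026483e-05, 7.737120543412375e-06,
                 9.67223970900399e-07])
cond = np.array([15.56322662071126, 52.48333130760118, 178.02965569598211,
                 636.1133651331177, 2381.7702791780166, 9192.78524325424,
                 36094.94754399094])
print(np.log2(err[:-1] / err[1:]))    # tends to 3 = p + 1 for the P2 elements used
print(np.log2(cond[1:] / cond[:-1]))  # tends to 2, i.e. cond ~ h^{-2}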

Check out Saad’s book for more details.


Thanks. Let me try to explain why the condition number grows even though the discretisation is stable; perhaps you can comment on whether I am on the right track?

  • Stability means, by definition, that opnorm(inv(A)) stays bounded as the grid is refined. Since A is symmetric positive definite, this is equivalent to the smallest eigenvalue of A being bounded away from zero.
  • cond(A) is the largest eigenvalue divided by the smallest, so the only way it can grow is that the largest eigenvalue of A grows. Growth of the largest eigenvalue is expected, since refining the grid turns the basis functions into very steep peaks that get amplified strongly by differentiation. An eigenfunction for the largest eigenvalue should look like a spiky variant of the sine function with wavelength 2h (a small numerical check is sketched below this list).
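Here is the check I mean (my own rough sketch reusing the Solver class from above, not carefully verified):

import numpy as np
from fenics import Function, assemble_system, plot

s = Solver(np.random.randn(5), 16)
# assemble_system applies the BC symmetrically, so eigh is applicable
K, _ = assemble_system(s.A, s.f, s.bc)
w, v = np.linalg.eigh(K.array())
mode = Function(s.V)
mode.vector()[:] = v[:, np.argmax(w)]  # eigenvector of the largest eigenvalue
plot(mode)  # oscillates on the scale of individual cells, i.e. wavelength ~ 2h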

Also, the growing condition number is no numerical danger here. Only large eigenvalues of inv(A) would be dangerous, and we don't have those here. Does that sound correct?