Error in reproducing simple 1D example problem

Hello,

I am trying to reproduce the convergence behaviour for linear FE for the 1D Poisson problem with Robin boundary conditions on the interval (0,1), i.e., weak form a(u,v) = L(v) with
a(u,v) = \langle u',v'\rangle + \varepsilon^{-1}(u(0)v(0)+u(1)v(1)) \text{ and } L(v) = \langle f, v\rangle.

My code is appended below. From my lecture, I would expect the energy norm of the error to be O(h) and the L^2 norm to be O(h^2). This works well in the beginning; however, at some point the errors start to behave strangely and go up again (see the result plot appended). If my implementation is correct, why does this behaviour occur? If not, how can I fix it and get even better accuracy? Thanks in advance.
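(For reference, the manufactured solution can be checked against the strong form -u'' = f and the Robin conditions implied by the boundary term of a(u,v) with plain numpy; the helper names below are my own, not part of the FEniCS code.)

```python
import numpy as np

eps = 1e-4
pi = np.pi

def u(x):
    # exact solution used in the code below
    return (np.sin(2*pi*x) + np.sin(4*pi*x) + np.sin(6*pi*x)
            + (-24*pi*eps*x + 12*pi*eps)/(1 + 2*eps))

def du(x):
    # analytic first derivative of u
    return (2*pi*np.cos(2*pi*x) + 4*pi*np.cos(4*pi*x) + 6*pi*np.cos(6*pi*x)
            - 24*pi*eps/(1 + 2*eps))

def f(x):
    # right-hand side used in the code below
    return ((2*pi)**2*np.sin(2*pi*x) + (4*pi)**2*np.sin(4*pi*x)
            + (6*pi)**2*np.sin(6*pi*x))

# Robin conditions implied by the boundary term eps^{-1}(u(0)v(0)+u(1)v(1)):
# -u'(0) + u(0)/eps = 0 and u'(1) + u(1)/eps = 0
r0 = -du(0.0) + u(0.0)/eps
r1 = du(1.0) + u(1.0)/eps

# -u'' = f, checked with a central difference at a few interior points
h = 1e-4
xs = np.array([0.2, 0.5, 0.7])
res = np.max(np.abs(-(u(xs - h) - 2*u(xs) + u(xs + h))/h**2 - f(xs)))
print(r0, r1, res)
```

Both boundary residuals come out at rounding level and the interior residual at finite-difference truncation level, so the manufactured solution itself is consistent.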

import matplotlib.pyplot as plt
import numpy as np
from fenics import *

n_steps = 7
mesh_factor = 10

L2_error_list = []
energy_error_list = []
numbers = []

for step in range(n_steps):
    print('Step', step)
    n_parts = mesh_factor**(step+1)
    numbers.append(n_parts)

    mesh = IntervalMesh(n_parts, 0, 1)
    V = FunctionSpace(mesh, 'P', 1)

    #Define VP
    eps = 10**(-4)
    u = TrialFunction(V)
    v = TestFunction(V)
    f = Expression('pow((2*pi),2)*sin(2*pi*x[0]) + pow((4*pi),2)*sin(4*pi*x[0]) + pow((6*pi),2)*sin(6*pi*x[0])', degree = 2) 
    a = dot(grad(u),grad(v))*dx + pow(eps,-1)*(u*v*ds)
    L = f*v*dx

    sol = Function(V)
    solve(a==L, sol)
    true_sol = Expression('sin(2*pi*x[0]) + sin(4*pi*x[0]) + sin(6*pi*x[0])  + pow(1+2*eps,-1)*(-24*pi*eps*x[0]+12*pi*eps)', degree = 5, domain = mesh, eps = eps)


    #L2 error
    L2_squared = (sol - true_sol)**2*dx
    L2_error = sqrt(abs(assemble(L2_squared)))
    L2_error_list.append(L2_error)

    #Energy error
    energy_squared = dot(grad(sol)-grad(true_sol),grad(sol)-grad(true_sol))*dx + pow(eps,-1)*((sol-true_sol)*(sol-true_sol)*ds)
    energy_error = sqrt(assemble(energy_squared))
    energy_error_list.append(energy_error)

print('L2', L2_error_list) 
print('Energy', energy_error_list)
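For reference, the same discretisation can be reproduced without FEniCS: a hand-assembled P1 stiffness matrix with the Robin term added to the two boundary diagonal entries, and a 2-point Gauss rule for the load. This is a minimal sketch with names of my own choosing, and it shows the expected O(h^2) L2 rate at moderate resolutions:

```python
import numpy as np

EPS = 1e-4
PI = np.pi

def f(s):
    # right-hand side from the post: f = -u''
    return ((2*PI)**2*np.sin(2*PI*s) + (4*PI)**2*np.sin(4*PI*s)
            + (6*PI)**2*np.sin(6*PI*s))

def exact(s):
    # manufactured solution from the post
    return (np.sin(2*PI*s) + np.sin(4*PI*s) + np.sin(6*PI*s)
            + (-24*PI*EPS*s + 12*PI*EPS)/(1 + 2*EPS))

def solve_p1(n):
    # assemble K u = b for P1 elements on n uniform cells of size h
    h = 1.0/n
    x = np.linspace(0.0, 1.0, n + 1)
    K = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    gp = np.array([0.5 - 0.5/np.sqrt(3.0), 0.5 + 0.5/np.sqrt(3.0)])  # 2-pt Gauss on [0,1]
    for k in range(n):
        # local stiffness [[1,-1],[-1,1]]/h
        K[k, k] += 1/h; K[k+1, k+1] += 1/h
        K[k, k+1] -= 1/h; K[k+1, k] -= 1/h
        fq = f(x[k] + h*gp)
        b[k] += h/2*np.sum(fq*(1 - gp))
        b[k+1] += h/2*np.sum(fq*gp)
    # Robin boundary term eps^{-1}(u(0)v(0) + u(1)v(1))
    K[0, 0] += 1/EPS
    K[n, n] += 1/EPS
    return x, np.linalg.solve(K, b)

def l2_error(n):
    x, U = solve_p1(n)
    h = 1.0/n
    t, w = np.polynomial.legendre.leggauss(4)
    t, w = 0.5*(t + 1), 0.5*w  # map 4-pt rule from [-1,1] to [0,1]
    err2 = 0.0
    for k in range(n):
        uh = U[k]*(1 - t) + U[k+1]*t  # linear interpolant on cell k
        err2 += h*np.sum(w*(uh - exact(x[k] + h*t))**2)
    return np.sqrt(err2)

e64, e128 = l2_error(64), l2_error(128)
print(e64, e128, e64/e128)  # ratio close to 4 for O(h^2)
```

Halving h reduces the L2 error by roughly a factor of 4, i.e. the expected second-order rate, before floating-point effects can enter.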

Here a graphical visualization of the values:
[image: Combined_Convergence_Plot]

You need to be careful when assembling the errors, as explained in chapter 5.5.4 of the FEniCS tutorial.


Thanks for the quick and helpful answer. Nevertheless, I still couldn't solve my problem completely. Let's consider only the L2 error. I have now replaced my error calculation with

new_L2_error = errornorm(true_sol, sol, norm_type='L2')
new_L2_error_list.append(new_L2_error)

If I define my true solution with a small degree (e.g., degree=1), the errors I obtain are indeed far too optimistic, as described in the tutorial. However, with degree=5, as in the code example above, my implementation and the one using errornorm yield exactly the same L2 errors. In particular, I still observe reduced convergence for very high numbers of degrees of freedom, as in the plot above. What am I still doing wrong?

Still could not find an explanation for the unexpected error behaviour after reading the tutorial section several times. Is there any other reason in my code that could cause it?

Another issue in the code is the usage of Expression, which gets interpolated into a fixed-degree Lagrange space: since you have specified degree=2 for f, it is represented as a second-order Lagrange function.

This means that you will at some point lose accuracy due to the representation of f.
I would suggest you use x = SpatialCoordinate(mesh) to describe the spatial variation instead.

But you also have a general issue here: you are hitting machine precision. Your error is of order 1e-8, which means that the squared error you assemble is of order 1e-16, which is machine precision for double-precision floats.
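A generic double-precision illustration of this, independent of FEniCS:

```python
import numpy as np

# Machine epsilon for double precision: increments below roughly 1.1e-16
# are lost when added to a number of order 1.
print(np.finfo(np.float64).eps)  # 2.220446049250313e-16

# A squared-error contribution of 1e-16 vanishes next to O(1) terms:
print(1.0 + 1e-16 == 1.0)  # True

# So once the error itself reaches ~1e-8, the assembled square is
# dominated by rounding noise rather than by the true error.
```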

It will also depend on what linear solver you are using (direct or iterative), and which tolerances you are using.

You also have the issue that you are splitting an interval from 0 to 1 into smaller and smaller cells (10 million cells in the finest case). This means that a single cell has size 10^-7, which can also be a source of floating-point issues.
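This kind of drift is easy to see with cell sizes that are not exactly representable in binary (a generic Python illustration, not tied to FEniCS):

```python
# 0.1 is not exactly representable in binary floating point, so
# accumulating it as a "cell size" drifts away from the exact endpoint:
total = sum([0.1]*10)
print(total)         # 0.9999999999999999
print(total == 1.0)  # False

# The same applies to 1e-7: node positions built from it carry absolute
# rounding errors around 1e-16, i.e. a relative error of about 1e-9 per cell.
```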
