Convergence failed for potential flow through a box with a constant inlet

Below, I modified the official “Computing convergence rates” tutorial to compare the exact and FEniCSx solutions of potential flow past a unit square with constant Neumann and Dirichlet conditions. (I only substituted the original demo function with the potential flow solution, which is even simpler.)

I only changed `def u_ex` and `def solve_poisson`.

However, both the L2 and H1 errors increase steadily as the mesh is refined, so convergence is never achieved!

I have tried everything and have been stuck here for 3 weeks. I would appreciate any help!
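For reference, the boundary value problem solved in the script below is Laplace's equation for the velocity potential $\phi$ on the unit square:

```{math}
\nabla^2 \phi = 0 \;\text{in}\; \Omega=(0,1)^2, \qquad \phi = 0 \;\text{on}\; x_0=0, \qquad \frac{\partial \phi}{\partial n} = U_{\text{inlet}} \;\text{on}\; x_0=1,
```

with homogeneous Neumann conditions on the remaining sides, so the exact solution is simply $\phi = U_{\text{inlet}}\,x_0$.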

original script: Error control: Computing convergence rates — FEniCSx tutorial

# # Error control: Computing convergence rates
# Author: Jørgen S. Dokken, Hans Petter Langtangen, Anders Logg


# %%
from dolfinx import default_scalar_type, mesh, fem
from dolfinx.fem import (Expression, Function, functionspace,
                         assemble_scalar, dirichletbc, form, locate_dofs_geometrical, locate_dofs_topological)
from dolfinx.fem.petsc import LinearProblem
from dolfinx.mesh import create_unit_square, create_unit_cube, locate_entities_boundary

from mpi4py import MPI
from ufl import SpatialCoordinate, TestFunction, TrialFunction, div, dot, dx, grad, inner

import ufl
import numpy as np
from petsc4py.PETSc import ScalarType

U_inlet = 1.0
# Factory for the exact solution: the `mod` argument lets the same definition
# produce a numpy callable (for interpolation) or a ufl expression (for error
# forms), as in the tutorial; it is unused here since the solution is linear.
def u_ex(mod):
    return lambda x: U_inlet * x[0]


u_numpy = u_ex(np)
u_ufl = u_ex(ufl)


def solve_poisson(N=4, degree=3):
    # domain = mesh.create_unit_cube(MPI.COMM_WORLD, N, N, N, mesh.CellType.hexahedron)
    domain = mesh.create_unit_square(MPI.COMM_WORLD, N, N)
    x = SpatialCoordinate(domain)
    V = functionspace(domain, ("Lagrange", degree))

    # Define trial and test functions
    phi = ufl.TrialFunction(V)
    v = ufl.TestFunction(V)
    # Bilinear form (Laplace equation)
    a = ufl.inner(ufl.grad(phi), ufl.grad(v)) * ufl.dx
    # f = 0.0000000000000001
    g = U_inlet

    # get dimension
    tdim = domain.topology.dim
    #print("domain dimension is", tdim)
    fdim = tdim - 1

    domain.topology.create_connectivity(fdim, tdim)
    #boundary_facets = mesh.exterior_facet_indices(domain.topology)

    ## Linear form 
    # Neumann BC
    def boundary_x1(x):
        return np.isclose(x[0], 1.0)
    boundary_facets_x1 = mesh.locate_entities_boundary(domain, fdim, boundary_x1)
    mt = mesh.meshtags(domain, fdim, boundary_facets_x1, 1)
    ds = ufl.Measure("ds", domain=domain, subdomain_data=mt)
    L = ufl.inner(g, v) * ds(1)  # g = ∂φ/∂n = U_inlet = 1

    # Dirichlet BC new
    facets = mesh.locate_entities_boundary(
        domain,
        dim=fdim,
        marker=lambda x: np.isclose(x[0], 0.0),
    )
    dofs = fem.locate_dofs_topological(V=V, entity_dim=fdim, entities=facets)
    bc_dirichlet = fem.dirichletbc(value=ScalarType(0), dofs=dofs, V=V)

    # # Dirichlet BC on x=1
    # def boundary_x0(x):
    #     return np.isclose(x[0], 1.0)
    # boundary_dofs_x0 = fem.locate_dofs_geometrical(V, boundary_x0)
    # bc_dirichlet = fem.dirichletbc(1.0, boundary_dofs_x0, V)


    default_problem = fem.petsc.LinearProblem(a, L, bcs=[bc_dirichlet], petsc_options={"ksp_type": "preonly", "pc_type": "lu"})
    return default_problem.solve(), u_ufl(x)

# %% [markdown]
# Now, we can compute the error between the analytical solution `u_ex=u_ufl(x)` and the approximate solution `uh`. A natural choice might seem to be computing `(u_ex-uh)**2*ufl.dx`.

# %%
uh, u_ex = solve_poisson(2)
print("uh:")
print(uh)

print("u_ex:")
print(u_ex)

comm = uh.function_space.mesh.comm
error = form((uh - u_ex)**2 * ufl.dx)
E = np.sqrt(comm.allreduce(assemble_scalar(error), MPI.SUM))
if comm.rank == 0:
    print(f"L2-error: {E:.2e}")

# %% [markdown]
# Sometimes it is of interest to compute the error of the gradient field, $\vert\vert \nabla(u_e-u_h)\vert\vert$, often referred to as the $H_0^1$-norm of the error; this can be expressed as

# %%
eh = uh - u_ex
error_H10 = form(dot(grad(eh), grad(eh)) * dx)
E_H10 = np.sqrt(comm.allreduce(assemble_scalar(error_H10), op=MPI.SUM))
if comm.rank == 0:
    print(f"H01-error: {E_H10:.2e}")

# %% [markdown]
# ### Reliable error norm computation
# However, this expands to `u_ex**2 + uh**2 - 2*u_ex*uh`. If the error is small (and the solution itself is of moderate size), this calculation corresponds to subtracting two nearly equal positive numbers, `u_ex**2 + uh**2` $\sim 1$ and `2*u_ex*uh` $\sim 1$, yielding a small number prone to round-off errors.
# 
# To avoid this issue, we interpolate the approximate and exact solution into a higher order function space. Then we subtract the degrees of freedom from the interpolated functions to create a new error function. Then, finally, we assemble/integrate the square difference and take the square root to get the L2 norm.

# %%
def error_L2(uh, u_ex, degree_raise=3):
    # Create higher order function space
    degree = uh.function_space.ufl_element().degree
    family = uh.function_space.ufl_element().family_name
    mesh = uh.function_space.mesh
    W = functionspace(mesh, (family, degree + degree_raise))
    # Interpolate approximate solution
    u_W = Function(W)
    u_W.interpolate(uh)

    # Interpolate exact solution, special handling if exact solution
    # is a ufl expression or a python lambda function
    u_ex_W = Function(W)
    if isinstance(u_ex, ufl.core.expr.Expr):
        u_expr = Expression(u_ex, W.element.interpolation_points())
        u_ex_W.interpolate(u_expr)
    else:
        u_ex_W.interpolate(u_ex)

    # Compute the error in the higher order function space
    e_W = Function(W)
    e_W.x.array[:] = u_W.x.array - u_ex_W.x.array

    # Integrate the error
    error = form(ufl.inner(e_W, e_W) * ufl.dx)
    error_local = assemble_scalar(error)
    error_global = mesh.comm.allreduce(error_local, op=MPI.SUM)
    return np.sqrt(error_global)

# %% [markdown]
# ## Computing convergence rates
# Let us consider a sequence of mesh resolutions $h_0>h_1>h_2>\cdots$, where $h_i=\frac{1}{N_i}$, and compute the errors for a range of $N_i$'s.

# %%
Ns = [4, 8, 16, 32, 64, 128, 256]
Es = np.zeros(len(Ns), dtype=default_scalar_type)
hs = np.zeros(len(Ns), dtype=np.float64)
for i, N in enumerate(Ns):
    uh, u_ex = solve_poisson(N, degree=1)
    comm = uh.function_space.mesh.comm
    # One can send in either u_numpy or u_ex.
    # For L2 error estimations it is recommended to send in u_numpy,
    # as no JIT compilation is required
    eh = uh - u_ex
    error_H10 = form(dot(grad(eh), grad(eh)) * dx)
    E_H10 = np.sqrt(comm.allreduce(assemble_scalar(error_H10), op=MPI.SUM))
    # if comm.rank == 0:
    #     print(f"H01-error: {E_H10:.2e}")
    Es[i] = error_L2(uh, u_numpy)
    hs[i] = 1. / Ns[i]
    if comm.rank == 0:
        print(f"h: {hs[i]:.2e} L2-Error: {Es[i]:.2e} H01-error: {E_H10:.2e}")

# %% [markdown]
# If we assume that $E_i$ is of the form $E_i=Ch_i^r$, with unknown constants $C$ and $r$, we can compare two consecutive experiments, $E_{i-1}= Ch_{i-1}^r$ and $E_i=Ch_i^r$, and solve for $r$:
# ```{math}
# r=\frac{\ln(E_i/E_{i-1})}{\ln(h_i/h_{i-1})}
# ```
# The $r$ values should approach the expected convergence rate (typically the polynomial degree plus 1 for the $L^2$-error) as $i$ increases. This can be written compactly using `numpy`.

# %%
rates = np.log(Es[1:] / Es[:-1]) / np.log(hs[1:] / hs[:-1])
if comm.rank == 0:
    print(f"Rates: {rates}")

# %% [markdown]
# We also do a similar study for different orders of polynomial spaces to verify our previous claim.

# %%
degrees = [1, 2, 3, 4]
for degree in degrees:
    Es = np.zeros(len(Ns), dtype=default_scalar_type)
    hs = np.zeros(len(Ns), dtype=np.float64)
    for i, N in enumerate(Ns):
        uh, u_ex = solve_poisson(N, degree=degree)
        comm = uh.function_space.mesh.comm
        Es[i] = error_L2(uh, u_numpy, degree_raise=1)
        hs[i] = 1. / Ns[i]
        if comm.rank == 0:
            print(f"h: {hs[i]:.2e} Error: {Es[i]:.2e}")
    rates = np.log(Es[1:] / Es[:-1]) / np.log(hs[1:] / hs[:-1])
    if comm.rank == 0:
        print(f"Polynomial degree {degree:d}, Rates {rates}")

# %% [markdown]
# ### Infinity norm estimates
# We start by creating a function to compute the infinity norm, the max difference between the approximate and exact solution.

# %%
def error_infinity(u_h, u_ex):
    # Interpolate exact solution, special handling if exact solution
    # is a ufl expression or a python lambda function
    comm = u_h.function_space.mesh.comm
    u_ex_V = Function(u_h.function_space)
    if isinstance(u_ex, ufl.core.expr.Expr):
        u_expr = Expression(u_ex, u_h.function_space.element.interpolation_points())
        u_ex_V.interpolate(u_expr)
    else:
        u_ex_V.interpolate(u_ex)
    # Compute infinity norm, first local to each process, then gather the max
    # value over all processes
    error_max_local = np.max(np.abs(u_h.x.array - u_ex_V.x.array))
    error_max = comm.allreduce(error_max_local, op=MPI.MAX)
    return error_max

# %% [markdown]
# Running this for various polynomial degrees yields:

# %%
for degree in degrees:
    Es = np.zeros(len(Ns), dtype=default_scalar_type)
    hs = np.zeros(len(Ns), dtype=np.float64)
    for i, N in enumerate(Ns):
        uh, u_ex = solve_poisson(N, degree=degree)
        comm = uh.function_space.mesh.comm
        Es[i] = error_infinity(uh, u_numpy)
        hs[i] = 1. / Ns[i]
        if comm.rank == 0:
            print(f"h: {hs[i]:.2e} Error: {Es[i]:.2e}")
    rates = np.log(Es[1:] / Es[:-1]) / np.log(hs[1:] / hs[:-1])
    if comm.rank == 0:
        print(f"Polynomial degree {degree:d}, Rates {rates}")

# %% [markdown]
# We observe superconvergence for second-order polynomials, yielding fourth-order convergence.

Below is the output log:

L2-error: 8.11e-15
H01-error: 1.39e-14
h: 2.50e-01 L2-Error: 3.61e-16 H01-error: 7.57e-16
h: 1.25e-01 L2-Error: 2.54e-15 H01-error: 4.26e-15
h: 6.25e-02 L2-Error: 9.76e-15 H01-error: 1.57e-14
h: 3.12e-02 L2-Error: 3.94e-14 H01-error: 6.29e-14
h: 1.56e-02 L2-Error: 1.82e-13 H01-error: 2.87e-13
h: 7.81e-03 L2-Error: 7.54e-13 H01-error: 1.19e-12
h: 3.91e-03 L2-Error: 3.05e-12 H01-error: 4.80e-12
Rates: [-2.81532724 -1.94061354 -2.01154925 -2.2075718  -2.05273341 -2.01701287]

The L2 error is of order 1e-12. This is after taking a square root, meaning the assembled quantity is of order 1e-24, far below machine precision.
This is because a linear solution can be represented exactly in the given finite element space.
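For instance, a quick sanity check (a minimal sketch reusing the `solve_poisson` and `u_numpy` defined in the script above) compares nodal values directly, avoiding the L2 integral altogether:

# Interpolate the exact linear solution into the same P1 space as uh and
# compare node by node; the difference should sit at machine-precision
# level, confirming the L2 "error" above is pure round-off.
uh_check, _ = solve_poisson(N=4, degree=1)
v_ex = Function(uh_check.function_space)
v_ex.interpolate(u_numpy)  # nodal values of u = U_inlet * x[0]
print(np.max(np.abs(uh_check.x.array - v_ex.x.array)))  # ~1e-16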


Hi Dokken,

Appreciate your insight! I thought about machine error but did not take the square root into account. Thanks; the steady increase of the error is then probably an accumulation of round-off error.

I guess that for a linear potential flow problem, a convergence study may not be necessary.

By the way, thanks for the detailed tutorials you have published online; they are very useful for beginners.

Regards,

Mike

Hi Dokken,

I would like to follow up: I tested this problem with a longer boundary length, i.e. the side length of the square increased from “1” to “1000000000”. The overall error also increases, and this time it should be well above machine error.

But the error still grows under refinement. Do you think this is still normal for linear potential flow?

Log:

h: 2.50e-01 Error: 1.22e-04
h: 1.25e-01 Error: 4.35e-04
h: 6.25e-02 Error: 1.73e-03
h: 3.12e-02 Error: 6.14e-03
h: 1.56e-02 Error: 2.36e-02
h: 7.81e-03 Error: 8.45e-02
h: 3.91e-03 Error: 2.92e-01
Polynomial degree 4, Rates [-1.83768656 -1.99386549 -1.82349725 -1.94268623 -1.83938539 -1.79122656]

The modified script is also pasted below for the sake of convenience:

# %% [markdown]
# # Error control: Computing convergence rates
# Author: Jørgen S. Dokken, Hans Petter Langtangen, Anders Logg
# 
# For any numerical method one of the most central questions is its *convergence rate*: How fast does the error go to zero when the resolution is increased (mesh size decreased).
# 
# For the finite element method, this usually corresponds to proving, theoretically or empirically, that the error $e=u_e-u_h$ is bounded by the mesh size $h$ to some power $r$, that is, $\vert\vert e \vert\vert\leq Ch^r$ for some mesh-independent constant $C$. The number $r$ is called the *convergence rate* of the method. Note that different norms, like the $L^2$-norm $\vert\vert e\vert\vert$ or the $H_0^1$-norm, have different convergence rates.

# %% [markdown]
# ## Computing error norms
# We start by creating a manufactured problem, using the same problem as in [the solver configuration](./solvers.ipynb).
# 

# %%
from dolfinx import default_scalar_type, mesh, fem
from dolfinx.fem import (Expression, Function, functionspace,
                         assemble_scalar, dirichletbc, form, locate_dofs_geometrical, locate_dofs_topological)
from dolfinx.fem.petsc import LinearProblem
from dolfinx.mesh import create_unit_square, create_unit_cube, locate_entities_boundary

from mpi4py import MPI
from ufl import SpatialCoordinate, TestFunction, TrialFunction, div, dot, dx, grad, inner

import ufl
import numpy as np
from petsc4py.PETSc import ScalarType

U_inlet = 1.0
# Factory for the exact solution: the `mod` argument lets the same definition
# produce a numpy callable (for interpolation) or a ufl expression (for error
# forms), as in the tutorial; it is unused here since the solution is linear.
def u_ex(mod):
    return lambda x: U_inlet * x[0]


u_numpy = u_ex(np)
u_ufl = u_ex(ufl)


def solve_poisson(N=4, degree=3):
    # domain = mesh.create_unit_cube(MPI.COMM_WORLD, N, N, N, mesh.CellType.hexahedron)
    #domain = mesh.create_unit_square(MPI.COMM_WORLD, N, N)
    domain = mesh.create_rectangle(
        comm=MPI.COMM_WORLD,
        points=[(0.0, 0.0), (1000000000, 1000000000)],
        n=[N, N],
        cell_type=mesh.CellType.quadrilateral
    )
    x = SpatialCoordinate(domain)
    V = functionspace(domain, ("Lagrange", degree))

    # Define trial and test functions
    phi = ufl.TrialFunction(V)
    v = ufl.TestFunction(V)
    # Bilinear form (Laplace equation)
    a = ufl.inner(ufl.grad(phi), ufl.grad(v)) * ufl.dx
    # f = 0.0000000000000001
    g = U_inlet

    # get dimension
    tdim = domain.topology.dim
    #print("domain dimension is", tdim)
    fdim = tdim - 1

    #domain.topology.create_connectivity(fdim, tdim)
    #boundary_facets = mesh.exterior_facet_indices(domain.topology)

    ## Linear form 
    # Neumann BC
    def boundary_x1(x):
        return np.isclose(x[0], 1000000000.0)
    boundary_facets_x1 = mesh.locate_entities_boundary(domain, fdim, boundary_x1)
    mt = mesh.meshtags(domain, fdim, boundary_facets_x1, 1)
    ds = ufl.Measure("ds", domain=domain, subdomain_data=mt)
    L = ufl.inner(g, v) * ds(1)  # g = ∂φ/∂n = U_inlet = 1

    # Dirichlet BC new
    facets = mesh.locate_entities_boundary(
        domain,
        dim=fdim,
        marker=lambda x: np.isclose(x[0], 0.0),
    )
    dofs = fem.locate_dofs_topological(V=V, entity_dim=fdim, entities=facets)
    bc_dirichlet = fem.dirichletbc(value=ScalarType(0), dofs=dofs, V=V)

    # # Dirichlet BC on x=1
    # def boundary_x0(x):
    #     return np.isclose(x[0], 1.0)
    # boundary_dofs_x0 = fem.locate_dofs_geometrical(V, boundary_x0)
    # bc_dirichlet = fem.dirichletbc(1.0, boundary_dofs_x0, V)


    default_problem = fem.petsc.LinearProblem(a, L, bcs=[bc_dirichlet], petsc_options={"ksp_type": "preonly", "pc_type": "lu"})
    return default_problem.solve(), u_ufl(x)

# %% [markdown]
# Now, we can compute the error between the analytical solution `u_ex=u_ufl(x)` and the approximate solution `uh`. A natural choice might seem to be computing `(u_ex-uh)**2*ufl.dx`.

# %%
uh, u_ex = solve_poisson(2)
print("uh:")
print(uh)

print("u_ex:")
print(u_ex)

comm = uh.function_space.mesh.comm
error = form((uh - u_ex)**2 * ufl.dx)
E = np.sqrt(comm.allreduce(assemble_scalar(error), MPI.SUM))
if comm.rank == 0:
    print(f"L2-error: {E:.2e}")

# %% [markdown]
# Sometimes it is of interest to compute the error of the gradient field, $\vert\vert \nabla(u_e-u_h)\vert\vert$, often referred to as the $H_0^1$-norm of the error; this can be expressed as

# %%
eh = uh - u_ex
error_H10 = form(dot(grad(eh), grad(eh)) * dx)
E_H10 = np.sqrt(comm.allreduce(assemble_scalar(error_H10), op=MPI.SUM))
if comm.rank == 0:
    print(f"H01-error: {E_H10:.2e}")

# %% [markdown]
# ### Reliable error norm computation
# However, this expands to `u_ex**2 + uh**2 - 2*u_ex*uh`. If the error is small (and the solution itself is of moderate size), this calculation corresponds to subtracting two nearly equal positive numbers, `u_ex**2 + uh**2` $\sim 1$ and `2*u_ex*uh` $\sim 1$, yielding a small number prone to round-off errors.
# 
# To avoid this issue, we interpolate the approximate and exact solution into a higher order function space. Then we subtract the degrees of freedom from the interpolated functions to create a new error function. Then, finally, we assemble/integrate the square difference and take the square root to get the L2 norm.

# %%
def error_L2(uh, u_ex, degree_raise=3):
    # Create higher order function space
    degree = uh.function_space.ufl_element().degree
    family = uh.function_space.ufl_element().family_name
    mesh = uh.function_space.mesh
    W = functionspace(mesh, (family, degree + degree_raise))
    # Interpolate approximate solution
    u_W = Function(W)
    u_W.interpolate(uh)

    # Interpolate exact solution, special handling if exact solution
    # is a ufl expression or a python lambda function
    u_ex_W = Function(W)
    if isinstance(u_ex, ufl.core.expr.Expr):
        u_expr = Expression(u_ex, W.element.interpolation_points())
        u_ex_W.interpolate(u_expr)
    else:
        u_ex_W.interpolate(u_ex)

    # Compute the error in the higher order function space
    e_W = Function(W)
    e_W.x.array[:] = u_W.x.array - u_ex_W.x.array

    # Integrate the error
    error = form(ufl.inner(e_W, e_W) * ufl.dx)
    error_local = assemble_scalar(error)
    error_global = mesh.comm.allreduce(error_local, op=MPI.SUM)
    return np.sqrt(error_global)

# %% [markdown]
# ## Computing convergence rates
# Let us consider a sequence of mesh resolutions $h_0>h_1>h_2>\cdots$, where $h_i=\frac{1}{N_i}$, and compute the errors for a range of $N_i$'s.

# %%
Ns = [4, 8, 16, 32, 64, 128, 256]
Es = np.zeros(len(Ns), dtype=default_scalar_type)
hs = np.zeros(len(Ns), dtype=np.float64)
for i, N in enumerate(Ns):
    uh, u_ex = solve_poisson(N, degree=1)
    comm = uh.function_space.mesh.comm
    # One can send in either u_numpy or u_ex.
    # For L2 error estimations it is recommended to send in u_numpy,
    # as no JIT compilation is required
    eh = uh - u_ex
    error_H10 = form(dot(grad(eh), grad(eh)) * dx)
    E_H10 = np.sqrt(comm.allreduce(assemble_scalar(error_H10), op=MPI.SUM))
    # if comm.rank == 0:
    #     print(f"H01-error: {E_H10:.2e}")
    Es[i] = error_L2(uh, u_numpy)
    hs[i] = 1. / Ns[i]
    if comm.rank == 0:
        print(f"h: {hs[i]:.2e} L2-Error: {Es[i]:.2e} H01-error: {E_H10:.2e}")

# %% [markdown]
# If we assume that $E_i$ is of the form $E_i=Ch_i^r$, with unknown constants $C$ and $r$, we can compare two consecutive experiments, $E_{i-1}= Ch_{i-1}^r$ and $E_i=Ch_i^r$, and solve for $r$:
# ```{math}
# r=\frac{\ln(E_i/E_{i-1})}{\ln(h_i/h_{i-1})}
# ```
# The $r$ values should approach the expected convergence rate (typically the polynomial degree plus 1 for the $L^2$-error) as $i$ increases. This can be written compactly using `numpy`.

# %%
rates = np.log(Es[1:] / Es[:-1]) / np.log(hs[1:] / hs[:-1])
if comm.rank == 0:
    print(f"Rates: {rates}")

# %% [markdown]
# We also do a similar study for different orders of polynomial spaces to verify our previous claim.

# %%
degrees = [1, 2, 3, 4]
for degree in degrees:
    Es = np.zeros(len(Ns), dtype=default_scalar_type)
    hs = np.zeros(len(Ns), dtype=np.float64)
    for i, N in enumerate(Ns):
        uh, u_ex = solve_poisson(N, degree=degree)
        comm = uh.function_space.mesh.comm
        Es[i] = error_L2(uh, u_numpy, degree_raise=1)
        hs[i] = 1. / Ns[i]
        if comm.rank == 0:
            print(f"h: {hs[i]:.2e} Error: {Es[i]:.2e}")
    rates = np.log(Es[1:] / Es[:-1]) / np.log(hs[1:] / hs[:-1])
    if comm.rank == 0:
        print(f"Polynomial degree {degree:d}, Rates {rates}")

# %% [markdown]
# ### Infinity norm estimates
# We start by creating a function to compute the infinity norm, the max difference between the approximate and exact solution.

# %%
def error_infinity(u_h, u_ex):
    # Interpolate exact solution, special handling if exact solution
    # is a ufl expression or a python lambda function
    comm = u_h.function_space.mesh.comm
    u_ex_V = Function(u_h.function_space)
    if isinstance(u_ex, ufl.core.expr.Expr):
        u_expr = Expression(u_ex, u_h.function_space.element.interpolation_points())
        u_ex_V.interpolate(u_expr)
    else:
        u_ex_V.interpolate(u_ex)
    # Compute infinity norm, first local to each process, then gather the max
    # value over all processes
    error_max_local = np.max(np.abs(u_h.x.array - u_ex_V.x.array))
    error_max = comm.allreduce(error_max_local, op=MPI.MAX)
    return error_max

# %% [markdown]
# Running this for various polynomial degrees yields:

# %%
for degree in degrees:
    Es = np.zeros(len(Ns), dtype=default_scalar_type)
    hs = np.zeros(len(Ns), dtype=np.float64)
    for i, N in enumerate(Ns):
        uh, u_ex = solve_poisson(N, degree=degree)
        comm = uh.function_space.mesh.comm
        Es[i] = error_infinity(uh, u_numpy)
        hs[i] = 1. / Ns[i]
        if comm.rank == 0:
            print(f"h: {hs[i]:.2e} Error: {Es[i]:.2e}")
    rates = np.log(Es[1:] / Es[:-1]) / np.log(hs[1:] / hs[:-1])
    if comm.rank == 0:
        print(f"Polynomial degree {degree:d}, Rates {rates}")

# %% [markdown]
# We observe superconvergence for second-order polynomials, yielding fourth-order convergence.



Let’s consider how big the actual difference between the exact and approximate solutions is:

import dolfinx

Ns = [4, 8, 16, 32]
degrees = [1, 2, 3, 4]
for degree in degrees:
    Es = np.zeros(len(Ns), dtype=default_scalar_type)
    hs = np.zeros(len(Ns), dtype=np.float64)
    for i, N in enumerate(Ns):
        uh, u_ex = solve_poisson(N, degree=degree)
        comm = uh.function_space.mesh.comm
        Es[i] = error_L2(uh, u_numpy, degree_raise=1)
        hs[i] = 1.0 / Ns[i]

        # Interpolate the exact solution into the same space as uh and
        # compare: max nodal difference vs. the L2 difference
        T = uh.function_space
        vh = dolfinx.fem.Function(T)
        vh.interpolate(u_numpy)
        L2err = np.sqrt(
            dolfinx.fem.assemble_scalar(
                dolfinx.fem.form(inner(uh - vh, uh - vh) * ufl.dx)
            )
        )
        print(np.max(uh.x.array - vh.x.array), L2err)
    rates = np.log(Es[1:] / Es[:-1]) / np.log(hs[1:] / hs[:-1])
    if comm.rank == 0:
        print(f"Polynomial degree {degree:d}, Rates {rates}")

This, for instance, returns:

2.384185791015625e-07 128.09098750234364
3.6954879760742188e-06 1574.0055257737465
1.9311904907226562e-05 13232.290536794055
0.00010955333709716797 67459.80715783477
Polynomial degree 1, Rates [-3.95439945 -3.06798598 -2.34930771]
1.3877787807814457e-08 756.6908697144429
6.938893903907228e-09 17403.136914296294
3.469446951953614e-09 106976.7652906931
1.734723475976807e-09 200930.02663326045
Polynomial degree 2, Rates [-4.52376281 -2.61815205 -0.9095225 ]
1.3877787807814457e-08 17996.654161680588
6.938893903907228e-09 24014.164293442613
3.469446951953614e-09 168830.33213085553
1.734723475976807e-09 595742.9942927263
Polynomial degree 3, Rates [-0.41667325 -2.81758156 -1.81946802]
9.28640365600586e-05 60235.336481510465
0.0003108978271484375 213244.45995781783
0.0013023614883422852 873653.515750707
0.004230618476867676 2898922.449475045
Polynomial degree 4, Rates [-1.82513202 -2.03499636 -1.73047653]

where you observe that small absolute errors are amplified in an L2 error estimate, as you are effectively multiplying by the area of the square (which is here $10^{18}$).
By making your grid larger and larger (i.e. starting with $L=1$ and increasing it gradually), you will observe that the solution gets less accurate for large $L$.
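As a back-of-the-envelope check against the log above, bounding the L2 error by the max-abs error gives

```{math}
\vert\vert e\vert\vert_{L^2} = \left(\int_\Omega e^2\,\mathrm{d}x\right)^{1/2} \leq \vert\vert e\vert\vert_\infty \sqrt{\vert\Omega\vert} = \vert\vert e\vert\vert_\infty\, L,
```

i.e. the factor $L^2 = 10^{18}$ multiplies the squared error under the integral, leaving a factor $L = 10^9$ after the square root. For degree 1 on the coarsest mesh, $\vert\vert e\vert\vert_\infty \approx 2.4\times 10^{-7}$ gives a bound of about $240$, consistent with the observed L2 error of $128$.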


Hi Dokken,

Many thanks for your patience and prompt reply!

I can see that the L2 error increases much faster than the max-abs error.

Probably one final question: I tested the code, and the max absolute error also goes up with the grid length L, seemingly rising above machine error in the last line of your log (“0.004230618476867676”). I wonder if this is also somehow due to an accumulation of round-off error?

I tried asking DeepSeek about this problem. The answer is:

==================================================

Key Reasons for Increasing Max Absolute Error:

  1. Solution Magnitude and Relative Error:

    • The exact solution $u = U_{\text{inlet}} x_0$ scales linearly with $L$, reaching values up to $L$.
    • Finite-precision arithmetic introduces relative errors (e.g., $\sim 10^{-16}$ for double precision). For $L = 10^9$, even a tiny relative error like $10^{-12}$ translates to an absolute error of $10^{-12} \times 10^9 = 10^{-3}$, matching your observed error of $0.004$.
  2. Ill-Conditioned Systems:

    • The stiffness matrix entries depend on the mesh geometry. For large $L$, the physical coordinates of nodes (e.g., $x = 10^9$) are stored with limited precision, leading to round-off errors during matrix assembly. These errors propagate through the solver, even with direct methods like LU.
  3. Boundary Condition Implementation:

    • The Neumann flux $\partial \phi / \partial n = 1$ is integrated over a boundary of length $L$. The resulting load-vector entries scale with $L$, amplifying any numerical inaccuracies during the linear solve.
  4. Interpolation of Exact Solution:

    • When interpolating $u_{\text{exact}} = x_0$ into the finite element space, nodal coordinates with round-off errors (e.g., $x = 10^9 + \delta$) introduce absolute errors $\delta$ in $u_{\text{exact}}$, directly affecting the infinity norm.

==================================================

Regards,

Mike

This is almost what I am showing above, except that one has to multiply by $L^2$ rather than $L$, since the area enters under the integral in the squared L2 norm.
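One can also see this representation limit directly with a minimal NumPy sketch (assuming standard IEEE double precision):

import numpy as np

# Spacing between adjacent double-precision floats ("ULP") near the far
# edge of the domain versus near the edge of the unit square:
print(np.spacing(1.0e9))  # ~1.19e-07, one ULP near x = 1e9
print(np.spacing(1.0))    # ~2.22e-16, one ULP near x = 1
# The degree-1 max nodal error 2.384185791015625e-07 in the log above is
# exactly two ULPs of 1e9: coordinates near x = 1e9 cannot be resolved
# more finely in double precision.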


Many thanks for the confirmation!