Computing second order derivatives for first order Lagrangian elements

You can compute an okay approximation yourself, but I don’t know the best practice for obtaining accuracy all the way out to the boundary of the domain. (The Laplacian of a CG1 function can be seen as a distribution living on interior facets that depends on the jump in the first derivative, so its boundary values depend on the first derivative of an arbitrary extension.) Take a look at the following code example:

from dolfin import *

mesh = UnitIntervalMesh(400)
n = FacetNormal(mesh)
V = FunctionSpace(mesh,"Lagrange",1)

lap_u = TrialFunction(V)
v = TestFunction(V)
lap_uh = Function(V)

# Approximate some smooth function in CG1:
x = SpatialCoordinate(mesh)
u = cos(pi*x[0]) + 0.25*sin(3.0*pi*x[0]) + x[0]*x[0] + x[0]*x[0]*x[0]

# Note: If you pick a smooth function that, for whatever reason, you know
# a priori has zero Laplacian at the boundary, and apply a Dirichlet BC for
# `lap_BC`, you get higher convergence rates.
#
#u = sin(3.0*pi*x[0])

uh = project(u,V)

# The story is similar for either set of BCs on the Laplacian if you don't
# know the boundary values a priori.
lap_BC = DirichletBC(V,Constant(0.0),"on_boundary")
#lap_BC = DirichletBC(V,Constant(0.0),"false")

# Weak Laplacian of `uh` on $H^1$, via integration by parts:
# $\int(\Delta u)v\,dx = -\int\nabla u\cdot\nabla v\,dx + \int(\nabla u\cdot n)v\,ds$
def weakLap(v):
    return -inner(grad(uh),grad(v))*dx + dot(grad(uh),n)*v*ds

# $L^2$ inner product:
def L2IP(u,v):
    return u*v*dx

# $L^2$ Riesz representation of Laplacian in `V`:
solve(L2IP(lap_u,v) == weakLap(v), lap_uh, lap_BC)

# Mass lumping in $L^2$ inner product gives a monotone result:
M_lumped_diag = assemble(L2IP(1.0,v))
lap_lumped = Function(V)
b = assemble(weakLap(v))
lap_BC.apply(b)
as_backend_type(lap_lumped.vector())\
    .vec().pointwiseDivide(as_backend_type(b).vec(),
                           as_backend_type(M_lumped_diag).vec())

# Check errors quantitatively:

# Both the lumped-mass and consistent-mass versions converge at half order
# in $L^2$ (an earlier version of this post said first order; I had
# forgotten the square root), assuming you don't know the boundary Laplacian
# beforehand, but the lumped-mass one has lower errors:
#
import math
print(math.sqrt(assemble((div(grad(u))-lap_lumped)**2*dx)))
print(math.sqrt(assemble((div(grad(u))-lap_uh)**2*dx)))

# Look at results:
from matplotlib import pyplot as plt
plot(div(grad(u)))
plot(lap_uh)
plot(lap_lumped)
plt.autoscale()
plt.show()
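For intuition about where the half-order rate comes from: on a uniform 1D mesh, the lumped-mass computation above reduces to the familiar central second-difference stencil, with the Dirichlet rows zeroing the two boundary entries. Here is a standalone numpy model of that (my own sketch, using nodal interpolation instead of projection, not FEniCS output):

```python
import numpy as np

def lumped_laplacian(u, h):
    # Weak Laplacian tested against CG1 hat functions on a uniform mesh;
    # the interior entries assemble to (u[i-1] - 2*u[i] + u[i+1]) / h.
    b = np.zeros_like(u)
    b[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h
    # Lumped mass (row sums of the CG1 mass matrix): h inside, h/2 at ends.
    m = np.full_like(u, h)
    m[0] = m[-1] = 0.5 * h
    lap = b / m
    lap[0] = lap[-1] = 0.0  # mimic the homogeneous DirichletBC on lap
    return lap, m

def l2_error(N):
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    u = np.cos(np.pi * x) + 0.25 * np.sin(3.0 * np.pi * x) + x**2 + x**3
    exact = (-np.pi**2 * np.cos(np.pi * x)
             - 2.25 * np.pi**2 * np.sin(3.0 * np.pi * x) + 2.0 + 6.0 * x)
    lap, m = lumped_laplacian(u, h)
    return np.sqrt(np.sum(m * (exact - lap)**2))

# Interior values reproduce the standard stencil; for a cubic the stencil
# is even exact: lap[i] == 6*x[i] away from the boundary.
h = 0.125
x = np.linspace(0.0, 1.0, 9)
lap, _ = lumped_laplacian(x**3, h)
print(np.allclose(lap[1:-1], 6.0 * x[1:-1]))  # True

# Halving h shrinks the L2 error by about sqrt(2), i.e. half-order
# convergence, driven by the O(1) mismatch at the two boundary nodes.
print(l2_error(400) / l2_error(800))
```

The interior stencil error is $O(h^2)$, but the two zeroed boundary entries each carry an $O(1)$ error on a node of measure $O(h)$, which contributes $O(\sqrt{h})$ to the $L^2$ norm and dominates.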

This paper

https://doi.org/10.1016/S0045-7825(98)00284-9

takes a different approach: it projects the flux (the first derivatives) into CG1 and then differentiates that projection. When I tried it, though, it still didn’t solve the issue of poor approximation at the boundary of the domain.
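As a rough illustration of why that doesn’t help at the boundary, here is a 1D numpy sketch of the flux-projection idea as I understand it (uniform mesh, nodal interpolation, my own simplifications rather than the paper’s actual scheme): L²-project the piecewise-constant derivative of the CG1 interpolant back into CG1 using the consistent mass matrix, then differentiate the recovered flux.

```python
import numpy as np

N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
u = np.cos(np.pi * x)     # nodal values of the CG1 interpolant
du = np.diff(u) / h       # elementwise (DG0) derivative of u_h

# Consistent CG1 mass matrix on a uniform mesh: tridiagonal (h/6)*[1, 4, 1],
# with diagonal h/3 at the two boundary nodes.
M = np.diag(np.full(N + 1, 2.0 * h / 3.0))
M[0, 0] = M[N, N] = h / 3.0
i = np.arange(N)
M[i, i + 1] = h / 6.0
M[i + 1, i] = h / 6.0

# RHS: integral of u_h' against each hat function; each element contributes
# (h/2)*du to both of its end nodes.
rhs = np.zeros(N + 1)
rhs[:-1] += 0.5 * h * du
rhs[1:] += 0.5 * h * du

q = np.linalg.solve(M, rhs)   # recovered CG1 flux, approximating u'
lap = np.diff(q) / h          # second derivative, constant per element
mid = 0.5 * (x[:-1] + x[1:])
exact = -np.pi**2 * np.cos(np.pi * mid)

err = np.abs(lap - exact)
# Interior elements are quite accurate, but the first and last elements
# carry a much larger error -- the same boundary issue as above.
print(err[N // 4 : 3 * N // 4].max(), err[0], err[-1])
```

In this sketch the recovered flux is superconvergent at interior nodes, but the boundary rows of the mass matrix leave an oscillatory layer in `q` that differencing amplifies by $1/h$, so the boundary elements of `lap` don’t improve under refinement.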
