I am solving a PDE in FunctionSpace(mesh, "Lagrange", 1) and I would like to compute the Laplacian of the solution u. However, div(grad(u)) will return zero for such piecewise linear functions.
How can I implement this in FEniCS? Thank you!
You should use a higher-order element to get non-zero values for the Laplacian. Take a look at this example:
from dolfin import *

mesh = UnitSquareMesh(10, 10)
V = FunctionSpace(mesh, 'Lagrange', 2)

# Project x^2 + y^2, whose Laplacian is 4 everywhere:
expr = Expression('x[0]*x[0] + x[1]*x[1]', degree=2)
f = project(expr, V)

# Project the Laplacian of f back into V and write it out:
g = project(div(grad(f)), V)
qq = File("laplacian.pvd")
qq << g
It returns 4 as the value of the Laplacian on the whole domain. If you use:
V = FunctionSpace(mesh, 'Lagrange', 1)
You will get zero on the entire domain.
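As a quick sanity check (the point evaluation below is my addition, not part of the example above), you can print the projected Laplacian at an interior point:

# With the degree-2 space above this prints approximately 4.0;
# with the degree-1 space it prints (numerically) zero.
print(g(0.5, 0.5))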
Thank you, Leo. So there is no other possibility in FEniCS?
The third argument of FunctionSpace is the polynomial degree of your element. From a mathematical point of view, if it is equal to 1 the basis functions are first-order Lagrange polynomials, and taking the second derivative of a first-order polynomial results in zero.
Oh, yes, I understand this.
However, taking into account function values from neighbouring elements, it would, of course, be possible to compute higher order derivatives. But this is not implemented, I guess…
Thank you, nevertheless!
You can compute an okay approximation yourself, but I don't know what the best practice is for obtaining accuracy all the way out to the boundary of the domain. (The Laplacian can be seen as a distribution living on internal facets that depends on the jump in the first derivative, so its boundary values depend on the first derivative of an arbitrary extension.) Take a look at the following code example:
from dolfin import *
mesh = UnitIntervalMesh(400)
n = FacetNormal(mesh)
V = FunctionSpace(mesh,"Lagrange",1)
lap_u = TrialFunction(V)
v = TestFunction(V)
lap_uh = Function(V)
# Approximate some smooth function in CG1:
x = SpatialCoordinate(mesh)
u = cos(pi*x[0]) + 0.25*sin(3.0*pi*x[0]) + x[0]*x[0] + x[0]*x[0]*x[0]
# Note: If you pick a smooth function that, for whatever reason, you know
# a priori has zero Laplacian at the boundary, and apply a Dirichlet BC for
# `lap_BC`, you get higher convergence rates.
#
#u = sin(3.0*pi*x[0])
uh = project(u,V)
# The story is similar for either set of BCs on the Laplacian if you don't
# know the boundary values a priori.
lap_BC = DirichletBC(V,Constant(0.0),"on_boundary")
#lap_BC = DirichletBC(V,Constant(0.0),"false")
# Weak Laplacian of `uh` on $H^1$:
def weakLap(v):
    return -inner(grad(uh), grad(v))*dx + dot(grad(uh), n)*v*ds
# $L^2$ inner product:
def L2IP(u, v):
    return u*v*dx
# $L^2$ Riesz representation of Laplacian in `V`:
solve(L2IP(lap_u,v) == weakLap(v), lap_uh, lap_BC)
# Mass lumping in $L^2$ inner product gives a monotone result:
M_lumped_diag = assemble(L2IP(1.0,v))
lap_lumped = Function(V)
b = assemble(weakLap(v))
lap_BC.apply(b)
as_backend_type(lap_lumped.vector()).vec().pointwiseDivide(
    as_backend_type(b).vec(),
    as_backend_type(M_lumped_diag).vec())
# Check errors quantitatively:
# Both the lumped-mass and consistent-mass versions appear to converge at
# first order, assuming you don't know the boundary Laplacian beforehand,
# but the lumped-mass one has lower errors:
#
# EDIT: 1/2-order convergence; forgot the square root.
#
import math
print(math.sqrt(assemble((div(grad(u))-lap_lumped)**2*dx)))
print(math.sqrt(assemble((div(grad(u))-lap_uh)**2*dx)))
# Look at results:
from matplotlib import pyplot as plt
plot(div(grad(u)))
plot(lap_uh)
plot(lap_lumped)
plt.autoscale()
plt.show()
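For reference, the identity that `weakLap` encodes is just element-by-element integration by parts (this is my restatement of the facet-distribution remark above, not an extra step in the code): for a continuous test function $v$ and a piecewise-smooth $u_h$,

$-\int_\Omega \nabla u_h\cdot\nabla v\,dx + \int_{\partial\Omega}(\nabla u_h\cdot n)\,v\,ds \;=\; \sum_T\int_T \Delta u_h\,v\,dx \;-\; \sum_F\int_F\big(\nabla u_h^+\cdot n^+ + \nabla u_h^-\cdot n^-\big)\,v\,ds,$

where $T$ runs over the elements and $F$ over the interior facets, and the facet term is (up to sign) the jump in the normal derivative. For a CG1 function the element terms vanish, so all of the distributional Laplacian lives in the facet jumps, which is why solving for an $L^2$ Riesz representative of this functional gives something non-trivial.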
This paper
https://doi.org/10.1016/S0045-7825(98)00284-9
has a different approach, which basically projects the flux (first derivatives) into CG1 and takes derivatives of that, although when I tried it, it still didn't solve the issue of poor approximations at the boundary of the domain.
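In case it is useful, here is a minimal sketch of that flux-projection idea as I understand it; the test problem, the space name `Vg`, and the error check are my own placeholders, not taken from the paper:

from dolfin import *
import math

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "Lagrange", 1)
Vg = VectorFunctionSpace(mesh, "Lagrange", 1)

# Some CG1 function whose Laplacian we want:
x = SpatialCoordinate(mesh)
uh = project(cos(pi*x[0])*cos(pi*x[1]), V)

# Step 1: L2-project the (element-wise constant) gradient into CG1.
grad_rec = project(grad(uh), Vg)

# Step 2: take the divergence of the recovered gradient, back in CG1.
lap_rec = project(div(grad_rec), V)

# Compare with the exact Laplacian -2*pi^2*cos(pi*x)*cos(pi*y); the error
# is concentrated near the boundary, consistent with the comment above.
lap_exact = project(-2.0*pi*pi*cos(pi*x[0])*cos(pi*x[1]), V)
print(math.sqrt(assemble((lap_exact - lap_rec)**2*dx)))

It only needs two projections, but as noted it shares the same boundary problem as the weak-Laplacian approach.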