Hi everyone,

I’m using macOS Mojave (10.14.6), miniconda, and FEniCS 2018 with Python. I have a function u: \mathbb{R}^d \rightarrow \mathbb{R}, d = 1, 2, and I want to compute the norm (length) of its gradient, | \nabla u| = \sqrt{\nabla u \cdot \nabla u}. Since \nabla u \cdot \nabla u is the dot product of a vector with itself, it should be nonnegative. But if I take u as this function

then |\nabla u| and \nabla u \cdot \nabla u should have (possibly huge) jumps at x = 0.3 and 0.7. Instead I get something like this (on a mesh with 49 grid points):

Could it be that I have done something wrong with dot and grad here? I get similar results when doing this with a function \mathbb{R}^2 \rightarrow \mathbb{R}.

from fenics import *
# constructing mesh on (0, 1)
N = 49
mesh = UnitIntervalMesh(N); V = FunctionSpace(mesh, 'P', 1)
# (step)-function of interest
u = interpolate(Expression('x[0] >= 0.3 && x[0] <= 0.7 ? 2 : 1', degree = 2), V)
plot(u, title = 'u')
# plot dot product of grad(u) with itself
plot(project(dot(grad(u), grad(u)), V), title = 'dot(grad(u), grad(u))')

Thanks for any help.

What is being plotted here is not \vert\nabla u\vert itself, but an L^2 projection of it onto a linear C^0 finite element space. L^2 projection to linears does not preserve sign, and the result will have oscillations when projecting discontinuous functions. Some options to get monotone plots are projecting to a DG space or using lumped-mass projection to linears. See, e.g., the following modification of your MWE:

from fenics import *
# constructing mesh on (0, 1)
N = 49
mesh = UnitIntervalMesh(N); V = FunctionSpace(mesh, 'P', 1)
# (step)-function of interest
u = interpolate(Expression('x[0] >= 0.3 && x[0] <= 0.7 ? 2 : 1', degree = 2), V)

# Function to project and plot:
f = sqrt(dot(grad(u), grad(u)))
# Option 0: Consistent-mass projection to a continuous space is oscillatory when
# projecting discontinuous functions.
plot(project(f))

# Option 1: Project to a DG0 space (i.e., elementwise averaging).
plot(project(f,FunctionSpace(mesh,"DG",0)))

# Option 2: Lumped-mass projection to CG1, which remains monotone.
def lumpedProject(f):
    v = TestFunction(V)
    lhs = assemble(inner(Constant(1.0),v)*dx)
    rhs = assemble(inner(f,v)*dx)
    u = Function(V)
    as_backend_type(u.vector()).vec()\
        .pointwiseDivide(as_backend_type(rhs).vec(),
                         as_backend_type(lhs).vec())
    return u
plot(lumpedProject(f))

from matplotlib import pyplot as plt
plt.autoscale()
plt.show()
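For intuition on why Option 2 stays monotone: lumped-mass projection replaces the consistent mass matrix M_{ij} = \int_\Omega \varphi_i \varphi_j \, dx by the diagonal matrix of its row sums, so each nodal value reduces to a ratio of local integrals (a sketch, using the P1 nodal basis \varphi_i):

```latex
u_i = \frac{\int_\Omega f\,\varphi_i \, dx}{\int_\Omega \varphi_i \, dx}
```

Because the P1 basis functions satisfy \varphi_i \geq 0, each u_i is a weighted average of f over the support of \varphi_i, so f \geq 0 implies u_i \geq 0; the consistent mass matrix has no such sign-preservation property, which is where the negative undershoots in your plot come from.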


Awesome, thank you! Just one quick question. Is your lumpedProject equivalent to

def lumpedProject(f):
    v = TestFunction(V)
    lhs = assemble(inner(Constant(1.0),v)*dx)
    rhs = assemble(inner(f,v)*dx)
    u = Function(V)
    u.vector()[:] = rhs[:]/lhs[:]
    return u


I am not familiar with as_backend_type, but I figured out what you did and tried to implement it “my way”. It gives the same output for this example, so I do not know whether it is valid in general or will fail in other cases.

Thank you very much,
Yannik

I’ve never tried to do that before, but it looks like that syntax performs the desired element-wise division. I just default to thinking in terms of PETSc operations, because I learned PETSc before I started using FEniCS.
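For what it’s worth, the equivalence can be checked outside FEniCS: `u.vector()[:] = rhs[:]/lhs[:]` slices the dolfin vectors to NumPy arrays and divides them entrywise, which is the same operation PETSc’s `Vec.pointwiseDivide` performs. A minimal NumPy sketch (the vector values below are made up for illustration):

```python
import numpy as np

# Stand-ins for the assembled vectors: lhs = lumped mass (row sums of the
# mass matrix), rhs = the load vector. Values here are illustrative only.
lhs = np.array([0.5, 1.0, 1.0, 0.5])
rhs = np.array([0.75, 1.5, 3.0, 1.0])

# Entrywise division, as performed both by rhs[:]/lhs[:] on dolfin vectors
# and by PETSc's Vec.pointwiseDivide:
u_vals = rhs / lhs
print(u_vals)  # [1.5 1.5 3.  2. ]
```

The only practical difference is that the PETSc call operates in place on the backend vector without a round trip through NumPy.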

About as_backend_type: In the current stable version of FEniCS, there are abstractions for linear algebra that are designed to interface with different “backend” libraries, but, in practice, everyone uses the default PETSc backend; according to the development roadmap, future versions of FEniCS will support PETSc exclusively.