# Calculating the curl of a tensor field

Hi everyone,

I’m working on a problem in which a PDE involves the curl of a tensor field. I get the impression that the way I’m computing this curl is relatively slow, and that it could be sped up significantly, e.g. by vectorizing it in some way. Here is an MWE illustrating my current approach:

```python
import dolfin as dlf
import ufl
import numpy as np

# Levi-Civita permutation tensor, built as determinants of rows of the identity
Id = np.eye(3)
A = lambda *ind: np.linalg.det(np.array([[Id[p, q] for q in range(3)] for p in ind]))
perm = ufl.as_tensor([[[A(i, j, k) for k in range(3)] for j in range(3)] for i in range(3)])

def tcurl(A):
    i, j, k, l = ufl.indices(4)
    return ufl.as_tensor(A[i, l].dx(k) * perm[j, k, l], (i, j))

mesh = dlf.BoxMesh(
    dlf.Point(0., 0., 0.),
    dlf.Point(2., 2., 2.),
    20, 20, 1
)

V = dlf.TensorFunctionSpace(mesh, "CG", 1)
u = dlf.Function(V, name="u")
curl_u = dlf.Function(V, name="curl of u")

u_exp  = dlf.Expression("pow(x[0], 2) + pow(x[0], 2)*x[0] + sin(x[0])", degree=1)
u_init = dlf.Expression(
    [
        ['u11', '0', '0'],
        ['0', '0', '0'],
        ['0', '0', '0']
    ],
    u11=u_exp,
    degree=1
)
u.assign(dlf.project(u_init, V))
curl_u.assign(dlf.project(tcurl(u), V))

with dlf.XDMFFile("outfile.xdmf") as outfile:
    outfile.parameters["flush_output"] = True
    outfile.parameters["functions_share_mesh"] = True
    outfile.write(u, 0.)
    outfile.write(curl_u, 0.)
```
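As a side note, the determinant construction above really does reproduce the Levi-Civita symbol; here is a quick NumPy-only sanity check (independent of FEniCS) for anyone who wants to convince themselves:

```python
import numpy as np

# epsilon_{ijk} as the determinant of rows i, j, k of the identity matrix,
# i.e. the same trick used in the MWE above
Id = np.eye(3)
eps = lambda i, j, k: np.linalg.det(Id[[i, j, k], :])

perm = np.array([[[eps(i, j, k) for k in range(3)]
                  for j in range(3)] for i in range(3)])

assert np.isclose(perm[0, 1, 2], 1.0)             # even permutation -> +1
assert np.isclose(perm[0, 2, 1], -1.0)            # odd permutation -> -1
assert np.count_nonzero(np.abs(perm) > 0.5) == 6  # only 6 nonzero entries
```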

Does anyone know a faster/better way to compute this?

Thanks!

I don’t know whether there’s any performance difference, but I’ll mention that you can use UFL’s existing definition of the Levi-Civita tensor, which simplifies the code slightly:

```python
import dolfin as dlf
import ufl
#import numpy as np

# Levi-Civita permutation tensor
#Id = np.eye(3)
#A = lambda *ind: np.linalg.det(np.array([[Id[p, q] for q in range(3)] for p in ind]))
#perm = ufl.as_tensor([[[A(i, j, k) for k in range(3)] for j in range(3)] for i in range(3)])
perm = ufl.PermutationSymbol(3)
```

I ported your code to dolfinx and it’s about 20x faster.
However, I’m not able to help you with the dolfin code, as I’m not familiar with how interpolation works in dolfin.
More generally, dolfinx is known to perform better than dolfin.

```python
import dolfinx.mesh
import dolfinx.io
import dolfinx.fem as fem
from mpi4py import MPI
import ufl
import numpy as np
import time

# Levi-Civita permutation tensor
Id = np.eye(3)
A = lambda *ind: np.linalg.det(np.array([[Id[p, q] for q in range(3)] for p in ind]))
perm = ufl.as_tensor([[[A(i, j, k) for k in range(3)] for j in range(3)] for i in range(3)])

i, j, k, l = ufl.indices(4)
def tcurl(A):
    return ufl.as_tensor(A[i, l].dx(k) * perm[j, k, l], (i, j))

mesh = dolfinx.mesh.create_box(
    MPI.COMM_SELF,
    [[0., 0., 0.], [2., 2., 2.]],
    [20, 20, 1]
)

V = fem.TensorFunctionSpace(mesh, ("CG", 1))
u = fem.Function(V, name="u")
curl_u = fem.Function(V, name="curl of u")

x = ufl.SpatialCoordinate(mesh)
u_exp = x[0]**2 + x[0]**2*x[0] + ufl.sin(x[0])
u_init = ufl.as_matrix([
    [u_exp, 0., 0.],
    [0., 0., 0.],
    [0., 0., 0.]
])

curl_u_expr = fem.Expression(tcurl(u_init), V.element.interpolation_points)

start = time.process_time()
curl_u.interpolate(curl_u_expr)
print(time.process_time() - start)

with dolfinx.io.XDMFFile(mesh.comm, "outfile.xdmf", "w") as outfile:
    outfile.write_mesh(mesh)
    outfile.write_function(curl_u)
```

Just looking at the code, the main performance difference is probably due to the use of interpolation instead of (L^2-)projection, where the latter needs to solve a linear system. FEniCSx greatly expands the types of expressions that can be interpolated. Legacy FEniCS doesn’t really provide good options other than `project` to get arbitrary UFL expressions into finite element function spaces, since only very specific classes of objects can be `interpolate`d. But, if you’re stuck with Legacy FEniCS due to an existing codebase, you can optimize the projection by specifying an iterative linear solver instead of the default direct solver, using mass lumping, or using a local element-by-element solve to project to a discontinuous space.
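To make the mass-lumping option concrete: lumping replaces the consistent mass matrix by its row-sum diagonal, so the projection solve M u = b degenerates into a pointwise division. Here is a minimal 1D P1 sketch in plain NumPy, illustrating the idea only (not the FEniCS API):

```python
import numpy as np

n = 10                     # elements on [0, 1], uniform mesh
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Consistent P1 mass matrix: each element contributes h/6 * [[2, 1], [1, 2]]
M = np.zeros((n + 1, n + 1))
for e in range(n):
    M[e:e + 2, e:e + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])

f = 3.0 * x + 1.0          # linear target, exactly representable in P1
b = M @ f                  # load vector for projecting f (exact, since f is P1)

u_consistent = np.linalg.solve(M, b)  # standard L^2 projection: solve M u = b
u_lumped = b / M.sum(axis=1)          # lumped projection: divide by row sums

# The consistent projection recovers f exactly; the lumped one matches it at
# interior nodes (boundary nodes incur a small lumping error on this mesh).
assert np.allclose(u_consistent, f)
assert np.allclose(u_lumped[1:-1], f[1:-1])
```

In legacy FEniCS, the iterative-solver option is the easiest to try, since `project` accepts `solver_type` and `preconditioner_type` keyword arguments, e.g. `dlf.project(expr, V, solver_type="cg")`.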


No performance difference, but it’s always nice to simplify the code!