Nedelec elements or Lagrange? What about the order?

Hello FEniCSxers,
I have a thermoelectric problem I can solve with FEniCSx, but since I have no analytical result to check the solution against, I have some doubts that everything I do is correct. Everything seems to make sense so far, but I would like to be 99.9% sure rather than 80%.
I solve 2 PDEs with mixed elements, one of them being \nabla \cdot \vec J_e = 0, where \vec J_e = -\sigma \nabla V - \sigma S \nabla T, i.e. it derives from Maxwell's equations in matter (the divergence of a generalized Ohm's law vanishes).
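For concreteness, multiplying \nabla \cdot \vec J_e = 0 by a test function v and integrating by parts gives the weak statement I discretize as F_V in the code below (the normal current density is prescribed on the injection/extraction curves and vanishes elsewhere):

0 = \int_\Omega v\, \nabla \cdot \vec J_e \,\mathrm{d}x = \int_\Omega \sigma \nabla v \cdot \nabla V \,\mathrm{d}x + \int_\Omega \sigma \nabla v \cdot \left(S \nabla T\right) \mathrm{d}x + \oint_{\partial\Omega} v\, \vec J_e \cdot \vec n \,\mathrm{d}s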

I have read in several places that one should be careful when dealing with Maxwell's equations and should use Nedelec elements (I only used Lagrange). For example, this website: MaxwellGoodBadUgly shows that Lagrange elements won't converge towards the solution, even with mesh refinement, while Nedelec elements will. The FEniCSx tutorial (Electromagnetics example — FEniCSx tutorial) only uses Lagrange elements, and no discussion of this choice is given, at least on that page. Another member of this website (3D magnetostatics - Curl-conform field) was apparently very cautious and picked Nedelec elements (N1curl) for the test and trial functions of his magnetostatics problem, but DG elements for the B field and its norm. He dealt with the curl operator though, not the divergence.
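For reference, this is how the two candidate elements would be declared with the ufl element interface I use in my code below; just a sketch to make the question concrete, assuming mesh has already been loaded:

from ufl import FiniteElement

el_lagrange = FiniteElement("CG", mesh.ufl_cell(), 3)      # scalar Lagrange element, what I currently use for T and V
el_nedelec = FiniteElement("N1curl", mesh.ufl_cell(), 1)   # first-kind Nedelec element, the H(curl)-conforming choice used for curl-type Maxwell problems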

I do not know enough finite elements theory (only followed FEniCS(x) tutorials) and tried to dig a little bit more, but my knowledge is too limited here.

  1. Would Nedelec elements be appropriate for my problem? Or is Lagrange enough? (I believe Lagrange is, as I do not deal with magnetic fields. I do compute the current density, but this does not involve the curl.)
  2. What about the order of the elements, in general? Is it always better to use higher order? What does this change exactly?

Something is definitely wrong. I would really appreciate some help/guidance in debugging.
Here are 2 screenshots showing the problem. This represents a heating term (called Bridgman heat); it is a scalar field computed as -T\left( S_{xx}\frac{\partial J_x}{\partial x} + S_{yy}\frac{\partial J_y}{\partial y}\right), where the current density is given in my OP. I think this heating term is computed correctly almost everywhere, except in some elements of the mesh. This messes up my computations, I think (for example when I integrate this quantity over the whole volume).


import gmsh
import numpy as np
from dolfinx import log
from dolfinx.fem import (Constant, Function, FunctionSpace, assemble_scalar,
                         dirichletbc, form, locate_dofs_geometrical, VectorFunctionSpace,
                         locate_dofs_topological, Expression)
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.io import XDMFFile, VTXWriter, gmshio
from dolfinx.nls.petsc import NewtonSolver
from mpi4py import MPI
from petsc4py import PETSc
from petsc4py.PETSc import ScalarType
from ufl import (FiniteElement, Measure, MixedElement, SpatialCoordinate,
                 TestFunction, TestFunctions, TrialFunction, as_tensor, dot, dx,
                 grad, inner, inv, split, derivative, FacetNormal, div)

mesh, cell_markers, facet_markers = gmshio.read_from_msh("meshes/half_annulus.msh", MPI.COMM_WORLD, gdim=2)

inj_current_curve = 12
out_current_curve = 13 
reading_voltage_surface_0 = 12
reading_voltage_surface_1 = 13

# Define ME function space
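# Mixed space with two scalar unknowns: sub(0) = temperature T, sub(1) = voltage V.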
el = FiniteElement("CG", mesh.ufl_cell(), 3)
mel = MixedElement([el, el])
ME = FunctionSpace(mesh, mel)

u, v = TestFunctions(ME)  # test functions for the T and V equations
TempVolt = Function(ME)
temp, volt = split(TempVolt)
dTV = TrialFunction(ME)

rho = 1
sigma = 1.0 / rho
κ = 1.8
S_xx = 3000e-6
S_yy = S_xx * 10  # anisotropic Seebeck coefficient; also tried factors 3 and 300
Seebeck_tensor = as_tensor([[S_xx, 0], [0, S_yy]])

# Define the boundary conditions
left_facets = facet_markers.find(inj_current_curve)
right_facets = facet_markers.find(out_current_curve)
left_dofs = locate_dofs_topological(
    ME.sub(1), mesh.topology.dim-1, left_facets)
left_dofs_temp = locate_dofs_topological(
    ME.sub(0), mesh.topology.dim-1, left_facets)
right_dofs_temp = locate_dofs_topological(
    ME.sub(0), mesh.topology.dim-1, right_facets)
T_cold = 300.0
ΔT = 2000
bc_temp_left = dirichletbc(ScalarType(T_cold), left_dofs_temp, ME.sub(0))
bc_temp_right = dirichletbc(ScalarType(T_cold + ΔT), right_dofs_temp, ME.sub(0))
bcs = [bc_temp_left, bc_temp_right]

x = SpatialCoordinate(mesh)
dx = Measure("dx", domain=mesh, subdomain_data=cell_markers)
ds = Measure("ds", domain=mesh, subdomain_data=facet_markers)

the_current = 1e-15
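# Total injected current (A); divided below by the length of the injection curve
# to get a boundary current density per unit length for the 2D Neumann term.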
J = the_current / assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh)))

# Weak form.
J_vector = -sigma * grad(volt) - sigma * Seebeck_tensor * grad(temp)
Fourier_term = dot(-κ * grad(temp), grad(u)) * dx
Joule_term = dot(rho * J_vector, J_vector) * u * dx
Bridgman_term = - temp * (S_xx * J_vector[0].dx(0) + S_yy * J_vector[1].dx(1)) * u * dx
F_T = Fourier_term + Joule_term + Bridgman_term
F_V = -dot(grad(v), sigma * grad(volt))*dx -dot(grad(v), sigma * Seebeck_tensor * grad(temp))*dx - v * J * ds(out_current_curve) + v * J * ds(inj_current_curve) 

weak_form = F_T + F_V

Jac = derivative(weak_form, TempVolt, dTV)

# Solve the PDE.
problem = NonlinearProblem(weak_form, TempVolt, bcs=bcs, J = Jac)
solver = NewtonSolver(MPI.COMM_WORLD, problem)
solver.convergence_criterion = "incremental"
solver.rtol = 1e-10
solver.report = True
solver.max_it = 10000

ksp = solver.krylov_solver
opts = PETSc.Options()
option_prefix = ksp.getOptionsPrefix()
opts[f"{option_prefix}ksp_type"] = "preonly"
opts[f"{option_prefix}pc_type"] = "ksp"
opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
#opts[f"{option_prefix}ksp_max_it"]= 10000
ksp.setFromOptions()
log.set_log_level(log.LogLevel.WARNING)
n, converged = solver.solve(TempVolt)
assert(converged)
print(f'''------- Processing --------
Number of iterations: {n:d}''')

# Compute fluxes on boundaries
n = FacetNormal(mesh)
down_heat_flux = form(dot(-κ * grad(temp) + Seebeck_tensor * temp * J_vector, n)*ds(out_current_curve))

print(f'''--------- Post processing -------
Computed generated Joule heat through the whole volume: {assemble_scalar(form(dot(rho * J_vector, J_vector) * dx))} W.
Make sure div(J_e) = 0 throughout the whole volume: {assemble_scalar(form((J_vector[0].dx(0) + J_vector[1].dx(1)) * dx))}
Computed generated Bridgman heat through the whole volume: {assemble_scalar(form(- temp * (S_xx * J_vector[0].dx(0) + S_yy * J_vector[1].dx(1)) * dx))} W.
''')

# Current density.
V_J = VectorFunctionSpace(mesh, ("DG", 2))  # named V_J to avoid shadowing the scalar boundary current density J used in the weak form
Je = Function(V_J)
Je_flux_calculator = Expression(-sigma * grad(volt) - sigma * Seebeck_tensor * grad(temp), V_J.element.interpolation_points())
Je.interpolate(Je_flux_calculator)

# Bridgman heat.
Q_B = FunctionSpace(mesh, ("DG", 2))
QB = Function(Q_B)
QB_flux_calculator = Expression(- temp * (S_xx * J_vector[0].dx(0) + S_yy * J_vector[1].dx(1)), Q_B.element.interpolation_points())
QB.interpolate(QB_flux_calculator)

Output_0 = TempVolt.sub(0).collapse()
Output_1 = TempVolt.sub(1).collapse()
output_4 = Je
output_5 = QB

with VTXWriter(MPI.COMM_WORLD, "results/temperature.bp", [Output_0], engine="BP4") as vtx:
    vtx.write(0.0)
    
with VTXWriter(MPI.COMM_WORLD, "results/voltage.bp", [Output_1], engine="BP4") as vtx:
    vtx.write(0.0)
    
with VTXWriter(MPI.COMM_WORLD, "results/current_density.bp", [output_4], engine="BP4") as vtx:
    vtx.write(0.0)
    
with VTXWriter(MPI.COMM_WORLD, "results/Bridgman_heat.bp", [output_5], engine="BP4") as vtx:
    vtx.write(0.0)

The geo file used to produce the mesh:

h  = 0.00915;
r1 = 0.3;
r2 = 0.1;
Point(0) = {r1,0,0,h};
Point(1) = {0,0,0,h};
Point(2) = {r2,0,0,h};
Point(3) = {2*r1-r2,0,0,h};
Point(4) = {2*r1,0,0,h};
Line(1) = {1,2};
Circle(2) = {2,0,3};
Line(3) = {3,4};
Circle(4) = {4,0,1};
Line Loop(1) = {1,2,3,4};
Plane Surface(1) = {1};
Physical Curve("bottom_left", 12) = {1};
Physical Curve("bottom_right", 13) = {3};
Physical Surface("material", 11) = {1};

(The Discourse website is too limited to accept the content of the .msh file due to a line-count limit.)
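If it helps with reproducing this, the .msh file can be regenerated from the .geo file with gmsh (assuming gmsh is installed and the file is saved as half_annulus.geo):

gmsh -2 half_annulus.geo -o meshes/half_annulus.msh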

Oh… I guess it's because I didn't pay attention and used DG elements instead of CG for the fluxes! Hmm. I still get a strange solution and I'm not 100% sure it's correct; unfortunately I see no way to really make sure what I'm doing is right, since no analytical solutions are to be found.

Hello again, I realize this might be an unrelated question, so I will probably open a new thread, but I don’t have much time right now… here it goes:
Do you have an idea why I would get a strange convergence for the Bridgman heat term as computed in my code above? It doesn't vary much with mesh refinement (same as the Joule heat), so I thought it had converged. However, increasing the order of the Lagrange elements for the fields and fluxes makes everything converge except the Bridgman heat term, even up to orders 5 and 4 respectively, i.e. the result differs by over 10 percent (compared to less than 1 percent for everything else).
This is the very last thing I need to figure out before I can trust my results.

One thing I notice about my code is that I do not seem to update temp, volt or J_vector after I solve the PDEs. However, I do calculations involving those terms afterwards. I seem to get reasonable results except for that Bridgman term, which is very strange.
But maybe everything is wrong because I didn't update those values?

It’s a bit sad, I am not making progress with my problem. I have tried several things. I have added

temp, volt = split(TempVolt)
J_vector = -sigma * grad(volt) - sigma * Seebeck_tensor * grad(temp)

after I solve the problem, thinking that this would update temp, volt and J_vector. However, this didn't change the post-processing, where quantities involving temp, volt and J_vector appear. How is this possible? How do temp, volt and J_vector get updated on their own?

When using degree 1 for el (continuous Lagrange elements) and degree 0 for the fluxes defined in the post-processing (discontinuous Galerkin elements), I get an error that points to a resolved issue on GitLab:

Traceback (most recent call last):
  File "/usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/io/utils.py", line 91, in __init__
    comm, filename, output._cpp_object, engine)  # type: ignore[union-attr]
AttributeError: 'list' object has no attribute '_cpp_object'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/shared/cut_annulus_Bridgman/Bridgman_Nye_example.py", line 226, in <module>
    with VTXWriter(MPI.COMM_WORLD, "results/heat_flux.bp", [output_3], engine="BP4") as vtx:
  File "/usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/io/utils.py", line 94, in __init__
    self._cpp_object = _vtxwriter(comm, filename, _extract_cpp_functions(
RuntimeError: VTK does not support cell-wise fields. See https://gitlab.kitware.com/vtk/vtk/-/issues/18458.
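One workaround that should avoid this particular error (a sketch, assuming QB is the degree-0 DG Function from the post-processing) is to interpolate the piecewise-constant field into a degree-1 DG space before handing it to VTXWriter:

Q_out = FunctionSpace(mesh, ("DG", 1))
QB_out = Function(Q_out)
QB_out.interpolate(QB)  # DG-0 field interpolated into DG-1, which VTXWriter accepts
with VTXWriter(MPI.COMM_WORLD, "results/Bridgman_heat.bp", [QB_out], engine="BP4") as vtx:
    vtx.write(0.0)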

I wish someone could help me debug what’s wrong with this Bridgman term.

In case this enlightens someone, here's a screenshot of what I see in Paraview for the Bridgman heat when solving on a high-density mesh.

When integrated over the whole volume, as shown in my code, the Bridgman heat is the only quantity that doesn't converge when I increase the mesh density or the degree of the function spaces. The Joule heat, for instance, which involves J_vector squared, converges rather quickly.
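One cheap consistency check I can run (a sketch using the names from my code above) is to compare the Bridgman integral assembled directly from the symbolic J_vector with the integral of the interpolated DG field QB; if the two disagree noticeably, the DG interpolation and/or the quadrature is not resolving the term:

bridgman_direct = assemble_scalar(form(-temp * (S_xx * J_vector[0].dx(0) + S_yy * J_vector[1].dx(1)) * dx))
bridgman_interp = assemble_scalar(form(QB * dx))
print(f"Bridgman heat, direct UFL expression: {bridgman_direct} W")
print(f"Bridgman heat, interpolated DG field: {bridgman_interp} W")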

When choosing a ME space of degree 8 (continuous Lagrange) and degree 7 for the fluxes (discontinuous Galerkin) on a coarser mesh, and multiplying ΔT by 10 (to increase the Bridgman heat tenfold), I see:


In that case it looks like it's not well computed along the edges.
I suspect that in general it's well computed over most of the mesh but not at particular spots, which throws off the calculations.
In this example the bounds are [-214676.57767571456, 623287.1757727616], which is probably way too big, or maybe an indicator that the Bridgman heat is not well defined at some points… I am not sure.

Another pointer, maybe. When I look at the current density (J_vector), from which both the Joule heat (correctly computed) and the Bridgman heat (not correctly computed) are derived, I see that at some nodes (not only edge nodes) there are several current density vectors defined, with different directions! This is not normal, I suppose… and obviously it should completely mess up the computation of the Bridgman heat, since it is sensitive to the direction of J. I have no idea how computations can be done if there are several values of J in a given cell…
What is going on? Why do I see several vectors (like 4) starting at the exact same point?

Edit: And I don’t have this problem with higher density meshes… Wow. This is so confusing.

What space does your J live in?

If you see multiple nodes at vertices, there are two possibilities:

  1. You are using a discontinuous space for J.
  2. Your mesh has fully decoupled elements in this area.

Thanks for your reply, dokken. I posted my code in post number 2. My current code is not exactly the same as back then, but it is almost identical, so it should tell you how I deal with the current density.
OK about the possible reasons: I also gave the .geo file in post number 2, and I have only changed the mesh resolution many times.
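Regarding possibility 1: my Je is indeed interpolated into a ("DG", 2) vector space, which would explain several arrows at a shared vertex in Paraview. A quick check (just a sketch, for visualization only, reusing the names from my code) is to interpolate it into a continuous Lagrange vector space and see whether the duplicated arrows disappear:

V_cg = VectorFunctionSpace(mesh, ("CG", 1))
Je_cg = Function(V_cg)
Je_cg.interpolate(Je)  # the ("DG", 2) current density interpolated into a continuous space
with VTXWriter(MPI.COMM_WORLD, "results/current_density_cg.bp", [Je_cg], engine="BP4") as vtx:
    vtx.write(0.0)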

Some new insights since I last posted here: the Bridgman heat is well computed (or almost) on a different mesh (a quarter of an annulus). That's the only geometry for which I could compare against an analytical solution.

Also, the mesh presented in this thread causes a problem I have never faced before: the current I impose with a Neumann boundary condition is not well applied. I know this because when I inject 1 A of current and then integrate the normal component of J_vector along the side where the current leaves the material, I get 0.955 A, even with a very fine mesh and a tighter solver tolerance. Therefore I conclude that the Neumann b.c. is not well applied on this mesh… (see the check sketched below).
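For reference, this is the kind of check I mean (a sketch reusing the names from my code; the signs follow the convention of F_V, where the normal current density is +J on the output curve and -J on the injection curve):

n = FacetNormal(mesh)
I_out = assemble_scalar(form(dot(J_vector, n) * ds(out_current_curve)))  # should be +1 A when injecting 1 A
I_in = assemble_scalar(form(dot(J_vector, n) * ds(inj_current_curve)))   # should be -1 A
print(f"Current through the output curve: {I_out} A")
print(f"Current through the injection curve: {I_in} A")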

Also @dokken, I would really appreciate it if you could comment on my remark in post #6. Here is the issue explained more precisely:

(...)
el = FiniteElement("CG", mesh.ufl_cell(), 3)
mel = MixedElement([el, el])
ME = FunctionSpace(mesh, mel)

u, v = TestFunctions(ME)
TempVolt = Function(ME)
temp, volt = split(TempVolt)
dTV = TrialFunction(ME)
(...)
# Weak form.
J_vector = -sigma * grad(volt) - sigma * Seebeck_tensor * grad(temp)
Fourier_term = dot(-κ * grad(temp), grad(u)) * dx
Joule_term = dot(rho * J_vector, J_vector) * u * dx
Bridgman_term = - temp * (S_xx * J_vector[0].dx(0) + S_yy * J_vector[1].dx(1)) * u * dx
F_T = Fourier_term + Joule_term + Bridgman_term
F_V = -dot(grad(v), sigma * grad(volt))*dx -dot(grad(v), sigma * Seebeck_tensor * grad(temp))*dx - v * J * ds(out_current_curve) + v * J * ds(inj_current_curve) 

weak_form = F_T + F_V

Jac = derivative(weak_form,TempVolt,dTV)

# Solve the PDE.
problem = NonlinearProblem(weak_form, TempVolt, bcs=bcs, J = Jac)
solver = NewtonSolver(MPI.COMM_WORLD, problem)
(...)
n, converged = solver.solve(TempVolt)
assert(converged)
print(f'''--------- Post processing -------
Computed generated Joule heat through the whole volume: {assemble_scalar(form(dot(rho * J_vector, J_vector) * dx))} W.''')

I do not understand why the last line is correct, in that it produces a correct result. As far as I can see, I define J_vector in terms of temp and volt (the 2 unknowns I solve for) before I solve the PDEs. After solving the PDEs, I do not update temp and volt, as I could have done by writing temp, volt = split(TempVolt). Therefore, to me, in my mind at least, J_vector shouldn’t be updated to the solution.
However, if I do update temp, volt and J_vector, the computations (i.e. the last line of the code snippet here) are exactly the same; there is absolutely no difference whatsoever. How is this possible? I am not a programmer, but I thought I understood enough Python to expect that using J_vector would point to the last place where it was defined, which would imply a temp and volt that have nothing to do with the solution of the PDEs…
What am I missing?

I asked ChatGPT and I think its answer makes sense, so I am posting the gist of it for reference. The answer is that J_vector is indeed updated automatically; there is no need to call temp, volt = split(TempVolt) again after the PDEs are solved. This is due to how FEniCSx/UFL works: when we define J_vector = -sigma * grad(volt) - sigma * Seebeck_tensor * grad(temp), it is a symbolic expression (an object) in which temp and volt are symbolic views of the two components of TempVolt. When the solver updates the values stored in TempVolt, every form or expression that references temp and volt sees the new values, so J_vector is updated automatically as well.
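To convince myself, here is a tiny check (just a sketch) that can be run around the solve: assemble an integral involving temp before and after solver.solve, without touching the form in between. The assembled value changes, because the form only holds a reference to TempVolt's data, not a copy.

total_T_before = assemble_scalar(form(temp * dx))  # uses the initial guess stored in TempVolt
n_iter, converged = solver.solve(TempVolt)         # the solver writes the solution into TempVolt's array
total_T_after = assemble_scalar(form(temp * dx))   # same form object, new values
print(total_T_before, total_T_after)               # the two values differ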