Least squares finite element formulations for time harmonic electromagnetic fields

Building on my post on preconditioners (found here), I have decided to create a new topic on the least-squares formulation. I would like to ask the FEniCS community if anyone has explored this method and what their experiences were.

Here are some notes on what I have found as well as open questions that I have.

The least squares functional is very straightforward:
R = \vert\!\vert \nabla\times\mathbf{E}+jk_0\mathbf{H}\vert\!\vert^2_\Omega + \vert\!\vert\nabla\times jk_0\epsilon_r\mathbf{E}\vert\!\vert^2_\Omega + R_B,
where R_B is a boundary residual that includes all perfect electric, perfect magnetic, and impedance boundary conditions:
R_B = \vert\!\vert\mathbf{n\times E}\vert\!\vert^2_{\partial\Omega_{PEC}} + \vert\!\vert\mathbf{n\cdot H}\vert\!\vert^2_{\partial\Omega_{PEC}} + \vert\!\vert\mathbf{n\times H}\vert\!\vert^2_{\partial\Omega_{PMC}} + \vert\!\vert\mathbf{n\cdot E}\vert\!\vert^2_{\partial\Omega_{PMC}} + \vert\!\vert\mathbf{n\times((E - E^{inc})\times n)} - Z_b\mathbf{n\times H}\vert\!\vert^2_{\partial\Omega_{InOut}}
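Minimizing R over the discrete spaces yields the variational problem that the forms a and L in the code below assemble: schematically, find (\mathbf{E},\mathbf{H}) such that
\sum_k \left( r_k(\mathbf{E},\mathbf{H}),\, r_k(\mathbf{F},\mathbf{G})\right) = \left(\mathbf{n\times(E^{inc}\times n)},\, \mathbf{n\times(F\times n)} - Z_b\,\mathbf{n\times G}\right)_{\partial\Omega_{InOut}}
for all test fields (\mathbf{F},\mathbf{G}), where the r_k are the homogeneous residual operators appearing in R and R_B.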
I discretized the electric field using Nedelec curl-conforming basis functions of the first kind and the magnetic field with a vector Lagrange basis, because I wished to preserve both tangential and normal continuity of the magnetic field across element boundaries. (I am only concerned with inhomogeneous dielectrics in the problem domain, where the electric field components normal to dielectric interfaces jump.)

Some good properties that I see using this method:

  • Operators are inherently self-adjoint, even for lossy problems, allowing the use of Conjugate Gradient solvers with simple preconditioners (Jacobi, SOR).
  • The finite element basis need not satisfy the boundary conditions explicitly. Dirichlet, Neumann, and Robin conditions can be applied as part of the least-squares residual, so the boundary conditions are satisfied in a least-squares sense over the element edge (2D) or face (3D). This removes the ambiguity in the normal unit vector at boundary corners.
  • Dirichlet boundary conditions can, if desired, be applied to the basis in the usual way.
  • Local residuals can be computed for use in adaptive mesh refinement schemes (see the sketch after this list).
  • All field quantities are approximated to full interpolation order.
  • Method is intuitively straightforward.
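To illustrate the point about local residuals, here is a minimal sketch of one way cell-wise least-squares residual indicators could be assembled and fed to refine() in legacy DOLFIN. The field e_h and the residual here are placeholders, not the actual solution of the formulation above.

from dolfin import *

# Placeholder setup: a small mesh and a stand-in field component.
mesh = UnitCubeMesh(4, 4, 4)
V = FunctionSpace(mesh, 'N1curl', 1)
e_h = Function(V)

# Testing the squared residual against a DG0 function gives one value per cell.
W = FunctionSpace(mesh, 'DG', 0)
w = TestFunction(W)
eta = assemble(inner(curl(e_h), curl(e_h)) * w * dx).get_local()

# Mark, say, the worst 10% of cells and refine.
markers = MeshFunction('bool', mesh, mesh.topology().dim(), False)
dofmap = W.dofmap()
threshold = sorted(eta)[int(0.9 * len(eta))]
for c in cells(mesh):
    markers[c] = bool(eta[dofmap.cell_dofs(c.index())[0]] > threshold)
refined_mesh = refine(mesh, markers)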

Some of the downsides that have become apparent:

  • Operator truncation error (I think) introduces non-physical absorption, causing decay in the solution. I have not been able to find much in the literature on this phenomenon, with the exception of shock modeling in compressible fluid dynamics, where adaptive mesh refinement is used to counteract the effect of spurious artificial viscosity spreading the shock front. I have been able to obtain satisfactory solutions for simple problems with non-curved boundaries using a large number of DoFs (e.g. rectangular waveguide). For problems with curved boundaries (e.g. the coaxial transmission line: see code below), I have not been able to generate usable solutions, even with high-order interpolation or fine meshing.
  • The use of first-order PDEs doubles the order of the finite element matrices, since both E and H are unknowns.
  • The time-harmonic equations require separate real and imaginary field components, doubling the matrix order again.

It would be great to find a way to reduce the degradation of the solution by the spurious decay. The least-squares method seems promising because of its self-adjointness, but it suffers severely from the spurious decay issue. Does anyone out there have experience with this problem?

Questions:

  • Is there any work out there on reducing the spurious loss in least-squares finite element formulations?
  • What might be a good strategy for doing automatic mesh refinement in FEniCS using this method? I would like to start off using the CGAL routines inside of FEniCS to put together an automatic refinement algorithm, but the documentation seems a bit sparse.

My feeling is that I will have to revisit the traditional Galerkin formulation, perhaps in the time domain for very large problems (where the operators are self-adjoint), and reserve the non-self-adjoint time-harmonic formulation for smaller problems where I can use a direct solver (e.g. MUMPS).

I include the full code for the coaxial transmission line problem for those who may want to play with the solution.

from dolfin import *
from mshr import *
import numpy as np
import matplotlib.pyplot as plt
import os, sys, traceback

tol = 1.0e-12
Dk = 4.0
k0 = 2.0 * pi / 5.0
Zm = 1.0 / sqrt(Dk) # Normalized wave impedance
class PEC(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary

class PMC(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary and (near(x[0], 0.0, tol) or near(x[1], 0.0, tol))

class InputBC(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary and near(x[2], -2.5, tol)

class OutputBC(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary and near(x[2], 2.5, tol)


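# CSG geometry: one quarter of a coaxial line (outer radius 1.0, inner radius 0.5,
# length 5.0 in z), built by subtracting the inner cylinder and intersecting with
# the first-quadrant box.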
C1 = Cylinder(dolfin.Point(0,0,-2.5), dolfin.Point(0, 0, 2.5), 1.0, 1.0)
C2 = Cylinder(dolfin.Point(0,0,-2.5), dolfin.Point(0, 0, 2.5), 0.5, 0.5)
B1 = Box(dolfin.Point(0, 0, -2.5), dolfin.Point(1.0, 1.0, 2.5))
Coax = (C1 - C2) * B1
#generator = CSGCGALMeshGenerator3D()
#generator.parameters["edge_size"] = 0.1
#generator.parameters["facet_angle"] = 25.0
#generator.parameters["facet_size"] = 4.0
#generator.parameters["facet_distance"] = 0.01
#generator.parameters["cell_radius_edge_ratio"] = 2.0
#generator.parameters["cell_size"] = 0.02
#mesh = generator.generate(CSGCGALDomain3D(Coax))

mesh = generate_mesh(Coax, 60)

info(mesh)
plot(mesh)
plt.show()

# Output mesh for inspection
File("BoxMesh.pvd").write(mesh)

# Mark boundaries
sub_domains = MeshFunction("size_t", mesh, mesh.topology().dim() - 1)
sub_domains.set_all(4)
pec = PEC()
pec.mark(sub_domains, 0)
pmc = PMC()
pmc.mark(sub_domains, 1)
in_port = InputBC()
in_port.mark(sub_domains, 2)
out_port = OutputBC()
out_port.mark(sub_domains, 3)
File("BoxSubDomains.pvd").write(sub_domains)

# Set up function spaces
cell = tetrahedron
ele_type = FiniteElement('N1curl', cell, 3) # H(curl) element for EM
lg_ele = VectorElement('CG', cell, 3) # Lagrange for magnetic field
V2 = FunctionSpace(mesh, MixedElement([ele_type, ele_type, lg_ele, lg_ele]))
V = FunctionSpace(mesh, ele_type)
e_r, e_i, h_r, h_i = TrialFunctions(V2)
ev_r, ev_i, hv_r, hv_i = TestFunctions(V2)

#surface integral definitions from boundaries
ds = Measure('ds', domain = mesh, subdomain_data = sub_domains)
# with source and sink terms
u0 = Constant((0.0, 0.0, 0.0)) #PEC definition
# Incident TEM fields at the input port: radial E pattern, plus an assumed
# azimuthal H pattern. h_src is unpacked from the dictionary below but not used
# in the assembled forms; it is defined here so the unpacking works.
e_src = Expression(('x[0] / (x[0] * x[0] + x[1] * x[1])', 'x[1] / (x[0] * x[0] + x[1] * x[1])', '0.0'), degree = 3)
h_src = Expression(('-x[1] / (x[0] * x[0] + x[1] * x[1])', 'x[0] / (x[0] * x[0] + x[1] * x[1])', '0.0'), degree = 3)

#Boundary condition dictionary
boundary_conditions = {0: {'PEC' : u0},
                       1: {'PMC' : u0},
                       2: {'InputBC': (h_src, e_src)},
                       3: {'OutputBC': e_src}}

n = FacetNormal(mesh)

#Build PEC boundary conditions for real and imaginary parts
bcs = []
integral_source = []
integrals_load =[]
for i in boundary_conditions:
    if 'PEC' in boundary_conditions[i]:
        bb1 = (inner(cross(n, ev_r), cross(n, e_r)) + inner(cross(n, ev_i), cross(n, e_i)) + inner(dot(n, hv_r), dot(n, h_r)) + inner(dot(n, hv_i), dot(n, h_i))) * ds(i)
        integrals_load.append(bb1)

for i in boundary_conditions:
    if 'PMC' in boundary_conditions[i]:
        bb1 = (inner(cross(n, hv_r), cross(n, h_r)) + inner(cross(n, hv_i), cross(n, h_i)) + inner(dot(n, ev_r), dot(n, e_r)) + inner(dot(n, ev_i), dot(n, e_i))) * ds(i)
        integrals_load.append(bb1)

# Build input BC source term and loading term
for i in boundary_conditions:
    if 'InputBC' in boundary_conditions[i]:
        r, s = boundary_conditions[i]['InputBC']
        bb = inner(cross(n, cross(ev_i, n)) - Zm * cross(n, hv_i), s) * ds(i)
        integral_source.append(bb)
        bb1 = (inner(cross(n, cross(ev_r, n)) - Zm * cross(n, hv_r), cross(n, cross(e_r, n)) - Zm * cross(n, h_r)) + inner(cross(n, cross(ev_i, n)) - Zm * cross(n, hv_i), cross(n, cross(e_i, n)) - Zm * cross(n, h_i))) * ds(i)
        integrals_load.append(bb1)

for i in boundary_conditions:
    if 'OutputBC' in boundary_conditions[i]:
        bb1 = (inner(cross(n, cross(ev_r, n)) - Zm * cross(n, hv_r), cross(n, cross(e_r, n)) - Zm * cross(n, h_r)) + inner(cross(n, cross(ev_i, n)) - Zm * cross(n, hv_i), cross(n, cross(e_i, n)) - Zm * cross(n, h_i))) * ds(i)
        integrals_load.append(bb1)


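# Least-squares volume terms: real and imaginary parts of (curl E + j k0 H) and
# (curl H - j k0 Dk E), each tested against the corresponding residual of the
# test fields; the boundary residuals enter through integrals_load.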
a = (inner(curl(ev_r) - k0 * hv_i, curl(e_r) - k0 * h_i) + inner(curl(ev_i) + k0 * hv_r, curl(e_i) + k0 * h_r) + inner(curl(hv_r) + Dk * k0 * ev_i, curl(h_r) + Dk * k0 * e_i) + inner(curl(hv_i) - Dk * k0 * ev_r, curl(h_i) - Dk * k0 * e_r))* dx + sum(integrals_load)
L = sum(integral_source)

u1 = Function(V2)

#solve(a == L, u1, bcs, solver_parameters = {'linear_solver' : 'mumps'}) 
vdim = u1.vector().size()
print ('Solution vector size = ',vdim)
u1.vector()[:] = np.random.uniform(-1, 1, vdim)
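# Assemble and solve with CG preconditioned by SOR; the least-squares operator
# is symmetric positive (semi-)definite, so CG is applicable.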
A, b = assemble_system(a, L, bcs)
solver = PETScKrylovSolver("cg","sor")
solver.parameters['absolute_tolerance'] = 1e-5
solver.parameters['relative_tolerance'] = 1e-12
solver.parameters['maximum_iterations'] = 10000
solver.parameters['monitor_convergence'] = True
solver.parameters['error_on_nonconvergence'] = False
solver.parameters['nonzero_initial_guess'] = False
solver.parameters['report'] = True
solver.set_operators(A, A)
solver.solve(u1.vector(),b)

u1_r, u1_i, v1_r, v1_i = u1.split(True)
fp = File("EField_r.pvd")
fp << u1_r
fp = File("EField_i.pvd")
fp << u1_i

ut = u1_r.copy(deepcopy=True)
vt = v1_r.copy(deepcopy=True)

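# Write 50 animation frames over one period by combining the real and imaginary
# parts with cos/sin phase factors.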
fp = File('EWaveFile.pvd')
for i in range(50):
    ut.vector().zero()
    ut.vector().axpy(sin(pi * i / 25.0), u1_i.vector())
    ut.vector().axpy(cos(pi * i / 25.0), u1_r.vector()) 
    fp << (ut, i)

fp = File('HWaveFile.pvd')
for i in range(50):
    vt.vector().zero()
    vt.vector().axpy(sin(pi * i / 25.0), v1_i.vector())
    vt.vector().axpy(cos(pi * i / 25.0), v1_r.vector()) 
    fp << (vt, i)

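# Port power bookkeeping: Pinc from the analytic incident field, Ptot and Prad
# from the computed fields via surface integrals of H . (n x E) over the input
# (ds(2)) and output (ds(3)) ports.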
E = interpolate(e_src, V)
Pinc = assemble(inner(E, E) * sqrt(Dk) * 0.25 * ds(2)) #Input power
Ptot = assemble((inner(v1_r, cross(n, u1_r)) + inner(v1_i, cross(n, u1_i))) * ds(2))
Pref = Ptot - Pinc
Prad = assemble((inner(v1_r, cross(n, u1_r)) + inner(v1_i, cross(n, u1_i))) * ds(3))
print("Pinc = \n", Pinc)
print("Ptot = \n", Ptot)
print("Pref = \n", Pref)
print("Prad = \n", Prad)

sys.exit(0)

Comparing with the code, I think there is a typo in the second term of the functional R from your explanation, since there should be a curl of \mathbf{H} somewhere. Also, I would expect an additional volume term to control the divergence of the magnetic field. You might take a look at the formulation (5.17) on page 48 here if you haven’t already.


Thanks for the reply @kamensky .

The curl H equation appears in the third and fourth volume terms of the assembled form in the code; the first two terms are the real and imaginary parts of the curl E expression. Both the E and H curl equations must be broken up into real and imaginary parts, since FEniCS only handles real arithmetic.
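For example, with \mathbf{E}=\mathbf{E}_r+j\mathbf{E}_i and \mathbf{H}=\mathbf{H}_r+j\mathbf{H}_i, the complex residual splits as
\nabla\times\mathbf{E}+jk_0\mathbf{H} = (\nabla\times\mathbf{E}_r - k_0\mathbf{H}_i) + j\,(\nabla\times\mathbf{E}_i + k_0\mathbf{H}_r),
which gives the first two volume terms of the form a in the listing; the curl H equation splits analogously into the third and fourth terms.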

I probably should have mentioned that I first solved this system using Nedelec basis functions for both electric and magnetic fields. Changing the magnetic field interpolation to the Lagrange basis made no difference in the solution result.

Based on your suggestion regarding the \nabla\cdot\mathbf{H} term, I ran a comparison of the solutions with and without the divergence term in the functional. The results were very similar. In fact, the power conservation was slightly worse after including the divergence term.

\begin{array}{r c c} \mathrm{Power} & \mathrm{w/div} & \mathrm{w/o\ div} \\ \hline \mathrm{incident} & 0.545 & 0.545 \\ \mathrm{reflected} & 0.0307 & 0.0285 \\ \mathrm{total\ in} & 0.514 & 0.516 \\ \mathrm{output} & 0.190 & 0.213 \end{array}

As you can see from the power budget, the problem I am trying to overcome is the very poor power conservation in what should be a lossless structure. The “total” should be equivalent to the “output” (or close, within numerical solution error).

You can see the field distribution at this link.
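For reference, the divergence penalty used in that comparison can be added along these lines (a sketch reusing the trial/test function names from the full listing in the first post; only the two extra volume terms are shown):

# Real and imaginary parts of the ||div H||^2 penalty, appended to the volume terms of 'a'
a_div = (inner(div(hv_r), div(h_r)) + inner(div(hv_i), div(h_i))) * dx
a = a + a_div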

You might also check the dimensional consistency of the formulation. For instance, in the reference I linked before, there are factors of element diameter h to correct for a dimensional mismatch in the boundary terms of (5.19). In your R functional, the first volume term has dimensions of [E]^2L, while the first boundary term has dimensions of [E]^2L^2, so I would expect that the boundary term would need to be scaled by something with dimensions of L^{-1} (such as h^{-1}, like the formulation of the linked reference) for consistency.

You are quite right. I will have to investigate this. It could be a contributing factor to the spurious dissipation in the solution. My observations confirm that the dissipation depends heavily on the nature of the boundary. Would you be able to point me to a reference or an example for extracting h to incorporate as a weighting factor in the functional expression (e.g. as the square root of the element face area)? I am still pretty new to FEniCS, and finding clues to its subtleties can stump me for a long time.

Again, thanks for your help!

You can use the CellDiameter function:

from dolfin import *

# Set up mesh for testing:
N = 100
mesh = UnitSquareMesh(N,N)

# Get element size in UFL:
h = CellDiameter(mesh)

print(assemble(h*dx)) # Check
print(2.0**0.5/N) # Expected: Hypotenuse of an element

Thanks! I applied the correction using the CellDiameter function and, unfortunately, it did not improve the dissipation. The best performance was achieved when applying the correction to the volume terms:

R = h\left ( \vert\!\vert\nabla\times\mathbf{E}+jk_0\mathbf{H}\vert\!\vert^2_\Omega + \vert\!\vert\nabla\times\mathbf{H}-jk_0\epsilon_r\mathbf{E}\vert\!\vert^2_\Omega\right) + R_B.

There were some variations in performance but, overall, the power conservation was slightly worse than when no h correction was applied. The SOR-CG solver converged to tolerance with 40% fewer steps. The power conservation was similar to the result shown in my table above. I admit I am perplexed, but it is an interesting problem to explore.
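Concretely, the weighting above amounts to multiplying the volume terms by CellDiameter before assembly (a sketch assuming the trial/test functions and integrals_load from the full listing in the first post):

# Scale the least-squares volume residuals by the local element diameter h
h = CellDiameter(mesh)
a = (h * (inner(curl(ev_r) - k0 * hv_i, curl(e_r) - k0 * h_i)
          + inner(curl(ev_i) + k0 * hv_r, curl(e_i) + k0 * h_r)
          + inner(curl(hv_r) + Dk * k0 * ev_i, curl(h_r) + Dk * k0 * e_i)
          + inner(curl(hv_i) - Dk * k0 * ev_r, curl(h_i) - Dk * k0 * e_r)) * dx
     + sum(integrals_load))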

Two things seem to be at play here:

  • Higher-order elements give better results, even with a coarse geometric representation. However, for reasonable results, dense meshes of at least order 2 are needed, and order 3 is better. This exceeds the memory capacity of most PCs, even for simple problems like the coaxial example here.
  • Non-curved boundaries give better results (e.g. rectangular waveguides) with reasonable levels of discretization. Numerical dissipation is still present, but can be reduced to 1-2% of input power.

Then there is the issue of re-entrant PEC corners that give rise to singular field solutions. I haven’t even considered that issue yet! (The Galerkin solution seems to handle this well with curl-conforming elements as long as you are willing to accept some error in the field solution near the corner.)