H(curl) elements with MPI return incorrect function values

I have been using FEniCS to solve a model of electromagnetic wave propagation through a rectangular waveguide. I was able to solve this problem successfully using Lagrange elements, but found that N1curl elements were superior when solving for the magnetic field after first solving for the electric field (which makes sense, given why N1curl elements are used). The problem with this approach is that the N1curl elements did not work when I tried to run the solver in parallel using MPI.

To investigate this, I have posted an MWE below, using code from the dolfinx test directory for the N1curl elements. This example returns what appears to be a reasonable answer for a serial execution (python3 n1curl_serial.py) but yields an obviously incorrect answer for the parallel execution (mpirun -n 8 python3 n1curl_mpi.py).

Here is the MWE:

from mpi4py import MPI
from petsc4py import PETSc

from dolfinx import Function, FunctionSpace, UnitCubeMesh
from dolfinx.cpp.mesh import CellType
from dolfinx.fem import assemble_matrix, assemble_vector
from dolfinx.io import XDMFFile
from ufl import SpatialCoordinate, TestFunction, TrialFunction, dx, inner

mesh = UnitCubeMesh(MPI.COMM_WORLD, 32,32, CellType.tetrahedron)
V = FunctionSpace(mesh, ("N1curl", 1))

u = TrialFunction(V)
v = TestFunction(V)
a = inner(u,v)*dx

x = SpatialCoordinate(mesh)
u_ref = x[0]**2 + x[1]**2
L = inner(u_ref, v[0])*dx

b = assemble_vector(L)
b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE)

A = assemble_matrix(a)
A.assemble()

# Create LU linear solver (Note: need to use a solver that
# re-orders to handle pivots, e.g. not the PETSc built-in LU
# solver; if an external factorization package such as MUMPS is
# available, it could be selected with something like
# solver.getPC().setFactorSolverType("mumps"))
solver = PETSc.KSP().create(MPI.COMM_WORLD)
solver.setType("preonly")
solver.getPC().setType('lu')
solver.setOperators(A)

# Solve
uh = Function(V)
solver.solve(b, uh.vector)
uh.vector.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)


# Save solution in XDMF format
with XDMFFile(MPI.COMM_WORLD, "n1curl.xdmf", "w") as file:
    file.write_mesh(mesh)
    file.write_function(uh)
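One way to compare the serial and parallel runs quantitatively (rather than by eye) is to assemble the L2 error of the first solution component against u_ref. The following is only a sketch, assuming the same dolfinx version/API as the MWE above (in particular assemble_scalar and mesh.mpi_comm()). Since uh is just the degree-1 N1curl projection of (u_ref, 0, 0), the error is not zero, but it should be essentially identical for serial and parallel runs:

import numpy as np
from dolfinx.fem import assemble_scalar

# L2 error of the first component of uh against the projected expression
M = inner(uh[0] - u_ref, uh[0] - u_ref) * dx
error_local = assemble_scalar(M)                                   # rank-local contribution
error_global = mesh.mpi_comm().allreduce(error_local, op=MPI.SUM)  # sum over all ranks
if MPI.COMM_WORLD.rank == 0:
    print("L2 error of first component:", np.sqrt(abs(error_global)))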

On closer inspection of the codebase, I see that in the dolfinx test directory there is an @skip_in_parallel decorator on the section testing the N1curl elements. It seems like this behaviour is known, so I'll shift my question to: is there something else I can do to make the solution parallelizable?

This is related to things I have worked on. I'll take a proper look at this tomorrow.


Originally we disabled the FEM pipeline tests in parallel because ghosting was not working in parallel. I think this should be fixed now, but the tests still fail. I've created an issue about this: https://github.com/FEniCS/dolfinx/issues/1072

I’ll post again here once there’s progress on this issue.

Also, when you create your mesh:
mesh = UnitCubeMesh(MPI.COMM_WORLD, 32,32, CellType.tetrahedron)
I think you mean:
mesh = UnitCubeMesh(MPI.COMM_WORLD, 32, 32, 32, CellType.tetrahedron)
(although this doesn't cause anything to fail, as CellType.tetrahedron is interpreted as 4).


Ah, you’re right, I didn’t inspect the mesh it returned, just looked at a field slice. Thanks for looking into this, and I look forward to any updates.

I believe this issue is now fixed in this PR: https://github.com/FEniCS/dolfinx/pull/1081.

Update: that PR has now been merged into master.


Just tested the fix, and it works for the MWE, thanks!
In general, what should my expectations for parallel results be? When I test this on my full waveguide example (I haven’t made an MWE yet), there are still pretty large differences between the parallel and the serial solution, with the parallel solver returning singularities or numerical artifacts in some places. I’m attaching pictures of the results, but I’ll try to distill it down to an MWE sometime this week. Is it generally expected that results should be the same to machine precision, or something close to that?

Serial Results: (image attached)

Parallel Results: (image attached)


Yes, parallel and serial should agree up to somewhere around machine precision. An MWE would be excellent so we can try to work out what’s causing this.

Could you also compare the norms of the two solutions? If they are the same, it could be that the error you’re seeing is due to an error in the file output/plotting.
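A minimal sketch of such a check, assuming your solution Function is called e and petsc4py's PETSc is imported:

# Print one global number per run that can be compared between
# serial and parallel executions (collective call, printed on rank 0)
norm = e.vector.norm(PETSc.NormType.NORM_2)
if MPI.COMM_WORLD.rank == 0:
    print("||e||_2 =", norm)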


For the MWE I used pygmsh to make the mesh in serial prior to running the solve script. I compared the norms of the two solutions; the parallel result still shows a handful of discontinuities. When I plotted along the central z-line in ParaView, the solutions were identical except for one or two areas where the parallel solver had significantly more noise in the solution.

Here is the mesh generation code:

from pygmsh.opencascade import Geometry
from pygmsh import generate_mesh
import meshio
import numpy as np

waveguide_length = 0.1
waveguide_width  = 0.05
waveguide_height = 0.6

geom = Geometry(characteristic_length_max=0.01, characteristic_length_min=0.01)

waveguide = geom.add_box([0,0,0], (waveguide_length, waveguide_width, waveguide_height))

geom.add_raw_code("Physical Volume(7) = {{{}}};".format(waveguide.id)) # Waveguide Volume
geom.add_raw_code("Physical Surface(1) = {{{}}};".format("5")) # Input
geom.add_raw_code("Physical Surface(2) = {{{}}};".format("6")) # Output
geom.add_raw_code("Physical Surface(3) = {{{}}};".format("1,2,3,4,7,8,9,10,13,14,15,16,17")) # PEC Edges

# Build the .geo file.
mesh = generate_mesh(geom, geo_filename="mesh.geo")

# What follows is this (strange) mesh conversion stuff ... . ;-)
tetra_cells    = None
tetra_data     = None
triangle_cells = None
triangle_data  = None

print(mesh.cell_data_dict["gmsh:physical"].keys())

for key in mesh.cell_data_dict["gmsh:physical"].keys():
    if key == "triangle":
        triangle_data = mesh.cell_data_dict["gmsh:physical"][key]
    elif key == "tetra":
        tetra_data = mesh.cell_data_dict["gmsh:physical"][key]

for cell in mesh.cells:
    if cell.type == "tetra":
        if tetra_cells is None:
            tetra_cells = cell.data
        else:
            tetra_cells = np.vstack([tetra_cells, cell.data])
    elif cell.type == "triangle":
        if triangle_cells is None:
            triangle_cells = cell.data
        else:
            triangle_cells = np.vstack([triangle_cells, cell.data])

tetra_mesh = meshio.Mesh(points=mesh.points,
                         cells=[("tetra", tetra_cells)],
                         cell_data={"name_to_read": [tetra_data]})
triangle_mesh = meshio.Mesh(points=mesh.points,
                            cells=[("triangle", triangle_cells)],
                            cell_data={"name_to_read": [triangle_data]})
meshio.xdmf.write("tetra_mesh.xdmf", tetra_mesh)
meshio.xdmf.write("triangle_mesh.xdmf", triangle_mesh)

Here is the code for the solver. For anyone viewing this thread looking for waveguide examples, I wouldn’t use this variational form; I haven’t yet checked the form against an analytical solution (I’ll probably post back on the forums once I do).

import dolfinx
import numpy as np
from cmath import sqrt

from dolfinx import Function, FunctionSpace, solve
from dolfinx.io import XDMFFile

import ufl
from ufl import (TrialFunction, TestFunction, dx, inner, Measure,
                 FacetNormal, curl, cross)

from mpi4py import MPI
from petsc4py import PETSc

comm = MPI.COMM_WORLD

a = 0.1                          # waveguide dimension along x [m]
b = 0.05                         # waveguide dimension along y [m]
kc = np.pi/a                     # cutoff wavenumber of the TE10 mode
mu0 = 4*np.pi*1e-7               # vacuum permeability [H/m]
eps0 = 8.854e-12                 # vacuum permittivity [F/m]
c = 1/sqrt(eps0*mu0)             # speed of light [m/s]
sigma_copper = 5.96e7            # conductivity of copper [S/m]
frequency = 2.5e9                # 2.5 GHz
omega = 2*np.pi*frequency        # angular frequency [rad/s]
k = omega*sqrt(eps0*mu0)         # free-space wavenumber
beta = sqrt(k**2 - kc**2)        # propagation constant of the TE10 mode

with XDMFFile(MPI.COMM_WORLD,"tetra_mesh.xdmf",'r') as infile:
    mesh = infile.read_mesh(dolfinx.cpp.mesh.GhostMode.none, name="Grid")

mesh.topology.create_connectivity(mesh.topology.dim-1, mesh.topology.dim)
with XDMFFile(MPI.COMM_WORLD, "triangle_mesh.xdmf", "r") as xdmf_infile:
    mt_triangle = xdmf_infile.read_meshtags(mesh, name="Grid")

#Set the input wave function
x = ufl.SpatialCoordinate(mesh)
Ep = ufl.as_vector((0,k/kc * ufl.sin(kc*(x[0]))*ufl.exp(1j*beta*0),0))

#Set the function space and Test/Trial Functions
V = FunctionSpace(mesh, ("N1curl", 1))

e = TrialFunction(V)
ev = TestFunction(V)

#Establish the boundary conditions
n = FacetNormal(mesh)
pec_idx = 3
absorb_idx = 2
in_idx = 1
ds = Measure('ds', domain=mesh, subdomain_data=mt_triangle)

Zp = (1-1j)*sqrt(mu0*omega/(2*sigma_copper))
alpha_pec = 1j*beta*20000
alpha_in = 1j*beta
alpha_absorb = 1j*beta
g = 2*1j*beta*cross(cross(n,Ep), n)

pec = -1/mu0*alpha_pec*inner(cross(n,e),cross(n, ev))*ds(pec_idx) 
absorb = -1/mu0*alpha_absorb*inner(cross(n,e), cross(n,ev))*ds(absorb_idx)
input_wave = -1/mu0*alpha_in*inner(cross(n,e), cross(n,ev))*ds(in_idx) + 1/mu0*inner(g,ev)*ds(in_idx)

#Variational Form
F_maxwell = inner(1/mu0 * curl(e), curl(ev))*dx - inner(omega**2*eps0*e,ev)*dx
F = F_maxwell + pec + absorb + input_wave

#Solving
petsc_options = {}#{"ksp_type": "bcgsl", "pc_type": "jacobi", "ksp_view":None}
a = ufl.lhs(F)
L = ufl.rhs(F)

e = Function(V, name="e")
solve(a == L,e, petsc_options=petsc_options)

with XDMFFile(MPI.COMM_WORLD, "E_serial.xdmf", "w") as file:
    file.write_mesh(mesh)
    file.write_function(e)

My thought on the possible source of the error: maybe it has something to do with reading the mesh in parallel from XDMF rather than using UnitCubeMesh?
I was also originally using the bcgsl solver, but even after removing that and using the default GMRES, the result still showed discontinuities.


Before writing to file, you need to update the ghosts of your solution vector:

e.vector.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)

Adding this after the solve, before writing to file, removes the discontinuities.
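In the solver script above, that looks roughly like this (only the ghostUpdate line is new):

e = Function(V, name="e")
solve(a == L, e, petsc_options=petsc_options)

# Update ghosted entries of the solution vector so every process has
# correct off-process values before output
e.vector.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)

with XDMFFile(MPI.COMM_WORLD, "E_serial.xdmf", "w") as file:
    file.write_mesh(mesh)
    file.write_function(e)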


@awarru Also note that iterative solvers will give different results in parallel. To ensure that everything works, I would use an LU solver for verification purposes:

petsc_options = {"ksp_type":"preonly", "pc_type":"lu"}