I have been using FEniCS to solve a model of electromagnetic wave propagation through a rectangular waveguide. I was able to solve this problem successfully using Lagrange elements, but found that N1curl elements were superior when solving for the magnetic field after first solving for the electric field (which makes sense, since N1curl elements are designed for exactly this kind of curl-conforming problem). The problem with this approach is that the N1curl elements did not work when I tried to run the solver in parallel with MPI.
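For context, the step where the N1curl elements come in looks roughly like the sketch below. The names (E_space, E, W and so on) are placeholders I have made up for this post, E stands for the already-computed electric field, and I have left out the physical prefactor in front of curl(E); this is only meant to illustrate the projection onto an N1curl space, not my actual waveguide code.
from mpi4py import MPI
from dolfinx import Function, FunctionSpace, UnitCubeMesh, VectorFunctionSpace
from dolfinx.cpp.mesh import CellType
from ufl import TestFunction, TrialFunction, curl, dx, inner

mesh = UnitCubeMesh(MPI.COMM_WORLD, 8, 8, 8, CellType.tetrahedron)

# Electric field solved beforehand in a vector Lagrange space (placeholder here)
E_space = VectorFunctionSpace(mesh, ("Lagrange", 2))
E = Function(E_space)

# Project curl(E) onto an N1curl space to obtain the magnetic field
W = FunctionSpace(mesh, ("N1curl", 1))
H = TrialFunction(W)
w = TestFunction(W)
a_H = inner(H, w) * dx        # mass matrix in the N1curl space
L_H = inner(curl(E), w) * dx  # right-hand side built from the computed E
# ... then assemble and solve exactly as in the MWE below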
To investigate this, I have posted here a MWE using code from the dolfinx test directory for the N1curl elements. This example returns what appears to be a reasonable answer for a serial execution (python3 n1curl_serial.py) but yields an obviously incorrect answer for the parallel execution (mpirun -n 8 python3 n1curl_mpi.py).
Here is the MWE:
from mpi4py import MPI
from petsc4py import PETSc
from dolfinx import Function, FunctionSpace, UnitCubeMesh
from dolfinx.cpp.mesh import CellType
from dolfinx.fem import assemble_matrix, assemble_vector
from dolfinx.io import XDMFFile
from ufl import SpatialCoordinate, TestFunction, TrialFunction, dx, inner
mesh = UnitCubeMesh(MPI.COMM_WORLD, 32, 32, 32, CellType.tetrahedron)
V = FunctionSpace(mesh, ("N1curl", 1))
u = TrialFunction(V)
v = TestFunction(V)
a = inner(u,v)*dx
x = SpatialCoordinate(mesh)
u_ref = x[0]**2 + x[1]**2
L = inner(u_ref, v[0])*dx
b = assemble_vector(L)
b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE)
A = assemble_matrix(a)
A.assemble()
# Create LU linear solver (Note: need to use a solver that
# re-orders to handle pivots, e.g. not the PETSc built-in LU
# solver)
solver = PETSc.KSP().create(MPI.COMM_WORLD)
solver.setType("preonly")
solver.getPC().setType('lu')
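# (Side note: PETSc's built-in LU factorisation is serial only. For the
# parallel run, a distributed direct solver such as MUMPS, if available
# in the PETSc build, would presumably have to be selected here.)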
solver.setOperators(A)
# Solve
uh = Function(V)
solver.solve(b, uh.vector)
uh.vector.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)
# Save solution in XDMF format
with XDMFFile(MPI.COMM_WORLD, "n1curl.xdmf", "w") as file:
    file.write_mesh(mesh)
    file.write_function(uh)
On closer inspection of the codebase I see that in the dolfinx test directory there is an @skip_in_parallel tag on the section testing the N1curl elements. It seems this behaviour is known, so I'll shift my question to: is there something else I can do to make the solution parallelizable?
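For concreteness, the kind of change I have been wondering about is sketched below. It assumes MUMPS is available in my PETSc build, which I have not verified, and I do not know whether it actually addresses the N1curl behaviour or only the choice of direct solver:
# Possible replacement for the solver setup in the MWE: hand the LU
# factorisation to a distributed direct solver instead of PETSc's
# built-in (serial-only) LU. Assumes PETSc was configured with MUMPS.
solver = PETSc.KSP().create(MPI.COMM_WORLD)
solver.setType("preonly")
pc = solver.getPC()
pc.setType("lu")
pc.setFactorSolverType("mumps")  # requires MUMPS support in the PETSc build
solver.setOperators(A)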