Hi all,
I have a nonlinear multiphysics problem, and my goal is to build a good preconditioner so that it can be solved efficiently. I have some ideas on how to do this, but I'm struggling a bit with the implementation.
Below I formulate a minimal example that captures the main features of my original system (a nonlinear, coupled system of equations) and provide working DOLFINx code.
For simplicity, let us consider a single domain $\Omega$ bounded by the Dirichlet boundary $\Gamma_{D}$. I then consider $N$ Poisson equations coupled through the functions $f_{i}$:

$$-\nabla^2 u_i = f_i^2 \quad \text{in } \Omega, \qquad u_i = 0 \quad \text{on } \Gamma_{D}, \qquad f_i = \sum_{j \neq i} u_j,$$

with an additional constant forcing $f$ on the right-hand side of the first equation.
For $N = 3$ (i.e., three equations), the following minimal code solves the problem:
from mpi4py import MPI
from petsc4py import PETSc
import numpy as np
import ufl
import basix.ufl
import dolfinx
from dolfinx import fem, io, log, default_scalar_type
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.nls.petsc import NewtonSolver
rank = MPI.COMM_WORLD.rank
# --- Computational domain ---
# Mesh
mesh_size = 200
domain = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, mesh_size, mesh_size)
# Normal vector
n = ufl.FacetNormal(domain)
# --- Mixed element space
el_u = basix.ufl.element("Lagrange", domain.basix_cell(), 1)
el_mixed = basix.ufl.mixed_element([el_u] * 3)
W = dolfinx.fem.functionspace(domain, el_mixed)
# Test functions
v0, v1, v2 = ufl.TestFunctions(W)
# Solution vector
uh = dolfinx.fem.Function(W)
uh0, uh1, uh2 = ufl.split(uh)
# --- Distributed forcing and boundary conditions ---
# Distributed forcing
f = fem.Constant(domain, default_scalar_type(-6))
# Homogeneous Dirichlet, system 0
W0 = W.sub(0)
V0, V0_to_W0 = W0.collapse()
uh0_D = fem.Function(V0)
uh0_D.x.array[:] = 0
# Homogeneous Dirichlet, system 1
W1 = W.sub(1)
V1, V1_to_W1 = W1.collapse()
uh1_D = fem.Function(V1)
uh1_D.x.array[:] = 0
# Homogeneous Dirichlet, system 2
W2 = W.sub(2)
V2, V2_to_W2 = W2.collapse()
uh2_D = fem.Function(V2)
uh2_D.x.array[:] = 0
# Penalty parameters
h = 2 * ufl.Circumradius(domain)
alpha = fem.Constant(domain, default_scalar_type(10))
# --- Weak form ---
# Bilinear form
# Block 0
a = ufl.inner(ufl.grad(uh0), ufl.grad(v0)) * ufl.dx # grad(u),grad(v)
a += -ufl.inner(ufl.dot(ufl.grad(uh0), n), v0) * ufl.ds # Consistency term
a += -ufl.inner(ufl.dot(ufl.grad(v0), n), uh0) * ufl.ds # Symmetry term
a += alpha/h*ufl.inner(uh0, v0) * ufl.ds # Penalty term
# Block 1
a += ufl.inner(ufl.grad(uh1), ufl.grad(v1)) * ufl.dx # grad(u),grad(v)
a += -ufl.inner(ufl.dot(ufl.grad(uh1), n), v1) * ufl.ds # Consistency term
a += -ufl.inner(ufl.dot(ufl.grad(v1), n), uh1) * ufl.ds # Symmetry term
a += alpha/h*ufl.inner(uh1, v1) * ufl.ds # Penalty term
# Block 2
a += ufl.inner(ufl.grad(uh2), ufl.grad(v2)) * ufl.dx # grad(u),grad(v)
a += -ufl.inner(ufl.dot(ufl.grad(uh2), n), v2) * ufl.ds # Consistency term
a += -ufl.inner(ufl.dot(ufl.grad(v2), n), uh2) * ufl.ds # Symmetry term
a += alpha/h*ufl.inner(uh2, v2) * ufl.ds # Penalty term
# Linear form
# Block 0
f0 = uh1 + uh2
L = ufl.inner(f, v0) * ufl.dx # Forcing term
L += ufl.inner(f0*f0, v0) * ufl.dx # Coupling term
L += -ufl.inner(ufl.dot(ufl.grad(v0), n), uh0_D) * ufl.ds # Symmetry term
L += alpha/h*ufl.inner(uh0_D, v0) * ufl.ds # Penalty term
# Block 1
f1 = uh0 + uh2
L += ufl.inner(f1*f1, v1) * ufl.dx # Coupling term
L += -ufl.inner(ufl.dot(ufl.grad(v1), n), uh1_D) * ufl.ds # Symmetry term
L += alpha/h*ufl.inner(uh1_D, v1) * ufl.ds # Penalty term
# Block 2
f2 = uh0 + uh1
L += ufl.inner(f2*f2, v2) * ufl.dx # Coupling term
L += -ufl.inner(ufl.dot(ufl.grad(v2), n), uh2_D) * ufl.ds # Symmetry term
L += alpha/h*ufl.inner(uh2_D, v2) * ufl.ds # Penalty term
# Residual
F = a - L
# --- Solver setup ---
# Nonlinear solver setup
problem = NonlinearProblem(F, uh, bcs=[])
solver = NewtonSolver(MPI.COMM_WORLD, problem)
solver.convergence_criterion = "residual"
solver.rtol = 1e-6
solver.atol = 1e-8
solver.max_it = 100
solver.report = True
# Linear solver setup
ksp = solver.krylov_solver
opts = PETSc.Options()
option_prefix = ksp.getOptionsPrefix()
opts[f"{option_prefix}ksp_type"] = "gmres"
opts[f"{option_prefix}ksp_rtol"] = 1.0e-8 # relative tolerance
opts[f"{option_prefix}ksp_atol"] = 1.0e-6 # absolute tolerance
opts[f"{option_prefix}pc_type"] = "gamg"
ksp.setFromOptions()
# --- Solve problem ---
if rank == 0:
    print("solving", flush=True)
import time
start_time = time.perf_counter()
n_iter, converged = solver.solve(uh)
end_time = time.perf_counter()
assert converged
if rank == 0:
    print(f"Number of iterations: {n_iter:d}")
    print(f"Took {end_time - start_time:.6f} seconds")
Note that I have used Nitsche's method to impose the Dirichlet boundary conditions weakly.
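For reference, the symmetric Nitsche formulation used in each block reads (this is just the math behind the four terms per block in the code above):

$$a_i(u_i, v_i) = \int_{\Omega} \nabla u_i \cdot \nabla v_i \,\mathrm{d}x - \int_{\Gamma_D} (\nabla u_i \cdot n)\, v_i \,\mathrm{d}s - \int_{\Gamma_D} (\nabla v_i \cdot n)\, u_i \,\mathrm{d}s + \int_{\Gamma_D} \frac{\alpha}{h}\, u_i v_i \,\mathrm{d}s,$$

with $h$ twice the cell circumradius and $\alpha = 10$ (the boundary-data terms on the right-hand side vanish here since the Dirichlet data is homogeneous).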
As you can check, the solve time scales poorly as I increase the number of processes and the number of DOFs, which is expected given that I'm solving this as one monolithic problem.
Because of the block structure of multiphysics problems, I'm thinking of building a block preconditioner using PETSc's fieldsplit. My guess is that something along the lines of this Stokes Example could work well.
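Concretely, I imagine the setup would look roughly like the sketch below, adapted from that demo. This is only a guess: the split names, the additive fieldsplit with GAMG on each diagonal block, and the index sets is_u0/is_u1/is_u2 (which I'd expect to get from A.getNestISs() once a nested Jacobian A is available, see below) are all my assumptions.

# Sketch: block-diagonal (additive) fieldsplit, one AMG solve per field.
# is_u0, is_u1, is_u2 are hypothetical index sets for the three fields,
# e.g. the first entry of A.getNestISs() for a MATNEST Jacobian A.
pc = ksp.getPC()
pc.setType("fieldsplit")
pc.setFieldSplitIS(("u0", is_u0), ("u1", is_u1), ("u2", is_u2))
opts[f"{option_prefix}pc_fieldsplit_type"] = "additive"
for name in ("u0", "u1", "u2"):
    opts[f"{option_prefix}fieldsplit_{name}_ksp_type"] = "preonly"
    opts[f"{option_prefix}fieldsplit_{name}_pc_type"] = "gamg"
ksp.setFromOptions()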
However, my problem is nonlinear, so I'd have to build a nested Jacobian matrix and feed it to the KSP solver, along with a preconditioner.
Is there a way to obtain a nested Jacobian from the residual F in the minimal code above using ufl.derivative, or is it necessary to first write the residual in block form?
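For reference, here is a sketch of the block layout I have in mind, with each unknown on its own function space so that block (i, j) of the Jacobian is the ufl.derivative of residual block i with respect to unknown j. It reuses domain, n, h, alpha and f from the code above; whether assemble_matrix_nest is the right entry point for this, and how to then drive the Newton iteration (NewtonSolver/NonlinearProblem seem to want a single monolithic form), is exactly what I'm unsure about.

import ufl
from dolfinx import fem
from dolfinx.fem.petsc import assemble_matrix_nest

# Sketch: three separate (identical) scalar spaces instead of one mixed space.
V = fem.functionspace(domain, ("Lagrange", 1))
u = [fem.Function(V) for _ in range(3)]
v = ufl.TestFunction(V)  # the blocks share V, so one test function suffices

def F_block(i):
    # Residual of equation i: Poisson + Nitsche terms + coupling.
    # The boundary-data terms are omitted since the Dirichlet data is zero.
    fi = u[(i + 1) % 3] + u[(i + 2) % 3]
    Fi = ufl.inner(ufl.grad(u[i]), ufl.grad(v)) * ufl.dx
    Fi += -ufl.inner(ufl.dot(ufl.grad(u[i]), n), v) * ufl.ds
    Fi += -ufl.inner(ufl.dot(ufl.grad(v), n), u[i]) * ufl.ds
    Fi += alpha / h * ufl.inner(u[i], v) * ufl.ds
    Fi -= ufl.inner(fi * fi, v) * ufl.dx
    if i == 0:
        Fi -= ufl.inner(f, v) * ufl.dx  # constant forcing, first equation only
    return Fi

F_blocks = [F_block(i) for i in range(3)]

# Jacobian block (i, j): derivative of residual block i w.r.t. unknown j.
J_blocks = [[ufl.derivative(F_blocks[i], u[j]) for j in range(3)]
            for i in range(3)]

# Assemble as a PETSc MATNEST; A.getNestISs() would then provide the
# index sets for the fieldsplit preconditioner sketched above.
A = assemble_matrix_nest(fem.form(J_blocks), bcs=[])
A.assemble()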
As a final note, I'm using the following DOLFINx version:
DOLFINx version: 0.9.0, based on git commit 52b6ca576a5807f3a830b2de8cad6c00a6f729fc of https://github.com/FEniCS/dolfinx
Thanks in advance.