Issue with PETSc SNES and LU Solver

Hi everyone,

I have a problem when using PETSc’s SNES directly for solving nonlinear problems with a direct (LU) solver for the linear systems, at least with newer PETSc versions. I have created the following MWE, which shows the problem:

from fenics import *
from petsc4py import PETSc

mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, "CG", 1)

u = Function(V)
v = TestFunction(V)

F = dot(grad(u), grad(v)) * dx - Constant(1.0) * v * dx


def boundary(x, on_boundary):
    return on_boundary


bcs = DirichletBC(V, Constant(0.0), boundary)

du = TrialFunction(V)
dF = derivative(F, u, du)


def ksp_solve():
    A = PETScMatrix()
    b = PETScVector()
    assemble(dF, tensor=A)
    assemble(-F, tensor=b)
    bcs.apply(A)
    bcs.apply(b)

    A = as_backend_type(A).mat()
    b = as_backend_type(b).vec()

    opts = PETSc.Options()
    opts["ksp_type"] = "cg"
    opts["pc_type"] = "lu"
    opts["ksp_monitor_true_residual"] = None

    ksp = PETSc.KSP().create()
    ksp.setOperators(A)
    ksp.setFromOptions()

    x = u.vector().vec()
    ksp.solve(b, x)
    ksp.destroy()


def snes_solve():
    def assemble_function(snes, x, f):
        u.vector().vec().setArray(x)
        u.vector().apply("")

        b = PETScVector(f)

        assemble(F, tensor=b)
        bcs.apply(b)

    def assemble_jacobian(snes, x, J, P):
        u.vector().vec().setArray(x)
        u.vector().apply("")

        A = PETScMatrix(J)
        assemble(dF, tensor=A)
        bcs.apply(A)

        AP = PETScMatrix(P)
        assemble(dF, tensor=AP)
        bcs.apply(AP)

    f = PETScVector()
    J = PETScMatrix()
    P = PETScMatrix()

    opts = PETSc.Options()
    opts["snes_type"] = "newtonls"
    opts["snes_monitor"] = None
    opts["ksp_type"] = "cg"
    opts["pc_type"] = "none"  # when changing this to "lu", things break
    opts["ksp_monitor_true_residual"] = None

    snes = PETSc.SNES().create()
    snes.setFunction(assemble_function, f.vec())
    snes.setJacobian(assemble_jacobian, J.mat(), P.mat())

    snes.setFromOptions()
    x = u.vector().vec()
    snes.solve(None, x)
    snes.destroy()


# ksp_solve()  # works fine
snes_solve()  # does not work properly for "lu"

As you can see, the model problem to be solved is linear. The issue does not occur when I use the KSP directly, as the example shows. However, when I use the snes_solve function and set

opts["pc_type"] = "lu" 

I am getting the error message

Segmentation fault (core dumped)

with PETSc 3.21 and 3.20; with my old PETSc version 3.17.4, however, the code works fine. I have also tested several LU factorization packages (petsc, umfpack, mumps), but the error occurs regardless of which one I use.
Also, I have added a “dummy” computation for the preconditioner matrix in assemble_jacobian, but as far as I can tell this does not change anything.
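
For reference, this is roughly how I switch between the factorization packages, using the standard PETSc option (sketch only; the option is called pc_factor_mat_solver_type in current PETSc, while older versions used pc_factor_mat_solver_package):

opts = PETSc.Options()
opts["pc_type"] = "lu"
# select the factorization backend to test, e.g. "petsc", "umfpack" or "mumps"
opts["pc_factor_mat_solver_type"] = "mumps"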

Does anyone have an idea / solution for this strange problem?

Thanks a lot in advance,
Sebastian

Okay, a quick update: I have found that the issue comes from the matrices not being initialized the way PETSc expects. If I use the following lines, the code works:

f = PETScVector()
J = PETScMatrix()
assemble(dF, tensor=J)
P = PETScMatrix()
assemble(dF, tensor=P)

However, this means that the Jacobian has to be assembled before the SNES solver is even called, which is not ideal.
Does anyone know another way of correctly initializing the matrices for PETSc (ideally without assembling the Jacobian first)?
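
For clarity, this is how the workaround fits into the snes_solve setup from the MWE above; only the initialization of J and P changes, the callbacks stay the same:

f = PETScVector()
J = PETScMatrix()
P = PETScMatrix()
# workaround: assemble the Jacobian once, so that the underlying PETSc Mats
# already have a type and sparsity pattern when they are handed to SNES
assemble(dF, tensor=J)
assemble(dF, tensor=P)

snes = PETSc.SNES().create()
snes.setFunction(assemble_function, f.vec())
snes.setJacobian(assemble_jacobian, J.mat(), P.mat())
snes.setFromOptions()
snes.solve(None, u.vector().vec())
snes.destroy()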

I did a bit of digging and found the following line in the dolfin sources: Bitbucket

Here, it seems that the FormJacobian function is called before SNESSetFromOptions, and the comments explain that this call needs the matrix to be defined (at least the matrix type).

So am I seeing this correctly that, when using the SNES via NonlinearVariationalProblem, the Jacobian matrix is assembled one additional time during initialization, for the sole purpose of not raising errors during SNESSetFromOptions?
That is, more or less, the same solution I came up with above. Is there a better way of doing this, so that the Jacobian is not assembled n+1 times for n iterations of the nonlinear solver?
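
One idea in that direction, which I have not tested, would be to give the PETSc Mats only a type and their sizes up front via petsc4py, without assembling any values, since the comment only seems to require the matrix type to be defined. A rough sketch (serial case; I do not know whether SNESSetFromOptions and the LU setup are happy with this, nor whether dolfin's assembler then accepts the empty layout inside the Jacobian callback):

n = V.dim()  # global number of dofs
Jmat = PETSc.Mat().create()
Jmat.setSizes([n, n])  # global sizes only, local sizes left to PETSc
Jmat.setType("aij")
Jmat.setUp()
J = PETScMatrix(Jmat)  # wrap the petsc4py Mat so dolfin's assemble() can fill it later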

PETSc has made drastic changes to its garbage collection/destruction of objects, meaning that using newer PETSc with legacy dolfin is not advised.

I can't comment on the further details you have provided regarding PETSc SNES and matrix initialization, as I do not use legacy dolfin and SNES for anything.

Thanks for the answer, @dokken.

I want to comment on your claim that newer PETSc versions and legacy dolfin are not compatible. In my experience, they still work great, except for the issue mentioned in Issue with PETSc 3.20 in fenics.PETScDMCollection.create_transfer_matrix · Issue #192 · conda-forge/fenics-feedstock · GitHub

I can only say that I am using petsc4py directly for my solvers rather than the fenics ones, but all of my tests for cashocs, except those related to the create_transfer_matrix issue, work perfectly fine.

Yes, most functionality works, but there is a memory issue: petsc4py objects need explicit garbage collection.
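
Concretely, that means something along these lines (a minimal sketch, assuming a recent petsc4py that provides PETSc.garbage_cleanup):

from petsc4py import PETSc

ksp = PETSc.KSP().create()
# ... set up and use the solver ...
ksp.destroy()            # destroy wrapped PETSc objects explicitly
PETSc.garbage_cleanup()  # flush PETSc's internal garbage list (recent petsc4py only)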

There are also no ongoing efforts to keep legacy dolfin up to date with newer versions of PETSc, so more things might break in future releases.