Krylov solver failing with MPI run / parallel

Hi! I implemented an iterative solver for my simulation as shown below. Running the iterative method on a single core works just fine, but it fails when I try to do an MPI run:

if SOLVER_CONFIG == "LU":
    # Direct (LU) solver configuration
    problem = CahnHilliardEquation(a, F)
    solver = NewtonSolver()
    solver.parameters["linear_solver"] = "lu"
    #solver.parameters["linear_solver"] = "gmres"
    #solver.parameters["preconditioner"] = "ilu"
    solver.parameters["convergence_criterion"] = "residual"
    solver.parameters["relative_tolerance"] = 1e-6

elif SOLVER_CONFIG == "KRYLOV":
    # Newton solver backed by a PETSc Krylov solver configured via PETScOptions
    class CustomSolver(NewtonSolver):

        def __init__(self):
            NewtonSolver.__init__(self, mesh.mpi_comm(),
                                  PETScKrylovSolver(), PETScFactory.instance())

        def solver_setup(self, A, P, problem, iteration):
            self.linear_solver().set_operator(A)

            PETScOptions.set("ksp_type", "gmres")
            PETScOptions.set("ksp_monitor")
            PETScOptions.set("pc_type", "ilu")
            PETScOptions.set("ksp_rtol", "1.0e-6")
            PETScOptions.set("ksp_atol", "1.0e-10")

            self.linear_solver().set_from_options()

    problem = CahnHilliardEquation(a, F)
    solver = CustomSolver()

This is how I execute the MPI run:

mpirun -np 12 python foo.py 2>outputerr.txt

The output I get:

*** Error:   Unable to successfully call PETSc function 'KSPSolve'.
*** Reason:  PETSc error code is: 92 (See http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for possible LU and Cholesky solvers).
*** Where:   This error was encountered inside /home/conda/feedstock_root/build_artifacts/fenics-pkgs_1566991881845/work/dolfin/dolfin/la/PETScKrylovSolver.cpp.
*** Process: 11
***
*** DOLFIN version: 2019.1.0
*** Git changeset:  a97cbd7b6bf8089d364d61584f529e6e36d85845

I have this problem with both the FEniCS build from conda and the Ocellaris Singularity container, if that is useful info.

Hi, are you sure the 'ilu' you request here is calling a parallel implementation from Hypre? PETSc's built-in ILU preconditioner only works in serial.

Hi!

I don't think so; I'm running it naively as shown in my implementation above. How would I ensure I'm calling the parallel implementation?

Thank you for your help!

Using the DOLFIN API to PETSc, something like

PETScKrylovSolver('gmres', 'hypre_euclid')

should do the trick. With the PETSc options interface, I'd guess you need to set 'pc_type' to 'hypre'
and 'pc_hypre_type' to 'euclid'. In any case, check list_krylov_solver_preconditioners first.
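
In case it is useful, here is a minimal sketch of both routes (it assumes your PETSc was built with Hypre; check the listing first to make sure 'hypre_euclid' shows up):

from dolfin import (PETScKrylovSolver, PETScOptions,
                    list_krylov_solver_preconditioners)

# Print the preconditioners available in this PETSc/DOLFIN build
list_krylov_solver_preconditioners()

# Route 1: request GMRES with Hypre's parallel ILU (Euclid) via the DOLFIN API
solver = PETScKrylovSolver("gmres", "hypre_euclid")

# Route 2: the equivalent configuration through PETSc options
solver = PETScKrylovSolver()
PETScOptions.set("ksp_type", "gmres")
PETScOptions.set("pc_type", "hypre")
PETScOptions.set("pc_hypre_type", "euclid")
solver.set_from_options()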


Hi @MiroK!! Thanks so much! It works perfectly! For posterity's sake, my final implementation is as follows:

class CustomSolver(NewtonSolver):

    def __init__(self):
        NewtonSolver.__init__(self, mesh.mpi_comm(),
                              PETScKrylovSolver(), PETScFactory.instance())

    def solver_setup(self, A, P, problem, iteration):
        self.linear_solver().set_operator(A)

        PETScOptions.set("ksp_type", "gmres")
        PETScOptions.set("ksp_monitor")
        PETScOptions.set("pc_type", "hypre")
        PETScOptions.set("pc_hypre_type", "euclid")
        PETScOptions.set("ksp_rtol", "1.0e-6")
        PETScOptions.set("ksp_atol", "1.0e-10")

        self.linear_solver().set_from_options()

problem = CahnHilliardEquation(a, F)
solver = CustomSolver()
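
If you want to double-check that Hypre's Euclid is really the preconditioner being used in parallel, one option (this is just a standard PETSc option, not something specific to the fix above) is to add

PETScOptions.set("ksp_view")  # print the KSP/PC configuration PETSc actually uses

inside solver_setup alongside the other options; the solver log should then show the Hypre/Euclid preconditioner rather than the serial ILU.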