Problem with fieldsplit in FEniCS 2019

Dear all,

I have a problem using fieldsplit preconditioning for a Navier-Stokes problem. My code works fine with previous FEniCS versions (2017, 2018). With FEniCS 2019, the code no longer works in parallel once I use fieldsplit for preconditioning. I wonder whether anything changed in FEniCS 2019 regarding fieldsplit.

I include part of the code here:

// User-defined nonlinear problem
class NSequation : public NonlinearProblem
{
public:

// Constructor
NSequation(std::shared_ptr<const Form> F,
		std::shared_ptr<const Form> J,
		std::vector<DirichletBC*> bcs) : _F(F), _J(J), _bcs(bcs) {}

// User-defined residual assembly
void F(GenericVector& b, const GenericVector& x)
{
	assemble(b, *_F);
	for (std::size_t i = 0; i < _bcs.size(); i++)
		_bcs[i]->apply(b, x);
}

// User-defined assembly of the Jacobian
void J(GenericMatrix& A, const GenericVector& x)
{
	assemble(A, *_J);
	for (std::size_t i = 0; i < _bcs.size(); i++)
		_bcs[i]->apply(A);
}

// Replace the stored boundary conditions
void bcschange(std::vector<DirichletBC*> bcnew)
{
	for (std::size_t i = 0; i < _bcs.size(); i++)
		_bcs[i] = bcnew[i];
}

private:

// Forms
std::shared_ptr<const Form> _F;
std::shared_ptr<const Form> _J;
std::vector<DirichletBC*> _bcs;

};

int main(int argc, char* argv[])
{
init(argc, argv);

// Backend
//parameters["linear_algebra_backend"] = "PETSc";

auto mesh = std::make_shared<Mesh>("sphere.xml");

auto W = std::make_shared<ns::FunctionSpace>(mesh);
auto F = std::make_shared<ns::LinearForm>(W);
auto J = std::make_shared<ns::JacobianForm>(W, W);

auto Wn   = std::make_shared<Function>(W);

IS is[2];
auto u_dofs = W->sub(0)->dofmap()->dofs();
auto p_dofs = W->sub(1)->dofmap()->dofs();
dolfin::cout << "Number of u and p dofs: " << u_dofs.size() << ", "
	<< p_dofs.size() << dolfin::endl;
ISCreateGeneral(PETSC_COMM_WORLD, u_dofs.size(), u_dofs.data(),
	PETSC_COPY_VALUES, &is[0]);
ISCreateGeneral(PETSC_COMM_WORLD, p_dofs.size(), p_dofs.data(),
	PETSC_COPY_VALUES, &is[1]);

auto solver = std::make_shared<PETScKrylovSolver>("gmres");
solver->parameters["nonzero_initial_guess"]=true;
KSP ksp = solver->ksp();
PC pc;

KSPGetPC(ksp, &pc);
PCSetType(pc, PCFIELDSPLIT);

PCFieldSplitSetIS(pc, "u", is[0]);
PCFieldSplitSetIS(pc, "p", is[1]);
dolfin::PETScOptions::set("ksp_view");
dolfin::PETScOptions::set("ksp_monitor_true_residual");
dolfin::PETScOptions::set("ksp_pc_side", "right");
dolfin::PETScOptions::set("pc_fieldsplit_type", "schur");
dolfin::PETScOptions::set("pc_fieldsplit_schur_fact_type", "upper");
dolfin::PETScOptions::set("pc_fieldsplit_schur_preconditioning", "selfp");
dolfin::PETScOptions::set("fieldsplit_u_ksp_type", "preonly");
dolfin::PETScOptions::set("fieldsplit_u_pc_type", "jacobi");
dolfin::PETScOptions::set("fieldsplit_p_ksp_type", "preonly");
dolfin::PETScOptions::set("fieldsplit_p_pc_type", "hypre");
dolfin::PETScOptions::set("fieldsplit_p_pc_hypre_type", "boomeramg");
dolfin::PETScOptions::set("fieldsplit_p_pc_hypre_boomeramg_coarsen_type", "pmis");
    dolfin::PETScOptions::set("fieldsplit_p_pc_hypre_boomeramg_interp_type", "FF1");
dolfin::PETScOptions::set("fieldsplit_p_pc_hypre_boomeramg_strong_threshold", "0.5");
KSPSetFromOptions(ksp);

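// Note: 'bcs' (a std::vector<DirichletBC*>) is defined earlier in the
// full program; this listing is only an excerpt.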
NSequation nseq(F, J, bcs);
NewtonSolver newton_solver(MPI_COMM_WORLD, solver, PETScFactory::instance());
newton_solver.parameters["relative_tolerance"] = 1.0e-4;
newton_solver.parameters["absolute_tolerance"] = 1.0e-4;
newton_solver.parameters["maximum_iterations"] = 10;

newton_solver.solve(nseq, *Wn->vector());

Any suggestion would be helpful. Thanks.

Well, what’s the error? Also, an MWE would be useful for others to try to understand/reproduce your problem.

Dear dajuno,

My problem is that the code runs fine in serial but fails in parallel. This only happens on Ubuntu 18.04; the program works fine on Ubuntu 16.04. I already tested different gcc and Python versions, and that does not seem to be the cause. I guess there is some incompatibility in Ubuntu 18.04. For now, I will stick with 16.04. Thanks.

Yours sincerely,
Qiming

Are you sure your NewtonSolver is using your PETSc KSP? It looks like you create and configure the KSP and then do nothing with it.

For example, see here, where I design a CustomNewtonSolver inheriting from NewtonSolver and configure its underlying linear solver.

Edit: Just saw that you’re passing solver in the constructor. And you have ksp_view turned on, so you can see that it’s set up correctly. I’m not sure what’s going on without an MWE.
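In outline, the custom-solver approach could look like the sketch below. This is my own minimal sketch, not tested against your problem; it assumes the virtual NewtonSolver::solver_setup hook that recent DOLFIN versions provide, and it elides the actual fieldsplit configuration:

#include <dolfin.h>

using namespace dolfin;

class CustomNewtonSolver : public NewtonSolver
{
public:
  CustomNewtonSolver(MPI_Comm comm, std::shared_ptr<PETScKrylovSolver> solver)
    : NewtonSolver(comm, solver, PETScFactory::instance()), _solver(solver) {}

protected:
  // Called by NewtonSolver on each iteration, after the Jacobian is assembled
  void solver_setup(std::shared_ptr<const GenericMatrix> A,
                    std::shared_ptr<const GenericMatrix> P,
                    const NonlinearProblem& nonlinear_problem,
                    std::size_t iteration) override
  {
    // Let the base class attach the operators to the linear solver first
    NewtonSolver::solver_setup(A, P, nonlinear_problem, iteration);

    // Then (re)configure the underlying KSP/PC here, e.g.
    //   PC pc;
    //   KSPGetPC(_solver->ksp(), &pc);
    //   PCSetType(pc, PCFIELDSPLIT);
    //   PCFieldSplitSetIS(pc, "u", ...); PCFieldSplitSetIS(pc, "p", ...);
  }

private:
  std::shared_ptr<PETScKrylovSolver> _solver;
};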

Dear nate,

Thank you for your reply. I include an MWE below. It is still a little long. Let me know whether you can run it on Ubuntu 18.04 with multiple processes. Currently the convergence is bad, but this is just for testing.

First, the UFL code:
P2 = VectorElement("Lagrange", hexahedron, 2)
P1 = FiniteElement("Lagrange", hexahedron, 1)
TH = P2 * P1

(u, p) = TrialFunctions(TH)
(v, q) = TestFunctions(TH)

f = Coefficient(P2)

a = (inner(grad(u), grad(v)) + div(v)*p + div(u)*q)*dx
L = dot(f, v)*dx

The main code is as follows:

#include <dolfin.h>
#include "Stokes.h"

using namespace dolfin;

int main()
{
// Sub domain for left-hand side
class Left : public SubDomain
{
  bool inside(const Array<double>& x, bool on_boundary) const
  {
    return x[0] < 1.0e-5;
  }
};

// Sub domain for right-hand side
class Right : public SubDomain
{
  bool inside(const Array<double>& x, bool on_boundary) const
  {
    return std::abs(1.0 - x[0]) < 1.0e-5;
  }
};

// Sub domain for top and bottom
class TopBottom : public SubDomain
{
  bool inside(const Array<double>& x, bool on_boundary) const
  {
    return std::abs(1.0 - x[1]) < 1.0e-5 || std::abs(x[1]) < 1.0e-5 ||
           std::abs(1.0 - x[2]) < 1.0e-5 || std::abs(x[2]) < 1.0e-5;
  }
};

// Function for inflow boundary condition for velocity
class Inflow : public Expression
{
public:

  Inflow() : Expression(3) {}

  void eval(Array<double>& values, const Array<double>& x) const
  {
    values[0] = -sin(x[1]*DOLFIN_PI);
    values[1] = 0.0;
    values[2] = 0.0;
  }
};

// Create mesh
auto mesh = std::make_shared<Mesh>(
  UnitCubeMesh::create({{16, 16, 16}}, CellType::Type::hexahedron));

// Create function space and subspaces
auto W = std::make_shared<Stokes::FunctionSpace>(mesh);

// Set-up inflow boundary condition
auto inflow_profile = std::make_shared<Inflow>();
auto right = std::make_shared<Right>();
auto left = std::make_shared<Left>();
auto inflow = std::make_shared<DirichletBC>(W->sub(0), inflow_profile,
  right);

// Set-up no-slip boundary condition
auto zero_vector = std::make_shared<Constant>(0.0, 0.0, 0.0);
auto zero_scalar = std::make_shared<Constant>(0.0);
auto top_bottom = std::make_shared<TopBottom>();
auto noslip = std::make_shared<DirichletBC>(W->sub(0), zero_vector,
  top_bottom);
auto pointp = std::make_shared<DirichletBC>(W->sub(1), zero_scalar,
  left);

// Create forms for the Stokes problem
auto f = std::make_shared<Constant>(0.0, 0.0, 0.0);
Stokes::BilinearForm a(W, W);
Stokes::LinearForm L(W);
L.f = f;

// Create solution function
Function w(W);

// Assemble Stokes system (A, b)
auto A = std::make_shared<Matrix>();
Vector b;
assemble_system(*A, b, a, L, {inflow, noslip, pointp});

IS is[2];
auto u_dofs = W->sub(0)->dofmap()->dofs();
auto p_dofs = W->sub(1)->dofmap()->dofs();
dolfin::cout << "Number of u and p dofs: " << u_dofs.size() << ", "
<< p_dofs.size() << dolfin::endl;
ISCreateGeneral(PETSC_COMM_WORLD, u_dofs.size(), u_dofs.data(),
PETSC_COPY_VALUES, &is[0]);
ISCreateGeneral(PETSC_COMM_WORLD, p_dofs.size(), p_dofs.data(),
PETSC_COPY_VALUES, &is[1]);
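
// Optional sanity check (an added debugging sketch, not part of the demo):
// in parallel, PCFieldSplitSetIS expects each rank to pass its locally
// owned indices, so the two splits should add up exactly to the local
// ownership range of W; a mismatch would be a plausible cause of a
// single-rank segfault.
PetscInt nu = 0, np = 0;
ISGetLocalSize(is[0], &nu);
ISGetLocalSize(is[1], &np);
auto owned = W->dofmap()->ownership_range();
if (nu + np != static_cast<PetscInt>(owned.second - owned.first))
  dolfin::cout << "Fieldsplit index sets do not match the local ownership range!"
               << dolfin::endl;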

auto solver = std::make_shared<PETScKrylovSolver>("gmres");
KSP ksp = solver->ksp();
PC pc;

KSPGetPC(ksp, &pc);
PCSetType(pc, PCFIELDSPLIT);

PCFieldSplitSetIS(pc, "u", is[0]);
PCFieldSplitSetIS(pc, "p", is[1]);
dolfin::PETScOptions::set("ksp_view");
dolfin::PETScOptions::set("ksp_monitor_true_residual");
dolfin::PETScOptions::set("ksp_pc_side", "right");
dolfin::PETScOptions::set("pc_fieldsplit_type", "schur");
dolfin::PETScOptions::set("pc_fieldsplit_schur_fact_type", "upper");
dolfin::PETScOptions::set("pc_fieldsplit_schur_precondition", "selfp");
dolfin::PETScOptions::set("fieldsplit_u_ksp_type", "preonly");
dolfin::PETScOptions::set("fieldsplit_u_pc_type", "jacobi");
dolfin::PETScOptions::set("fieldsplit_p_ksp_type", "preonly");
dolfin::PETScOptions::set("fieldsplit_p_pc_type", "hypre");
KSPSetFromOptions(ksp);

// Set operator (A)
solver->set_operator(A);

solver->set_from_options();

// Solve system
solver->solve(*w.vector(), b);

return 0;
}

Let me know whether you can reproduce my problem. What I see is that one process reports a segmentation fault when I run the program on Ubuntu 18.04 with multiple processes. Thanks.

Yours sincerely,
Qiming

Dear dajuno,

I have included an MWE above. Let me know whether you can reproduce this problem. Thanks.

Yours sincerely,
Qiming