Solver running out of memory for 'Real' mixed-space formulation

Dear FEniCS users,

I am trying to solve a mixed formulation using BDM1 and DG0 elements. To enforce constraints on the boundaries, I want to use Lagrange multipliers (LM) defined via FunctionSpace(mesh, 'R', 0), i.e., one scalar LM per boundary of the domain. This works fine for a 2D example with relatively few degrees of freedom. However, my ultimate goal is to apply this approach to a complex 3D geometry with 18 different boundaries, one LM per boundary, so there would be 18 Lagrange multipliers in total.

In 3D, however, I cannot obtain a solution because the solver runs out of memory. As a test, I coarsened the 3D mesh so that it had fewer degrees of freedom; I could then apply constraints with 6 LMs on 6 boundaries, but not on the remaining ones. Of course, I cannot work at that mesh density for my actual problem. With the fine mesh, I could not get a solution even when constraining only a single boundary with one LM.

As far as I understand, the 'Real' space is a one-dimensional space carrying a single global scalar, so adding one as an extra degree of freedom in the formulation should not demand much memory; yet that does not seem to be the case. Is there a more efficient way to set up this mixed-space formulation?

Here is an MWE on the unit square:

from dolfin import *

mesh = UnitSquareMesh(16, 16)

# Subdomains marking the four boundaries of the unit square
class Top(SubDomain):
    def inside(self, x, on_boundary):
        return x[1] > 1.0 - DOLFIN_EPS
class Bottom(SubDomain):
    def inside(self, x, on_boundary):
        return x[1] < DOLFIN_EPS
class Left(SubDomain):
    def inside(self, x, on_boundary):
        return x[0] < DOLFIN_EPS
class Right(SubDomain):
    def inside(self, x, on_boundary):
        return x[0] > 1.0 - DOLFIN_EPS


# Facet markers: left = 0, right = 1, bottom = 2, top = 3
mf = MeshFunction("size_t", mesh, mesh.topology().dim() - 1)
mf.set_all(0)
Left().mark(mf, 0)
Right().mark(mf, 1)
Bottom().mark(mf, 2)
Top().mark(mf, 3)

ds = Measure("ds", domain=mesh, subdomain_data=mf)
n = FacetNormal(mesh)

BDM = FiniteElement("BDM", mesh.ufl_cell(), 1)
DG  = FiniteElement("DG", mesh.ufl_cell(), 0)

mixed = MixedElement([BDM, DG])
W_UP = FunctionSpace(mesh, mixed)

Lmp = FunctionSpace(mesh, 'R', 0)  # space of one scalar Lagrange multiplier

W = MixedFunctionSpace(W_UP, *[Lmp]*2)  # one 'R' space per constrained boundary

(up, *lmp_trial) = TrialFunctions(W)
(u, p) = split(up)
(vq, *lmp_test) = TestFunctions(W)
(v, q) = split(vq)

# Problem parameters
kappa = 0.4  # coefficient in the Darcy-type term
P_i = 5      # prescribed pressure on the top boundary
P_o = 1      # prescribed pressure on the bottom boundary

# Variational form: mixed Darcy-type problem plus multiplier terms
# enforcing zero net flux through boundaries 0 (left) and 1 (right)
a = kappa*inner(u, v)*dx - p*div(v)*dx - q*div(u)*dx
for i in range(2):
    a += lmp_test[i]*inner(u, n)*ds(i) + lmp_trial[i]*inner(v, n)*ds(i)

L = -P_i*inner(n, v)*ds(3) - P_o*inner(n, v)*ds(2)  # pressure data on top/bottom

# Solve the blocked system with a direct solver
usol = Function(W)
solve(a == L, usol, solver_parameters={'linear_solver': 'direct'})

This works fine, but as I mentioned, I want to extend this approach to a 3D geometry, where I am having solver issues. To be precise, I get the following error:

*** Error:   Unable to successfully call PETSc function 'KSPSolve'.
*** Reason:  PETSc error code is: 76 (Error in external library).
*** Where:   This error was encountered inside /build/dolfin-DneRIZ/dolfin-2019.2.0~git20201207.b495043/dolfin/la/PETScKrylovSolver.cpp.
*** Process: 0
***
*** DOLFIN version: 2019.2.0.dev0
*** Git changeset:  unknown

Does anyone have any idea how to tackle this issue? Any help will be highly appreciated!

Thanks and cheers!

Maybe have a look at this post: How to choose the optimal solver for a PDE problem? - #2 by nate

Maybe changing from a direct solver to an iterative one can help:

# GMRES with Hypre's Euclid (parallel ILU) preconditioner
solver = PETScKrylovSolver('gmres', 'hypre_euclid')
prm    = solver.parameters
prm['absolute_tolerance']  = 1E-10
prm['relative_tolerance']  = 1E-6
prm['maximum_iterations']  = 1000
prm['monitor_convergence'] = False
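
If you prefer the high-level solve() call from your MWE, the same choice can be passed through solver_parameters (a sketch; whether 'hypre_euclid' is available depends on your PETSc build, see list_krylov_solver_preconditioners()):

# Equivalent selection through the high-level interface (sketch; assumes the
# mixed-dimensional solve() forwards these parameters as in the plain case)
solve(a == L, usol,
      solver_parameters={'linear_solver': 'gmres',
                         'preconditioner': 'hypre_euclid'})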

Hi Emanuel.

Thanks for your suggestion; however, I have already tried different solvers and that does not solve the problem. I recently found this link, where it is discussed that the Real space makes the system slow, since each global multiplier adds a dense row and column coupling it to all degrees of freedom on its boundary. I think I need to figure out how to assemble the matrices from the different function spaces separately and merge them somehow, as discussed in that post.
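
In case it helps, here is a minimal sketch of that idea for the 2D MWE above (my own assumption of one possible approach, serial only): assemble the BDM/DG block and the constraint rows separately, then build the bordered saddle-point system in scipy instead of carrying the 'R' spaces in the form:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Reuses mesh, mf, ds, n, W_UP, kappa, P_i, P_o from the MWE above
(u, p) = TrialFunctions(W_UP)
(v, q) = TestFunctions(W_UP)

a_blk = kappa*inner(u, v)*dx - p*div(v)*dx - q*div(u)*dx
L_blk = -P_i*inner(n, v)*ds(3) - P_o*inner(n, v)*ds(2)

# Convert the assembled PETSc matrix to scipy CSR
A_petsc = as_backend_type(assemble(a_blk)).mat()
A = sp.csr_matrix(A_petsc.getValuesCSR()[::-1], shape=A_petsc.size)
b = assemble(L_blk).get_local()

# One constraint row per boundary: coefficients of u -> integral of u.n over ds(i)
rows = [assemble(inner(v, n)*ds(i)).get_local() for i in range(2)]
C = sp.csr_matrix(np.vstack(rows))

# Bordered saddle-point system [[A, C^T], [C, 0]]
K = sp.bmat([[A, C.T], [C, None]], format='csr')
rhs = np.concatenate([b, np.zeros(C.shape[0])])

x = spsolve(K, rhs)

up_sol = Function(W_UP)
up_sol.vector().set_local(x[:A.shape[0]])
up_sol.vector().apply('insert')

The multiplier values come out as x[A.shape[0]:]. The same bordered structure should carry over to the 3D case with 18 constraint rows, and K can be handed to any sparse direct or iterative solver.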