How to assemble bilinear and linear forms in a MixedFunctionSpace?

Hello FEniCS Community,

I am working with mixed function spaces and trying to figure out how to assemble the matrix and vector for my bilinear and linear forms. The example below should make my question clearer.

BDM = FunctionSpace(mesh,"BDM", 1)
DG  = FunctionSpace(mesh,"DG", 0)

W = MixedFunctionSpace(BDM,DG)

u,p = TrialFunctions(W)
v,q = TestFunctions(W)

#Bilinear and linear forms
a = kappa*inner(u,v)*dx - p*div(v)*dx - q*div(u)*dx
L = -P_i*inner(n,v)*ds(3) - P_o*inner(n,v)*ds(2) # 2,3 are boundaries

A = assemble(a).array()

If I try to assemble the bilinear and linear forms this way, I get an ArityMismatch error, because the matrices obtained from the first, second, and third terms of the bilinear form have different sizes. An alternative would be to break the bilinear form apart and assemble a matrix for each term separately (sketched below), which gives a (dofs_BDM x dofs_BDM) matrix for the first term in a, a (dofs_BDM x dofs_DG) matrix for the second, and a (dofs_DG x dofs_BDM) matrix for the third. But in the end I want to obtain the global stiffness matrix of this bilinear form using PETSc. How can I assemble such a bilinear form using MixedFunctionSpace, or how can I combine those three different matrices (so that the individual stiffness values end up at the right dof locations) into the total stiffness matrix?
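
For clarity, here is a minimal sketch of that block-by-block assembly (assuming from dolfin import *, and with mesh, kappa and the spaces defined as above):

from dolfin import *

# Block-by-block assembly sketch: separate trial/test functions per space
# (mesh and kappa are assumed to be defined as in my problem setup).
u_bdm, v_bdm = TrialFunction(BDM), TestFunction(BDM)
p_dg, q_dg = TrialFunction(DG), TestFunction(DG)

A00 = assemble(kappa*inner(u_bdm, v_bdm)*dx)  # (dofs_BDM x dofs_BDM)
A01 = assemble(-p_dg*div(v_bdm)*dx)           # (dofs_BDM x dofs_DG)
A10 = assemble(-q_dg*div(u_bdm)*dx)           # (dofs_DG  x dofs_BDM)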

On the other hand, if I use:

BDM = FiniteElement("BDM", mesh.ufl_cell(), 1)
DG  = FiniteElement("DG", mesh.ufl_cell(), 0)
mixed = MixedElement(BDM,DG)

W = FunctionSpace(mesh, mixed)

u,p = TrialFunctions(W)
v,q = TestFunctions(W)  

#Bilinear and linear forms
a = kappa*inner(u,v)*dx - p*div(v)*dx - q*div(u)*dx
L = -P_i*inner(n,v)*ds(3) - P_o*inner(n,v)*ds(2) # 2,3 are boundaries

A = assemble(a).array()

then assembling a is not a problem, because the matrix for every term in a has the same size, i.e., (dofs_BDM+dofs_DG) x (dofs_BDM+dofs_DG). However, I cannot use this approach for my problem, because I have to add an additional real space (FunctionSpace(mesh, 'R', 0)) to constrain dofs with a Lagrange multiplier, and I have to use MixedFunctionSpace for that (see the sketch below).
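
For context, what I actually need looks roughly like this (the three-space MixedFunctionSpace call and the unpacking are assumptions on my side, with BDM and DG being the FunctionSpaces from the first snippet):

R = FunctionSpace(mesh, "R", 0)     # real space for the Lagrange multiplier
W = MixedFunctionSpace(BDM, DG, R)  # assumed: variadic signature with three spaces

u, p, lam = TrialFunctions(W)
v, q, mu = TestFunctions(W)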

Any help will be highly appreciated.

Thanks and cheers!

MixedFunctionSpace has its own special assembler, created by @cdaversin.
See for instance: Bitbucket
or see: Bitbucket for how to directly interface with the solve command.

The paper describing this implementation and several use-cases is published at: https://dl.acm.org/doi/abs/10.1145/3471138
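
Roughly, the solve interface looks like this (only a sketch; the exact call and how the sub-solutions are extracted are assumptions, so check the linked demo):

# Sketch only: solving the mixed problem with the high-level solve()
# interface (extraction via w.sub(i) is an assumption).
w = Function(W)
solve(a == L, w)
u_sol, p_sol = w.sub(0), w.sub(1)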

Hi,

Thanks for having a look at my query, @dokken. I used assemble_mixed_system(a == L, Function(W)) for the example I posted above, in the following way:

AA,LL,MM = assemble_mixed_system(a == L, Function(W))

However, doing this gives me 4 matrix blocks in AA: 3 blocks corresponding to the 3 terms of my bilinear form, and a 4th that is just a square matrix of zeros. I could also get these matrices by assembling the terms of the bilinear form separately. What I want is the assembly of the whole bilinear form, i.e., the global stiffness matrix with the right stiffness values at the corresponding dofs.

What would you recommend for that?

Thanks!

What about: Bitbucket
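
The idea there, roughly sketched (the blocks must be PETScMatrix objects, e.g. assembled via SystemAssembler or converted with as_backend_type; the function and vector names are placeholders):

# Combine the individually assembled blocks into one nested operator;
# the empty pressure-pressure block is passed as None.
K_global = PETScNestMatrix([A00, A01, A10, None])

# Build matching block vectors for the solution and right-hand side
# (u_fun, p_fun, b0, b1 are placeholder names).
x, b = Vector(), Vector()
K_global.init_vectors(x, [u_fun.vector(), p_fun.vector()])
K_global.init_vectors(b, [b0, b1])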


Hi @dokken ,

It worked, thanks! :slight_smile:

Just one small thing: can we also use direct solvers instead of iterative solvers for such a system setup? In the last link you directed me to, the solver used is an iterative one (solver = PETScKrylovSolver(...)). I tried solver = PETScLUSolver(...) instead, but it didn’t work.

What happened when you tried to use PETScLUSolver? Without an error message and a minimal example I cannot help you.

I figured it out! :slight_smile:

It was just that I had to call K_global.convert_to_aij() on my global matrix K_global when using PETScLUSolver.
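
For anyone finding this later, the pattern that worked for me looks roughly like this (the matrix and vector names are from my code, and MUMPS availability is an assumption):

# The nested (MATNEST) matrix has to be converted to AIJ before LU factorization.
K_global.convert_to_aij()
lu_solver = PETScLUSolver("mumps")  # "mumps" assumed to be available
lu_solver.solve(K_global, x, b)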

Thanks for your time and help!

Hi @dokken,

How would you recommend exporting the output of this file in XDMF format when running it in parallel? It works fine for a serial computation; however, when running the same file in parallel using MPI, I do not get the same result as in serial, i.e., the contour plots do not match.

Do we need to add any commands when saving the output as XDMF for a parallel computation? I tried to find something relevant to this issue, but couldn’t find anything.

Really appreciate your help!

XDMF is a format with parallel support, so no modifications should be necessary. You would have to create a minimal example that reproduces the behavior you are seeing, as it might be that some of your conversion algorithms do not do what you expect in parallel.
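
For reference, a minimal sketch of the XDMF output with an explicit MPI communicator (functionally the same as XDMFFile("solution_usol.xdmf") used in the code below):

# Write the velocity field; XDMF handles the parallel layout itself.
xdmf_u = XDMFFile(mesh.mpi_comm(), "solution_usol.xdmf")
xdmf_u.write(u)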

Hi @dokken ,

It is basically the same code you directed me to for the assembly of the MixedFunctionSpace. Here is the link. The MWE is as follows:

# Copyright (C) 2016 Garth N. Wells and Chris N. Richardson
#
# This file is part of DOLFIN.
#
# DOLFIN is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# DOLFIN is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with DOLFIN. If not, see <http://www.gnu.org/licenses/>.

from dolfin import *

if not has_linear_algebra_backend("PETSc"):
    print("DOLFIN has not been configured with PETSc. Exiting.")
    exit()

PETScOptions.set("ksp_view");
PETScOptions.set("ksp_monitor_true_residual");
PETScOptions.set("pc_type", "fieldsplit");
PETScOptions.set("pc_fieldsplit_type", "additive");

PETScOptions.set("fieldsplit_0_ksp_type", "preonly");
PETScOptions.set("fieldsplit_0_pc_type", "gamg");
PETScOptions.set("fieldsplit_0_pc_gamg_coarse_eq_limit", 10000);
PETScOptions.set("fieldsplit_0_pc_gamg_threshold", 0.05);
PETScOptions.set("fieldsplit_0_mg_levels_ksp_type", "chebyshev");
PETScOptions.set("fieldsplit_0_mg_levels_pc_type", "sor");
PETScOptions.set("fieldsplit_0_mg_levels_ksp_max_it", 3);
PETScOptions.set("fieldsplit_0_mg_levels_esteig_ksp_type", "cg");
PETScOptions.set("fieldsplit_0_mg_levels_ksp_chebyshev_esteig_steps", 20);
PETScOptions.set("fieldsplit_0_mg_levels_ksp_chebyshev_esteig_random");
PETScOptions.set("fieldsplit_0_mg_coarse_ksp_type", "preonly");
PETScOptions.set("fieldsplit_0_mg_coarse_pc_type", "lu");
PETScOptions.set("fieldsplit_0_mg_coarse_pc_factor_mat_solver_package", "mumps");
PETScOptions.set("fieldsplit_1_pc_factor_mat_solver_package", "mumps");
PETScOptions.set("fieldsplit_1_ksp_type", "preonly");
PETScOptions.set("fieldsplit_1_pc_type", "lu");

# Sub domain for top and bottom
def TopBottom(x, on_boundary):
    return abs(1.0 - x[1]) < DOLFIN_EPS or abs(x[1]) < DOLFIN_EPS

mesh = UnitCubeMesh(12, 12, 12);

# Create function spaces
P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
V = FunctionSpace(mesh, P2)
Q = FunctionSpace(mesh, P1)

u = TrialFunction(V)
p = TrialFunction(Q)
v = TestFunction(V)
q = TestFunction(Q)

# Define forms for each block
a00 = inner(grad(u), grad(v))*dx
a01 = -div(v)*p*dx
a10 = -div(u)*q*dx
# Preconditioner uses pressure mass matrix
p11 = p*q*dx

f = Constant((0.0, 1.0, 0.0))
g = Constant(0.0)
L0 = dot(f, v)*dx
L1 = dot(g, q)*dx

# Velocity BC
flow_velocity = Constant((0.0, 0.0, 0.0));
bc0 = DirichletBC(V, flow_velocity, TopBottom)

# Assemble all blocks with BCs
bcsV = [bc0]
bcsQ = []

assemblerA = SystemAssembler([a00, a01, a10, None], [L0, L1], [bcsV, bcsQ])

A00 = PETScMatrix()
A01 = PETScMatrix()
A10 = PETScMatrix()

b0 = PETScVector()
b1 = PETScVector()

assemblerA.assemble([A00, A01, A10, None], [b0, b1])

P11 = PETScMatrix()
assemblerP = SystemAssembler(p11, L1, bcsQ)
assemblerP.assemble(P11)

print("A00 = ", A00.size(0), "x" , A00.size(1), A00.norm("frobenius"))
print("A01 = ", A01.size(0), "x" , A01.size(1), A01.norm("frobenius"))
print("A10 = ", A10.size(0), "x" , A10.size(1), A10.norm("frobenius"))
print("P11 = ", P11.size(0), "x" , P11.size(1), P11.norm("frobenius"))

# Combine matrices
AA = PETScNestMatrix([A00, A01, A10, None])
PP = PETScNestMatrix([A00, None, None, P11])

# Create solution vectors
u = Function(V)
u.rename('u','velocity')
p = Function(Q)
p.rename('p','pressure')
# Map RHS and solution into block space vectors
x = Vector()
b = Vector()
AA.init_vectors(x, [u.vector(), p.vector()])
AA.init_vectors(b, [b0, b1])

solver = PETScKrylovSolver("minres")
solver.set_from_options()
solver.set_operators(AA, PP)

fields = [PETScNestMatrix.get_block_dofs(AA, i) for i in [0,1]]
PETScPreconditioner.set_fieldsplit(solver, fields, ["0", "1"])

solver.solve(x, b)

print("u.norm = ", u.vector().norm("l2"))

if has_hdf5_parallel():
    xdmf_usol = XDMFFile("solution_usol.xdmf")
    xdmf_usol.write(u)
    xdmf_psol = XDMFFile("solution_psol.xdmf")
    xdmf_psol.write(p)

Please run it once for serial computation and once for parallel. Do you get the same results?

Hi @dokken,

Just a reminder in case you missed it.

Did you see any difference between the results from serial and parallel computations?

You should add:

u.vector().apply("insert")

after

solver.solve(x, b)
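
In the demo above, that placement looks roughly like this (repeating the call for the pressure vector is an assumption on my part):

solver.solve(x, b)

# Finalize/update the parallel ghost values before writing the XDMF output
# (doing the same for the pressure vector is an assumption).
u.vector().apply("insert")
p.vector().apply("insert")

print("u.norm = ", u.vector().norm("l2"))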

Yes, it works.

Also, u.vector().update_ghost_values() seems to do the same job.

Many thanks for your help! :slight_smile: