Mixed finite elements for two fields, one defined on the whole mesh and the other on a submesh

I’m solving a pair of coupled equations:
-\nabla \cdot (\kappa \nabla u) = 0 \quad \text{on } \Omega_1 \cup \Omega_2
and \frac{\partial c}{\partial t} - \nabla \cdot (D \nabla c) = 0 \quad \text{on } \Omega_2

On the shared boundary \Gamma_{\mathrm{mid}} = \partial \Omega_1 \cap \partial \Omega_2 we require that -\kappa \nabla u \cdot n = -FD\nabla c \cdot n, where F, D and \kappa are positive constants.

My initial approach is to solve for u first and then use the resulting -\kappa\nabla u \cdot n as a boundary condition for the solve for c, which is defined on a submesh. However, this approach doesn’t seem to scale to my final problem, where \kappa\nabla u \cdot n|_{\Gamma_{\mathrm{mid}}} = \gamma \bigl([\![ u]\!]-f(c)\bigr)|_{\Gamma_{\mathrm{mid}}}, with \gamma a constant and f(c) a positive, monotonic function of c.

My questions are:

  • Is it possible to define a mixed finite element problem in FEniCSx with two fields, where one is defined on the whole mesh and the other on a submesh? If so, what are some examples?
  • In what ways can I make my initial approach more performant?

I would appreciate any resources, such as links to related fenics.discourse.group posts, papers, demos, or general pointers.

I can provide an MWE for my initial approach if you believe that would be helpful; otherwise, I’m mainly seeking to understand what some good solution approaches are.

See https://multiphenics.github.io/ , especially its tutorial 3

The next dolfinx release will have built-in support for these kinds of problems (although with a different approach from the one adopted in multiphenicsx).


Thank you! I will re-implement my solution and update. What is the dolfinx issue that adds built-in support for these kinds of problems?

It’s Add support for mixed-domain problems of codimension 1 by jpdean · Pull Request #3224 · FEniCS/dolfinx · GitHub. It has been merged into main, but main hasn’t been released yet, and there is no ETA for the next release.


Hi Francesco,

I am not very comfortable with GitHub, and I am using a non-mainline version of FEniCSx developed by @jpdean that implements forms with entity_maps. I’ve seen that this work has started to be merged into the main version of FEniCSx. If I create a FEniCSx environment from the main Docker image, will it include the entity maps feature?

Yes, the dolfinx:nightly image now has entity maps support for codim 0 and codim 1 assembly.

Thanks @dokken! Do you know if there is a demo using entity_maps? The strategy I used to use no longer seems to be valid.

The most notable change is probably:

entity_maps = {submesh_b: parent_to_sub_b, submesh_t: parent_to_sub_t}

which was

entity_maps = {submesh_b._cpp_object: parent_to_sub_b, submesh_t._cpp_object: parent_to_sub_t}
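
As a minimal self-contained sketch of the new usage (my assumptions: the post-#3224 API where fem.form takes entity_maps keyed by the Mesh objects themselves, with values mapping integration-domain entities to submesh entities; the unit-square setup and names are just for illustration):

import numpy as np
import ufl
from dolfinx import fem, mesh
from mpi4py import MPI

# Parent mesh and a cell submesh of the left half (stand-in for a subdomain)
domain = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8)
tdim = domain.topology.dim
cells = mesh.locate_entities(domain, tdim, lambda x: x[0] <= 0.5)
submesh, sub_to_parent, _, _ = mesh.create_submesh(domain, tdim, cells)

# Invert the sub->parent cell map into a parent->sub array (-1 where no submesh cell)
cell_imap = domain.topology.index_map(tdim)
num_cells = cell_imap.size_local + cell_imap.num_ghosts
parent_to_sub = np.full(num_cells, -1, dtype=np.int32)
parent_to_sub[sub_to_parent] = np.arange(len(sub_to_parent), dtype=np.int32)

# One space on the parent mesh, one on the submesh
V = fem.functionspace(domain, ("Lagrange", 1))
W = fem.functionspace(submesh, ("Lagrange", 1))
u, q = ufl.TrialFunction(V), ufl.TestFunction(W)

# A coupling form integrated over the parent cells covered by the submesh;
# note the entity_maps key is the Mesh object, not its _cpp_object
ct = mesh.meshtags(domain, tdim, cells, np.ones(len(cells), dtype=np.int32))
dx = ufl.Measure("dx", domain=domain, subdomain_data=ct)
a_coupling = fem.form(ufl.inner(u, q) * dx(1), entity_maps={submesh: parent_to_sub})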

I’ve got an undocumented demo for coupling Poisson at:

@dokken I have a question regarding your gist script that couples Poisson in two subdomains. Can the approach in the script be adapted to solve a problem where the Laplace equation is solved in both subdomains (\Omega = \Omega_1 \cup \Omega_2, with an appropriate flux condition at the interface) and a heat equation is solved in only one subdomain, with the flux of the heat equation at the interface constrained to equal the flux of the Laplace equation?

I have a working solution for the Laplace equation with a prescribed discontinuity at the interface using DG, but for the sake of this illustration, let’s ignore the discontinuity. My formulation is:

F_0 = \int_{\Omega}\kappa\nabla u \cdot \nabla v \mathrm{dx} -\int_{\partial\Omega}\kappa\nabla u \cdot \pmb{n} v \mathrm{ds}

and

F_1 = \int_{\Omega_2}(c - c_0)q\mathrm{dx} - \mathrm{dt}\int_{\Omega_2}D\nabla c \cdot \nabla q \mathrm{dx} + \mathrm{dt}\int_{\partial \Omega_1 \cap \partial \Omega_2}D\nabla c \cdot \pmb{n}q\mathrm{ds}

Since simply replacing the integrand of the last term in F_1 with -(\kappa/F)\nabla u\cdot\pmb{n} and solving the two systems of equations with Newton’s method doesn’t work, my next approach is to prescribe a Lagrange multiplier constraint on the last integral in F_1 such that

\kappa\nabla u \cdot \pmb{n}\vert_{\partial \Omega_1 \cap \partial \Omega_2} - FD\nabla c\cdot \pmb{n}\vert_{\partial \Omega_1 \cap \partial \Omega_2}=0

so basically a Neumann boundary condition where the value is the flux obtained from F_0.
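
Concretely (a sketch that mirrors the MWE I post further down), with a multiplier \lambda living on \Gamma_{\mathrm{mid}} = \partial\Omega_1 \cap \partial\Omega_2, the saddle-point system I have in mind is

a_{00}(u, v) + a_{02}(\lambda, v) = L_0(v)

a_{11}(c, q) + a_{12}(\lambda, q) = L_1(q)

a_{20}(u, \eta) + a_{21}(c, \eta) = 0

where a_{20}(u, \eta) = \int_{\Gamma_{\mathrm{mid}}}\frac{\kappa}{F}\nabla u\cdot\pmb{n}\,\eta\,\mathrm{ds} and a_{21}(c, \eta) = -\int_{\Gamma_{\mathrm{mid}}}D\nabla c\cdot\pmb{n}\,\eta\,\mathrm{ds} enforce the flux constraint weakly, and a_{02}, a_{12} are the corresponding coupling terms in the u and c equations.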

In what ways does the fact that a real-number function space is not currently implemented in FEniCSx affect the application of a Lagrange multiplier constraint?

Isn’t this constraint «point-wise», i.e.

this is not an integral constraint, but one that should apply to every «degree of freedom» at the boundary, where the value depends spatially on the solution of c? Or am I missing something?

To me that would mean that the Lagrange multiplier would live in a P1 function space on the boundary facets, which is supported in the main branch of dolfinx.
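
A minimal sketch of what I mean (assuming a 2D mesh with the interface along x = 0.5 as a stand-in for your \Gamma_{\mathrm{mid}}; names are just illustrative):

import numpy as np
import ufl
from dolfinx import fem, mesh
from mpi4py import MPI

domain = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8)
fdim = domain.topology.dim - 1
domain.topology.create_entities(fdim)

# Facets on the internal line x = 0.5 (the interface)
facets = mesh.locate_entities(domain, fdim, lambda x: np.isclose(x[0], 0.5))
facet_mesh, facet_to_parent, _, _ = mesh.create_submesh(domain, fdim, facets)

# The multiplier and its test function live only on the interface facets
V_lm = fem.functionspace(facet_mesh, ("Lagrange", 1))
lam, eta = ufl.TrialFunction(V_lm), ufl.TestFunction(V_lm)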


Yes, that is a point-wise constraint: at the shared boundary \partial\Omega_1\cap\partial\Omega_2 we have \kappa\nabla u \cdot \pmb{n} = FD\nabla c \cdot \pmb{n}. (In any case, my reading of Allocating different variables to different subdomains - #2 by dokken convinces me that the problems are related.)

I will post my MWE. I believe I have the variational form correctly defined for the constraint, but I run into arity issues while computing the Jacobian.

I get errors about exactly-zero rows when I assemble the matrix and solve. The offending rows correspond to the Lagrange multiplier variables over the facets. Is this typical when assembling matrix blocks over multiple subdomains?
The error is of the form The 72-th row of A is exactly zero

My MWE to reproduce the error is below (the code runs on the main branch of FEniCSx):

#!/usr/bin/env python
import warnings

import numpy as np
import ufl

from dolfinx import fem, mesh
from dolfinx.fem import petsc  # makes fem.petsc available
from dolfinx.io import gmshio
from mpi4py import MPI
from petsc4py import PETSc
from ufl import grad, inner

warnings.simplefilter('ignore')

dtype = PETSc.ScalarType

faraday_const = 96485
D = 1e-15
kappa = 0.1


if __name__ == '__main__':
    phase_1 = 1
    phase_2 = 2

    left = 1
    right = 4
    middle = 7
    external = 8
    insulated_phase_1 = 9
    insulated_phase_2 = 10
    output_meshfile = 'mesh.msh'
    comm = MPI.COMM_WORLD
    partitioner = mesh.create_cell_partitioner(mesh.GhostMode.shared_facet)
    domain, ct, ft = gmshio.read_from_msh(output_meshfile, comm, partitioner=partitioner)

    tdim = domain.topology.dim
    fdim = tdim - 1
    domain.topology.create_connectivity(tdim, fdim)

    ft_imap = domain.topology.index_map(fdim)
    num_facets = ft_imap.size_local + ft_imap.num_ghosts
    indices = np.arange(0, num_facets, dtype=np.int32)
    values = np.zeros(indices.shape, dtype=np.int32)  # all facets are tagged with zero

    values[ft.indices] = ft.values
    ft = mesh.meshtags(domain, fdim, indices, values)

    # create submesh
    submesh, sub_to_parent, vertex_map, geom_map = mesh.create_submesh(
        domain, tdim, ct.find(phase_2)
    )
    # build the parent-cell -> submesh-cell map needed for entity_maps

    c_imap = domain.topology.index_map(tdim)
    num_cells = c_imap.size_local + c_imap.num_ghosts
    mesh_to_submesh = np.full(num_cells, -1, dtype=np.int32)
    mesh_to_submesh[sub_to_parent] = np.arange(len(sub_to_parent), dtype=np.int32)
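    # entity_maps is keyed by each (sub)mesh appearing in a form whose integration
    # domain is a different mesh; the value maps an entity of the integration-domain
    # mesh to the matching submesh entity, or -1 where there is no counterpart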
    entity_maps = {submesh: mesh_to_submesh}

    # interface mesh
    submeshf, subf_to_parent, _, _ = mesh.create_submesh(
        domain, fdim, ft.find(middle)
    )
    mesh_to_facet_mesh = np.full(num_facets, -1, dtype=np.int32)
    mesh_to_facet_mesh[subf_to_parent] = np.arange(len(subf_to_parent), dtype=np.int32)
    entity_maps[submeshf] = mesh_to_facet_mesh

    # integration measures
    dx = ufl.Measure("dx", domain=domain, subdomain_data=ct)
    ds = ufl.Measure("ds", domain=domain, subdomain_data=ft)
    dx_f = ufl.Measure('dx', domain=submeshf)
    ds_f = ufl.Measure('ds', domain=submeshf)

    # Function Spaces
    k = 3
    V0 = fem.functionspace(domain, ("CG", k))  # whole domain
    V1 = fem.functionspace(submesh, ("CG", k))  # phase 2 subdomain
    V2 = fem.functionspace(submeshf, ("CG", k))  # middle boundary facets

    u = ufl.TrialFunction(V0)
    v = ufl.TestFunction(V0)
    c = ufl.TrialFunction(V1)
    q = ufl.TestFunction(V1)
    lamda = ufl.TrialFunction(V2)
    eta = ufl.TestFunction(V2)

    # initial condition
    c0 = fem.Function(V1)
    c0.interpolate(lambda x: np.full(x.shape[1], 0.5 * 27000))

    n = ufl.FacetNormal(domain)
    nc = ufl.FacetNormal(submesh)
    x = ufl.SpatialCoordinate(domain)

    # constants
    dt_ = 1e-3
    dt = fem.Constant(submesh, dtype(dt_))
    f0 = fem.Constant(domain, dtype(0))
    f1 = fem.Constant(submesh, dtype(0))
    f2 = fem.Constant(submeshf, dtype(0))
    g0 = fem.Constant(domain, dtype(0))
    g1 = fem.Constant(submesh, dtype(0))
    u_left = fem.Function(V0)
    u_left.x.array[:] = 0.0
    u_right = fem.Function(V0)
    u_right.x.array[:] = 3.5

    # variational formulation
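    # The off-diagonal blocks below mix functions defined on different meshes
    # (V0 on the parent mesh, V1 and V2 on submeshes), which is why those forms
    # are compiled with entity_maps.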
    a00 = fem.form(kappa*inner(grad(u), grad(v))*dx + inner(-kappa*grad(u), n) * v * ds)
    a02 = fem.form(inner(lamda, kappa/faraday_const*inner(grad(v), n)) * ds(middle), entity_maps=entity_maps)

    a11 = fem.form(inner(c, q)*dx(phase_2) - dt*inner(D*grad(c), grad(q))*dx(phase_2) - dt * D * inner(inner(grad(c), n), q) * ds(middle), entity_maps=entity_maps)
    a12 = fem.form(- inner(lamda, D * inner(grad(q), n)) * ds(middle), entity_maps=entity_maps)

    a20 = fem.form(inner(eta, kappa/faraday_const * inner(grad(u), n)) * ds(middle), entity_maps=entity_maps)
    a21 = fem.form(- inner(eta, D * inner(grad(c), n)) * ds(middle), entity_maps=entity_maps)

    left_bc = fem.dirichletbc(u_left, fem.locate_dofs_topological(V0, 1, ft.find(left)))
    right_bc = fem.dirichletbc(u_right, fem.locate_dofs_topological(V0, 1, ft.find(right)))

    a = [[a00, None, a02], [None, a11, a12], [a20, a21, None]]
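    # block rows/columns ordered as (u, c, lamda); None marks an empty block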
    L0_ = inner(f0, v)*dx
    L0_ += inner(g0, v) * ds(insulated_phase_1)
    L0_ += inner(g0, v) * ds(insulated_phase_2)
    L0 = fem.form(L0_)

    L1_ = dt * inner(f1, q) * dx(phase_2) 
    L1_ += - inner(c0, q)*dx(phase_2)
    L1_ += dt * inner(g1, q) * ds(insulated_phase_2)
    L1_ += dt * inner(g1, q) * ds(right)
    L1 = fem.form(L1_, entity_maps=entity_maps)
    L2 = fem.form(inner(f2, eta) * ds_f, entity_maps=entity_maps)
    L = [L0, L1, L2]
    A = fem.petsc.assemble_matrix_block(a, bcs=[left_bc, right_bc])
    A.assemble()
    b = fem.petsc.assemble_vector_block(L, a, bcs=[left_bc, right_bc])
    # A.view()

    # solve coupled

    TIME = 1 * dt_
    t = 0
    vol = comm.allreduce(fem.assemble_scalar(fem.form(1 * dx(phase_2), entity_maps=entity_maps)), op=MPI.SUM)

    while t < TIME:
        print(f"Time: {t:.3f}")
        t += dt_

        ksp = PETSc.KSP().create(comm)
        ksp.setMonitor(lambda _, it, residual: print(it, residual))
        ksp.setOperators(A)
        opts = PETSc.Options()
        ksp.setType("preonly")
        ksp.getPC().setType("lu")
        ksp.getPC().setFactorSolverType("superlu_dist")

        # Compute solution
        x = A.createVecLeft()
        ksp.solve(b, x)

        # Recover solution
        u, c, lmbda = fem.Function(V0), fem.Function(V1), fem.Function(V2)
        offset = V0.dofmap.index_map.size_local * V0.dofmap.index_map_bs
        offset2 = V1.dofmap.index_map.size_local * V1.dofmap.index_map_bs
        offset3 = V2.dofmap.index_map.size_local * V2.dofmap.index_map_bs
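        # copy the locally owned part of each block of the monolithic solution
        # vector into the corresponding Function, then scatter to ghosts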

        u.x.array[:offset] = x.array_r[:offset]
        u.x.scatter_forward()
        c.x.array[:offset2] = x.array_r[offset:offset+offset2]
        c.x.scatter_forward()
        lmbda.x.array[:(len(x.array_r) - offset - offset2)] = x.array_r[offset + offset2:]
        lmbda.x.scatter_forward()
        
        I_middle_1 = comm.allreduce(fem.assemble_scalar(fem.form(inner(-(kappa * grad(u)), n) * ds(middle))), op=MPI.SUM)
        I_middle_2 = comm.allreduce(fem.assemble_scalar(fem.form(inner(-faraday_const * D * grad(c), n) * ds(middle),entity_maps=entity_maps)), op=MPI.SUM)
        print(f"I_middle_1: {I_middle_1:.2e}, I_middle_2: {I_middle_2:.2e}")

I generate my mesh from

#!/usr/bin/env python3

import gmsh


def create_mesh(output_meshfile):
    micron = 1e-6
    resolution = 5 * micron
    LX = 150 * micron
    LY = 40 * micron
    points = [
        (0, 0, 0),
        (0.5 * LX, 0, 0),
        (LX, 0, 0),
        (LX, LY, 0),
        (0.5 * LX, LY, 0),
        (0, LY, 0),
    ]
    gpoints = []
    lines = []

    gmsh.initialize()
    gmsh.model.add('two-subdomains')
    gmsh.option.setNumber("Mesh.MeshSizeExtendFromBoundary", 0)
    gmsh.option.setNumber("Mesh.MeshSizeFromPoints", 0)
    gmsh.option.setNumber("Mesh.MeshSizeFromCurvature", 1)
    for idx, p in enumerate(points):
        gpoints.append(
            gmsh.model.occ.addPoint(*p)
        )
    gmsh.model.occ.synchronize()
    for idx in range(0, len(points)-1):
        lines.append(
            gmsh.model.occ.addLine(gpoints[idx], gpoints[idx+1])
        )
    lines.append(
        gmsh.model.occ.addLine(gpoints[-1], gpoints[0])
    )
    lines.append(
        gmsh.model.occ.addLine(gpoints[1], gpoints[4])
    )
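    # lines[6] (the last line added) is the vertical divider at x = LX/2 that
    # separates phase 1 (left half) from phase 2 (right half); it is tagged
    # "middle" below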
    phase_1 = 1
    phase_2 = 2

    left = 1
    bottom_left = 2
    bottom_right = 3
    right = 4
    top_right = 5
    top_left = 6
    middle = 7
    external = 8
    insulated_phase_1 = 9
    insulated_phase_2 = 10
    labels = {
        "phase_1": phase_1,
        "phase_2": phase_2,
        "left": left,
        "bottom_left": bottom_left,
        "bottom_right": bottom_right,
        "right": right,
        "top_right": top_right,
        "top_left": top_left,
        "middle": middle,
        "external": external,
        "insulated_phase_1": insulated_phase_1,
        "insulated_phase_2": insulated_phase_2,
    }

    gmsh.model.occ.synchronize()
    gmsh.model.addPhysicalGroup(1, [lines[-2]], left, "left")
    gmsh.model.addPhysicalGroup(1, [lines[2]], right, "right")
    gmsh.model.addPhysicalGroup(1, [lines[idx] for idx in [-1]], middle, "middle")
    # gmsh.model.addPhysicalGroup(1, [lines[idx] for idx in [0]], bottom_left, "bottom left")
    # gmsh.model.addPhysicalGroup(1, [lines[idx] for idx in [4]], top_left, "top left")
    # gmsh.model.addPhysicalGroup(1, [lines[idx] for idx in [1]], bottom_right, "bottom right")
    # gmsh.model.addPhysicalGroup(1, [lines[idx] for idx in [3]], top_right, "top right")
    gmsh.model.addPhysicalGroup(1, [lines[idx] for idx in [0, 4]], insulated_phase_1, "insulated phase 1")
    gmsh.model.addPhysicalGroup(1, [lines[idx] for idx in [1, 3]], insulated_phase_2, "insulated phase 2")
    gmsh.model.addPhysicalGroup(1, lines[:-1], external, "external")

    gmsh.model.occ.synchronize()
    se_loop = gmsh.model.occ.addCurveLoop([lines[idx] for idx in [0, 6, 4, 5]])
    pe_loop = gmsh.model.occ.addCurveLoop([lines[idx] for idx in [1, 2, 3, 6]])
    gmsh.model.occ.synchronize()
    se_phase = gmsh.model.occ.addPlaneSurface([se_loop])
    pe_phase = gmsh.model.occ.addPlaneSurface([pe_loop])
    gmsh.model.occ.synchronize()
    gmsh.model.addPhysicalGroup(2, [se_phase], phase_1, "phase 1")
    gmsh.model.addPhysicalGroup(2, [pe_phase], phase_2, "phase 2")
    gmsh.model.occ.synchronize()

    gmsh.model.mesh.generate(2)
    gmsh.write(output_meshfile)
    gmsh.finalize()

    return labels

if __name__ == '__main__':
    create_mesh("mesh.msh")