PETScDMCollection.create_transfer_matrix issue with MixedElement

For a fluid dynamics system with the fictitious domain method, I need to assemble the fluid and solid systems on two overlapping meshes separately and then sum them up via interpolation:

# 1. Assemble separately
A x = b         # fluid part
As x = bs       # solid part

# 2. Sum them up via interpolation
A += I.T As I
b += I.T bs

# 3. Solve
solve(A, x, b)

where I is the interpolation matrix between the fluid and solid function spaces, created with PETScDMCollection.create_transfer_matrix (with the argument order used in the demo below).
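
For reference, here is a minimal sketch of how step 2 could be carried out through petsc4py once the transfer matrix is available. This is only an illustration (not from the original post); it assumes A, As are assembled dolfin PETScMatrix objects, b, bs are PETScVector objects, and Ih is the matrix returned by create_transfer_matrix as in the demo below:

from dolfin import as_backend_type
from petsc4py import PETSc

# Underlying PETSc handles of the dolfin objects (PETSc linear algebra backend assumed)
A_p, As_p, Ih_p = as_backend_type(A).mat(), as_backend_type(As).mat(), Ih.mat()
b_p, bs_p = as_backend_type(b).vec(), as_backend_type(bs).vec()

# A += I.T As I  (ptap computes P^T * A * P for the calling matrix As_p)
A_p.axpy(1.0, As_p.ptap(Ih_p), structure=PETSc.Mat.Structure.DIFFERENT_NONZERO_PATTERN)

# b += I.T bs
tmp = b_p.duplicate()
Ih_p.multTranspose(bs_p, tmp)
b_p.axpy(1.0, tmp)

The summed A and b can then be handed to solve() as in step 3.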

The function spaces are built from a MixedElement: a VectorElement for the velocity and a FiniteElement for the pressure. But only the “first-order elements + serial run” scenario works; other scenarios such as “higher-order elements”, “parallel run”, or their combination fail with a Segmentation fault (signal 11) error. Here is a simple demo:

# question.py

from dolfin import *
import petsc4py 

petsc4py.init()

# Order: 1 or 2
V_order = 1   # changing this to 2 makes it fail
Q_order = 1       

# Fluid mesh
w = 1
mesh = RectangleMesh(Point(0, 0), Point(w, w), 64, 64, diagonal='crossed')

# Cell/Solid mesh immersed in fluid
r = w / 4.
cell = RectangleMesh(Point(w/2-r/2, w/2-r/2), Point(w/2+r/2, w/2+r/2), 25, 25, diagonal='crossed')

## Fluid FunctionSpace
V_VE  = VectorElement("Lagrange", mesh.ufl_cell(), V_order)
V_FE  = FiniteElement("Lagrange", mesh.ufl_cell(), Q_order)
V_ME  = MixedElement([V_VE, V_FE])            # vector + scalar: u + p; or the equivalent alternative below:
# V_ME  = MixedElement([V_FE, V_FE, V_FE])    # scalar + scalar + scalar: ux + uy + p
V_FS  = FunctionSpace(mesh, V_ME)

# Cell FunctionSpace + interpolation matrix
VC_VE = VectorElement("Lagrange", cell.ufl_cell(), V_order)
VC_FE = FiniteElement("Lagrange", cell.ufl_cell(), Q_order)
VC_ME = MixedElement([VC_VE, VC_FE])          # vector + scalar: u + p; or the equivalent alternative below:
# VC_ME = MixedElement([VC_FE, VC_FE, VC_FE]) # scalar + scalar + scalar: ux + uy + p
VC_FS = FunctionSpace(cell, VC_ME)

# Interpolation matrix
Ih = PETScDMCollection.create_transfer_matrix(V_FS, VC_FS)    # PETScMatrix of size dim(VC_FS) x dim(V_FS), i.e. ns x n

It only works with python question.py or mpirun -n 1 python question.py. Increasing the finite element order or running on more processes causes the segmentation fault. The only workaround I have found is to decompose the VectorElement into several FiniteElements, but this loses generality and increases the workload. Is there a better way to work around this?

Hello, chenyongxin!

Have you solved this problem?

I have the same problem (“Segmentation fault”) when using the PETScDMCollection function.

If you have any ideas, could you give me a hand?

create_transfer_matrix still has some issues. It only works in the following scenarios:

  1. Two plain FunctionSpaces (no MixedElement), including higher-order (P > 1) elements (see the sketch after this list).
  2. A vector + scalar MixedElement as stated above, but only in serial and only with P1 elements.
  3. A MixedElement FunctionSpace built from scalars only, which also works in parallel but only with P1 elements.
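
For example, scenario 1 can be exercised with plain (non-mixed) P2 Lagrange spaces on the two meshes from the demo above; this snippet is only an illustration of that case, not code from the original post:

# Plain (non-mixed) spaces: higher order and parallel runs are fine here
V2  = FunctionSpace(mesh, "Lagrange", 2)
VC2 = FunctionSpace(cell, "Lagrange", 2)
Ih2 = PETScDMCollection.create_transfer_matrix(V2, VC2)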

I used option 3 in my application: I define each vector as several scalars and use as_vector to recombine them in the later computation. Consider the following:

FE = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
ME = MixedElement([FE, FE, FE])               # three scalars: u_x, u_y, p
FS = FunctionSpace(mesh, ME)

U  = Function(FS)
Ut = split(U)          # tuple of the three scalar components
u  = as_vector(Ut[:2]) # recombine u_x, u_y into a vector for the variational form
p  = Ut[-1]
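
To complete the workaround, the same scalar-only MixedElement can be built on the cell/solid mesh and passed to create_transfer_matrix exactly as in the demo; a sketch, assuming the cell mesh and the FS space from above:

FE_c = FiniteElement("Lagrange", cell.ufl_cell(), 1)
ME_c = MixedElement([FE_c, FE_c, FE_c])       # same scalar-only layout: u_x, u_y, p
FS_c = FunctionSpace(cell, ME_c)

# P1 + scalars only: this combination also works in parallel (scenario 3)
Ih = PETScDMCollection.create_transfer_matrix(FS, FS_c)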

Dear chenyongxin,

Thanks a lot, it helped me solve my problem!