For a fluid dynamics system using the fictitious domain method, I need to assemble the fluid and solid systems separately on two overlapping meshes and then sum them via interpolation:

```
# 1. Assemble separately
A  x = b   # fluid part
As x = bs  # solid part
# 2. Sum them up via the interpolation matrix I
A += I.T * As * I
b += I.T * bs
# 3. Solve
solve(A, x, b)
```
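Concretely, I do steps 2 and 3 on the underlying petsc4py objects. A rough sketch (assuming `A`, `As` are assembled dolfin `PETScMatrix` objects, `b`, `bs` the matching `PETScVector` right-hand sides, `x` a `Function` on the fluid space, and `Ih` the ns x n transfer matrix from the demo below):

```
# Rough sketch of the coupling step with petsc4py
from petsc4py import PETSc

I_mat = Ih.mat()                       # underlying petsc4py Mat, shape ns x n
A_sum = As.mat().PtAP(I_mat)           # I^T * As * I, shape n x n
A.mat().axpy(1.0, A_sum,
             structure=PETSc.Mat.Structure.DIFFERENT_NONZERO_PATTERN)

tmp = b.vec().duplicate()
I_mat.multTranspose(bs.vec(), tmp)     # I^T * bs
b.vec().axpy(1.0, tmp)

solve(A, x.vector(), b)                # dolfin linear solve on the summed system
```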

where `I` is the interpolation matrix created by `PETScDMCollection.create_transfer_matrix(fluid_functionspace, solid_functionspace)` (shape ns x n, interpolating from the fluid space to the solid space).

The function spaces are built from a mixed element: a vector element for the velocity and a scalar finite element for the pressure. However, only the "first-order finite element + serial run" scenario works; other scenarios such as "higher-order finite element", "parallel run", or their combination fail with a segmentation fault (signal 11). Here is a simple demo:

```
# question.py
from dolfin import *
import petsc4py
petsc4py.init()
# Order: 1 or 2
V_order = 1 # Change here to 2 then fail
Q_order = 1
# Fluid mesh
w = 1
mesh = RectangleMesh(Point(0, 0), Point(w, w), 64, 64, diagonal='crossed')
# Cell/Solid mesh immersed in fluid
r = w / 4.
cell = RectangleMesh(Point(w/2-r/2, w/2-r/2), Point(w/2+r/2, w/2+r/2), 25, 25, diagonal='crossed')
## Fluid FunctionSpace
V_VE = VectorElement("Lagrange", mesh.ufl_cell(), V_order)
V_FE = FiniteElement("Lagrange", mesh.ufl_cell(), Q_order)
V_ME = MixedElement([V_VE, V_FE]) # vector + scalar: u + p; equivalently, the all-scalar variant below:
# V_ME = MixedElement([V_FE, V_FE, V_FE]) # scalar + scalar + scalar: ux + uy + p
V_FS = FunctionSpace(mesh, V_ME)
# Cell FunctionSpace + interpolation matrix
VC_VE = VectorElement("Lagrange", cell.ufl_cell(), V_order)
VC_FE = FiniteElement("Lagrange", cell.ufl_cell(), Q_order)
VC_ME = MixedElement([VC_VE, VC_FE]) # vector + scalar: u + p; equivalently, the all-scalar variant below:
# VC_ME = MixedElement([VC_FE, VC_FE, VC_FE]) # scalar + scalar + scalar: ux + uy + p
VC_FS = FunctionSpace(cell, VC_ME)
# Interpolation matrix
Ih = PETScDMCollection.create_transfer_matrix(V_FS, VC_FS) # dolfin.cpp.la.PETScMatrix, shape ns x n (solid dofs x fluid dofs)
```

It only works with `python question.py` or `mpirun -n 1 python question.py`. Changing the finite element order or running with more processes causes the segmentation fault. The only workaround I have found is to decompose the vector element into several scalar finite elements (a sketch of what I mean is below), but this loses generality and increases the workload. How can I work around this?
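
For reference, a minimal sketch of that decomposition workaround (my reconstruction from the commented-out lines above, using the same meshes and a higher velocity order, with the velocity split into per-component scalar Lagrange elements):

```
# Workaround sketch: build the mixed space from scalar elements only (ux + uy + p),
# avoiding VectorElement entirely.
from dolfin import *

V_order, Q_order = 2, 1  # the configuration that fails with VectorElement
w, r = 1.0, 0.25
mesh = RectangleMesh(Point(0, 0), Point(w, w), 64, 64, diagonal='crossed')
cell = RectangleMesh(Point(w/2-r/2, w/2-r/2), Point(w/2+r/2, w/2+r/2), 25, 25, diagonal='crossed')

def scalar_mixed_space(m):
    U = FiniteElement("Lagrange", m.ufl_cell(), V_order)  # one velocity component
    P = FiniteElement("Lagrange", m.ufl_cell(), Q_order)  # pressure
    return FunctionSpace(m, MixedElement([U, U, P]))      # ux + uy + p

V_FS = scalar_mixed_space(mesh)   # fluid space
VC_FS = scalar_mixed_space(cell)  # solid/cell space
Ih = PETScDMCollection.create_transfer_matrix(V_FS, VC_FS)
```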